Applied Mathematical Sciences Volume 204
Series Editors Anthony Bloch, Department of Mathematics, University of Michigan, Ann Arbor, MI, USA C. L. Epstein, Department of Mathematics, University of Pennsylvania, Philadelphia, PA, USA Alain Goriely, Department of Mathematics, University of Oxford, Oxford, UK Leslie Greengard, New York University, New York, NY, USA Advisory Editors J. Bell, Center for Computational Sciences and Engineering, Lawrence Berkeley National Laboratory, Berkeley, CA, USA P. Constantin, Department of Mathematics, Princeton University, Princeton, NJ, USA R. Durrett, Department of Mathematics, Duke University, Durham, NC, USA R. Kohn, Courant Institute of Mathematical Sciences, New York University, New York, NY, USA R. Pego, Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, PA, USA L. Ryzhik, Department of Mathematics, Stanford University, Stanford, CA, USA A. Singer, Department of Mathematics, Princeton University, Princeton, NJ, USA A. Stevens, Department of Applied Mathematics, University of Münster, Münster, Germany S. Wright, Computer Sciences Department, University of Wisconsin, Madison, WI, USA Founding Editors F. John, New York University, New York, NY, USA J.P. LaSalle, Brown University, Providence, RI, USA L. Sirovich, Brown University, Providence, RI, USA
The mathematization of all sciences, the fading of traditional scientific boundaries, the impact of computer technology, the growing importance of computer modeling, and the necessity of scientific planning all create the need, both in education and research, for books that are introductory to and abreast of these developments. The purpose of this series is to provide such books, suitable for the user of mathematics, the mathematician interested in applications, and the student scientist. In particular, this series will provide an outlet for topics of immediate interest because of the novelty of the treatment of an application or of the mathematics being applied or lying close to applications. These books should be accessible to readers versed in mathematics or science and engineering; they feature a lively tutorial style, a focus on topics of current interest, and clear exposition of broad appeal. A complement to the Applied Mathematical Sciences series is the Texts in Applied Mathematics series, which publishes textbooks suitable for advanced undergraduate and beginning graduate courses.
More information about this series at http://www.springer.com/series/34
Igor Chueshov • Björn Schmalfuß
Synchronization in Infinite-Dimensional Deterministic and Stochastic Systems
Igor Chueshov (deceased)
Björn Schmalfuß, Department of Mathematics and Informatics, Friedrich-Schiller-University Jena, Jena, Germany
ISSN 0066-5452 ISSN 2196-968X (electronic) Applied Mathematical Sciences ISBN 978-3-030-47090-6 ISBN 978-3-030-47091-3 (eBook) https://doi.org/10.1007/978-3-030-47091-3 Mathematics Subject Classification: 37L15, 37L25, 37L55, 34D06, 60H15 © Springer Nature Switzerland AG 2020 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
One of the first researchers to study synchronization was Christiaan Huygens in the seventeenth century, when he noticed that two pendulum clocks mounted on the same frame would eventually synchronize and adjust their rhythms (see the discussion in the book by Bennett et al. [12]). Nowadays, the synchronization of coupled systems is a ubiquitous phenomenon that is detected in the biological and physical sciences and is also known to occur even in a number of social science contexts. A descriptive account of the diversity of its occurrence can be found in the book by Strogatz [159], which contains an extensive list of references. In particular, synchronization provides an explanation for the emergence of spontaneous order in the dynamical behavior of coupled systems, which in isolation may exhibit chaotic dynamics. However, most of the sources are devoted to finite-dimensional interacting systems; see, for example, the monographs by Balanov et al. [8], Leonov/Smirnova [109], Mosekilde/Maistrenko/Postnov [118], Osipov/Kurths/Zhou [126], Pikovsky/Rosenblum/Kurths [130], Wu [168], and the references therein. In the case of infinite-dimensional systems, the synchronization problem has not been studied as intensively. There are only a few papers on the subject (see Carvalho/Primo [24], Carvalho/Rodrigues/Dlotko [25], Rodrigues [140] for the case of coupled (deterministic) parabolic systems, Chueshov [41], Naboka [121–123] for some plate models, Chueshov [38, 39] for spectrally separated partial differential equation (PDE) systems, and also Caraballo/Chueshov/Kloeden [20], Chueshov/Kloeden/Yang [59], Chueshov/Schmalfuss [54] for stochastic PDEs). The main goal of this book is to present in a systematic way the mathematical methods that are applied in the study of synchronization of infinite-dimensional evolutionary dissipative or partially dissipative systems. Our presentation is based on general and abstract models and covers several important classes of coupled nonlinear deterministic and stochastic PDEs, which generate infinite-dimensional dissipative systems. These classes include systems consisting of (i) parabolic and hyperbolic equations, (ii) two hyperbolic equations, and (iii) Klein–Gordon and Schrödinger equations. Various classes of reaction–diffusion models and interacting elastic/wave structures are also included in the list of possible applications.
We hope that this book will be useful not only to mathematicians interested in the general theoretical aspects of synchronization theory but also to physicists and engineers interested in both the mathematical background and the methods for the asymptotic analysis of coupled infinite-dimensional dissipative systems that arise in continuum mechanics. The book can be used as a textbook for advanced courses in dissipative dynamics at the graduate level. To master the main ideas and approaches, it is sufficient to have a background in evolutionary equations and in introductory functional analysis. Some basic knowledge in PDEs and the theory of random processes would also be welcome.
Acknowledgments I am grateful to all my colleagues who have contributed to my understanding of the mathematical aspects of the synchronization phenomena.

Kharkov, Ukraine, March 2016
Igor Chueshov
Preface of the Second Author
Igor I met Igor for the first time at a workshop on random dynamical systems at the Institute of Dynamical Systems at the University of Bremen in the mid-1990s. It was quickly clear to me that he was an excellent expert in the theory of dynamical systems generated by partial differential equations. He wanted to expand his field of work to stochastic partial differential equations and their dynamics, an area I was also interested in. We started to work together, exchanging ideas and discussing random pullback attractors for stochastic partial differential equations. Later, he visited the universities where I worked on several occasions. During that time, we also became good friends, and it became a tradition that, once a week during his visits, we had dinner together in a restaurant, chatting about life, politics, and math. At the last of these dinners, he asked me whether I would share with him a typical German dish with fatty meat, which could only be ordered for two people. Although this kind of dish was not my favorite, I agreed. Later, when the waiter came, Igor said, "Now I need a vodka." The waiter answered in German, "Einen Doppelten?" Igor did not understand and answered, "A big one." So the waiter brought a fourfold vodka, and Igor emptied the glass without any problem. With this story in mind, I was astonished when, a few weeks later, Igor asked for a leave from his editorial work for Stochastics and Dynamics owing to a serious illness. I thought: Igor is a strong guy, mentally and physically; he will overcome this illness quickly. But I was wrong. When, a few weeks later, I received the message that Igor had passed away, I could not believe it. But it was the bitter truth.
Some months later, Donna Chernyk from Springer Press contacted me. She told me that Igor had given the incomplete manuscript to one of his sons, hoping that, if he did not recover from his illness, one of his former collaborators could finish the book. It was clear to me that I would do this last favor for him. I found a well-structured manuscript in which most of the work on the deterministic part (Part I) had been done by Igor. For the stochastic part (Part II), I formulated an introduction to random processes and random dynamical systems and added two chapters to the book.
Acknowledgments The second author would like to thank Professor M.-J. Garrido-Atienza for proofreading the complete book. In addition, he would like to thank the Institute for Mathematics at the Friedrich Schiller University of Jena, and the Friedrich Schiller University of Jena in general, for supporting the work on the manuscript by giving him a sabbatical.

Jena, Germany, Spring 2020
Björn Schmalfuß
Introduction
The focus of this book is the synchronization of infinite-dimensional systems. Roughly speaking, synchronization of a given system means that all its subsystems (or elements of the system) start to evolve in a "strongly" correlated way. We deal with the qualitative dynamics of abstract systems of coupled nonlinear (infinite-dimensional) evolutionary equations. This kind of system may represent various interaction phenomena in a continuous medium. Our main idea can be described in the following way. Let X1 and X2 be Banach spaces. We consider the system of differential equations

Ut = F1(U, V), t > 0, in X1,    (1a)
Vt = F2(U, V), t > 0, in X2,    (1b)

where F1 and F2 are continuous (nonlinear) mappings,

F1 : X1 × X2 → X1,  F2 : X1 × X2 → X2.
From a mathematical point of view, the synchronization phenomena can be treated as the existence of an invariant manifold of a special type in a phase space of the coupled system. For instance, if both equations in (1) have the same phase space (X1 = X2), then the possibility of synchronized regimes means that the set

M = {(U, V) ∈ X1 × X2 : U = V ∈ X1 = X2}    (2)

is invariant with respect to the flow generated by the coupled system. This means that if initial data U0 and V0 for (1) are the same, then U(t) = V(t) for all t ≥ 0, i.e., in the system there exist trajectories (U(t), V(t)) such that the variables U and V have the same state. This situation admits at least two generalizations, both interesting from the point of view of synchronization studies.
If this invariant set M is globally asymptotically stable, then given any solution (U(t), V(t)), the difference U(t) − V(t) becomes small as t → +∞. In this case, we observe full identical (asymptotic) synchronization of subsystems (1a) and (1b), and the question of the structure of a synchronized regime arises. This limiting regime can be described by attractors of a certain limiting equation related to the problem. The interacting system in (1) usually contains some control parameters that can be identified as the interaction intensity of the subsystems described by the variables U and V. An important task is to find conditions on these parameters that guarantee the presence of synchronized regimes. It is natural to assume that large intensities of interaction lead to synchronization. Therefore, it is important to understand how the qualitative dynamics appears in the limit of large intensities. The main objects in this dynamics are global attractors, and hence the problem of describing the limiting attractor arises. In the synchronized regime, this attractor should possess a "diagonal" structure. Clearly, the detected presence of this structure means synchronization at the level of global attractors. This is a motivation for one of the approaches presented in this book.

Another approach is based on the observation that instead of the equality U(t) = V(t) in the synchronized regime, it is natural to consider a more general functional relation between U and V. This leads to the question of the existence of an invariant manifold of a more general form than the one in (2). In particular, we can look for a manifold given by

M := MΦ = {(U; V) ∈ X1 × X2 : U = Φ(V) ∈ X1, V ∈ X2},

where Φ : X2 → X1 is a Lipschitz mapping. The system in (1a) is said to be (asymptotically) synchronized with system (1b) if MΦ is asymptotically attracting in the sense that

lim_{t→+∞} ‖U(t) − Φ(V(t))‖_{X1} = 0

for any solution (U(t); V(t)) to problem (1). In this case, (1b) is called the master system and (1a) is the slave system. Thus, the master–slave synchronization problem can be reduced to the construction of an invariant asymptotically attracting Lipschitz manifold. To perform this construction in the infinite-dimensional case, we can use the theory of inertial manifolds, which was started with the paper Foias/Sell/Temam [79] and has been developed by many authors for both the deterministic and stochastic cases (see, for example, Bensoussan/Flandoli [13], Chow/Lu [31], Chueshov [32, 34], Chueshov/Girya [42, 82], Chueshov/Lasiecka [44], Chueshov/Scheutzow [51], Chueshov/Scheutzow/Schmalfuss [57], Constantin et al. [62], Mallet-Paret/Sell [112], Miklavčič [113], Mora [117], Romanov [142], Temam [161] and the references therein). There are two approaches to the construction of inertial manifolds: the Hadamard graph transform method (see, for example, Constantin et al. [62]) and the Lyapunov–Perron method (see Foias/Sell/Temam [79]). Each of them has its own advantage and we discuss both approaches. We also note that the idea of
general invariant manifolds was already used in the study of the synchronization of ordinary differential equation (ODE) systems (see, for example, Josić [97], Chow/Liu [30], Sun/Bollt/Nishikawa [160]). The existence of the inertial manifold MΦ makes it possible to prove that the long-time behavior of system (1) is determined by the reduced (so-called inertial) problem

Vt = F2(Φ(V), V) in X2.
Thus, we have the possibility to describe the synchronized regime by modifying the master equation in (1b) and excluding the slave equation (1a) from consideration.

The above motivations have their roots in the synchronization theory of finite-dimensional systems. The notions of synchronized regimes presented above extend the corresponding standard finite-dimensional notions to a wider class of systems, even in the ODE situation. The point is that historically (and now traditionally), most studies in finite-dimensional synchronization deal with so-called self-sustained oscillatory systems (see, for example, Balanov et al. [8], Osipov/Kurths/Zhou [126], Pikovsky/Rosenblum/Kurths [130], and the discussions therein). These systems are usually autonomous and characterized by the presence of stable oscillations in the subsystems. In this case, synchronization is the adjustment of the rhythms (frequency entrainment) of the subsystems and is often referred to as phase synchronization. For instance, if a system consists of two weakly interacting (one-dimensional) subsystems, then using physically motivated hypotheses, we can derive closed equations for the corresponding phases (see Pikovsky/Rosenblum/Kurths [130] and the references therein). These equations have the form

dφ1/dt = ω1 + q1(nφ1 − mφ2), t > 0,    (3a)
dφ2/dt = ω2 + q2(mφ2 − nφ1), t > 0,    (3b)

where φ1,2 are the phases of the subsystem oscillations, ω1,2 are natural frequencies, q1,2 are 2π-periodic functions, and n and m are integers that characterize possible resonance conditions. We can consider the case m = n = 1 only. Indeed, introducing the new phases ψ1 = nφ1 and ψ2 = mφ2, we obtain the system

dψ1/dt = ω̄1 + q̄1(ψ1 − ψ2), t > 0,
dψ2/dt = ω̄2 + q̄2(ψ2 − ψ1), t > 0,

where ω̄1 = nω1, ω̄2 = mω2, q̄1(s) = nq1(s), q̄2(s) = mq2(s). Thus, for the difference of the phases θ = ψ1 − ψ2, we obtain the equation

dθ/dt + q(θ) = 0  with  q(θ) = ω̄2 − ω̄1 + q̄2(−θ) − q̄1(θ).
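For the reader's convenience, here is the one-line computation behind the last equation (a sketch; it only uses the definitions of ψ1, ψ2, and θ given above):

\[
\frac{d\theta}{dt} = \frac{d\psi_1}{dt} - \frac{d\psi_2}{dt}
= \bar\omega_1 + \bar q_1(\psi_1 - \psi_2) - \bar\omega_2 - \bar q_2(\psi_2 - \psi_1)
= \bar\omega_1 + \bar q_1(\theta) - \bar\omega_2 - \bar q_2(-\theta),
\]

which, after moving all terms to one side, is exactly dθ/dt + q(θ) = 0 with q(θ) = ω̄2 − ω̄1 + q̄2(−θ) − q̄1(θ).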
If this equation has a stationary solution θ∗ that is asymptotically stable, then for every initial datum close to θ∗ we have the convergence θ(t) → θ∗ as t → ∞. Thus, after returning to the initial phase variables φ1 and φ2, we obtain that

|nφ1(t) − mφ2(t) − θ∗| → 0 as t → ∞

for some class of solutions (φ1(t), φ2(t)) to (3). Thus, we observe some kind of master–slave synchronization,1 which is called frequency entrainment, or phase locking, in the engineering literature. If the coupling between the two oscillating (1D) subsystems is relatively large, then (see Pikovsky/Rosenblum/Kurths [130, Section 8.2]) we can arrive at a system of the form

ẍ1 + ω1²x1 + d1(ẋ1 − ẋ2) + k1(x1 − x2) = f1(x1, ẋ1),    (4a)
ẍ2 + ω2²x2 + d2(ẋ2 − ẋ1) + k2(x2 − x1) = f2(x2, ẋ2).    (4b)
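Anticipating the discussion below, note that for identical subsystems (ω1 = ω2 = ω, d1 = d2 = d, k1 = k2 = k, and f1 = f2 = f; this symmetry is assumed here only for illustration), subtracting (4b) from (4a) gives an equation for the difference y = x1 − x2:

\[
\ddot y + \omega^2 y + 2d\,\dot y + 2k\,y = f(x_1,\dot x_1) - f(x_2,\dot x_2),
\]

so that, heuristically, if f is Lipschitz on the region where the trajectories evolve, sufficiently large d and k act as strong damping and stiffness for y and force y(t) → 0; conditions of this kind on d1,2 and k1,2 are what the second approach described next looks for.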
There are two ways to study synchronization in this system. One way (see, for example, Pikovsky/Rosenblum/Kurths [130, Section 8.2] and the references therein) is to use the averaging principle to separate the dynamics of the amplitudes and phases and to look for conditions of frequency entrainment. Another way is to analyze directly the dynamics of the difference y(t) = x1(t) − x2(t) as t → ∞, looking for conditions on the parameters d1,2 and k1,2 that guarantee convergence to zero at appropriate rates. We use exactly this approach in this book. The main reason is that the phase equations for infinite-dimensional (PDE) systems have a rather complicated structure. Moreover, using this general approach we can extend the class of systems covered by the theory. Indeed, to analyze asymptotic similarity in the behavior of the functions x1(t) and x2(t) solving (4), we do not need hypotheses on sustained oscillations of the subsystems. Thus, we can take into consideration systems with other scenarios of asymptotic behavior. However, we restrict ourselves to systems that demonstrate some kind of dissipative behavior and concentrate on the case of mutual synchronization, i.e., we do not consider synchronization of a system by an external source with prescribed dynamics. The aim of this work is to collect general, abstract results pertaining to synchronization properties related to the long-time behavior of solutions as t → ∞. In order to achieve a reasonable level of generality, the theoretical tools presented in the book are fairly abstract and tuned to general classes of deterministic and stochastic evolution equations, which are defined on abstract spaces. Most attention is paid to systems with roots in Continuum Mechanics and Mathematical Physics. The subject of synchronization of coupled (identical or not) systems has received considerable attention. There are now several monographs in this field, see Balanov et al. [8], Leonov/Smirnova [109], Mosekilde/Maistrenko/Postnov [118], Osipov/Kurths/Zhou [126], Pikovsky/Rosenblum/Kurths [130], Strogatz [159], Wu
1 If n = m and θ∗ = 0, this is the identical synchronization.
[168] and their extensive lists of references. All these sources concentrate mainly on finite-dimensional systems. As for the infinite-dimensional case, the results on synchronization are available at the paper level only (see, for example, Carvalho/Primo [24], Carvalho/Rodrigues/Dlotko [25], Rodrigues [140], Naboka [121–123], Caraballo/Chueshov/Kloeden [20], Chueshov/Schmalfuss [54]).

We also note that some publications on the synchronization of chaotic systems involve the so-called complete replacement synchronization attributed to Pecora/Carroll [128] (see also Chow/Liu [30], Pecora et al. [129], Tresser/Worfolk/Bass [163], and the references therein). This notion was primarily motivated by the security problem in communication theory (see the discussion in the survey by Pecora et al. [129]) and can be described as follows. Consider the dynamics of the system in (1) and assume that Y(t) = (U(t), V(t)) is a solution with some initial data Y0 = (U0, V0). Then, we define the so-called (Y0, U)-subordinate (nonautonomous) system

Ūt = F1(Ū, V(t)), t > 0, in X1,    (5)

with some initial data Ū0, where V(t) is the second component of the reference solution Y(t). According to Pecora/Carroll [128], system (1) demonstrates complete replacement (or drive-response) synchronization if for every initial data Y0 = (U0, V0) and Ū0 we have that

lim_{t→+∞} ‖U(t) − Ū(t)‖_{X1} = 0.    (6)
In this case, the variable V is called the synchronizing coordinate, (1) is the drive system, and (5) is said to be the response system. This situation can be interpreted (see, for example, Pecora et al. [129]) in the following way. Assume we have two identical separate devices described by the same equations (1). Then we transmit a signal from the first to the second. Let this signal be the V-component of the first system. In the second system we replace the corresponding V̄-component with the V-signal from the first system. This procedure is called complete replacement. The first system is the drive, the second is the response. The task of secure communications is whether we can predict the dynamics of the U-component of the first system by analyzing the output Ū of equation (5). To answer this question we need to find conditions on the system that guarantee the convergence in (6). We cannot state that this complete replacement synchronization follows from synchronization at the level of global attractors or from the master–slave synchronization described above. Nevertheless, as we will see, the conditions that we need either for the synchronized regime of global attractors or for the master–slave synchronization usually imply the Pecora–Carroll phenomenon. Thus, in some sense, complete replacement (or drive-response) synchronization is the weakest of the notions discussed above.

The book consists of two parts, and each part consists of two chapters. The first part is devoted to deterministic problems; the second part deals with synchronization in stochastic systems.
Part I is completely independent of Part II. Therefore, readers who wish to do so can restrict themselves to the deterministic Part I. Moreover, in order to build a bridge between finite-dimensional and infinite-dimensional results, in Part I we discuss several finite-dimensional models in relation to their infinite-dimensional analogs.

Chapter 1 deals with synchronization at the level of global attractors. We start with a preliminary section devoted to basic facts from the theory of dissipative systems and their attractors. Then, we switch to the abstract version of the coupled parabolic problem, discussing the influence of several types of interaction, with the main concentration on systems with linear coupling operators. We present several general results that constitute an approach to synchronization at the level of global attractors for parabolic systems. Then, the abstract results are applied to parabolic PDEs with interior and boundary coupling. We also discuss cross-diffusion models from chemical kinetics and the coupled Hodgkin–Huxley model of the transmission of nerve impulses, and consider synchronization phenomena in a reaction–diffusion system on a thin two-layer domain with a coupling via an internal boundary. We conclude this chapter with second order in time coupled systems, which are abstract models for several classes of PDEs from Continuum Mechanics describing wave and elastic oscillation phenomena. As examples, we consider several plate models and the sine-Gordon system, which describes distributed Josephson junctions.

The main topic of Chap. 2 is master–slave synchronization, which we interpret as a problem on the existence of an invariant asymptotically stable manifold in the phase space of a coupled system. Following the standard idea of the Lyapunov–Perron method, in the form suggested in Miklavčič [113] for the infinite-dimensional case, we first consider coupled semilinear equations. These models have linear main parts that satisfy some sort of spectral gap condition. This condition leads to a dichotomy for the corresponding linear evolutions, allowing us to separate the dynamics of the subsystems. This makes it possible to formulate the requirements for the presence of master–slave synchronization in the system. Our applications include systems with a linear main part consisting of coupled (i) parabolic and hyperbolic equations, (ii) two hyperbolic equations, and (iii) Klein–Gordon and Schrödinger equations. To cover the case of a nonlinear main part, we also develop an extension of the Hadamard graph method based on a generalized dichotomy hypothesis concerning the subsystem evolution. This generalized dichotomy allows us to avoid a global Lipschitz assumption on the nonlinearity in the model and does not assume linearity of the main part. Then, we concentrate on a case study of a thermoelastic problem, showing that the temperature is a slave variable with respect to the displacement. Although the model still has a linear main part, we cannot apply the previously established results to this model directly. The main reason is that the nonlinearity contains a singular term that requires separate consideration. We conclude Chap. 2 with a discussion of synchronization of higher modes by lower modes in a semilinear parabolic-type model. In fact, the corresponding result follows directly from the theory of inertial manifolds. However, we show that the same result also follows by the method developed in this chapter.
The second part of the book deals mainly with stochastic perturbations of the models considered in the first part. Part II assumes some basic knowledge of stochastic analysis, which is, however, included in standard courses. Nevertheless, at the beginning of Chap. 3, we formulate some facts pointing in the direction of random dynamical systems, including basic facts from the theory of random processes and ergodic theory. We rely on the general theory of random dynamical systems (see Arnold [4]) and deal essentially with additive stochastic perturbations. Although it is possible to consider other kinds of randomness to model stochastic environment phenomena, the main reason that justifies the use of additive noise is that it usually models background and small effects that have been omitted or neglected in a deterministic modeling procedure. In this respect, from the physical point of view, it is important to know how the qualitative properties of the simplified (deterministic) model depend on perturbations by additive noises. At the end of Chap. 4, we deal with equations where the linear part depends on some noise that differs from the additive part.

In the stochastic part, we exploit the deterministic ideas presented in Part I. However, their implementation requires a serious modification of the techniques and the imposition of different sets of hypotheses. Readers of the second part can consult Part I for the general schemes in case of difficulties. In principle, many results of the stochastic part can be directly applied in the deterministic case, i.e., results of the second part imply some results presented in Part I. However, we prefer not to optimize the narrative in this way. Our idea is to start with simple structures and then switch to more complicated stochastic models.

Chapter 3 is devoted to synchronization at the level of random attractors. In some sense, this is a stochastic analog of Chap. 1. We first recall basic facts from the theory of random dynamical systems and their random pullback attractors. Then, we consider stochastic perturbations of parabolic equations and the stochastic version of the two-layer problem in thin domains. We show that for these stochastic systems a stronger synchronization effect is observed. This follows from a structural result on pullback attractors for order-preserving (monotone) random dynamical systems (see Chueshov/Scheutzow [52] and also the recent papers by Flandoli/Gess/Scheutzow [76, 77] and Rosati [143]). This structural result states that in the case of nondegenerate noise, a pullback attractor for a monotone random dynamical system is a single point, and thus we observe synchronization not only at the level of global attractors but also for individual trajectories. This effect does not hold for the deterministic counterpart. As an example of second order in time stochastic models, we also consider synchronization phenomena in N coupled sine-Gordon equations perturbed by infinite-dimensional white noise.

Chapter 4 deals with master–slave synchronization for stochastic coupled systems. As in Chap. 2, we apply the inertial manifold theory. For stochastic systems, this method was started with papers by Bensoussan/Flandoli [13] and Chueshov/Girya [42, 82] and then developed and discussed by many other authors. Here, we adapt the Lyapunov–Perron–Miklavčič approach to stochastic systems,
apparently requiring the spectral gap conditions in the weakest form. As applications, we consider stochastic perturbations of the coupled models discussed in Chap. 2. However, in contrast to the deterministic case (see Chap. 2), we deal with semilinear models only. At present, we do not know how to extend this (master–slave) approach to stochastic models with a nonlinear main part. In the last section of the book, we study master–slave synchronization for a system of equations containing unbounded linear operators generating a linear random dynamical system. We apply the random graph transform, together with a random fixed point argument, to obtain a random inertial manifold, which allows us to conclude master–slave synchronization.
Contents
Part I Deterministic Systems

1 Synchronization of Global Attractors and Individual Trajectories
   1.1 Introduction
   1.2 Preliminaries on Global Attractors
      1.2.1 Basic Notions, Dissipativity, and Asymptotic Compactness
      1.2.2 Global Attractors: Existence and Basic Properties
      1.2.3 Quasi-Stable Systems
      1.2.4 The Gronwall Lemma
   1.3 Coupled Parabolic Problems: Abstract Models
      1.3.1 Model and Hypotheses
      1.3.2 Asymptotic Synchronization of a Fixed Trajectory
      1.3.3 Linear Coupling: Well-Posedness and Global Attractors
      1.3.4 Linear Coupling: Complete Replacement Synchronization
      1.3.5 Linear Coupling: Synchronization of Global Attractors
      1.3.6 Synchronization in Delay Systems
      1.3.7 Synchronization by Means of Finite-Dimensional Coupling
   1.4 Reaction–Diffusion Systems
      1.4.1 Coupling Inside Domains
      1.4.2 Quasi-Stationary Sine-Gordon Model
      1.4.3 A Model from Chemical Kinetics: Cross-Diffusion
      1.4.4 Coupling in the Transmission of Nerve Impulses: Hodgkin–Huxley Model
      1.4.5 Coupling on the Boundary
   1.5 A Case Study: Two-Layer Problem in Thin Domains
      1.5.1 Dynamics for the Fixed Thickness ε
      1.5.2 Limiting Problem
      1.5.3 Thin-Limit Behavior and Asymptotic Synchronization
      1.5.4 Synchronization for Fixed ε > 0
   1.6 Synchronization in Elastic/Wave Structures
      1.6.1 The Abstract Model
      1.6.2 Global Attractors
      1.6.3 Quasi-Stability
      1.6.4 Asymptotic Synchronization
      1.6.5 On Synchronization by Means of Finite-Dimensional Coupling
      1.6.6 Applications

2 Master–Slave Synchronization via Invariant Manifolds
   2.1 Introduction
   2.2 Semilinear Case (Linear Dichotomy)
      2.2.1 Main Hypotheses and Generation of a Dynamical System
      2.2.2 The Basic Idea of the Lyapunov–Perron Method
      2.2.3 Existence of a Synchronization (Invariant) Manifold
      2.2.4 Coupled Parabolic–Hyperbolic System
      2.2.5 Coupled PDE/ODE Systems
      2.2.6 Two Coupled Hyperbolic Systems
      2.2.7 Coupled Klein–Gordon–Schrödinger System
   2.3 Quasilinear Case (Nonlinear Dichotomy)
      2.3.1 Statement of Main Result
      2.3.2 Hadamard Graph Transform Method
      2.3.3 Application: Coupled Parabolic–Hyperbolic System Revised
   2.4 Parabolic–Hyperbolic Systems with Singular Terms and Thermoelasticity
      2.4.1 Abstract Form of the Model
      2.4.2 Generation of a Dynamical System
      2.4.3 Invariant Manifold
      2.4.4 Reduced System
   2.5 Synchronization in Higher Modes and Inertial Manifold

Part II Stochastic Systems

3 Stochastic Synchronization of Random Pullback Attractors
   3.1 Basic Stochastics
      3.1.1 Random Dynamical Systems
      3.1.2 Random Attractors
      3.1.3 Stochastic Convolution
      3.1.4 Order-Preserving RDS
   3.2 Synchronization in Coupled Parabolic Models: Abstract Scheme
   3.3 A Case Study: Stochastic Reaction–Diffusion in a Thin Two-Layer Domain
      3.3.1 Random Dynamics in the Two-Layer Model
      3.3.2 The Statement on Synchronization
      3.3.3 Existence of Random Pullback Attractors
      3.3.4 Limit Transition on Finite Time Intervals
      3.3.5 Upper Semicontinuity of Attractors
      3.3.6 Synchronization for Fixed ε > 0
   3.4 Synchronization in Coupled Stochastic Sine-Gordon Wave Model
      3.4.1 Abstract Model and Main Hypotheses
      3.4.2 Ornstein–Uhlenbeck Processes Generated by Second-Order Equations
      3.4.3 Random Evolution Equation
      3.4.4 Global Random Attractors: Dissipativity
      3.4.5 Global Random Attractors: Quasi-Stability
      3.4.6 Upper Semicontinuity and Synchronization
      3.4.7 Synchronization for Finite Values of the Interaction Parameter
      3.4.8 Synchronization by Means of Finite-Dimensional Coupling
      3.4.9 Applications

4 Master–Slave Synchronization in Random Systems
   4.1 General Idea of the Random Invariant Manifold Method
      4.1.1 Hypotheses and Auxiliary Facts
      4.1.2 Mild Solutions and Generation of an RDS
      4.1.3 Existence of an Invariant Manifold
      4.1.4 The Reduced System
      4.1.5 Distance Between Random and Deterministic Manifolds
      4.1.6 Applications
   4.2 Master–Slave Synchronization for Equations with a Random Linear Part
      4.2.1 Preparations
      4.2.2 The Random Evolution Equation
      4.2.3 The Random Graph Transform

References
Index
Part I
Deterministic Systems
Chapter 1
Synchronization of Global Attractors and Individual Trajectories
1.1 Introduction

In this chapter we mainly discuss phenomena in the synchronization of deterministic dissipative systems at the level of global attractors. This type of synchronization means that all dynamical (phase) components of a coupled system are attracted by limiting structures of the same form. The typical result in this chapter can be illustrated by the following ordinary differential equation (ODE) example. Consider the following version of coupled identical active rotators1

u̇ + γu + a sin u + κ(u − v) = g,    (1.1.1a)
v̇ + γv + a sin v + κ(v − u) = g, t > 0,    (1.1.1b)
where γ, a > 0 and g ∈ R are fixed parameters, and the parameter κ ≥ 0 describes the intensity of the interaction of the rotators. One can see from the considerations below that the problem in (1.1.1) generates a dynamical system on the plane X = R². This system is a gradient system (see Definition 1.2.8 below) and possesses a global attractor Aκ, which is then a strictly invariant uniformly attracting set in X (see Definition 1.2.5). It follows from the results presented below that in the absence of the interaction (κ = 0), this attractor is the square

A0 = Q := [ψmin, ψmax] × [ψmin, ψmax] ⊂ X = R²,

where

ψmin = min{ψ ∈ R : γψ + a sin ψ = g}

1 We refer the reader to Osipov/Kurths/Zhou [126] and Pikovsky/Rosenblum/Kurths [130] for more information concerning this model.
and

ψmax = max{ψ ∈ R : γψ + a sin ψ = g}.

These ψmin ≤ ψmax exist because γ > 0. On the other hand, it follows from Theorem 1.3.20 below that for all κ large enough the attractor Aκ is the diagonal of the square Q, i.e., there exists κ∗ > 0 such that

Aκ = {(ψ, ψ) : ψ ∈ [ψmin, ψmax]}, κ ≥ κ∗.    (1.1.2)
This structure of Aκ is exactly what we call the synchronization (of the rotators u and v) at the level of global attractors:

• in the absence of interaction (κ = 0), each rotator demonstrates independent dynamics, and the global attractor for (1.1.1) with κ = 0 is a direct product of two sets;
• if κ is large enough, then the dynamics of the rotators is completely correlated; it follows from (1.1.2) that |u(t) − v(t)| → 0 as t → ∞ for every pair of initial data (u(0), v(0)).

A similar effect can be seen when the rotators have different values of the parameters γ, a. However, in this case the diagonal structure of the attractor Aκ arises in the limit κ → ∞ only. The main goal in this chapter is to present some results and methods concerning synchronization at the level of global attractors. We deal with both parabolic and hyperbolic semilinear systems. The parabolic systems describe synchronization phenomena in reaction–diffusion processes. The hyperbolic models are assigned to interacting elastic/wave structures. The approach applied requires two issues to be investigated. The first issue is the existence of a global attractor. The second issue is the dependence of the attractors on the parameters that describe the interaction between the subsystems.
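The following minimal numerical sketch illustrates this effect for (1.1.1); the parameter values, the initial data, and the coupling strengths used below are illustrative choices and are not taken from the text.

# Minimal numerical sketch of the coupled active rotators (1.1.1):
#   u' + gamma*u + a*sin(u) + kappa*(u - v) = g
#   v' + gamma*v + a*sin(v) + kappa*(v - u) = g
# Hypothetical parameters and initial data chosen so that, without coupling,
# the two rotators settle to different equilibria of gamma*psi + a*sin(psi) = g.
import numpy as np
from scipy.integrate import solve_ivp

gamma, a, g = 0.5, 2.0, 1.0

def rotators(t, y, kappa):
    u, v = y
    du = g - gamma * u - a * np.sin(u) - kappa * (u - v)
    dv = g - gamma * v - a * np.sin(v) - kappa * (v - u)
    return [du, dv]

y0 = [4.5, 0.0]
for kappa in (0.0, 5.0):
    sol = solve_ivp(rotators, (0.0, 40.0), y0, args=(kappa,), rtol=1e-8, atol=1e-10)
    u_end, v_end = sol.y[0, -1], sol.y[1, -1]
    print(f"kappa = {kappa}: |u(40) - v(40)| = {abs(u_end - v_end):.2e}")

# Expected behavior: for kappa = 0 the two components converge to different
# stationary points, so the difference stays of order one; for large kappa the
# difference u - v decays to zero, in line with the diagonal structure in (1.1.2).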
1.2 Preliminaries on Global Attractors

We recall several basic notions and facts related to the study of the long-time behavior of dissipative dynamical systems. For a detailed general discussion of the long-time behavior of systems with dissipation, we refer the reader to one of the monographs Babin/Vishik [7], Chepyzhov/Vishik [28], Chueshov [36, 40], Hale [85], Ladyzhenskaya [103], Robinson [138], Sell/You [138], Temam [161].
1.2.1 Basic Notions, Dissipativity, and Asymptotic Compactness

By definition, a dynamical system (with continuous time) is a pair of objects: a complete metric space X and a family of continuous mappings φ = {φ(t, ·) : X → X : t ∈ R+} of X fulfilling the semigroup properties:

φ(0, ·) = idX,  φ(t + τ, ·) = φ(t, ·) ∘ φ(τ, ·), t, τ ≥ 0.
We also assume that y(t) = φ(t, y0) is continuous with respect to t for any y0 ∈ X. The complete metric space X is called a phase space (or state space) and φ is often called a semigroup. The standard example of a dynamical system is provided by a well-posed autonomous differential evolution equation of the form

ẋ = F(x), t > 0,  x|_{t=0} = x0,    (1.2.1)

in some Banach space X. In this case, the dynamical system is defined by the relation φ(t, x0) = x(t), where x(t) solves (1.2.1) uniquely on R+.

A set ∅ ≠ D ⊂ X is said to be forward (or positively) invariant if φ(t, D) ⊂ D for all t ≥ 0. It is backward (or negatively) invariant if φ(t, D) ⊃ D for all t ≥ 0. The set D is said to be invariant if it is both forward and backward invariant; that is, φ(t, D) = D for all t ≥ 0. The set

γDt := ∪_{τ≥t} φ(τ, D)
is called the tail (from the moment t) of the trajectories emanating from D. It is clear that γDt = γφ(t,D)0. If D = {v} is a single element set, then γv+ := γD0 is said to be a positive semitrajectory (or semiorbit) emanating from v. A continuous curve γ := {u(t) : t ∈ R} in X is said to be a full trajectory if φ(t, u(τ)) = u(t + τ) for any τ ∈ R and t ≥ 0. Because φ(t, ·) is not necessarily an invertible operator, a full trajectory may not exist. Semitrajectories are forward invariant sets. Full trajectories are invariant sets.

The concept of an ω-limit set is important in the study of global dynamics. The set

ω(D) := ∩_{t>0} cl(γDt) = ∩_{t>0} cl(∪_{τ≥t} φ(τ, D)),

where cl(·) denotes the closure in X,
is called the ω-limit set of the trajectories emanating from D. It is equivalent (see, for example, Temam [161] or Chueshov [40]) to saying that x ∈ ω(D) if and only if there exist sequences tn → +∞ and xn ∈ D such that φ(tn, xn) → x as n → ∞. It is clear that ω-limit sets (if they exist) are forward invariant.

Definition 1.2.1. Let φ be a dynamical system on a complete metric space X.

• A closed set B ⊂ X is said to be absorbing for φ if for any bounded set D ⊂ X there exists t0(D) such that φ(t, D) ⊂ B for all t ≥ t0(D).
• φ is said to be (bounded) dissipative2 if it possesses a bounded absorbing set B. If the phase space X is a Banach space, then the radius of a ball containing an absorbing set is called a radius of dissipativity.
• φ is said to be point dissipative if there exists a bounded set B0 ⊂ X such that for any x ∈ X there is t0(x) such that φ(t, x) ∈ B0 for all t ≥ t0(x).

Theorem 1.2.2 (Criterion of Dissipativity). Let φ be a continuous dynamical system in some Banach space X. Assume that:

• There exists a continuous function U(x) on X possessing the properties
φ1(‖x‖) ≤ U(x) ≤ φ2(‖x‖), ∀ x ∈ X,

where φi are continuous functions on R+ such that φi(r) → +∞ as r → +∞;
• There exists the derivative (d/dt)U(φ(t, y)) for every t > 0 and y ∈ X, and there are positive numbers α and ρ such that

(d/dt) U(φ(t, y)) ≤ −α provided ‖φ(t, y)‖ > ρ.

Then the dynamical system (X, φ) is dissipative with an absorbing set of the form

B∗ = {x ∈ X : ‖x‖ ≤ R∗},    (1.2.2)

where the constant R∗ depends on the functions φ1 and φ2 and the constant ρ only.

Proof. This is an application of the Lyapunov function method along with the barrier technique; see Chueshov [36] or [40], for instance.

Example 1.2.3 (Dissipative System). With reference to (1.2.1), assume that X is a Hilbert space and F satisfies the coercivity condition

∃ α > 0, β ≥ 0 :  2(x, F(x)) ≤ −α‖x‖² + β, ∀ x ∈ X.

Then the system generated by (1.2.1) is dissipative with the absorbing set (1.2.2), where R∗ is any number greater than √(β/α).

All studies of long-time dynamics of infinite-dimensional systems require some compactness properties. We give the following definition.

Definition 1.2.4. Let φ be a dynamical system in a complete metric space X.

• φ is said to be compact if it is dissipative and there is a compact absorbing set.
• φ is said to be conditionally compact if for every forward invariant bounded set D there exists a compact set KD such that φ(t, D) ⊂ KD for all t large enough.
2 From now on, we use "dissipative" for short.
• φ is said to be asymptotically compact if the following Ladyzhenskaya condition (see Ladyzhenskaya [103] and the references therein) holds: for any bounded set B in X such that the tail γBτ is bounded for some τ ≥ 0, we have that any sequence of the form {φ(tn, xn)} with xn ∈ B and tn → ∞ is relatively compact.

It is easy to see that any compact or conditionally compact system is asymptotically compact. Moreover, one can show that asymptotic compactness is equivalent to asymptotic smoothness (see Raugel [135] or Chueshov [40]). We recall that a dynamical system φ is said to be asymptotically smooth if the following condition (see, for example, Hale [85]) is valid: for every bounded forward invariant set D there exists a compact set K in the closure cl(D) of D such that φ(t, D) converges uniformly to K in the sense that

lim_{t→+∞} dX(φ(t, D), K) = 0,

where

dX(A, B) = sup_{x∈A} dX({x}, B) := sup_{x∈A} dX(x, B) = sup_{x∈A} inf_{y∈B} dX(x, y).
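As a tiny illustration of the asymmetry of this semidistance (the sets and the sampling below are arbitrary illustrative choices, not taken from the text):

# Hausdorff semidistance d_X(A, B) = sup_{x in A} inf_{y in B} |x - y| on finite
# samples of subsets of R; note that it is not symmetric in A and B.
import numpy as np

def semidist(A, B):
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    return float(np.max(np.min(np.abs(A[:, None] - B[None, :]), axis=1)))

A = np.linspace(0.0, 1.0, 101)   # a sample of the interval [0, 1]
B = np.array([1.0])              # the single point {1}
print(semidist(A, B))  # 1.0: the interval [0, 1] does not lie "inside" {1}
print(semidist(B, A))  # 0.0: the point {1} lies inside [0, 1]

This asymmetry is what distinguishes upper from lower semicontinuity of attractors in Example 1.2.16 below.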
If the phase space is locally compact (e.g., X = Rd), then all compactness properties mentioned above follow from dissipativity. There are many criteria for asymptotic compactness/smoothness. We refer the reader to the monographs Babin/Vishik [7], Chepyzhov/Vishik [28], Chueshov [36, 40], Hale [85], Ladyzhenskaya [103], Robinson [138], Temam [161]; see also the discussions in Chueshov/Lasiecka [46, 47] and the references cited therein.
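Returning to Example 1.2.3, a short computation (a sketch using the differential form of the Gronwall lemma) indicates where the bound on the radius of dissipativity comes from. Take U(x) = ‖x‖²; then along a solution x(t) of (1.2.1),

\[
\frac{d}{dt}\|x(t)\|^{2} = 2\bigl(x(t), F(x(t))\bigr) \le -\alpha\|x(t)\|^{2} + \beta,
\qquad\text{hence}\qquad
\|x(t)\|^{2} \le e^{-\alpha t}\|x_{0}\|^{2} + \frac{\beta}{\alpha}\bigl(1 - e^{-\alpha t}\bigr).
\]

Consequently, for any bounded set of initial data the right-hand side drops below R∗² in finite time whenever R∗ > √(β/α), so the ball {x ∈ X : ‖x‖ ≤ R∗} is absorbing.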
1.2.2 Global Attractors: Existence and Basic Properties

The main objects arising in the analysis of the long-time behavior of dissipative dynamical systems are attractors. Several definitions of an attractor are available (see, for example, Chueshov [36, Section 1.3] for a number of discussions). From the point of view of infinite-dimensional systems the most convenient concept is a global attractor.

Definition 1.2.5 (Global Attractor). Let φ be a dynamical system in a complete metric space X. A compact set ∅ ≠ A ⊂ X is said to be a global attractor for φ if

(i) A is an invariant set; that is, φ(t, A) = A for t ≥ 0.
(ii) A attracts bounded sets; that is,

lim_{t→+∞} dX(φ(t, D), A) = 0 for every bounded set D ⊂ X.
The standard basic result on the existence of a global attractor is the following assertion (for the proof see one of the sources mentioned above).

Theorem 1.2.6 (Existence of a Global Attractor). Let φ be a dissipative asymptotically compact dynamical system on a complete metric space X. Then, φ possesses a unique global attractor A such that

A = ω(B0) = ∩_{t>0} cl(∪_{τ≥t} φ(τ, B0))

for every bounded absorbing set B0 and

lim_{t→+∞} (dX(φ(t, B0), A) + dX(A, φ(t, B0))) = 0.
If we assume that X is a connected space, then A is connected.

The study of the structure of global attractors is an important problem from the point of view of applications. There do not exist universal approaches to this problem. It is well known that even in finite-dimensional cases a global attractor can possess an extremely complicated structure. However, some sets that belong to the global attractor can be easily pointed out. For example, every stationary point and every bounded full trajectory belong to the global attractor. The global attractor may contain some unstable motions, which can be introduced by the following definition (see, for example, Babin/Vishik [7] or Temam [161]).

Definition 1.2.7 (Unstable Manifold). Let N be the set of stationary points of the dynamical system φ:

N = {v ∈ X : φ(t, v) = v for all t ≥ 0}.

We define the unstable manifold Mu(N) emanating from the set N as the set of all y ∈ X such that there exists a full trajectory γ = {u(t) : t ∈ R} with the properties u(0) = y and

lim_{t→−∞} dX(u(t), N) = 0.
If the system is dissipative, then the unstable manifold consists of bounded full trajectories γ . Thus, if the global attractor A exists, then M u (N ) ⊆ A. Indeed, full trajectories are invariant with respect to φ and hence
ε > dX(φ(t, γ), A) = dX(γ, A),

where the first inequality holds for t > T(γ, ε). We can choose an arbitrary positive ε and hence dX(γ, A) = 0, which gives the conclusion.

Now we consider the so-called gradient systems. They can be introduced by the following definition (see, for example, Babin/Vishik [7] or Temam [161]).
Definition 1.2.8 (Lyapunov Function and Gradient System). Let Y ⊆ X be a forward invariant set of a dynamical system φ.

• The continuous functional Φ(y) defined on Y is said to be the Lyapunov function for the dynamical system φ on Y if the function t → Φ(φ(t, y)) is a nonincreasing function for any y ∈ Y.
• The Lyapunov function Φ(y) is said to be strict on Y if the equation Φ(φ(t, y)) = Φ(y) for all t > 0 and for some y ∈ Y implies that φ(t, y) = y for all t > 0; that is, y is a stationary point of φ.
• The dynamical system φ is said to be gradient if there exists a strict Lyapunov function for φ defined on the whole phase space X.
Example 1.2.9 (Gradient System). With reference to the system generated by (1.2.1), assume that X = RN and F : RN → RN is a gradient of some smooth function Φ(x), i.e., F(x) = −∇Φ(x). Then, Φ(x) is a strict Lyapunov function and thus the system is gradient.

The following result on the structure of a global attractor is known from many sources (see the references above).

Theorem 1.2.10 (Structure of Global Attractors). Let a dynamical system φ possess a global attractor A. Assume that there exists a strict Lyapunov function on A. Then, A = Mu(N), where Mu(N) denotes the unstable manifold emanating from the set N of stationary points. Moreover, the global attractor A consists of full trajectories γ = {u(t) : t ∈ R} such that

lim_{t→−∞} dX(u(t), N) = 0  and  lim_{t→+∞} dX(u(t), N) = 0.    (1.2.3)
The following criterion for the existence of a global attractor for gradient systems (see, for example, Raugel [135, Theorem 4.6]) is useful in many applications.

Theorem 1.2.11. Let φ be an asymptotically compact gradient system that has the property that for any bounded set D ⊂ X there exists τ > 0 such that the tail γτ(D) := ∪_{t≥τ} φ(t, D) is bounded. If the set N of stationary points is bounded, then φ has a global attractor A for which the statement of Theorem 1.2.10 holds.

Using Theorem 1.2.11 we can obtain the following assertion (see Chueshov/Lasiecka [46, Corollary 2.29]).

Theorem 1.2.12 (Global Attractors for Gradient Systems). Assume that φ is a gradient asymptotically compact dynamical system. Assume its Lyapunov function Φ(x) is bounded from above on any bounded subset of X and the set ΦR = {x : Φ(x) ≤ R} is bounded for every R. If the set N of stationary points of φ is bounded, then φ possesses the global attractor A given by Mu(N). Moreover,

• A consists of full trajectories γ = {u(t) : t ∈ R} such that (1.2.3) holds.
• Every trajectory stabilizes to the set N of stationary points, i.e.,

lim_{t→+∞} dX(φ(t, x), N) = 0, ∀ x ∈ X.
• If N = {z1, . . . , zn} is a finite set, then A = ∪_{i=1}^n Mu(zi), where Mu(zi) is the unstable manifold of the stationary point zi, and also

(i) The global attractor A consists of full trajectories γ = {u(t) : t ∈ R} connecting pairs of stationary points: any u ∈ A belongs to some full trajectory γ, and for any γ ⊂ A there exists a pair {z, z∗} ⊂ N such that u(t) → z as t → −∞ and u(t) → z∗ as t → +∞.
(ii) For any v ∈ X there exists a stationary point z such that φ(t, v) → z as t → +∞.

The following observation is important for the search for optimal bounds for a global attractor.

Remark 1.2.13 (Bounds for Attractors). It follows from the first equality in (1.2.3) that under the hypotheses of Theorem 1.2.10 the following relation is valid:

sup{Φ(u) : u ∈ A} ≤ sup{Φ(u) : u ∈ N},    (1.2.4)
where Φ (u) is the corresponding Lyapunov function. If Φ (u) topologically dominates3 the metric of the phase space X, then the inequality in (1.2.4) can be used in order to provide an upper bound for the size of the attractor and an absorbing ball. This observation can be applied to obtain uniform (with respect to the parameters of the problem) bounds for the attractor, which can be important for the localization of limiting objects. We apply this idea in the study of synchronization phenomena dealing with the stability of attractors with respect to coupling parameters. To describe the stability of attractors with respect to parameters at the abstract level we consider a family of dynamical systems φ λ with the same phase space X depending on a parameter λ from a complete metric space Λ . The following assertion is proved in Kapitansky/Kostin [98] (see also Babin/Vishik [7], Hale [85] and Chueshov [36]). Theorem 1.2.14 (Upper Semicontinuity of Attractors). Assume that a dynamical system φ λ in a complete metric space X possesses a global attractor Aλ for every λ ∈ Λ . Assume that the following conditions hold. (i) There exists a compact K ⊂ X such that Aλ ⊂ K for all λ ∈ Λ .
³ For instance, in the case of second order in time models the functional Φ(u) can be the energy of the corresponding system. In many cases this functional provides a bound from above for the corresponding free energy. For a wide class of second-order models the square root of this free energy gives the norm in the phase space (see Sect. 1.6 for some details).
(ii) If λ_k → λ_0, x_k → x_0 and x_k ∈ A_{λ_k}, then

φ^{λ_k}(τ, x_k) → φ^{λ_0}(τ, x_0) for some τ > 0.   (1.2.5)

Then the family {A_λ} of attractors is upper semicontinuous at the point λ_0; that is,

d_X(A_λ, A_{λ_0}) → 0 as λ → λ_0.   (1.2.6)

Moreover, if (1.2.5) holds for every τ > 0, then the upper limit A(λ_0, Λ) of the attractors A_λ at λ_0, defined by the formula

A(λ_0, Λ) = \bigcap_{δ>0} \overline{\bigcup\{A_λ : λ ∈ Λ, 0 < d_Λ(λ, λ_0) < δ\}},   (1.2.7)
is a nonempty compact strictly invariant set lying in the attractor A_{λ_0} and possessing the property d_X(A_λ, A(λ_0, Λ)) → 0 as λ → λ_0.

If we assume a stronger convergence property of the dynamical systems than is assumed in (1.2.5), then we can avoid the compactness hypothesis in (i). Namely, we can prove (see Robinson [138]) the following assertion.

Proposition 1.2.15. Let X be a complete metric space and φ^λ be a family of dynamical systems on X possessing global attractors A_λ for λ ∈ Λ. Assume that
• The attractors A_λ are uniformly bounded, i.e., there exists a bounded set B_0 such that A_λ ⊂ B_0;
• There exists t_0 ≥ 0 such that φ^λ(t, x) → φ^{λ_0}(t, x) as λ → λ_0 for each t ≥ t_0 uniformly with respect to x ∈ B_0, i.e.,

sup_{x∈B_0} d_X(φ^λ(t, x), φ^{λ_0}(t, x)) → 0 as λ → λ_0.
Then, the family {A_λ} of global attractors is upper semicontinuous at the point λ_0, i.e., the relation in (1.2.6) holds.

Example 1.2.16 (Upper Semicontinuity). This example was suggested in Raugel [135] (see also Chueshov [40, Section 2.3.4]). Consider a dynamical system generated in R by the equation

φ̇ = (1 − φ)(φ² − λ), t > 0.

We take λ ∈ Λ = [−1, 1]. One can see that the global attractor A_λ has the form

A_λ = [−√λ, 1] for λ ≥ 0;  A_λ = {1} for λ < 0.

This attractor is upper semicontinuous for every λ_0 ∈ Λ, i.e., d_X(A_{λ_k}, A_{λ_0}) → 0 as λ_k → λ_0. However, d_X(A_0, A_{λ_k}) = 1 as λ_k → −0, which means that A_λ is not (fully)
continuous at λ = 0. Moreover, A(0, [−1, 0]) = {1} ≠ A_0, where A(0, [−1, 0]) is the upper limit defined according to (1.2.7). We also note that in order to prove lower semicontinuity in the framework of Theorem 1.2.14, we need to impose additional hypotheses on the system (see the discussion in Chueshov [40, Section 2.3.4]).
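The structure of A_λ in this example is easy to confirm numerically. The following short Python sketch is our own illustration (it is not part of the sources cited above; the time step, horizon and initial data are arbitrary choices): it integrates the equation by the explicit Euler method for λ = 0.25 and λ = −0.25 and prints the limit points of the trajectories, which accumulate on {−√λ, 1} ⊂ A_λ in the first case and on {1} = A_λ in the second.

import numpy as np

def rhs(phi, lam):
    # right-hand side of the scalar equation in Example 1.2.16
    return (1.0 - phi) * (phi**2 - lam)

dt, T = 1.0e-3, 100.0
for lam in (0.25, -0.25):
    limits = set()
    for phi0 in np.linspace(-0.9, 1.5, 7):
        phi = phi0
        for _ in range(int(T / dt)):
            phi += dt * rhs(phi, lam)
        limits.add(round(phi, 3))
    print("lambda = %+.2f : limit points %s" % (lam, sorted(limits)))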
1.2.3 Quasi-Stable Systems

We conclude these preliminary considerations with a short description of the idea of the quasi-stability method in dissipative long-time dynamics. We refer the reader to Chueshov [40] for a presentation of this method.

Definition 1.2.17 (Quasi-Stability). Let φ be a dynamical system on some Banach space X. This system is said to be quasi-stable (at some time) on a set B ⊂ X if there exist (a) a Banach space Z, (b) a globally Lipschitz mapping K : X → Z, and (c) a compact seminorm⁴ n_Z(·) on the space Z, such that

‖φ(t_*, y_1) − φ(t_*, y_2)‖_X ≤ q‖y_1 − y_2‖_X + n_Z(Ky_1 − Ky_2)   (1.2.8)

for every y_1, y_2 ∈ B with some 0 ≤ q < 1 and some t_* > 0. We emphasize that the space Z, the operator K, the seminorm n_Z and the moment t_* may depend on B.

As was already mentioned (see, for example, Chueshov [40] or Chueshov/Lasiecka [49] and the references therein), the definition of quasi-stability is rather natural from the point of view of long-time behavior. It pertains to the decomposition of the flow into exponentially stable and compact parts. This represents some sort of analogy with the "splitting" method in Babin/Vishik [7] or Temam [161]; however, the decomposition refers to the difference of two trajectories rather than to a single trajectory. It is worth mentioning that in the degenerate case when n_Z ≡ 0, the relation in (1.2.8) becomes

‖φ(t_*, y_1) − φ(t_*, y_2)‖_X ≤ q‖y_1 − y_2‖_X for every y_1, y_2 ∈ B.

If B is a forward invariant set, then this implies that the semigroup φ possesses a unique exponentially attracting fixed point in the closure B̄ (see Chueshov [40] for details). Illustrations and a survey of the quasi-stability method can be found in Chueshov [40] (see also the references in this source).

⁴ We recall that a seminorm n(x) on a Banach space X is said to be compact if any bounded sequence {x_m} ⊂ X contains a subsequence {x_{m_k}} which is Cauchy with respect to n, i.e., n(x_{m_k} − x_{m_l}) → 0 as k, l → ∞.
A sufficient condition for quasi-stability that can be applied in the case of parabolic (compact) systems can be formulated as follows.

Proposition 1.2.18. Let φ be a dynamical system on some Banach space X and X_0 be another Banach space that is compactly embedded in X. If for some set B ⊂ X there exists t_* > 0 such that φ(t_*, B) ⊂ X_0 and for some constant C > 0 we have

‖φ(t_*, y_1) − φ(t_*, y_2)‖_{X_0} ≤ C‖y_1 − y_2‖_X for every y_1, y_2 ∈ B,

then the system φ is quasi-stable on B at time t_*.

Proof. We take q = 0, Z = X_0 and n_Z(x) = ‖Jx‖_X, where J is the embedding operator of X_0 into X.

The quasi-stability property implies the existence of a global attractor; see, for example, Chueshov [40] for details.

Theorem 1.2.19 (Global Attractor). Let a dynamical system φ be quasi-stable on every bounded forward invariant set B in X at some moment in time. Then φ is asymptotically smooth. Moreover, if the system φ is dissipative, then it possesses a global attractor.

Quasi-stability also implies finite-dimensionality of the global attractor. We first recall the following definition.

Definition 1.2.20 (Fractal Dimension). Let M be a compact set in a metric space X. The fractal dimension dim_f M of M is defined by

dim_f M = lim sup_{ε→0} [ln n(M, ε) / ln(1/ε)],

where n(M, ε) is the minimal number of closed balls of radius ε that cover the set M.

We can also consider the Hausdorff dimension dim_H to describe complexity and embedding properties of compact sets. We do not give a formal definition of this dimension characteristic (see, for example, Falconer [71] for some details and references) and only note that (i) the Hausdorff dimension does not exceed (but is not equal to, in general) the fractal one, and (ii) the fractal dimension is more convenient in calculations.

The following assertion was proved in Chueshov [40].

Theorem 1.2.21 (Finite-Dimensional Attractor). Assume that a system φ possesses a global attractor A. In addition, φ is supposed to be quasi-stable on A (see Definition 1.2.17). Then the attractor A has a finite fractal dimension dim_f A in X. Moreover, we have the estimate

dim_f A ≤ [ln(2/(1+q))]^{−1} · ln m_Z(4L_K/(1−q)),
where L_K > 0 is the Lipschitz constant for K,

‖K(v_1) − K(v_2)‖_Z ≤ L_K ‖v_1 − v_2‖_X, v_1, v_2 ∈ A

(for the definition of the mapping K see Definition 1.2.17), and m_Z(R) is the maximal number of elements z_i in the ball {z ∈ Z : ‖z‖_Z ≤ R} possessing the property n_Z(z_i − z_j) > 1 when i ≠ j.

In the case of second order in time models the following form of quasi-stability is useful (see Chueshov [40] and also Chueshov/Lasiecka [47, Section 7.9]). Let X and Y be reflexive Banach spaces with X compactly embedded in Y. We endow the space H = X × Y with the norm

‖y‖²_H = ‖u_0‖²_X + ‖u_1‖²_Y,  y = (u_0, u_1).

Assume that φ is a dynamical system of the form

φ(t, y) = (u(t), u_t(t)),  y = (u_0, u_1) ∈ H,   (1.2.9)

where the function u(t) possesses the property u ∈ C(R_+, X) ∩ C¹(R_+, Y). A dynamical system φ on H given by (1.2.9) is said to be asymptotically quasi-stable on a set B ⊂ H if there exist a compact seminorm μ_X(·) on the space X and nonnegative scalar functions a(t), b(t), and c(t) on R_+ such that (i) a(t) and c(t) are locally bounded on [0, ∞), (ii) b(t) ∈ L¹(R_+) possesses the property lim_{t→∞} b(t) = 0, and (iii) for every y_1, y_2 ∈ B and t > 0 the relations

‖φ(t, y_1) − φ(t, y_2)‖²_H ≤ a(t)‖y_1 − y_2‖²_H

and

‖φ(t, y_1) − φ(t, y_2)‖²_H ≤ b(t)‖y_1 − y_2‖²_H + c(t) sup_{0≤s≤t} [μ_X(u_1(s) − u_2(s))]²   (1.2.10)

hold. Here we denote φ(t, y_i) = (u_i(t), u_t^i(t)), i = 1, 2. We have the following assertion (for the proof we refer the reader to Chueshov [40, Section 3.4.3]).

Theorem 1.2.22 (Global Attractor). Assume that the system φ on H is dissipative and asymptotically quasi-stable on a bounded forward invariant absorbing set B in H. Then the system φ possesses a global attractor A of finite fractal dimension. Moreover, if we assume that (1.2.10) holds with the function c(t) ≥ 0 possessing the property c_∞ = sup_{t∈R_+} c(t) < ∞, then any full trajectory {(u(t), u_t(t)) : t ∈ R} that belongs to the global attractor enjoys the regularity properties

u_t ∈ L^∞(R, X) ∩ C(R, Y),  u_tt ∈ L^∞(R, Y).
There also exists R > 0 such that

‖u_t(t)‖²_X + ‖u_tt(t)‖²_Y ≤ R²,  t ∈ R,

where R depends on the constant c_∞, on the seminorm μ_X, and also on the embedding properties of X into Y. We refer the reader to Chueshov [40] and Chueshov/Lasiecka [46, 49] for other criteria of finite-dimensionality based on quasi-stability. We also refer to Babin/Vishik [7], Chepyzhov/Vishik [28], Temam [161] for an approach to dimension based on the volume contraction method, which can be applied in the case of C¹ systems.
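As a computational counterpart to Definition 1.2.20 (our own illustration, not part of the quasi-stability theory above; the construction depth and the range of scales are arbitrary choices), the following Python sketch estimates the fractal dimension of the middle-third Cantor set by counting boxes of size ε = 3^{−k} and fitting the slope of ln n(M, ε) against ln(1/ε); the result is close to ln 2/ln 3 ≈ 0.631.

import numpy as np

def cantor_points(depth):
    # interval endpoints after `depth` steps of the middle-third construction; all lie in the Cantor set
    pts = np.array([0.0, 1.0])
    for _ in range(depth):
        pts = np.concatenate((pts / 3.0, pts / 3.0 + 2.0 / 3.0))
    return np.unique(pts)

M = cantor_points(12)
eps = np.array([3.0 ** (-k) for k in range(2, 9)])
counts = [len(np.unique(np.floor(M / e + 1e-9))) for e in eps]   # number of boxes of size e meeting M
slope = np.polyfit(np.log(1.0 / eps), np.log(counts), 1)[0]
print("box-counting slope:", round(float(slope), 3), "   ln 2 / ln 3 =", round(np.log(2.0) / np.log(3.0), 3))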
1.2.4 The Gronwall Lemma

In the following we frequently apply the Gronwall lemma. We refer the reader to the version presented by Henry [92, Chapter 7] and Sell and You [149, p. 625].

Theorem 1.2.23. Suppose that b ≥ 0, β > 0 and that a, w are locally integrable positive functions on [0, T) or [0, ∞). We assume that

w(t) ≤ a(t) + b ∫_0^t (t − r)^{β−1} w(r) dr.

Then

w(t) ≤ a(t) + l ∫_0^t E′_β(l(t − s)) a(s) ds

for t ∈ [0, T) or [0, ∞), where l = (bΓ(β))^{1/β} and E′_β is the derivative of the Mittag-Leffler function E_β. In particular, we have

E_β(z) = ∑_{n=0}^∞ z^{nβ}/Γ(nβ + 1),  E_β(z) ∼ (1/β) e^z,  E′_β(z) ∼ (1/β) e^z  for z → ∞.
Another version of the Gronwall lemma is available for β = 1.

Theorem 1.2.24. Suppose that b ∈ L¹(0, T), b(t) ≥ 0. If w is continuous, a is continuously differentiable and

w(t) ≤ a(t) + ∫_0^t b(r) w(r) dr,

then we have

w(t) ≤ e^{∫_0^t b(r) dr} ( a(0) + ∫_0^t a′(r) e^{−∫_0^r b(q) dq} dr ) = a(t) + ∫_0^t a(τ) b(τ) e^{∫_τ^t b(q) dq} dτ,

where for the latter equality the differentiability of a is not necessary. If, in addition, w(t) ≥ 0, a ≡ 0 and b = const, then w ≡ 0 (see Wloka [167, Chapter 29]).
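As a quick sanity check of Theorem 1.2.24 (our own numerical illustration; the choice a ≡ 1, b ≡ 2 is arbitrary), note that w(t) = e^{2t} satisfies the integral inequality with equality, and the Gronwall bound a(t) + ∫_0^t a(τ)b(τ)e^{∫_τ^t b(q) dq} dτ then reproduces e^{2t}. The Python sketch below verifies on a grid that w never exceeds the numerically evaluated bound.

import numpy as np

t = np.linspace(0.0, 2.0, 401)
dt = t[1] - t[0]
b = 2.0
a = np.ones_like(t)                 # a(t) = 1
w = np.exp(b * t)                   # satisfies w(t) = 1 + int_0^t b w(r) dr exactly
bound = np.empty_like(t)
for i, ti in enumerate(t):
    tau = t[:i]                     # left Riemann sum of a(tau) b exp(int_tau^ti b dq)
    bound[i] = a[i] + dt * np.sum(b * np.exp(b * (ti - tau)))
print("max of w(t) - bound(t) over the grid:", float(np.max(w - bound)))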
1.3 Coupled Parabolic Problems: Abstract Models

One of the important classes of objects in synchronization theory is that of chemical oscillators, which can be described within the framework of chemical kinetics (see, for example, Osipov/Kurths/Zhou [126]). Modern theoretical chemical kinetics is based on reaction–diffusion partial differential equation (PDE) systems, which are of parabolic type. In this section we concentrate on abstract models for these systems. Applications to different classes of parabolic PDEs are given later. We also mention Shen/Zhou/Han [150].

In the purely parabolic case there are two main approaches to synchronization presented in the literature. One of them (see, for example, Hale [86] for coupled ODE models on finite lattices) requires a large diffusive-type coupling with an appropriate spectral gap near the bottom of the spectrum. The other is based on the construction of certain Lyapunov-type functions (see Carvalho/Rodrigues/Dlotko [25] and Rodrigues [140]), which allows us to obtain appropriate bounds for the global attractors and to apply results on their upper semicontinuity. In this chapter we concentrate mainly on the second approach. The idea of the former approach is partially included in Chap. 2, which is devoted to master–slave synchronization.
1.3.1 Model and Hypotheses

To explain the main idea of our approach to synchronization in the parabolic case we consider the following system of differential equations

U_t + Ã_1 U = F_1(U, V), t > 0, in X_1,
V_t + Ã_2 V = F_2(U, V), t > 0, in X_2,   (1.3.1)
and impose the following hypotheses.

Assumption 1.3.1. Let X_1 and X_2 be two separable Hilbert spaces. We assume that
1. Ã_1 and Ã_2 are strongly positive linear operators in X_1 and X_2 with domains D(Ã_i) ⊂ X_i; strongly positive means that inf spec(Ã_i) > 0. We define the spaces X_i^σ, which are D(Ã_i^σ) for σ ≥ 0 (the domain of Ã_i^σ) and the completions of X_i
with respect to the norm ‖Ã_i^σ · ‖_{X_i} when σ < 0. Here and below, ‖·‖_{X_i} is the norm of X_i and (·,·)_{X_i} is the corresponding scalar product. We also write ‖·‖ = ‖·‖_{X_i} (and similarly for the inner product) when no confusion arises.
2. F_1 and F_2 are nonlinear locally Lipschitz mappings,

F_1 : X_1^α × X_2^α → X_1^{−β},  F_2 : X_1^α × X_2^α → X_2^{−β},

for some α, β ≥ 0 such that α + β < 1. This means that for every ρ > 0 there exist constants L_1(ρ) and L_2(ρ) such that

‖F_1(U_1, V_1) − F_1(U_2, V_2)‖_{X_1^{−β}} ≤ L_1(ρ) (‖U_1 − U_2‖²_{X_1^α} + ‖V_1 − V_2‖²_{X_2^α})^{1/2},
‖F_2(U_1, V_1) − F_2(U_2, V_2)‖_{X_2^{−β}} ≤ L_2(ρ) (‖U_1 − U_2‖²_{X_1^α} + ‖V_1 − V_2‖²_{X_2^α})^{1/2}   (1.3.2)

for all ‖U_i‖²_{X_1^α} + ‖V_i‖²_{X_2^α} ≤ ρ², i = 1, 2.
The main examples (see Sect. 1.4 below) of the operators Ã_1 and Ã_2 are elliptic differential operators on domains (bounded or not) with self-adjoint boundary conditions. For instance, we can take Ã_i = −Δ on some domain O_i in R^{d_i} with homogeneous Dirichlet boundary conditions. The space X_i is then L²(O_i), and the spaces X_i^σ are subspaces of the Sobolev–Slobodeckij spaces H^{2σ}(O_i), σ ∈ R_+. We refer the reader to Chueshov [36, Section 2.1] and Chueshov [40, Section 4.1] for details on the spaces X_i^σ in the case when Ã_i has a compact resolvent.

The parameters α and β are responsible for the regularity of the nonlinear coupling terms. We take them the same for both F_1 and F_2 for the sake of simplicity; in principle, they can be different. We also note that the case when β is positive is motivated by the desire to include boundary coupling into consideration (see Sect. 1.4.5). The restriction α + β < 1 means that the nonlinearities are subordinated to the main elliptic part. We could include the case α + β = 1 too; however, in this case we need either to impose a smallness condition on the Lipschitz constants in (1.3.2) or to assume some structure of the nonlinearities. This circumstance is considered in Sect. 2.4 for a class of thermoelastic models.

Below, for every σ ∈ R, we consider the space X^σ = X_1^σ × X_2^σ equipped with the norm

‖Y‖_σ := ‖Y‖_{X^σ} = (‖U‖²_{X_1^σ} + ‖V‖²_{X_2^σ})^{1/2},  Y = (U, V) ∈ X^σ.

We denote by (·,·)_σ the corresponding inner product. We also occasionally write ‖W‖_σ = ‖W‖_{X_i^σ} and ‖W‖_0 = ‖W‖ when no confusion arises (and similarly for the inner product).
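To make the scale X_i^σ concrete in the model case Ã = −d²/dx² on (0, π) with Dirichlet boundary conditions (eigenvalues k², eigenfunctions √(2/π) sin kx), one can evaluate ‖Ã^σ u‖ directly from the eigenexpansion. The Python sketch below is our illustration only; the sample function u(x) = x(π − x) and the truncation at 400 modes are arbitrary choices, and the analytic Fourier coefficients used there refer to this particular u.

import numpy as np

# Dirichlet Laplacian on (0, pi): eigenvalues k^2. For u(x) = x(pi - x) the coefficients
# (u, e_k) with e_k = sqrt(2/pi) sin(kx) are 4*sqrt(2/pi)/k^3 for odd k and 0 for even k.
k = np.arange(1, 401, dtype=float)
c = np.where(k.astype(int) % 2 == 1, 4.0 * np.sqrt(2.0 / np.pi) / k**3, 0.0)
for sigma in (0.0, 0.25, 0.5, 1.0):
    norm = np.sqrt(np.sum((k ** (2.0 * sigma) * c) ** 2))   # ||A^sigma u|| = (sum_k k^{4 sigma} |c_k|^2)^{1/2}
    print("sigma = %.2f :  ||A^sigma u|| ~ %.4f" % (sigma, norm))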
With Assumption 1.3.1 we can rewrite system (1.3.1) as a single first-order equation in the space X = X_1 × X_2:

Y_t + AY = F(Y), t > 0,  Y(0) = Y_0 ∈ X,   (1.3.3)

where Y(t) = (U(t), V(t)) and

A = [Ã_1, 0; 0, Ã_2],  F(Y) = (F_1(U, V), F_2(U, V)).

Obviously, the operator A is the generator of a linear C_0-semigroup S on X.⁵ Moreover, the operator A can be defined on each space X^σ as a positive self-adjoint operator and as a continuous mapping from X^σ into X^{σ−1}. The corresponding semigroup S is contractive and exponentially stable on each space X^σ. It also possesses several regularity properties; for instance, one can see that for a sufficiently small δ > 0,

‖A^σ S(t)Y‖ ≤ c_σ t^{−σ} e^{−δ t} ‖Y‖ for all t > 0, σ ≥ 0, Y ∈ X = X^0.   (1.3.4)

In the case when the Ã_i^{−1} are compact operators all these properties can be found in Chueshov [36, Section 2.1] and Chueshov [40, Section 4.1]. In our (possibly non-compact) case the argument is the same.

Below we denote by C([a, b], X) the space of strongly continuous functions on [a, b] with values in X. Similarly, we define the space C([a, b), X).

Definition 1.3.2 (Mild Solution). Suppose that Assumption 1.3.1 is in force. Assume that Y_0 ∈ X^α and T > 0. A function Y(t) = Y(t, Y_0) from the space C([0, T), X^α) is said to be a mild solution to problem (1.3.3) (or (1.3.1)) on the interval [0, T) if

Y(t) = E[Y](t) := S(t)Y_0 + ∫_0^t S(t − τ) F(Y(τ)) dτ in X^{−β}   (1.3.5)

for all t ∈ [0, T). Similarly, we can define a solution on the closed interval [0, T].

We have the following result on the existence and uniqueness of mild solutions to the problem in (1.3.3).

Theorem 1.3.3 (Local Well-Posedness). Let Assumption 1.3.1 be in force. Then, for every Y_0 ∈ X^α there exists T = T(Y_0) > 0 such that the problem in (1.3.3) has a unique mild solution Y(t) on the interval [0, T). Moreover, either T = ∞ or ‖Y(t)‖_{X^α} → ∞ as t ↗ T. For each t ∈ [0, T) this solution depends continuously on the initial data as a mapping from X^α into itself. If the nonlinearities F_i are globally Lipschitz, i.e., the constants L_i in (1.3.2) do not depend on ρ, then the solution exists globally.
⁵ Sometimes in other publications, the generator of this S is denoted by −A.
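A scalar caricature of the fixed-point argument behind Theorem 1.3.3 can be coded in a few lines. In the Python sketch below (our own illustration; the model y′ + y = sin y, the horizon and the discretization are arbitrary, and the quadrature is a simple rectangle rule) the mild-solution map E[Y](t) = e^{−t}y_0 + ∫_0^t e^{−(t−τ)} sin(Y(τ)) dτ is iterated and the result is compared with an explicit Euler solution of the same equation.

import numpy as np

T, n = 2.0, 2001
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]
y0 = 0.5
Y = np.full(n, y0)                              # initial guess for the Picard iteration
for _ in range(60):                             # iterate the mild-solution map E[Y]
    newY = np.empty(n)
    for i, ti in enumerate(t):
        kernel = np.exp(-(ti - t[:i + 1]))      # e^{-(t - tau)} on [0, ti]
        newY[i] = np.exp(-ti) * y0 + dt * np.sum(kernel * np.sin(Y[:i + 1]))
    Y = newY
z = np.empty(n); z[0] = y0                      # reference: explicit Euler for y' = -y + sin(y)
for i in range(n - 1):
    z[i + 1] = z[i] + dt * (-z[i] + np.sin(z[i]))
print("max |mild iteration - Euler| :", float(np.max(np.abs(Y - z))))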
Proof. We apply the standard fixed point argument to prove the existence of a unique mild solution on small intervals. For details see Henry [92] and also Chueshov [36, 40]. The results in these sources are formulated for the case β = 0; however, the case β > 0 can easily be reduced to the former one if we take X^β as the main (pivot) space (see, for example, Remark 4.2.5 in Chueshov [40]).

In calculations with mild solutions it is important to know that they satisfy some variational-type relations.

Proposition 1.3.4. Let Assumption 1.3.1 be in force. Then any mild solution Y(t) is also weak, i.e., it satisfies the relation
t 0
(Y (τ ), Aw) d τ +
t 0
(F(Y (τ )), w) d τ , w ∈ X 1−α , (1.3.6)
for t from every existence interval [0, T ]. Moreover, in the case when6 β ≤ 1/2 we have that • • • •
Y ∈ L2 (0, T, X 1/2 ), Y (t) is absolutely continuous in X −1/2 , Y (t) satisfies (1.3.3) as an equality in X −1/2 for almost t ∈ [0, T ], The following energy balance relation holds: 1 Y (t)2 + 2
t 0
1 Y (τ )21/2 d τ = Y0 2 + 2
t 0
(F(Y (τ )),Y (τ )) d τ .
(1.3.7)
Proof. Let P_N be the spectral projector for A corresponding to the interval [0, N]. Then A_N = AP_N is a bounded positive operator and Y_N = P_N Y(t) satisfies the integral equation

Y_N(t) = S_{A_N}(t) P_N Y_0 + ∫_0^t S_{A_N}(t − τ) f_N(τ) dτ  with  f_N(τ) = P_N F(Y(τ))

in the space X_N = P_N X^{−β}. In this space A_N generates a uniformly continuous semigroup (see, for example, Pazy [127]). Now standard calculations with bounded operators followed by a limit transition lead to the desired results. In particular, the approximations Y_N satisfy estimates similar to those for Y in (1.3.7).

⁶ This restriction is motivated by our applications given below. The case β > 1/2 can also be considered at the abstract level.

Thus, the problem in (1.3.1) defines a local dynamical system on X^α by the formula

φ(t, Y_0) = Y(t),
where Y(t) solves (1.3.5). Using the mild solution relation in (1.3.5) and the smoothing relation in (1.3.4) one can see that for every γ ∈ [α, 1 − β) there exist c_1 > 0 and c_2(R) such that

‖Y(t)‖_γ ≤ c_1 t^{−(γ−α)} ‖Y(0)‖_α + c_2(R),  t ∈ (0, T],   (1.3.8)
for every mild solution Y (t) possessing the property Y (t)α ≤ R for all t from the existence interval [0, T ]. This observation allows us to obtain the following assertion. Proposition 1.3.5 (Conditional Compactness). Assume that the problem in (1.3.1) satisfies Assumption 1.3.1 and generates a dynamical system φ on X α . Assume in addition that A−1 is a compact operator. Then, φ is conditionally compact (see Definition 1.2.4). More precisely, every bounded forward invariant set D contains a compact set KD , which is bounded in X γ for α < γ < 1 − β and absorbs D in the sense that φ (t, D) ⊂ KD for all t large enough. The proof of this proposition involves the smoothing relation in (1.3.8). The compactness of A−1 implies that X γ is compactly embedded in X α . For some details we refer the reader to Chueshov [40, Chapter 4].
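The smoothing estimates (1.3.4) and (1.3.8), which drive Proposition 1.3.5, can be visualized with a diagonal model operator. In the Python sketch below (our illustration only; the spectrum λ_k = k², the exponents σ = 3/4, δ = 1/2 and the random data are arbitrary choices within 0 < δ < inf spec A) the quantity t^σ e^{δt}‖A^σ S(t)Y‖/‖Y‖ is evaluated on a grid of times and remains bounded, as (1.3.4) predicts.

import numpy as np

lam = np.arange(1, 2001, dtype=float) ** 2          # spectrum of a model positive operator A
rng = np.random.default_rng(0)
y = rng.standard_normal(lam.size)                   # a fixed element Y of X = X^0
sigma, delta = 0.75, 0.5                            # any 0 < delta < inf spec A = 1 will do
ratios = []
for t in np.logspace(-4.0, 1.0, 60):
    num = np.linalg.norm(lam ** sigma * np.exp(-t * lam) * y)   # ||A^sigma S(t) Y||
    ratios.append(t ** sigma * np.exp(delta * t) * num / np.linalg.norm(y))
print("sup over the grid of t^sigma e^{delta t} ||A^sigma S(t)Y|| / ||Y|| :", round(max(ratios), 3))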
1.3.2 Asymptotic Synchronization of a Fixed Trajectory

To give a flavor of the theory we present a simple general result on synchronization of parabolic-type coupled systems. This result is conditional and assumes some properties of solutions, which we discuss in the subsequent considerations.

Theorem 1.3.6. Let Assumption 1.3.1 be in force with X_1 = X_2, Ã_1 = Ã_2 = Ã and β ≤ 1/2. Moreover, we assume that the nonlinearities have the following structure:

F_1(U, V) = F̂(U, V) + F_0(U) + G_1(U − V),
F_2(U, V) = F̂(U, V) + F_0(V) − G_2(U − V),

where F̂ satisfies Assumption 1.3.1(2) as stated there for F_1 and F_2. For every R > 0 there exist constants C_{F_0}(R), C_G(R) ∈ R and μ_{F_0}, μ_G ≥ 0 such that

(F_0(U) − F_0(V), U − V) ≤ C_{F_0}(R)‖U − V‖² + μ_{F_0}‖U − V‖²_{1/2}

for every U, V ∈ X_1^{1/2} = X_2^{1/2} such that ‖U‖_α, ‖V‖_α ≤ R; and

(G_1(W), W) + (G_2(W), W) ≤ C_G(R)‖W‖² + μ_G‖W‖²_{1/2}

for every W ∈ X_1^{1/2} such that ‖W‖_α ≤ R.
Let Y(t) = (U(t), V(t)) be a mild solution to (1.3.3) which is bounded in X^α on R_+. Assume that there is an R > 0 such that

R > lim sup_{t→+∞} {‖U(t)‖_α + ‖V(t)‖_α}.   (1.3.9)

Then, under the conditions

μ_{F_0} + μ_G ≤ 1,  (1 − μ_{F_0} − μ_G) inf spec A − C_{F_0}(R) − C_G(R) > 0,   (1.3.10)

the couple (U, V) is exponentially synchronized, i.e., there exists γ > 0 such that

lim_{t→+∞} e^{γ t} ‖U(t) − V(t)‖_α = 0.   (1.3.11)
In the case when C_{F_0}(R) ≡ C_{F_0} and C_G(R) ≡ C_G do not depend on R, under condition (1.3.10) we have synchronization of any solution Y = (U, V), but in a weaker form (with α = 0 in (1.3.11)).

Proof. The difference W = U − V solves the equation W_t + AW = M(t) with M = F_0(U) − F_0(V) + G_1(W) + G_2(W). It follows from (1.3.9) that there exists t_* > 0 such that ‖U(t)‖_α + ‖V(t)‖_α ≤ R for all t ≥ t_*. Therefore, under the conditions above we have that

(M(t), W) ≤ (C_{F_0}(R) + C_G(R))‖W‖² + (μ_{F_0} + μ_G)‖W‖²_{1/2} for t ≥ t_*.

Thus,

(d/dt)‖W‖² + 2γ_*‖W‖² ≤ 0 for all t ≥ t_*

with γ_* = (1 − μ_{F_0} − μ_G) inf spec A − C_{F_0}(R) − C_G(R). This implies that ‖W(t)‖ ≤ ‖W(t_*)‖ e^{−γ_*(t−t_*)} for all t ≥ t_*.

Now, using the mild form in (1.3.5), we can show that Y(t) = (U(t), V(t)) is bounded in X^{α+ε} for some ε > 0; thus, using interpolation, we can obtain (1.3.11). Indeed, since ‖U(t)‖_α and ‖V(t)‖_α are bounded with respect to t ≥ 0, so are ‖F_1(U(t), V(t))‖_{−β} and ‖F_2(U(t), V(t))‖_{−β}. Let us denote this bound by K. Consider ε > 0 such that α + ε + β < 1. Hence,

‖S(t)(U(0), V(0))‖_{α+ε}
is uniformly bounded for t ≥ t_0 > 0. In addition,

‖∫_0^t S(t − r) F(U(r), V(r)) dr‖_{α+ε} ≤ 2K c_{α+ε+β} ∫_0^t (t − r)^{−(α+ε+β)} e^{−δ(t−r)} dr,

which is also uniformly bounded in t ≥ 0. Hence ‖W(t)‖_{α+ε} is uniformly bounded for t > t_0. Now we obtain the exponential convergence of ‖W(t)‖_α by the interpolation argument

‖W(t)‖_α ≤ C‖W(t)‖_0^{1−θ} ‖W(t)‖_{α+ε}^θ  for θ = α/(α + ε).

As a finite-dimensional motivation for Theorem 1.3.6, we consider the following model.

Example 1.3.7 (Coupled Active Rotators). Let y = (u, v) solve the following ODE system:

u̇ + λu = ω + a sin u + g_1 sin(u − v), t > 0,
v̇ + λv = ω + a sin v + g_2 sin(v − u), t > 0,   (1.3.12)
where λ > 0, ω ∈ R, a ≥ 0, g_i ≥ 0 are parameters. We can apply Theorem 1.3.6 with C_{F_0}(R) = C_G(R) = 0, μ_{F_0} = aλ^{−1}, μ_G = [g_1 + g_2]λ^{−1}. Thus, we observe asymptotic synchronization in (1.3.12) under the condition [a + g_1 + g_2]λ^{−1} < 1. For a PDE version of this example we refer the reader to Sect. 1.4.

The idea presented in Theorem 1.3.6 admits other realizations, as the following remark shows.

Remark 1.3.8. There is an important case when the structure of F_0 and G_i guarantees the synchronization result. Assume that the G_i are linear and that

F_0(U) = f_0(U) − K_0U,  G_1(W) = −K_1W,  G_2(W) = −K_2W,

where f_0 : X_1 = X_2 → X_1 is a globally Lipschitz mapping,

‖f_0(U_1) − f_0(U_2)‖ ≤ L_{f_0} ‖U_1 − U_2‖,  U_1, U_2 ∈ X_1,

and the K_j are linear self-adjoint operators⁷ such that ((K_0 + K_1 + K_2)w, w) ≥ η_K‖w‖². In this case, instead of the condition in (1.3.10), we can assume that

inf spec Ã + η_K > L_{f_0}.   (1.3.13)
⁷ Some of these operators can be zero or even negative.
Indeed, in this case W(t) = U(t) − V(t) satisfies the equation

W_t + ÃW + KW = f_0(U) − f_0(V),  t > 0,

where K = K_0 + K_1 + K_2. This implies that

(1/2)(d/dt)‖W‖² + ‖Ã^{1/2}W‖² + ‖K^{1/2}W‖² ≤ (f_0(U) − f_0(V), W) ≤ L_{f_0}‖W‖².

Therefore,

(d/dt)‖W‖² + 2(λ_1 + η_K − L_{f_0})‖W‖² ≤ 0,

where λ_1 = inf spec Ã. This implies that ‖U(t) − V(t)‖ → 0 with exponential speed as t → ∞.

The observation made in this remark is motivated by the following coupled rotators model:

u̇ + λu = ω + a sin u − k_1(u − v), t > 0,
v̇ + λv = ω + a sin v − k_2(v − u), t > 0,   (1.3.14)
where λ > 0, ω ∈ R, a ≥ 0, ki ∈ R are parameters. The condition in (1.3.13) for this model has the form k1 + k2 + λ > a.
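Both rotator models above are easy to probe numerically. The Python sketch below is our illustration only (parameters, initial data and the time horizon are arbitrary choices); it integrates the linearly coupled model (1.3.14) with λ = 0.3, ω = 0, a = 1 and k_1 = k_2 = k, and sweeps k through the threshold. For the initial data chosen here a persistent phase difference survives when the coupling is weak, while for k_1 + k_2 + λ > a the difference decays, in agreement with Remark 1.3.8; the same kind of experiment applies to (1.3.12).

import numpy as np

lam, om, a = 0.3, 0.0, 1.0
dt, T = 1.0e-3, 80.0
for k in (0.05, 0.2, 0.6, 1.0):                 # total coupling 2k + lam crosses the bound a = 1
    u, v = 2.0, -2.0
    for _ in range(int(T / dt)):
        du = -lam * u + om + a * np.sin(u) - k * (u - v)
        dv = -lam * v + om + a * np.sin(v) - k * (v - u)
        u, v = u + dt * du, v + dt * dv
    print("k1 = k2 = %.2f :  |u - v| at t = %g is %.2e" % (k, T, abs(u - v)))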
1.3.3 Linear Coupling: Well-Posedness and Global Attractors

In this section we assume that the interacting subsystems are linearly coupled as in (1.3.14). More precisely, our main object is system (1.3.1) under the additional hypotheses listed below.

Assumption 1.3.9 (Linear Coupling). With reference to the model in (1.3.1), assume that
1. Ã_1 and Ã_2 are positive linear operators in separable Hilbert spaces X_1 and X_2, respectively, and possess discrete spectra. This means that for every i = 1, 2 there exists a complete orthonormal basis {e_k^i} of X_i such that

Ã_i e_k^i = λ_k^i e_k^i,  with 0 < λ_1^i ≤ λ_2^i ≤ . . . ,  lim_{k→∞} λ_k^i = ∞.
For an overview of the theory of these operators we refer the reader to Chueshov [36, Section 2.1] and Chueshov [40, Section 4.1]. As above, we denote by {X_i^σ}_{σ∈R} the scale of spaces constructed from the powers of Ã_i.
2. F_1 and F_2 are nonlinear locally Lipschitz mappings,

F_1 : X_1^{1/2} × X_2^{1/2} → X_1^{−β},  F_2 : X_1^{1/2} × X_2^{1/2} → X_2^{−β},  0 < β < 1/2,
of the form

F_1(U, V) = −(K_{11}U + K_{12}V) + F̃_1(U),
F_2(U, V) = −(K_{21}U + K_{22}V) + F̃_2(V),   (1.3.15)

where the K_{ij} are linear bounded operators from X_j^{1/2} into X_i^{−β}. Moreover, we assume that the coupling matrix K admits the representation

K := [K_{11}, K_{12}; K_{21}, K_{22}] = K_+ + K_*,   (1.3.16)

where K_+ generates a symmetric nonnegative bilinear form k_+(Y, Ỹ) = (K_+Y, Ỹ) on X^{1/2} = X_1^{1/2} × X_2^{1/2} and K_* : X^{1/2} → X is a bounded linear operator, ‖K_+Y‖ ≤ C‖Y‖_{1/2} for Y ∈ X^{1/2}.

Concerning the nonlinear terms in (1.3.15) we assume that F̃_i : X_i^{1/2} → X_i are locally Lipschitz, i.e., for every ρ > 0 there exists a constant L(ρ) such that

‖F̃_i(u) − F̃_i(v)‖ ≤ L(ρ)‖u − v‖_{1/2},  i = 1, 2,   (1.3.17)

for all u, v ∈ X_i^{1/2} such that ‖u‖_{1/2}, ‖v‖_{1/2} ≤ ρ. Assume in addition that

F̃_i(u) = −Π′_i(u) + F̌_i(u),

where F̌_i : X_i^{1/2} → X_i is continuous and linearly bounded, i.e.,

‖F̌_i(u)‖ ≤ c_1 + c_2‖u‖_{1/2},  u ∈ X_i^{1/2},

and Π′_i : X_i^{1/2} → X_i is the Fréchet derivative of a functional Π_i(u) on X_i^{1/2}, i.e.,

lim_{‖v‖_{1/2}→0} (1/‖v‖_{1/2}) [Π_i(u + v) − Π_i(u) − (Π′_i(u), v)] = 0.   (1.3.18)

We note that under the conditions above, for every u ∈ C¹([a, b], X_i^{1/2}) the scalar function t ↦ Π_i(u(t)) is a C¹ function on [a, b] and

(d/dt) Π_i(u(t)) = (Π′_i(u(t)), u_t(t)),  t ∈ [a, b].

Applying this relation in the form

Π_i(u + v) − Π_i(u) = ∫_0^1 (Π′_i(u + λv), v) dλ,   (1.3.19)
we obtain sup{|Π_i(v)| : ‖v‖_{1/2} ≤ R} < ∞ for every R > 0.

Thus, under the hypotheses above we arrive at the following model:

U_t + Ã_1U + K_{11}U + K_{12}V = F̃_1(U), t > 0, in X_1,
V_t + Ã_2V + K_{21}U + K_{22}V = F̃_2(V), t > 0, in X_2.   (1.3.20)

As in (1.3.3), these equations can be written in the matrix form

Y_t + AY + K Y = F̃(Y), t > 0,  Y(0) = Y_0 ∈ X,

where Y(t) = (U(t), V(t)) and

A = [Ã_1, 0; 0, Ã_2],  K = [K_{11}, K_{12}; K_{21}, K_{22}],  F̃(Y) = (F̃_1(U), F̃_2(V)).
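A finite-dimensional impression of the operator form of (1.3.20) can be obtained by discretization. The Python sketch below is our illustration only (grid size and coupling strength are arbitrary): it assembles A for two identical finite-difference Dirichlet Laplacians on (0, 1) and the coupling matrix K for symmetric coupling via differences, K_11 = K_22 = κI, K_12 = K_21 = −κI, and confirms that K is symmetric and nonnegative, so that K = K_+ and K_* = 0 in the decomposition (1.3.16).

import numpy as np

n, kappa = 50, 3.0
h = 1.0 / (n + 1)
L = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2      # 1-D Dirichlet Laplacian
Z = np.zeros((n, n))
I = np.eye(n)
A = np.block([[L, Z], [Z, L]])                                        # A = diag(A1, A2)
K = kappa * np.block([[I, -I], [-I, I]])                              # symmetric coupling via differences
print("K symmetric:", bool(np.allclose(K, K.T)))
print("min eigenvalue of K (numerically zero):", float(np.linalg.eigvalsh(K).min()))
print("min eigenvalue of A + K:", round(float(np.linalg.eigvalsh(A + K).min()), 4))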
Remark 1.3.10. The simplest example of the equations in (1.3.20) is given by (1.3.14). In this case, F̌_i ≡ 0 and F̃_i(u) = a sin u = −Π′(u) with Π(u) = c + a cos u, where c is an arbitrary constant. However, our main motivation for this structure of the model is related to reaction–diffusion PDE models (see Sect. 1.4 below). The main case that we keep in mind is when (i) X_1 = X_2, (ii) the operators Ã_i are proportional (i.e., Ã_2 = νÃ_1 for some ν > 0), and (iii) the coupling has one of the following forms:
• coupling via differences, K_12 = −K_11 ≡ −K_1, K_21 = −K_22 ≡ −K_2;
• symmetric coupling via differences, K_11 = K_22 ≡ K, K_12 = K_21 ≡ −K;
• skew coupling, K_11 = K_22 ≡ 0, with two important subcases: K_21 = K_12 (symmetric coupling) and K_21 = −K_12 (skew-symmetric coupling).
1/2
μ A˜ i u2 + Πi (u) + γ ≥ 0, ∀ u ∈ Xi
, i = 1, 2.
(1.3.21)
Then, for every Y0 ∈ X 1/2 , the problem in (1.3.20) has a unique mild solution Y (t) = (U(t),V (t)) in the space C(R+ , X 1/2 ). Moreover, there exists the time derivative Yt which belongs to L2 (0, T, X) for every T > 0 and the following energy–balance relation t
E(Y (t)) + 0
Yt (τ )2X d τ − = E(Y0 ) +
t
t 0
0
(K∗Y (τ ),Yt (τ ))d τ
(F˜1 (U(τ )),Ut (τ )) + F˜2 (V (τ )),Vt (τ )) d τ
(1.3.22)
holds for every t > 0. Here, for Y = (U,V ) ∈ X 1/2 , we denote E(Y ) =
1 ˜ 1/2 2 1/2 A1 U + A˜ 2 V 2 + (K+Y,Y ) + Π1 (U) + Π2 (V ). 2
(1.3.23)
The standard ODE example, which illustrates Theorem 1.3.11, is the following system 2
u˙i + λi ui + ∑ ki j u j = f˜i (ui ), i = 1, 2,
(1.3.24)
j=1
where λi > 0, {ki j } is a real matrix, for f˜i ∈ C1 (R) there exist c1 , c2 > 0 : −
x 0
f˜i (ξ )d ξ ≥ −c1 x2 − c2 , for all x ∈ R.
(1.3.25)
In this ODE case, it is not necessary to separate the non-negative part of the coupling matrix. The coercivity condition in (1.3.21) can be achieved if we redefine f˜i and ki j by the formulas f˜i (s) − 2c1 s, ki j − 2c1 δi j . The proof of Theorem 1.3.11 is standard. We give a sketch only. Proof. Sketch of the proof of Theorem 1.3.11. We consider the problem in (1.3.20) and its approximation with operators KiNj = PNi Ki j PNj instead of Ki j , where PNi is the orthoprojector on the eigenspace Span {ei1 , . . . , eiN }. By Theorem 1.3.3 both problems have local solutions. Then using Proposition 1.3.4 applied to the approximate problem with help of the multiplier Yt and representation (1.3.19) we obtain the balance relation in (1.3.22) for approximate (local) solutions Yt (t)2 + (AY (t),Yt (t)) + (K+Y (t),Yt (t)) + (K∗Y (t),Yt (t)) d d + Π1 (U(t)) + Π2 (V (t)) = (Fˇ1 (U(t),Ut (t)) + (Fˇ2 (V (t),Vt (t)) dt dt where (AY (t),Yt (t)) =
1 1 d ˜ 12 (A1 U(t)2 + A˜ 22 V (t)2 ). 2 dt
Then, the coercivity condition in (1.3.21) allows us to show that approximate solutions are defined on the whole time semi-axis R+ . Thus, using the same idea as in the proof of Theorem 4.2.22 in Chueshov [40] we can conclude the proof of Theorem 1.3.11. Theorem 1.3.11 allows us to define the dynamical system φ generated by (1.3.20) according to the formula
φ (t,Y0 ) := Y (t) = (U(t),V (t)). where Y = (U,V ) solves (1.3.20) and satisfies the initial data Y (0) = Y0 = (U0 ,V0 ).
The following theorem gives some conditions under which equation (1.3.1) generates a dissipative dynamical system that possesses a global attractor. Theorem 1.3.12 (Global Attractor). Let Assumption 1.3.9 be in force. Assume in addition that (i) There exist μ < 1/2 and γ ≥ 0 such that (1.3.21)) holds. (ii) There exist μ¯ < 1, δ > 0 and η > 0 such that8 − (Πi (u), u) ≤ μ¯ A˜ i u2 − δ Πi (u) + η , ∀ u ∈ Xi 1/2
1/2
;
(1.3.26)
(iii) For every σ > 0 we can find Cσ such that9 1/2
u2 ≤ Cσ + σ Πi (u) for all u ∈ Xi
;
(1.3.27)
(iv) The nonsymmetric part K∗ of the coupling matrix K admits the estimate ∃ α < 1/2, k∗ > 0 : K∗Y ≤ k∗ Y α , ∀Y ∈ X 1/2 .
(1.3.28)
Then, the problem in (1.3.20) generates a compact dynamical system φ in X 1/2 with absorbing set B0 , which belongs to the space X γ with γ ∈ (1/2, 1 − β ) and thus φ possesses a global attractor A. This attractor has a finite fractal dimension. Moreover, 1/2 1/2 A ⊂ B0 ⊂ {Y = (U,V ) : A˜ 1 U2 + A˜ 2 V 2 + (K+Y,Y ) ≤ R2 }
(1.3.29)
for some R independent of K+ , but depends on the constant k∗ in (1.3.28). We also have the relation
t+1 1/2 1/2 2 2 2 2 ˜ ˜ Ut (τ ) + Vt (τ ) d τ ≤ CR sup A1 U(t) + A2 V (t) + t∈R
t
(1.3.30) for every complete trajectory Y (t) = (U(t),V (t)) from the attractor A. Proof. To show the existence of a global attractor we use dissipativity of φ and apply Theorem 1.2.6 and Proposition 1.3.5. To prove the statement concerning dissipativity we apply the standard multipliers method (see, for example, Chueshov [40, Chapter 4]). Our calculations below are formal. We can do them rigorously by considering approximate solutions in the same way as was done in Chueshov [40].
8
The relation in (1.3.26) is the standard requirement in many semilinear models, see, for example, Temam [161] or Chueshov/Lasiecka [46]. We discuss conditions (1.3.21) and (1.3.26) with details in Sect. 1.4.1 below. 9 This condition assumes some super-quadratic growth of the potential Π and can be omitted in ˇ the case when F(u) ≡ 0 and the coupling is symmetric and nonnegative, i.e., K = K+ in (1.3.16).
We have from (1.3.7) that 1d Y (t)2 + (AY,Y ) + (Π1 (U),U) + (Π2 (V ),V ) + (K Y,Y ) 2 dt = (Fˇ1 (U),U) + (Fˇ2 (V ),V ). Thus, using the splitting K = K+ + K∗ with K+ ≥ 0 we have d 1 Y (t)2 + (1 − μ¯ )Y 21/2 + δ [Π1 (U) + Π2 (V )] + (K+Y,Y ) dt 2 ≤c0 + c1 Y 2 where c0 , c1 are positive constants depending on Fˇi and K∗ . In particular, we have by interpolation and the Young inequality for every ε > 0 a Cε > 0 |(K∗Y,Y )| ≤ k∗ Y α Y ≤ ε Y 21/2 +Cε Y 2 where Cε depends on α . It follows similarly from (1.3.22) that d E(Y (t)) + Yt (t)2 = − (K∗Y (t),Yt (t)) dt + (Fˇ1 (U(t)),Ut (t)) + Fˇ2 (V (t)),Vt (t)) 1 ≤c¯0 + c¯1 Y 2 + c¯2 Y 21/2 + Yt 2 4 for the appropriate chosen constants c¯0 , c¯1 , c¯2 . Therefore, for Ψ (Y ) = ζ Y 2 +E(Y ) with an ζ > 0 with ζ (1 − μ¯ )/2 − c¯2 > 0 we have 1 − μ¯ 1 d 2 − c¯2 Y 21/2 Ψ (Y ) + Yt + ζ dt 4 2 + ζ δ [Π1 (U) + Π2 (V )] + ζ (K+Y,Y ) ≤ c1 (ζ ) +C(ζ )Y 2 . Now, by (1.3.27) we can find a constant C such that ζδ Π1 (U) + Π2 (V ) ≥ C + Y 2 2C(ζ ) Hence, we obtain d Ψ (Y ) + γ0Ψ (Y ) + γ1 Yt 2 ≤ γ2 , dt where γ0 , γ1 > 0 are independent of K and γ2 depends on k∗ from (1.3.28) only. By standard methods we obtain a bounded absorbing set B0 ⊂ X 1/2 . The smoothing X 1/2
property of φ similar to the proof of Theorem 1.3.6 shows that φ (1, B0 ) is a compact absorbing set, since X 1/2 is compactly embedded in X γ for γ > 1/2. This implies the existence of a global attractor with the estimates in (1.3.29) and (1.3.30).
To prove the finite-dimensionality of the attractor we use again the smoothing property in (1.3.4) and the mild form of the problem in (1.3.5), which imply that φ (t,Y1 ) − φ (t,Y2 )γ ≤ CR Y1 −Y2 1/2 with 1/2 < γ < 1 − β for every Yi ∈ X1/2 such that supt≥0 φ (t,Yi )1/2 ≤ R. We can apply Proposition 1.2.18 on every bounded forward invariant absorbing set. For more details on general parabolic problems we refer the reader to Chueshov [40, Corollary 3.1.26]. Remark 1.3.13. The results concerning well-posedness and long-time dynamics for the model in (1.3.20) presented above can also be applied in the case when the coupling is absent (Ki j ≡ 0). Thus, in the case of a single equation of the form Vt + A˜ iV = F˜i (V ), t > 0, V (0) = V0 , in Xi ,
(1.3.31)
we can refer to the theorems above. The first step in the study of synchronization phenomena is the following assertion on the semicontinuity of the global attractor with respect to interaction operators. Proposition 1.3.14 (Upper Semicontinuity). With reference to the system in (1.3.20), assume that the interaction operator K = {Ki j }2i, j=1 depends on a parameter κ, which belongs to some metric space Λ . Let Assumption 1.3.9 be in force for every κ ∈ Λ with β independent of κ. Then, under the hypotheses of Theorem 1.3.12, for every κ ∈ Λ there exists a global attractor Aκ , which is upper semicontinuous in the sense that dX (Aκ , Aκ0 ) → 0 as κ → κ0 ,
(1.3.32)
provided Ki j (κ) → Ki j (κ0 ) as κ → κ0 weakly in the sense that lim (Ki j (κ)ψ , φ ) = (Ki j (κ0 )ψ , φ ), ∀ ψ ∈ X 1/2 , φ ∈ X β .
κ→κ0
(1.3.33)
Proof. The property in (1.3.33) via the uniform boundedness principle implies that the norms Ki j (κ) 1/2 −β are uniformly bounded in κ from some neighborhood L(X j
,Xi
)
O(κ0 ) of κ0 and similar for the constants k∗ (κ) defined in (1.3.28). Thus, applying Theorem 1.3.12, we obtain the existence of an attractor Aκ , which is uniformly 1/2 1/2 bounded in X 1/2 = X1 × X2 , κ ∈ O(κ0 ). Using the same argument as in the proof of Proposition 1.3.5 we can show that all attractors Aκ , κ ∈ O(κ0 ), belong to some bounded set in X 1/2+ε that is compactly embedded in X 1/2 . Therefore, to apply Theorem 1.2.14, we need only to check the condition in (1.2.5). This can be done via limit transition in variational relation (1.3.6) for the model considered in (1.3.20).
1.3.4 Linear Coupling: Complete Replacement Synchronization With reference to the problem in (1.3.20), we recall the notion of complete replacement synchronization suggested in Pecora/Carroll [128] (see the discussion in the Introduction, and also Chow/Liu [30], Pecora et al. [129], Tresser/Worfolk/Bass [163] and the references therein). Definition 1.3.15 (Complete Replacement Synchronization). Let a vector function Y (t) = (U(t),V (t)) be a solution to (1.3.20) with some initial data Y0 = (U0 ,V0 ), i.e., the relations Ut + A˜ 1U + K11U + K12V = F˜1 (U) in X1 for t > 0, U(0) = U0 , Vt + A˜ 2V + K21U + K22V = F˜2 (V ) in X2 for t > 0, V (0) = V0 ,
(1.3.34)
hold. We define the so-called (Y0 ,U)-subordinate (nonautonomous) system ¯ in X1 for t > 0, U(0) ¯ U¯t + A˜ 1U¯ + K11U¯ + K12V (t) = F˜1 (U) = U¯ 0 ,
(1.3.35)
with some initial data U¯ 0 , where V (t) is the second component of the reference solution Y (t). According to Pecora/Carroll [128] system (1.3.34) demonstrates the complete replacement (or drive-response) synchronization if for every initial data 1/2 1/2 1/2 Y0 = (U0 ,V0 ) ∈ X1 × X2 and U¯ 0 ∈ X1 ¯ lim U(t) − U(t)) 1/2 = 0.
t→+∞
The variable V is called the synchronizing coordinate, (1.3.34) is the driving system and (1.3.35) is said to be the response system. Theorem 1.3.16. Consider the system (1.3.34). In addition to the hypotheses to Theorem 1.3.12 we assume concerning the coupling matrix K the following properties: where K generates a symmetric nonnegative bilinear form on X 1/2 (i) K11 = κ K, 1 and κ is a nonnegative parameter; η (ii) The operator Ki j for (i, j) ∈ {(1, 2), (2, 1), (2, 2)} is bounded from X j into Xi for some η < 1/2. Then, the two problems in (1.3.34) and (1.3.35) have unique mild solutions and there exists a constant s∗ > 0 independent of κ such that under the condition 1/2 sκ := inf (A˜ 1U,U) + κ(KU,U) : U ∈ X1 , U = 1 ≥ s∗ ˜ 2 , where L(R) ˜ is the where (·, ·) is the inner product in X1 . Suppose that sκ > 6L(R) ˜ ˜ local Lipschitz constant for F1 (see 1.3.17) and R is a number determined below. Then we have complete replacement synchronization in (1.3.34) with exponential
speed and with V (t) as a synchronizing coordinate, i.e., in particular, we have ¯ lim eγ t U(t) − U(t) (1.3.36) 1/2 = 0 for some γ > 0. t→∞
Proof. We apply Theorem 1.3.12 with 0 0 K12 K K+ = κ , K∗ = . K21 K22 0 0 This gives us the existence of an absorbing ball of the form B0 = Y = (U,V ) : Y 1/2 ≤ R where R does not depend on κ. Thus, for every solution Y = (U,V ) to (1.3.34) there exists t∗ > 0 such that U(t)21/2 + V (t)21/2 ≤ R2 for all t ≥ t∗ . Now we consider the response system (1.3.35). Similar to Theorem 1.3.11 one can show that this problem has a unique mild solution that satisfies the corresponding energy relation. Since K12V (t) ≤ CR for all t ≥ t∗ , using almost the same calculations as in the proof of Theorem 1.3.12 we can conclude that there exists R1 independent of κ and time t∗∗ = t∗∗ (κ, R) such that ¯ U(t) 1/2 ≤ R1 for all t ≥ t∗∗ . Indeed, we have for every ε > 0 ¯ ¯ (K12V (t), U(t)) ≤ ε U(t) 1/2 + cε ,
(K12V (t), U¯ t (t)) ≤ ε U¯ t (t)1/2 + cε .
Thus, we can assume that there exist R˜ (independent of κ) and a moment tˆ such that 2 2 ¯ ˜2 ˆ U(t)21/2 + U(t) 1/2 + V (t)1/2 ≤ R for all t ≥ t .
¯ These observations allow us to estimate the difference Z(t) = U(t) − U(t) on the interval [tˆ, +∞). This difference satisfies the equation = F˜1 (U) − F˜1 (U) ¯ in X1 for t > tˆ, Zt + A˜ 1 Z + κ KZ The multiplier Z gives us ˜ 2 1 d 1/2 1/2 Z2 ≤L(R) ˜ A˜ 1/2 ZZ≤ 1 A˜ 1/2 Z2 + L(R) Z2 Z2 +A˜ 1 Z2 +κK 1 1 2 dt 2 2 for t ≥ tˆ. Using the multiplier Zt we obtain ˜ 2 1 d ˜ 1/2 2 1/2 Z2 + Zt 2 ≤ L(R) A˜ 1/2 Z2 + Zt 2 . A1 Z + κK 1 2 dt 2
Consequently, the function 1/2 1/2 Z(t)2 Φ (t) = σ Z(t)2 + A˜ 1 Z(t)2 + κK
for every σ > 0 satisfies the relation d 1/2 Z(t)2 ≤ σ L(R) ˜ 2 ))A˜ 1/2 Z(t)2 + σ κK ˜ 2 Z(t)2 Φ (t) + (σ − L(R) 1 dt ˜ 2 ≥ σ /2. Then, for t ≥ tˆ. Choose σ so that σ − L(R) d σ σ ˜ 1/2 1/2 Z(t)2 − [σ + 4L(R) ˜ 2 ]Z(t)2 ≤ 0 Φ (t) + Φ (t) + A1 Z(t)2 + κK dt 4 4 ˜ 2 we obtain for t ≥ tˆ. Thus, for σ = 2L(R) d σ σ ˜ 2 Z(t)2 ≤ 0 sκ − 6L(R) Φ (t) + Φ (t) + dt 4 4 ˜ 2 , this implies the conclusion in (1.3.36) for all t ≥ tˆ. Under the condition sκ ≥ 6L(R) and completes the proof of the theorem.
1.3.5 Linear Coupling: Synchronization of Global Attractors In this section we restrict ourselves by symmetric coupling via differences in the case when X1 = X2 . Thus, we arrive at the following model Ut + A˜ 1U + K(U −V ) = F˜1 (U), t > 0, in X1 , Vt + A˜ 2V + K(V −U) = F˜2 (V ), t > 0, in X1 ,
(1.3.37)
We assume that A˜ i , F˜i satisfy Assumption 1.3.9 and K is a linear operator defined 1/2 on X1 of the form K = κK0 , where κ is a nonnegative parameter that describes the intensity of interaction of two subsystems. The case κ = 0 corresponds to the uncoupled situation. We assume that the hypotheses of Theorem 1.3.12 concerning A˜ i and F˜i hold. In this case under appropriate conditions concerning the coupling operator K0 Theorem 1.3.12 implies the existence of the global attractor Aκ , which in the case κ = 0 has the form A0 = A 1 × A 2 , where A1 and A2 are global attractors of the corresponding (uncoupled) subsystems. The main question is to find how the attractor Aκ depends on κ and whether a regime synchronized at the level of attractors is possible. As was already mentioned
at the beginning of the chapter, this synchronization means that in some sense Aκ has a “diagonal” structure for some values of the coupling parameter κ. We start with the simplest case of coupling of two identical subsystems and prove the following version of Theorem 1.3.6 concerning synchronization. ˜ F˜1 (u) = Theorem 1.3.17. With reference to (1.3.37) we assume that A˜ 1 = A˜ 2 = A, 1/2 1/2 −β ˜ F2 (u), K = κK0 , where K0 : X1 → X1 generates a symmetric form on X1 such 2 that (K0 u, u) ≥ η0 u for some η0 > 0. Let hypotheses of Theorem 1.3.12 concerning A˜ and F˜ be in force. Then the system φ κ on X 1/2 generated by (1.3.37) possesses a global attractor Aκ for each κ ≥ 0. Moreover, there exists a level of the intensity parameter κ∗ > 0 such that for every κ ≥ κ∗ we have that lim eγ t U(t) −V (t)1/2 = 0 for some γ = γ (κ) > 0 (1.3.38) t→∞
for every solution Y (t) = (U(t),V (t)) to the problem in (1.3.37). Thus, we observe asymptotic synchronization of each trajectory with exponential speed. For κ ≥ κ∗ we also have that Aκ = {(U,U) : U ∈ A}, where A is the global attractor of the 1/2 system generated in X1 by the problem 1/2
˜ = F˜1 (V ), t > 0, V0 ∈ X . Vt + AV 1 Thus, we have synchronization at the level of global attractors. Proof. We first note that the existence of a finite-dimensional global attractor follows from Theorem 1.3.12. To prove (1.3.38) we use the same argument as in Theorem 1.3.6. For W (t) = U(t) −V (t) we have that ˜ + 2KW = F˜1 (U) − F˜1 (V ), t > 0, in X1 . Wt + AW 1/2
By Theorem 1.3.12 there is an absorbing ball BR in X 1/2 = X1 radius R independent of κ > 0. This implies that
1/2
× X1
with the
1d 1 W 2 + A˜ 1/2W 2 + 2K 1/2W 2 ≤ CR A˜ 1/2W W ≤ A˜ 1/2W 2 + cR W 2 2 dt 2 for all t ≥ t∗ with some t∗ = t∗ (R) ≥ 0. Therefore, d 1/2 W 2 + λ1 W 2 + 4κK0 W 2 ≤ 2cR W 2 . dt This implies the existence of γ∗ > 0 such that U(t) −V (t)2 ≤ U(t∗ ) −V (t∗ )2 e−2γ∗ (t−t∗ ) under the condition λ1 + 4κ 2 η0 > 2cR . Therefore, using interpolation as in Theorem 1.3.6 we obtain (1.3.38).
Now we prove the diagonal structure of Aκ . Let {Y (t) = (U(t),V (t)) : t ∈ R} be a full trajectory from the attractor Aκ with κ ≥ κ∗ . In this case, the same argument as above gives that U(t) −V (t)2 ≤ U(s) −V (s)2 e−2γ∗ (t−s) ≤ CAκ e−2γ∗ (t−s) for all t > s, t, s ∈ R. Thus, in the limit s → −∞ we obtain that U(t) = V (t) for all t ∈ R. Substituting U = V in (1.3.37), we obtain the desired structure of the attractor. Remark 1.3.18. In the result above in Theorem 1.3.17 it is not important that the operator K0 is strictly positive. Instead, we can assume that the parameter 1/2 ˜ sκ = inf (AW,W ) + 2κ(K0W,W ) : W ∈ X1 , W = 1 is large enough for some range of κ. Moreover, it is not important that the coupling is symmetric.10 We can consider the system of the form ˜ + K1 (U −V ) = F˜1 (U), t > 0, in X1 , Ut + AU ˜ + K2 (V −U) = F˜1 (V ), t > 0, in X1 , Vt + AV
(1.3.39)
and assume that K1 and K2 are self-adjoint operators (it is even allowed that one of these operators is zero or negative) such that (A˜ 1 u, u) + ((K1 + K2 )u, u) ≥ η u2
(1.3.40)
with the parameter η > 0 large enough. To see this we can apply the argument given in Remark 1.3.8. Moreover, it is convenient to introduce new variables W = U −V and V . In this case, the problem in (1.3.39) has the form ˜ + (K1 + K2 )W = F˜1 (W +V ) − F˜1 (V ), t > 0, in X1 , Wt + AW ˜ = K2W + F˜1 (V ), t > 0, in X1 . Vt + AV
(1.3.41)
This structure allows us to easily analyze the situation at the level of attractors under the following one-sided monotonicity-type condition (F˜1 (W +V ) − F˜1 (V ),W ) ≤ aW 21/2 + bW 2
(1.3.42)
with a < 1 and b ∈ R. The argument is the same as in Remark 1.3.8. As an ordinary differential equation illustration of this effect we can consider the system (1.3.43a) Ut + λ U + k1 (U −V ) = f˜1 (U), t > 0, in R, Vt + λ V + k2 (V −U) = f˜1 (V ), t > 0, in R,
10
A similar observation has already been made in Remark 1.3.8.
(1.3.43b)
where λ > 0, ki ∈ R, fˆ1 ∈ C1 (R) satisfies the condition in (1.3.25) and possesses ˜ = λ U where X1 = R and the property fˆ1 (s) < λ + k1 + k2 for all s ∈ R. Here, AU 2 2 U1/2 = λ U . We also note that if instead of (1.3.40) we assume that 1/2 s1 = inf (A˜ 1 w, w) + (K1 w, w) : w ∈ X1 , w = 1 is large enough, then as in Theorem 1.3.16 we can observe the complete replacement synchronization with V as a synchronizing variable under the condition in (1.3.42). Similarly, in the same framework, if 1/2 s2 = inf (A˜ 1 w, w) + (K2 w, w) : w ∈ X1 , w = 1 is sufficiently large, then we can see the complete replacement synchronization with U as a synchronizing variable. It is also interesting to mention that if both s1 and s2 are large enough (e.g., K1 = K2 = κK with κ large and K strictly positive), then we can take as a synchronizing variable either V or U. Now we waive the condition that the subsystems in (1.3.37) are identical. We then consider the case of different F˜i and assume that ˜ (i) A˜ i are proportional, say A˜ i = νi A˜ for some νi > 0 with a positive operator A; ˜ for instance, we can take K = κ A˜ θ with (ii) K is a function of the operator A, −∞ < θ < 1 or K = κPN , where PN is the projector on Span {ek : k = 1, . . . , N} ˜ where ei are the eigenfunctions of A. For the proof of the next theorem we need the following lemma. We set ν1 0 1 −1 Aκ = A˜ + κK0 . 0 ν2 −1 1 ˜ Assume Lemma 1.3.19. Let K0 be a nonnegative operator and D(K0 ) ⊃ D(A). ˜ 11 Then, the problem that K0 commutes with A. Yt + AκY = 0, t > 0,
Y (0) = Y0 ∈ X,
(1.3.44)
generates a strongly continuous semigroup Sκ in X. Moreover, for every 1/2 ≤ γ < 1 we have the estimate Sκ (t)Y0 γ ≤
11
That is, PN K0 and PN A˜ commute.
k1 Y0 1/2 e−k2 t , ∀Y ∈ X 1/2 , t 2γ −1
(1.3.45)
where k1 > 0 and k2 > 0 do not depend on κ ≥ 0. In addition, we have for β ∈ [0, 1/2) positive numbers ki > 0 independent of κ so that Sκ (t)Y0 1/2+β ≤
k1
t 1/2+β
Y0 ek2 t
for t > 0 and Y ∈ X. Proof. Using the eigenbase expansion of the unknown function Y = (U,V ) one can see that (1.3.44) generates a strongly continuous semigroup. It is sufficient that the following calculations can be made for approximations with respect to PN . Since the function Y (t) = Sκ (t)Y := (U(t),V (t)) solves the equations ˜ + K(U −V ) = 0, t > 0, in X1 , Ut + ν1 AU ˜ + K(V −U) = 0, t > 0, in X1 , Vt + ν2 AV using the multipliers A˜ 2γ U and A˜ 2γ V we obtain that 1 d ˜γ 2 A U + A˜ γ V 2 + ν1 A˜ 1/2+γ U2 + ν2 A˜ 1/2+γ V 2 ≤ 0. 2 dt
(1.3.46)
This implies A˜ γ U(t)2 + A˜ γ V (t)2 ≤ A˜ γ U(0)2 + A˜ γ V (0)2 e−2σ1 t
(1.3.47)
for every γ ≥ 0 with σ1 = λ1 min{ν1 , ν2 }. Multiplying (1.3.46) by t 2 and taking γ = 1 yield that 1 d 2 t Y 21 + min{ν1 , ν2 }t 2 Y 23/2 ≤ tY 21 ≤ c0tY 1/2 Y 3/2 2 dt 1 ≤ min{ν1 , ν2 }t 2 Y 23/2 + c1 Y 21/2 . 2
(1.3.48)
Thus, d 2 t Y 21 + min{ν1 , ν2 }t 2 Y 23/2 ≤ 2c1 Y 21/2 dt Hence, using (1.3.47) with γ = 1/2 we have that t 2 Y (t)21 ≤ 2c1
t 0
e−σ1 (t−τ ) Y (τ )21/2 d τ ≤
2c1 −σ1 t e Y0 21/2 . σ1
Therefore, the interpolation 2−2γ
t 2γ −1 Y (t)γ ≤ c2 (tY (t)1 )2γ −1 Y (t)1/2 implies (1.3.45) with k2 = σ1 /2.
(1.3.49)
t
c3
0
37
Y (r)21/2 dr ≤ Y0 2 ,
which allows us to estimate the integral in (1.3.49) by c4 Y0 2 . Taking (1.3.47) into account again and using an interpolation argument, we obtain the second part of the assertion. In this case we can guarantee synchronization at the level of global attractors in the limit κ → +∞ only. In particular, we have Theorem 1.3.20. With reference (1.3.37) let the hypotheses of Theorem 1.3.12 con˜ A˜ 2 = ν2 A˜ cerning A˜ i and F˜i be in force. Assume in addition that X1 = X2 , A˜ 1 = ν1 A, 1/2
−β
with some constants ν1 , ν2 > 0 and K = κK0 : X1 → X1 , where K0 is a non˜ Then, the global attractor Aκ for the system negative function of the operator A. generated by (1.3.37) admits the additional uniform estimate t+1
sup t∈R t
˜ τ )2 + AV ˜ (τ )2 + κK 1/2 A˜ 1/2 (U(τ ) −V (τ ))2 d τ ≤ R∗ AU( 0 (1.3.50)
for every full trajectory {(U(t),V (t)) : t ∈ R} from the attractor, where R∗ is a constant independent of κ. Moreover, • The global attractor Aκ is uniformly (with respect to κ) bounded in X γ for every γ ∈ (1/2, 1) and upper semicontinuous with respect to κ ∈ [0, +∞), i.e., (1.3.32) holds for every κ0 ≥ 0. • If K is not degenerate, then in the limit κ → ∞ we have the following synchronization phenomenon dX 1/2 (Aκ , A∞ ) → 0 as κ → +∞,
(1.3.51)
where A∞ = {(U,U) : U ∈ A∗ } and A∗ is the global attractor of the system generated by Vt +
ν1 + ν2 ˜ 1 AV = (F˜1 (V ) + F˜2 (V )), t > 0, in X1 . 2 2
(1.3.52)
Moreover, Aκ = A∞ for all κ large enough in the case of identical subsystems (ν1 = ν2 , F˜1 = F˜2 ). Remark 1.3.21. In the case κ = 0 the attractor given by Theorem 1.3.20 has the form A0 = A1 × A2 , where Ai is the global attractor of the system generated by ˜ = F˜i (V ), t > 0, in X1 . Vt + νi AV So, the asymptotic dynamics of subsystems (1.3.37) is not correlated in the absence of interaction between subsystems. Instead in the limit κ → ∞ we observe the
asymptotic synchronization phenomenon in (1.3.51) with limiting dynamics governed by (1.3.52). Thus, when κ varies from zero to infinity the coupled system goes via different regimes from noncorrelated dynamics to synchronized behavior. At the level of trajectories the synchronization phenomenon in (1.3.51) implies the following property of solutions Y κ (t) = (U κ (t),V κ (t)) to (1.3.37): ∀ ε > 0 ∃ κ∗ > 0 : lim sup U κ (t) −V κ (t)1/2 ≤ ε , ∀ κ ≥ κ∗ . t→∞
This approximate synchronization property can also be written in the form ∀ ε > 0 ∃ κ∗ > 0 ∀ κ ≥ κ∗ ∃tκ > 0 : U κ (t) −V κ (t)1/2 ≤ ε , ∀t ≥ tκ . To see this it is sufficient to note that
U κ (t) −V κ (t)1/2 ≤c0 dX 1/2 (Y κ (t), A∞ ) ≤c0 dX 1/2 (Y κ (t), Aκ ) + dX 1/2 (Aκ , A∞ ) .
Proof. By Theorem 1.3.12 there exists a global attractor Aκ such that for any full trajectory Y (t) = (U(t),V (t)) from Aκ we have that sup ν1 A˜ 1/2U(t)2 + ν2 A˜ 1/2V (t)2 t∈R+ 1/2
+ κK0 (U(t) −V (t)2 +
t+1
Ut (τ )2 + Vt (τ )2 d τ ≤ R,
t
(1.3.53)
˜ and where R does not depend on κ ≥ 0. Using this estimate and also multipliers AU ˜ in (1.3.37) we obtain the estimate (1.3.50). AV Now we prove that the attractors are uniformly bounded in X γ for every 1/2 ≤ γ < 1 − β . For this we note that the problem in (1.3.37) can be written in the form ˜ ), t > 0, Yt + Aκ Y = F(Y
Y (0) = Y0 ∈ X,
where Y (t) = (U(t),V (t)) and ν1 0 1 −1 Aκ = A˜ + κK0 , 0 ν2 −1 1
˜ )= F(Y
(1.3.54)
F˜1 (U) . F˜2 (V )
We rewrite (1.3.54) in the mild form t
Y (t) = Sκ (t)Y0 +
0
˜ (τ ))d τ Sκ (t − τ )F(Y
and apply Lemma 1.3.19. A simple calculation leads to the existence of an absorbing set in X γ , 1/2 ≤ γ < 1, with a bound independent of κ. Thus, the attractor is uniformly bounded in X γ for every 1/2 ≤ γ < 1. Its upper semicontinuity at every point κ0 ∈ [0, +∞) follows from Proposition 1.3.14.
To conclude the proof in the case when κ → ∞ we use Theorem 1.2.14. For this, we need to establish the corresponding version of the convergence property in (1.2.5). Let κn → ∞, Yn ∈ Aκn and Yn → Y in X 1/2 . Owing to the estimate (1.3.53) the limiting element Y has the form Y = (U,U). Thus, we need to show that Yn (t) = φ κn (t,Yn ) → (φ0 (t,U), φ0 (t,U)) as n → ∞ for every t > 0, where φ0 is the dynamical system for the problem in (1.3.52). To show this we note that Yn (t) = (Un (t),Vn (t)) satisfies the uniform estimates in (1.3.50) and (1.3.53), which implies the relative compactness of Yn (·) in the space C([0, T ], X 1/2−ε ) ∩ L2 (0, T, X 1−ε ), ∀ ε > 0, ∀ T > 0 see Chepyzhov/Vishik [28, Chap. 2]. This allows us to make the limit transition in a weak form of the relation ˜ + ν2 AV ˜ = F˜1 (U) + F˜2 (V ) Ut +Vt + ν1 AU and obtain the relation in (1.2.5) in our case. Remark 1.3.22. In a similar way to above we can study coupled parabolic systems in some Hilbert space X1 of the form ˜ 1 + κK(U 1 −U 2 ) = F˜1 (U 1 ), Ut1 + ν1 AU ˜ j − κK(U j+1 − 2U j +U j−1 ) = F˜ j (U j ), j = 2, . . . , N − 1, Ut j + ν j AU ˜ N + κK(U N −U N−1 ) = F˜N (U N ), UtN + νN AU
(1.3.55)
where νi and κ are positive constants. This system can be written as a single equation ˜ ) in X = X1 × . . . × X1 , Yt + AY + κK Y = F(Y ˜ and where A = diag (ν1 , . . . , ν2 ) · A, ⎛ ⎛ 1⎞ U 1 −1 0 ⎜−1 2 −1 ⎜U 2 ⎟ ⎜ ⎜ 3⎟ ⎜ ⎜ ⎟ Y = ⎜ U ⎟ , K = ⎜ 0 −1 2 ⎜ .. .. .. ⎜ .. ⎟ ⎝ . . . ⎝ . ⎠ UN
0
0
⎛ ⎞ ⎞ F˜1 (U 1 ) 0 ⎜ F˜2 (U 2 ) ⎟ 0⎟ ⎜ ⎟ ⎟ ˜ 3 ⎟ 0⎟ ˜ )=⎜ ⎜ F3 (U ) ⎟ . ⎟ K, F(Y ⎜ .. ⎟ .. ⎟ ⎝ . ⎠ .⎠ 0 ... 1 F˜N (U N ) ... ... ... .. .
We assume that (i) A˜ is a positive operator with the discrete spectrum; (ii) K is a ˜ (iii) the nonlinearities F˜i (v) nonnegative operator with the domain D(K) ⊇ D(A); satisfy the same hypotheses as in Theorem 1.3.12. In this case N−1
(K Y,Y ) =
∑ K 1/2 (U j+1 −U j )2 .
j=1
Therefore, we can obtain the result stated in Theorem 1.3.12 for the system generated by (1.3.22) with K ≡ K+ and K∗ = 0. Moreover, in the case when K commutes with A˜ we can prove the corresponding analog of Lemma 1.3.19 and thus to obtain an analog Theorem 1.3.20. In particular, if K is not degenerate, then one can show that the global attractor Aκ for (1.3.55) in the space X 1/2 = D(A˜ 1/2 ) × . . . × D(A˜ 1/2 ) possesses the property dX 1/2 (Aκ , A∞ ) → 0 as κ → +∞, where A∞ = {(U,U, . . . ,U) : U ∈ A∗ } and A∗ is the global attractor of the system generated by 1 N 1 N νi Av = ∑ F˜i (v), t > 0, in X1 . vt + ∑ N i=1 N i=1 Moreover, in the case of identical subsystems (νi = ν , i = 1, · · · , N, F˜i (v) = F˜1 (v) for all i = 2, . . . , N) there exists κ∗ > 0 such that Aκ = A∞ for all κ ≥ κ∗ . Thus, we observe synchronization at the level of global attractors for the N-coupled system in (1.3.55). We also note that synchronization phenomena in the ODE version (νi = 0, X1 = R) of system (1.3.55) were studied in Hale [86].
1.3.6 Synchronization in Delay Systems Our main goal in this section is to demonstrate how the methods developed can be applied in the study of synchronization phenomena in evolution equations containing delay terms. These equations naturally arise in various applications, such as viscoelasticity, heat flow, neural networks, combustion, interaction of species, microbiology, and many others. The theory of delay differential equations has been developed for both ODE and PDE settings by many authors (see the discussion and the references in the monograph Chueshov [40, Chapter 6]). We deal with abstract coupled evolution equations of the form ˜ + κ(U −V ) = F˜1 (U t ), U(s) Ut + ν1 AU = U 0 (s) = U0 (s), s∈[−h,0] (1.3.56) ˜ + κ(V −U) = F˜2 (V t ), V (s) Vt + ν2 AV = V 0 (s) = V0 (s), s∈[−h,0] in a Hilbert space X1 , where A˜ is a positive self-adjoint operator, νi > 0, κ ≥ 0 h ≥ 0 are parameters, and F˜i (U t ) is a nonlinear mapping that is defined on pieces U t := {U(t + s) : s ∈ [−h, 0]} of an unknown function u which has its values in X1 , U0 , V0 : [−h, 0] → X1 are given (initial) functions. Similar to Sect. 1.3 we impose the following hypotheses.
1.3 Coupled Parabolic Problems: Abstract Models
41
Assumption 1.3.23. We assume that X1 is a separable Hilbert space with the norm · and the inner product (·, ·) and 1. A˜ is a linear positive self-adjoint operator with discrete spectrum on X1 (see Assumption 1.3.9(1)). As above, we consider the scale of spaces X1s generated by ˜ We also use the notation Cα for the Banach space powers A˜ s of the operator A. C([−h, 0], X1α ) endowed with the norm |v|Cα := sup{ A˜ α v(θ ) : θ ∈ [−h, 0]}. 2. F˜i is locally Lipschitz mapping from C1/2 into X1 , i.e., we assume that for every ρ > 0 there exists L(ρ ) such that F˜i (v1 ) − F˜i (v2 ) ≤ L(ρ )|v1 − v2 |C1/2 , v j ∈ C1/2 , |v j |C1/2 ≤ ρ , j = 1, 2. (1.3.57) Moreover, we assume12 that F˜i (v) = −Πi (v) + Fˇi (v), where Fˇi : C1/2 → X1 is linearly bounded, i.e., Fˇi (v) ≤ c1 + c2 |v|C1/2 , v ∈ C1/2 , 1/2
1/2
and Πi : X1 → X1 is a potential operator on the space X1 , i.e., there exists 1/2 the Frech´et derivative of Πi (u) on X1 given by Πi (u) in the sense of relation (1.3.18). By the above properties we can assume Πi to be bounded on bounded sets. We also assume that the potential Πi (u) is bounded from below.13 Since the potential is defined up to a constant, this means that we can assume that 1/2
Πi (u) ≥ 1, ∀ u ∈ X1 ;
(1.3.58)
We rewrite system (1.3.56) as a single equation of the form ˜ t ), t > 0, Y (s) Yt + AY + κK Y = F(Y = Φ (s) ∈ X = X1 × X1 , (1.3.59) s∈[−h,0]
12
Roughly speaking we suppose that the nonlinearity is split into a nondelay potential part and a globally Lipschitz delay term. 13 We can relax this condition by changing (1.3.58) into the relation in (1.3.21) with A ˜ i = νi A˜ on X1 . However, this requires some additional calculations and hypotheses concerning delay terms in the study of asymptotic dynamics.
42
1 Synchronization of Global Attractors and Individual Trajectories
where Y (t) = (U(t), v(t)), Φ (s) = (U0 (s),V0 (s)), and F˜1 (U t ) ν1 0 1 −1 t ˜ ˜ . A=A , K = , F(Y ) = ˜ t F2 (V ) 0 ν2 −1 1 We introduce the following definition. Definition 1.3.24 (Mild Solution). A function Y = (U(t),V (t)) ∈ C([−h, T ], X 1/2 ) is said to be mild solution to (1.3.56) (or (1.3.59)) on an interval [0, T ) if Y (t) = Φ (t) for all t ∈ [−h, 0] and t
Y (t) = S(t)Y (0) + 0
˜ τ ) d τ , t ∈ [0, T ]. S(t − τ ) −κK Y (τ ) + F(Y
Here and below we denote by Y t an element in the space C 1/2 := C([−h, 0], X 1/2 ), 1/2
1/2
X 1/2 = X1 × X1 , of the form Y t (θ ) ≡ Y (t + θ ), θ ∈ [−h, 0]. S is the semigroup generated by A. The proof of the following theorem can be found in Chueshov [40, Section 6.1]. Theorem 1.3.25. Let Assumption 1.3.23 be in force. Then, for every Φ ∈ C 1/2 the problem in (1.3.56) has a unique mild solution on R+ lying in C([−h, +∞), X 1/2 ). Now, following the standard procedure (see, for example, [40] and the references therein) we can define a family of mappings φ (t, ·) : C 1/2 → C 1/2 by the formula
φ (t, Φ )(θ ) = Y (t + θ ), Φ ∈ C 1/2 , where u(t) solves (1.3.56) with the initial data Φ . One can see that • or each t ∈ R+ the mapping φ (t) is continuous on C 1/2 ; • the family φ satisfies the semigroup property on C 1/2 ; • The function t → φ (t, Φ ) is continuous in C 1/2 for every Φ ∈ C 1/2 . Thus, the problem in (1.3.56) generates a dynamical system φ with the phase space C 1/2 . Theorem 1.3.26 (Dissipativity). Assume that Assumption 1.3.23 is in force. Assume in addition that there exist δ > 0, κ > 0 and c ≥ 0 such that − (Πi (u), u) ≤
νi ˜ 1/2 2 1/2 A u − δ Πi (u) + c, ∀ u ∈ X1 ; 2
(1.3.60)
and Fˇi (v)2 ≤ c + cFˇ
0 −h
A˜ 1/2 v(θ )2 σ (d θ ) for all v ∈ C1/2 ,
(1.3.61)
where σ (d θ ) is a measure on B([−h, 0]) such that σ ([−h, 0]) = 1. Then there exists c∗Fˇ = c∗Fˇ (ν1 , ν2 , λ1 ) > 0 such that the problem in (1.3.56) generates a dissipative
1.3 Coupled Parabolic Problems: Abstract Models
43
dynamical system φ on C 1/2 provided cFˇ < c∗Fˇ . Moreover, there exists R > 0 independent of κ such that the set ! " B=
(u, v) ∈ C 1/2 : ν1 |u|2C1/2 + ν2 |v|2C1/2 + κ sup u(θ ) − v(θ )2 ≤ R2 [−h,0]
(1.3.62) is absorbing. We note that condition (1.3.61) concerning the delay term Fˇi admits both point and distributed delays. For instance, we can consider 0 m ˇ Fi (v) = g ∑ c j A˜ β j v(−h j ) + A˜ β0 v(−θ ) f (θ )d θ , v ∈ C1/2 , −h0
j=1
where g is globally Lipschitz mapping on X1 , c j ∈ R, β j ∈ [0, 1/2], h ≥ hi ≥ 0 are fixed constants and f ∈ L1 (−h0 , 0) is a real function. We also note that the restriction concerning the intensity parameter cFˇ in the statement of Theorem 1.3.26 is not surprising. Solutions with the growing norms are possible. For further discussion we refer the reader to Chueshov [40, Section 6]. Proof of Theorem 1.3.26. We use the same idea as in the proof of Theorem 6.1.15 in Chueshov [40]. The argument below is formal. It can be justified via Galerkin approximations. Multiplying equation (1.3.59) by Y we obtain that 1d Y (t)2 + A1/2Y (t)2 + κU(t) −V (t)2 2 dt + (Π1 (U(t)),U(t)) + (Π2 (V (t)),V (t)) = (Fˇ1 (U t ),U(t)) + (Fˇ2 (V t ),V (t)) 1 c 1 + ≤ η A1/2Y (t)2 + 4ηλ1 ν1 ν2 0 cˇ 1 1 + A1/2Y t (θ )2 σ (d θ ) (1.3.63) + F 4ηλ1 ν12 ν22 −h for every η > 0. Indeed, we have
η 1/2
1/2 ν1 A˜ 1/2U(t) 1/2 1/2 1/2 λ1 ν1 η c 1/2 ˜ 1/2 ≤η ν1 A U(t)2 + 4ηλ1 ν1 0 cFˇ 1/2 + ν A˜ 1/2U t (θ )2 σ (d θ ) 4ηλ1 ν12 −h 1
(Fˇ1 (U t ),U(t)) ≤ Fˇ1 (U t )
44
1 Synchronization of Global Attractors and Individual Trajectories
and similar for Fˇ2 where we have applied (1.3.61). Using the multiplier Yt we also have
d 1 1/2 κ A Y (t)2 + U(t) − v(t)2 + Π1 (U(t)) + Π2 (V (t)) Yt (t)2 + dt 2 2 t ˇ = (F1 (U ),Ut (t)) + (Fˇ2 (V t ),Vt (t)) 0 cˇ 1 1 1 + A1/2Y t (θ )2 σ (d θ ). (1.3.64) ≤ Yt (t)2 + c + F 2 2 ν1 ν2 −h Now, similar to Theorem 6.1.15 in [40], we consider the function W (t) =
1 Y (t)2 + A1/2Y (t)2 + κU(t) − v(t)2 2 + Π1 (U(t)) + Π2 (v(t)) + μ W0 (t), (1.3.65)
where μ is a positive parameter and W0 (t) =
1 h
0 −h
t
ds t+s
A1/2Y (θ )2 d θ +
0 −h
σ (ds)
t t+s
A1/2Y (θ )2 d θ .
Since σ ([−h, 0]) = 1 0 ≤ W0 (t) ≤ 2
t t−h
A1/2Y (s)2 ds ≤ 2h max{ν1 , ν2 }|Y t |2C
1/2
and dW0 (t) 1 = 2A1/2Y (t)2 − dt h
0 −h
A1/2Y t (s)2 ds −
0 −h
A1/2Y t (s)2 σ (ds).
Since Π (u) is bounded on every bounded set, by the assumptions on Π , we conclude from (1.3.58) that 1 1/2 A Y (t)2 + κU(t) −V (t)2 ≤ W (t) 2 κ ≤ ψ (A1/2Y (t)) + U(t) −V (t)2 + 2μ h max{ν1 , ν2 }|Y t |2C 1/2 2
(1.3.66)
with an appropriate function where ψ (r) → +∞ as r → +∞. In the same way as in Chueshov [40, Section 6.1] it follows from (1.3.63) and (1.3.64) and also from (1.3.60) that d 1 W (t) + β W (t) + Yt (t)2 dt 2
1 β c1 (ν1 , ν2 ) − η − 2μ − ≤− A1/2Y (t)2 1+ 2 2 λ1
1.3 Coupled Parabolic Problems: Abstract Models
45
β −κ 1− U(t) −V (t)2 − (δ − β )(Π1 (U(t)) + Π2 (V (t))) 2 0 cˇ c2 (ν1 , ν2 ) + −μ + F c1 (ν1 , ν2 ) + A1/2Y t (θ )2 σ (d θ ) 2 2ηλ1 −h 0 μ + − + 2β μ A1/2Y t (θ )2 d θ +Cη (ν1 , ν2 ), h −h where c1 (ν1 , ν2 ) = ν1−1 + ν2−1 and c2 (ν1 , ν2 ) = ν1−2 + ν2−2 and Cη collects all the appearing constants from (1.3.63) and (1.3.60). This yields d 1 W (t) + β W (t) + Yt (t)2 ≤ b, t > 0, dt 2
(1.3.67)
for some β , b > 0 independent of κ provided that
1 β c1 (ν1 , ν2 ) − η − 2μ − ≥ 0, 2 − β > 0, δ − β ≥ 0, 1+ 2 2 λ1 and
cˇ μ − + 2β μ ≤ 0, − μ + F h 2
c2 (ν1 , ν2 ) c1 (ν1 , ν2 ) + 2ηλ1
≤0
These relations hold with η = 1/4 and with β > 0 small enough if we assume that 2c2 (ν1 , ν2 ) 1 cFˇ c1 (ν1 , ν2 ) + < 2μ < . λ1 4 Thus, under the condition 1 2c2 (ν1 , ν2 ) −1 cFˇ < c1 (ν1 , ν2 ) + 4 λ1 we can find appropriate μ and prove (1.3.67) with positive β , b and μ in (1.3.65) independent of κ. Relation (1.3.67) implies that 1 W (t + 1) + 2
t+1 t
Yt (τ )2 d τ ≤ W (0)e−β t +
b 1 − e−β t , t > 0. β
Therefore, using (1.3.66) we can conclude for every bounded set D in C 1/2 there is t∗ (κ, D) such that the set B in (1.3.62) possesses the property φ (t, D) ⊂ B for all t ≥ t∗ (κ, D). Remark 1.3.27. The function W (t) generates a continuous functional on the space C 1/2 according to the formula W [Y ] =
1 Y (0)2 + A1/2Y (0)2 + κU(0) −V (0)2 2 + Π1 (U(0)) + Π2 (V (0)) + μ W0 [Y ],
46
1 Synchronization of Global Attractors and Individual Trajectories
where Y = (u, v) ∈ C 1/2 and W0 [Y ] =
1 h
0 −h
0
ds s
A1/2Y (θ )2 d θ +
0 −h
σ (ds)
0 s
A1/2Y (θ )2 d θ .
Under the conditions of Theorem 1.3.26 with an appropriate choice of μ from (1.3.67) we have that W [φ (t,Y )] ≡ W [Y t ] ≤ W [Y ]e−β t +
b 1 − e−β t , t > 0, β
for every Y ∈ C 1/2 . This implies that for every ρ > bβ −1 the set B = Y ∈ C 1/2 : W [Y ] ≤ ρ is a forward invariant absorbing set which belongs to the set B given in (1.3.62). Now we consider the compactness and asymptotic properties of the dynamical system φ generated by (1.3.56) in C 1/2 . For this in the space C1/2 we introduce the following H¨older-type subspace # $ Yβ = v ∈ C1/2 : |v|Yβ < ∞ , where 0 < β ≤ 1 and |v|Yβ = max A˜ 1/2+β v(θ ) + θ ∈[−h,0]
A˜ 1/2 [v(θ1 ) − v(θ2 )] . | θ1 − θ2 | β θ1 ,θ2 ∈[−h,0] max
By the Arzel`a–Ascoli Theorem (see, for example, Lemma A.3.5 in the Appendix of Chueshov [40]) Yβ is a Banach space that is compactly embedded in C1/2 for β > 0. We also consider the space Y β = Yβ × Yβ , which is compactly embedded into C 1/2 . Proposition 1.3.28 (Conditional Compactness and Quasi-Stability). Let the hypotheses of Theorem 1.3.26 be in force. Let B be a positively invariant bounded set in C 1/2 for φ . Then, 1. For every t > h the set φ (t, B ) is bounded in C 1/2+β for arbitrary β ∈ (1/2). Moreover,
φ (t, B ) ⊂ {u ∈ C 1/2+β : uC
1/2+β
≤ R0δ } for all t ≥ δ + h,
where ˜ δ 1/2−β R0δ = CB δ −2β + cβ KB (F) with CB = sup{Y C
1/2
˜ = sup{F(Y ˜ ) : Y ∈ B }. : Y ∈ B } and KB (F)
(1.3.68)
1.3 Coupled Parabolic Problems: Abstract Models
47
2. For every t > h the set φ (t, B ) is bounded in Y β for arbitrary β ∈ (1/2). Moreover, for every δ > 0 there exists Rδ such that
φ (t, B ) ⊂ Bβ = {u ∈ Y¯β : |u|Y¯β ≤ Rδ } for all t ≥ δ + h.
(1.3.69)
where Rδ may depend on κ and the size of B . In particular, this means that the system φ is asymptotically compact (see Definition 1.2.4). 3. The mapping φ (t, ·) is Lipschitz from B into Y β . Moreover, for every h < a < b < +∞ there exists a constant MB (a, b) such that |φ (t,U 0 ) − φ (t, U˜ 0 )|Y ≤ MB (a, b)|U 0 − U˜ 0 |C β
1/2
, t ∈ [a, b], U 0 , U˜ 0 ∈ B .
In particular, this means that the system φ is quasi-stable on C 1/2 at any time from the interval [a, b]. Proof. We first note that every solution Y satisfies the equation t
Y (t) = Sκ (t)Y (s) +
s
˜ τ ) d τ , t > s, Sκ (t − τ )F(Y
where Aκ = A + Kκ generates the strongly continuous semigroup Sκ . It follows from Lemma 1.3.19 that for every 1/2 ≤ γ ≤ 1 we have the estimate c1 Y 1/2 e−c2 t ∀Y ∈ X 1/2 , t 2β c1 ≤ β +1/2 Y e−c2 t , ∀Y ∈ X, t > 0 t
Sκ (t)Y 1/2+β ≤ Sκ (t)Y 1/2+β
(1.3.70)
for every 0 ≤ β < 1/2, where c1 > 0, c2 > 0 and c1 , c2 > 0 do not depend on κ ≥ 0. Let β ∈ (0, 1/2) and Y (t) = φ (t,Y 0 ). It follows from (1.3.6) and (1.3.70) that Y (t)1/2+β ≤
C1 t −s
2β
Y (s)1/2 +C2
t s
1 t −τ
1/2+β
˜ τ ) d τ F(Y
for all t > s > 0. Since φ (t,Y 0 ) ∈ B for all t ≥ 0 we have that Y (t)1/2 ≤ CB and |Y t |C¯1/2 ≤ CB for all t ≥ 0 and an appropriate CB < ∞. Thus, Y (t)1/2+β ≤
CB ˜ − s|1/2−β + cβ KB (F)|t (t − s)2β
˜ = sup{F(Y ˜ ) : Y ∈ B }. Indeed, F(Y ˜ τ ) can be for all t > s ≥ 0, where KB (F) τ estimated by (1.3.57) and the fact that Y is bounded in C 1/2 . Taking s = t − δ we obtain the relation in (1.3.68). Now we proof the H¨older property of elements from φ (t, B ). We use the representation
Y (t2 ) −Y (t1 ) = Sκ (t2 − t1 ) − 1 Y (t1 ) +
t2
t1
˜ τ ) d τ , t2 ≥ t1 > 0. Sκ (t2 − τ )F(Y
48
1 Synchronization of Global Attractors and Individual Trajectories
Since Aκ is a positive operator, one can show (see Eq. (4.1.8) in Chueshov [40]) that −1/2
Sκ (t) − 1 ≤ t 1/2 .
Aκ 1/2
Since Aκ Y and A1/2Y define the same topology in X 1/2 we get (1.3.69) by t 1/2+β 2 1/2 Y (t1 ) |t2 − t1 |1/2 Aκ 1 τ ˜ + Aκ Sκ (t2 − τ )F(Y ) d τ β β |t2 − t1 | |t2 − t1 | t1 t2 1 1 1/2+β ˜ ≤cβ Aκ Y (t1 ) + KB (F) dτ β |t2 − t1 | t1 |t2 − τ |1/2 for t2 − t1 > 0. To prove the third part of the statement we use the technique presented above and take into account that F˜ is locally Lipschitz continuous. Proposition 1.3.28 and Theorem 1.3.26 allow us to use the results of Sect. 1.2.3 to prove the following assertion. Theorem 1.3.29 (Global Attractor). Let the hypotheses of Theorem 1.3.26 hold and φ be the system generated by (1.3.56). Then, this system possesses a global attractor Aκ for every κ ≥ 0. This attractor is a bounded set in the space Y β for every β ∈ (1/2, 1) and has a finite fractal dimension. Moreover, Aκ ⊂ {u ∈ C 1/2+β : |u|C
1/2+β
≤ R0 } for every β ∈ [0, 1/2),
where R0 does not depend on κ, and for trajectory Y (t) from the attractor we have the uniform estimate Y t C
t+1 1/2+β
+ t
Y t (τ )2 d τ ≤ R21 , t ∈ R,
where R1 does not depend on κ. The question on asymptotic synchronization (κ → +∞) in parabolic systems with delay term is not so simple in the nondelay case. The main reason is that we do not know how to obtain uniform estimates for the global attractor in the space Y β or in other spaces that are compactly embedded in C 1/2 . Nevertheless, some fact on synchronization of identical coupled delay systems can be proved. Let us assume that ν1 = ν2 = ν > 0 and F˜1 = F˜2 in (1.3.56). Thus, we consider the system ˜ + κ(U −V ) = F˜1 (U t ), U(s) Ut + ν Au = U 0 (s) = U0 (s), s∈[−h,0] (1.3.71) ˜ + κ(V −U) = F˜1 (V t ), V (s) Vt + ν Av = V 0 (s) = V0 (s). s∈[−h,0]
1.3 Coupled Parabolic Problems: Abstract Models
49
Theorem 1.3.30 (Synchronization). Consider equations (1.3.71). Assume the hypotheses of Theorem 1.3.26. In addition, assume that there exist a constant κR > 0 and a Borel measure σ (d θ ) on [−h, 0] such that σ ([−h, 0]) = 1 and F˜1 (u) − F˜1 (v)2 ≤ κR
0 −h
A˜ 1/2 [u(θ ) − v(θ )]2 σ (d θ )
for all u, v ∈ C1/2+β such that |u|C1/2+β ≤ R, |u|C1/2+β < R, where β is the same as in the statement of Theorem 1.3.29. Then there exists κ∗ > 0 such that for every κ ≥ κ∗ we have that lim eγ t U(t) −V (t)1/2 = 0 for some γ = γ (κ) > 0 (1.3.72) t→∞
for every solution (U(t),V (t)) to the problem in (1.3.71). Thus, we observe asymptotic synchronization of each trajectory with exponential speed. For κ ≥ κ∗ we also have that Aκ = {(u, u) : u ∈ A}, where A is the global attractor of the system generated in C1/2 by the problem ˜ = F˜1 (U t ). Ut + ν AU Thus, we have synchronization at the level of global attractors. Proof. We use the same idea as in the proof of dissipativity in Theorem 1.3.26. Let (U,V ) be a solution. Then, Z(t) = U(t) −V (t) satisfies the equation ˜ + 2κZ = F˜1 (U t ) − F˜1 (V t ). Zt + ν AZ
(1.3.73)
Owing to Proposition 1.3.28(1), the solution (U(t),V (t)) possesses the property U(t)21/2+β + V (t)21/2+β ≤ R2 for all t ≥ tˆ, where R does not depend on κ. The calculations below are performed for t ≥ tˆ. Multiplying equation (1.3.73) by Z we obtain that 1d Z(t)2 + ν A˜ 1/2 Z(t)2 + 2κZ(t)2 = (F˜1 (U t ) − F˜1 (V t ), Z(t)) 2 dt κR 0 ˜ 1/2 t ≤ A Z (θ )2 σ (d θ ) + κZ(t)2 . 4κ −h Thus, d κR Z(t)2 + 2ν A˜ 1/2 Z(t)2 + 2κZ(t)2 ≤ dt 2κ
0 −h
A˜ 1/2 Z t (θ )2 σ (d θ ). (1.3.74)
50
1 Synchronization of Global Attractors and Individual Trajectories
Using the multiplier Zt as in the proof of Theorem 1.3.26 we also have
d 1 ˜ 1/2 2 2 ν A Z(t) + κZ(t) = (F˜1 (U t ) − F˜1 (V t ), Zt (t)) Zt (t) + dt 2 κR 0 ˜ 1/2 t 2 A Z (θ )2 σ (d θ ). ≤ Zt (t) + 4 −h 2
This implies that κ d ˜ 1/2 R ν A Z(t)2 + 2κZ(t)2 ≤ dt 2
0 −h
A˜ 1/2 Z t (θ )2 σ (d θ ).
(1.3.75)
Now we consider the function W (t) = μ0 Z(t)2 + ν A˜ 1/2 Z(t)2 + 2κZ(t)2 + μ W0 (t), where μ0 and μ are positive parameters and W0 (t) =
1 h
0 −h
t
ds t+s
A˜ 1/2 Z(θ )2 d θ +
0 −h
σ (ds)
t t+s
A˜ 1/2 Z(θ )2 d θ .
Similar to the proof in Theorem 1.3.26 0 ≤ W0 (t) ≤ 2
t t−h
A˜ 1/2 Z(s)2 ds
and dW0 (t) 1 = 2A˜ 1/2 Z(t)2 − dt h
0 −h
A˜ 1/2 Z t (s)2 ds −
0 −h
A˜ 1/2 Z t (s)2 σ (ds).
It follows from (1.3.74) and (1.3.75) that d W (t) + β W (t) dt
β ≤ (−2ν μ0 + β ν + 2μ ) A˜ 1/2 Z(t)2 − 2κ μ0 − Z(t)2 2 κR μ0 κR 0 ˆ 1/2 t + + −μ + ˜A Z (θ )2 σ (d θ ) 2κ 2 −h μ 0 + − + 2β μ A1/2 Z t (θ )2 d θ . h −h
This yields d W (t) + β W (t) ≤ 0, t > 0, dt
(1.3.76)
1.3 Coupled Parabolic Problems: Abstract Models
51
for some β > 0 provided that −2ν μ0 + β ν + 2μ ≤ 0, μ0 −
β ≥ 0, 2
and −μ +
κR μ 0 κR μ + ≤ 0, − + 2β μ ≤ 0 2κ 2 h
These relations hold with β = min{μ0 , (2h)−1 } and μ = μ0 ν /2 if we have that ν κ κ R R − ≤ 0. − μ0 + 2 2κ 2 This is true when
ν κR ν κR − ≥ 0, − μ0 + ≤ 0. 4 2κ 4 2 Thus, choosing κ ≥ κ∗ = 2κR ν −1 and μ0 = 2κR ν −1 we obtain (1.3.76). This relation implies that W (t) ≤ W (tˆ)e−β (t−tˆ) , t ≥ tˆ. This implies the conclusion in (1.3.72). Now, by the standard method (see the argument in the proof of Theorem 1.3.17), we obtain the conclusion concerning the structure of the attractor Aκ for large κ. As in the purely parabolic case (see Theorem 1.3.16 and Remark 1.3.18) we can also obtain a result on the complete replacement synchronization for the system in (1.3.71). Owing to the symmetry of the equations in (1.3.71) it is obvious that we can choose as a synchronizing variable either v or u.
1.3.7 Synchronization by Means of Finite-Dimensional Coupling One can see from the argument given in Theorem 1.3.17 and Remark 1.3.18 that asymptotic synchronization can be achieved even with a finite-dimensional coupling operator. Indeed, with reference to the system ˜ + κK0 (U −V ) = F˜1 (U), t > 0, in X1 , Ut + AU ˜ + κK0 (V −U) = F˜1 (V ), t > 0, in X1 , Vt + AV the only condition we need is 1/2
˜ w) + 2κ(K0 w, w) ≥ cw2 , ∀ w ∈ X , (Aw, 1 with appropriate c depending on the size of an absorbing ball. If K is a strictly positive operator, then this requirement holds for large intensity parameter κ. However,
52
1 Synchronization of Global Attractors and Individual Trajectories
it is not necessary to assume nondegeneracy of the operator K to guarantee large sκ . For instance, if K0 = PN is the orthoprojector onto Span{ek : k = 1, 2, . . . , N} for the ˜ then eigenelements ek of A, ˜ w)+2κ(K0 w, w) ≥ (Aw,
∞
N
∑ (λk + 2κ)|(w, ek )|2 + ∑
k=1
λk |(w, ek )|2
k=N+1
N
≥(λ1 + 2κ) ∑ |(w, ek )|2 + λN+1 k=1
∞
∑
|(w, ek )|2
k=N+1
≥ min{λ1 + 2κ, λN+1 }w . 2
Thus, if 2κ ≥ λN+1 − λ1 , then we can guarantee a large lower bound by an appropriate choice of N. This fact admits some generalization, which is based on an assumption that K0 is a “good” approximation (in some sense) of a strictly positive operator. Let V ⊂ X1 be separable Hilbert spaces and K, L be linear operators from V into X1 . Assume that L is strictly positive on V , i.e., there exists aL > 0 such that (Lu, u) ≥ aL u2 , u ∈ V. We introduce the value e(L, K) := eVX1 (L, K) = sup{Lu − Ku : uV ≤ 1}. In the case when L is the identity operator, this value is known (see Aubin [6]) as the global approximation error in X1 arising in the approximation of elements v ∈ V by elements Kv. 1/2 Now we take V = X1 . It follows from the definition that 1/2 Lu − Ku ≤ e(L, K)A˜ 1/2 u, u ∈ X1 .
In this case we obtain ˜ w) + 2κ(K0 w, w) =A˜ 1/2 w2 + 2κ(Lw, w) + 2κ((K0 − L)w, w) (Aw, ≥A˜ 1/2 w2 + 2aL κw2 − 2κe(L, K0 )wA˜ 1/2 w ≥ 2aL κ − κ 2 e2 (L, K0 ) w2 Thus, according to Theorem 1.3.17 and Remark 1.3.18, under the condition 2aL κ − κ 2 e2 (L, K0 ) ≥ s∗ for an appropriate s∗ we have asymptotic synchronization. The latter inequality is valid, when e2 (L, K0 ) ≤ aL κ −1 and κ ≥ s∗ a−1 L , for instance. So, e(L, K0 ) should be small and κ large. We −1/2 note that in the case L = id and K0 = PN we have e(L, K0 ) = λN+1 and aL = 1. Therefore, the inequalities above can be realized for some choice κ and N.
1.3 Coupled Parabolic Problems: Abstract Models
53
Now we describe another situation, where synchronization can be achieved with finite-dimensional coupling. For this we use interpolation operators related to a finite family L of linear continuous functionals {l j : j = 1, . . . , N} on X1σ for some 0 < σ < 1/2. Following Chueshov [36] (see also Chueshov/Lasiecka [46, 47], Chueshov/Schmalfuss [56]), we introduce the notion of completeness defect of a set L of linear functionals on X1σ with respect to X1 by the formula σ εL := εL (X1σ , X1 ) = sup w : w ∈ X1σ , l(w) = 0, l ∈ L , w σ ≤ 1 . (1.3.77) σ ≥ ε σ provided that SpanL ⊂ SpanL and ε σ = 0 if and only It is clear that εL 1 2 L2 L 1 if the class of functionals L is complete in X1σ , i.e., the property l(w) = 0 for all l ∈ L implies w = 0. For further properties of the completeness defect we refer the reader to Chueshov [36, Chapter 5]. We define the class RL of so-called interpolation operators, which are related to the set of functionals given. We say that an operator K belongs to RL if it has the form N
Kv =
∑ l j (v)ψ j ,
∀ v ∈ X1σ ,
(1.3.78)
j=1
1/2
where {ψ j } is an arbitrary finite set of elements from X1 . An operator K ∈ RL is called Lagrange interpolation operator, if it has the form (1.3.78) with {ψ j } such that lk (ψ j ) = δk j . In the case of Lagrange operators, we have that l j (u − Ku) = 0 and thus (1.3.77) yields σ u − Ku ≤ εL uσ , v ∈ X1σ .
Hence, the smallness of the completeness defect is an important requirement from the point of view of synchronization. We refer the reader to Chueshov [36, Chapter 5] for the properties of this characteristic and for a description of sets of functionals with small εL . The simplest example are modes. In this case, L = {l j (u) = (u, e j ) : ˜ The operator PL j = 1, 2, . . . , N}, where {ek } are eigenfunctions of the operator A. given by N
PL v =
∑ (e j , v)e j ,
∀ v ∈ X1σ ,
j=1
−σ is the Lagrange interpolation operator. Moreover, εL = e(id, PL ) = λN+1 . Thus, the completeness defect (and the global approximation error) can be made small after an appropriate choice of N. This shows that the situation with K = PN can be included in the general framework.
Now, with reference to these observations, we consider the following system ˜ + κKL (U −V ) = F˜1 (U), t > 0, in X1 , Ut + AU ˜ + κKL (V −U) = F˜1 (V ), t > 0, in X1 , Vt + AV
(1.3.79)
54
1 Synchronization of Global Attractors and Individual Trajectories
coupled with the help of a finite-dimensional interpolation operator of the form N
KL v =
∑ l j (v)ψ j ,
∀ v ∈ X1σ ,
j=1
where L = {l j } is a set of linear functionals on X1σ with some σ < 1/2 and {ψ j } are elements from X1 . Below, we assume that the global approximation error eσL := eXX1σ (id, KL ) = sup{u − KL u : uσ ≤ 1} 1
is small enough. The following assertion is a direct consequence of Theorem 1.3.12. Lemma 1.3.31. We consider the system (1.3.79). We assume that A˜ and F˜1 satisfy the conditions of Theorem 1.3.12. Then, for every κ ≥ 0 (1.3.79) has a global attractor Aκ that possesses the property Aκ ⊂ D0 ⊂ {Y = (u, v) : A˜ 1/2 u2 + A˜ 1/2 v2 + κu − v2 ≤ R2 }, where R is a constant depending on an upper bound for κ · eσL . Proof. We apply Theorem 1.3.12 with 1 −1 1 −1 id − KL . K+ = κ id, K∗ = −κ −1 1 −1 1 It is clear that (1.3.28) holds with α = σ and k∗ , which is any number greater than √ 2κeσL . This lemma makes it possible to prove the following result on synchronization. Theorem 1.3.32. With reference to the system (1.3.79) we assume that A˜ and F˜1 satisfy the conditions of Theorem 1.3.12. In particular, we assume (1.3.17). Then there exist κ∗ > 0 and ε∗ (κ) such that every solution (U(t),V (t)) to the problem in (1.3.79) is asymptotically synchronized in the sense that lim eγ t U(t) −V (t)1/2 = 0 for some γ = γ (κ) > 0 t→∞
in the case when κ ≥ κ∗ and eσL ≤ ε∗ (κ). For this choice of parameters we also have that Aκ = {(u, u) : u ∈ A}, where A is the global attractor of the system gen1/2 ˜ = F˜1 (U). Thus, we have synchronization at erated in X1 by the problem Ut + AU the level of global attractors. Proof. Under the condition κeσL ≤ k∗ it follows from Lemma 1.3.31 that for every solution (U(t),V (t)) we have that A1/2U(t)2 + A1/2V (t)2 ≤ R2 for all t ≥ t∗ .
1.4 Reaction–Diffusion Systems
55
Therefore, W (t) = U(t) −V (t) satisfies the relation 1d W (t)2 + A˜ 1/2W (t)2 + 2κW (t)2 2 dt ≤ κeσL W (t)2 + 2L(R)W (t)A˜ 1/2W (t) for t ≥ t∗ . Thus,
d W (t)2 + 2γ∗ W (t)2 ≤ 0 dt
for t ≥ t∗ , where 2γ∗ = λ1 + 4κ − κeσL − 4L(R)2 . Now we can fix k∗ and choose κ such that λ1 + 4κ ≥ k∗ + 4L(R)2 . Now we can choose the parameter eσL small enough such that κeσL ≤ k∗ . Further argument is standard (see, for example, the proof of Theorem 1.3.6).
1.4 Reaction–Diffusion Systems In this section we consider synchronization phenomena for several reaction– diffusion PDE models. We apply the theory developed in the previous section.
1.4.1 Coupling Inside Domains One of the standard applications of the results presented above is a system of two semilinear parabolic equations. In a bounded domain O ⊂ Rd with a sufficiently smooth boundary we consider the following problem ut − ν1 Δ u + κ(u − v) = f˜1 (u, ∇u), vt − ν2 Δ v + κ(v − u) = f˜2 (v, ∇v),
(1.4.1)
endowed with boundary and initial conditions u∂ O = v∂ O = 0, ut=0 = u0 , vt=0 = v0 . Here, f˜i (u, ξ ) are functions on R1+d , which are specified below, and νi > 0 and κ ≥ 0 are parameters. We consider (1.4.1) in the space X = X1 × X2 , where X1 = L2 (O) and suppose that A˜ = −Δ on the domain ˜ = H 2 (O) ∩ H01 (O) := u ∈ L2 (O) : ∂x x u ∈ L2 (O), u = 0 , D(A) i j ∂O
56
1 Synchronization of Global Attractors and Individual Trajectories
where we use the notation H s (O) for the Sobolev–Slobodeckij space of order s > 0 and H0s (O) denotes the closure of C0∞ (O) in H s (O). It is well-known that D(A˜ 1/2 ) = H01 (O), see, for example, Henry [92]. The nonlinear mapping F˜i is defined by the relation [F˜i (u)](x) = f˜i (u(x), ∇u(x)), u ∈ H01 (O), i = 1, 2. The mapping F˜i satisfies hypotheses of Theorems 1.3.11 and 1.3.12 if we assume that f˜i : R1+d → R possesses a polynomially bounded main part, i.e., f˜i (u, ξ ) = f0i (u) + fˇi (u, ξ ), i = 1, 2,
(1.4.2)
where fˇi : R1+d → R is Lipschitz, i.e., there exists C > 0 such that | fˇi (u, ξ ) − fˇi (u∗ , ξ ∗ )| ≤ C(|u − u∗ | + |ξ − ξ ∗ |), and f0i : R1 → R satisfies the inequality | f0i (u) − f0i (v)| ≤ C(1 + |u|r−1 + |v|r−1 )|u − v|,
(1.4.3)
where r ∈ [1, +∞) when d ≤ 2 and r ≤ 2(d − 2)−1 for d ≥ 3. Moreover, lim inf |s|→∞
− f0i (s) = ∞. s
(1.4.4)
Under the conditions above, u → f0i (u(·)) has a potential Πi (u) on D(A˜ 1/2 ), which is given by
u(x) i Πi (u) = − f0 (ξ ) d ξ dx, [Πi (u)](x) = − f0i (u(x)) O
0
for u ∈ H01 (O). We also assume that f0i (u)u − δ
u 0
f0i (ξ )d ξ ≤ μ u2 +C, u ∈ R,
(1.4.5)
for some δ > 0, where μ < λ1 νi , where λ1 is the first eigenvalue of the operator −Δ with the Dirichlet boundary conditions. This condition is obviously true if we assume that f0i (u)u ≤ −c0 |u|r+1 + c1 , u ∈ R, for some c0 > 0 and c1 ≥ 0, where r is the same as in (1.4.3) (another possibility to satisfy (1.4.5) is discussed in Remark 1.3.13). We use the following continuous embedding (see Adams [1]): d ≤ 2, ∀ p < ∞; H 1 (O) ⊂ L p (O) if d > 2, p ≤ 2d(d − 2)−1 ,
1.4 Reaction–Diffusion Systems
57
to show that all conditions of Theorems 1.3.17 and 1.3.20 are satisfied with this choice of operators. In particular, (1.4.5) gives (1.3.26). We also note that if fˇi ≡ 0, then we do not need to impose the super-linearity condition in (1.4.4) to satisfy (1.3.27) because the coupling form is symmetric for (1.4.1). In this case, (1.4.4) can be changed into lim sup|s|→∞ {s−1 f0i (s)}¡λ1 νi . After these preliminaries, we can state the following assertion. Theorem 1.4.1. Under the conditions above, the problem in (1.4.1) generates a dynamical system φ κ in the space X 1/2 = H01 (O) × H01 (O) for each κ ≥ 0. Moreover, the following assertions hold. • There exists a global attractor Aκ of finite fractal dimension. • The attractors Aκ are upper semicontinuous for every κ ∈ [0, +∞). • In the limit κ → +∞, we have that Aκ → A∞ in the sense that dX 1/2 {Aκ , A∞ } → 0 as κ → +∞, where A∞ = {(w, w) : w ∈ A∗ } and A∗ is the global attractor of the system generated by wt −
ν1 + ν2 1 Δ w = ( f˜1 (w, ∇w) + f˜2 (w, ∇w)), w|∂ O = 0. 2 2
• In the case when ν1 = ν2 = ν and f˜1 (w, ξ ) = f˜2 (w, ξ ) there exists κ∗ > 0 such that Aκ = {(w, w) : w ∈ A∗ } for all κ ≥ κ∗ , where A∗ is the global attractor of the system generated by wt − νΔ w = f˜(w, ∇w), w|∂ O = 0. Remark 1.4.2. The main result in Theorem 1.4.1 means synchronization at the level of the attractors. We also note that synchronization phenomena in this problem were considered in Carvalho/Rodrigues/Dlotko [25] in the class of smooth solutions in the case when ν1 = ν2 and fˇi (u, ξ ) ≡ 0. Our hypotheses in Theorem 1.4.1 are not optimal and can be relaxed in several directions. For instance, we can include delay terms and deal with the reaction terms of the form f˜i (u(t)) = fˆ0i (u(t)) + fˇi (u(t − h)). In this case, we can apply the theory developed in Sect. 1.3.6.
1.4.2 Quasi-Stationary Sine-Gordon Model We consider a quasi-stationary version of coupled sine-Gordon equations that describes the dynamics of Josephson junctions driven by a source of current (for more
58
1 Synchronization of Global Attractors and Individual Trajectories
details, see Sect. 1.6.6). The corresponding model is also a PDE version of identical active rotators given in (1.1.1). ut − Δ u + β u + κ(u − v) = −λ sin u + g(x), vt − Δ v + β v + κ(v − u) = −λ sin v + g(x),
(1.4.6)
in a smooth domain O ⊂ Rd and (for the sake of definiteness) equipped with Neumann boundary conditions
∂ u ∂ v = 0, = 0. ∂ n ∂O ∂ n ∂O Let g ∈ L2 (O), λ , κ > 0 and β > 0 It is easy to see that in the case when β > 0 the general theory developed above can be applied. In the case β = 0, the situation is more complicated because the corresponding operator A˜ is −Δ on the domain % ∂u 2 ˜ D(A) = u ∈ H (O) : = 0 on ∂ O ∂n and thus λ0 = 0 is an eigenvalue, i.e., A˜ becomes degenerate. Therefore, we concentrate on the case β = 0. Since the nonlinearities in (1.4.6) are globally Lipschitz, Theorem 1.3.3 is applied here and the above problem has a unique mild solution. It is convenient to introduce new variables w=
u+v u−v and z = . 2 2
(1.4.7)
In these variables, the problem in (1.4.6) with β = 0 can be written in the form wt − Δ w + 2κw + λ sin w cos z = 0, zt − Δ z + λ cos w sin z = g(x), ∂ w ∂ z = 0, = 0. ∂ n ∂O ∂ n ∂O
(1.4.8)
The main linear part in the first equation of (1.4.8) is not degenerate when κ > 0. Thus, using the multipliers w and wt we obtain that 1d w2 + ∇w2 + 2κw2 ≤ λ w2 2 dt and d ∇w2 + 2κw2 ≤ λ 2 w2 . dt If κ ≥ 1 + λ , then
2κw2 ≥ κw2 + (1 + λ )w2 ;
1.4 Reaction–Diffusion Systems
59
hence, these relations imply that μ d ∇w2 + 2κw2 + w2 + μ ∇w2 + κw2 + (μ − λ 2 )w2 ≤ 0. dt 2 Thus, for μ = λ 2 we can choose κ∗ = κ∗ (λ ) > 0 and γ∗ > 0 such that w(t)2H 1 (O) ≤ Ce−2γ∗ t w(0)2H 1 (O) , t ≥ 0, for all κ ≥ κ∗ (λ ). Therefore, we observe the asymptotic synchronization in system (1.4.6). Moreover, it follows from the reduction principle14 (see Chueshov [40, Section 2.3.3]) that the limiting (synchronized) dynamics is determined by the single equation ∂ z zt − Δ z + λ sin z = g(x), = 0. ∂ n ∂O Another coupled sine-Gordon system of interest is the following: ut − Δ u + a sin u = −λ sin(u − v) + g1 (x), vt − Δ v + a sin v = −λ sin(v − u) + g2 (x),
(1.4.9)
u|∂ O = 0, v|∂ O = 0. This is a PDE version of the coupled active rotators considered in Example 1.3.7. Formally, this model is outside the scope of the theory developed above. However, using the ideas presented we can answer some questions concerning synchronized regimes. The ODE version of this model was studied in Rodrigues/Alberto/Bretas [141]. We assume that gi ∈ L2 (O), λ , a ≥ 0. If a = 0, then in variables w and z given by (1.4.7) we have equations wt − Δ w + λ sin 2w = g(x), zt − Δ z = h(x), w|∂ O = 0, z|∂ O = 0,
(1.4.10)
where g(x) = (g1 (x) − g2 (x))/2 and h(x) = (g1 (x) + g2 (x))/2. One can see that ∇(z(t) − z∗ )2 ≤ C∇(z(0) − z∗ )2 e−ω t , t ≥ 0, for some C, ω > 0, where z∗ ∈ H 1 (O) solves the Dirichlet problem −Δ z = h(x), z|∂ O = 0.
14 This principle makes it possible to reduce the studies of the long-time dynamics of the system on a globally attracting invariant set that is usually “smaller” than the complete phase space.
60
1 Synchronization of Global Attractors and Individual Trajectories
The dynamical system generated by the first equation of (1.4.10) equipped with the Dirichlet boundary conditions possesses a global attractor A in the space W = H01 (O). This follows from an uncoupled version of Theorem 1.3.12. A similar effect can be seen for the problem in (1.4.9) with positive sufficiently small a. Hence, % ψ z(t) − z∗ w(t) u(t) − z∗ = + : ψ ∈ A as t → +∞ −→ v(t) − z∗ z(t) − z∗ −ψ −w(t) in the space W . Thus, we observe some kind of shifted asymptotic anti-phase synchronization. As in Remark 1.3.22, we can consider a system of N coupled sine-Gordon equations of the form ut1 − νΔ u1 + κ(u1 − u2 ) = −λ1 sin(u1 + α1 ) + g1 (x), utj − νΔ u j − κ(u j+1 − 2u j + u j−1 ) = −λ j sin(u j + α j ) + g j (x), j = 2, . . . , N − 1,
(1.4.11)
utN − νΔ uN + κ(uN − uN−1 ) = −λN sin(uN + αN ) + gN (x), on a smooth bounded domain O ⊂ Rd with the Neumann boundary conditions
∂ u j = 0, ∂ n ∂O and initial data
j = 1, . . . , N.
u j (0) = u0j , utj (0) = u1j , j = 1, . . . , N.
Here, U = (u1 , . . . , uN ) is an unknown function, ν > 0, κ ≥ 0, λ j > 0, and α j are real parameters and g j are given functions. The corresponding ODE model was discussed in Qian/Zhu/Qin [133].
1.4.3 A Model from Chemical Kinetics: Cross-Diffusion In this section we consider a reaction–diffusion system with cross-diffusion terms that arise in many situations (see, for example, Hansen/McDonald [90] and Tyrrell/Harris [164]). As a model we consider a system of two coupled PDEs of the form ut − d11 Δ u − d12 Δ v − λ u = f˜1 (u) + g1 (x), vt − d21 Δ u − d22 Δ v − λ v = f˜2 (v) + g2 (x) endowed with boundary and initial conditions ∂ u ∂ v = = 0, ut=0 = u0 , vt=0 = v0 . ∂ n ∂O ∂ n ∂O
(1.4.12)
(1.4.13)
1.4 Reaction–Diffusion Systems
61
We analyze the situation when the diffusion coefficients have the form d11 = ν1 + κ, d22 = ν2 + κ, d12 = d21 = −κ, with νi > 0 and κ ≥ 0. Thus, we consider the case when the cross-diffusion coefficients d12 and d21 are negative (this effect can be observed for some classes of physicochemical systems (see the survey Vanag/Epstein [165]). Generation of a semiflow and results on long-time dynamics and attractors can be found in Babin/Vishik [7]. For this we assume that • f˜j ∈ C1 (R), f˜i (0) = 0, and − s f˜i (s) ≥ μ0 |s|r+1 , | f˜i (s)| ≤ μ1 |s|r +C with 1 < r ≤ 5.
(1.4.14)
• sups∈R (− f˜i (s)) < ∞. • g1 , g2 ∈ L2 (O). Under these conditions (see Theorem 1.5.1 in Babin/Vishik [7]) The problem in (1.4.12) generates a dynamical system φ in the space X 1/2 = H 1 (O) × H 1 (O). This system possesses an absorbing set in % ∂ u ∂ v 1 2 2 X = (u, v) ∈ H (O) × H (O) : = =0 . ∂ n ∂O ∂ n ∂O This implies that φ possesses a global attractor for each κ > 0 and λ ≥ 0 (see Babin/Vishik [7, Theorem 15.1]). To draw the conclusion on synchronization we need uniform in κ bounds for the attractor. We first multiply equations in (1.4.12) by u and v in L2 (O). This gives 1d 2 u + v2 + ν1 ∇u2 + ν2 ∇v2 + κ∇(u − v)2 2 dt μ0 |u|r+1 + |v|r+1 dx ≤ Cg,λ . + 2 O
(1.4.15)
Using the multipliers −Δ u and −Δ v we obtain that 1d ∇u2 + ∇v2 + ν1 Δ u2 + ν2 Δ v2 2 dt f˜ (u)|∇u|2 + f˜ (v)|∇v|2 dx + κΔ (u − v)2 −
O
= λ ∇u2 + ∇v2 − (g1 , Δ u) − (g2 , Δ v) .
This implies that ν1 1d ν2 ∇u2 + ∇v2 + Δ u2 + Δ v2 2 dt 2 2 ≤ (λ + c f˜) ∇u2 + ∇v2 +Cg .
(1.4.16)
62
1 Synchronization of Global Attractors and Individual Trajectories
The next multipliers are ut and vt . They give us 1d ν1 ∇u2 + ν2 ∇v2 + κ∇(u − v)2 + Π1 (u) + Π2 (v) 2 dt + ut 2 + vt 2 ≤ c0 (1 + λ ) u2 + v2 +Cg .
(1.4.17)
We note that from the first relation in (1.4.14) we have that
Πi (s) := −
s 0
μ0 |s|r+1 , ∀ s ∈ R. f˜i (ξ )d ξ ≥ r+1
Relations (1.4.15) and (1.4.16) imply that 1d η [u2 + v2 ] + ∇u2 + ∇v2 2 dt η μ0 r+1
|u| + |v|r+1 dx + (η − c f ,λ ) ν1 ∇u2 + ν2 ∇v2 + 2 O ν1 ν2 2 + Δ u + Δ v2 + 2κΔ (u − v)2 ≤ Cg,λ ,η 2 2 for every η > 0. We also know that 1 1 (u2 + v2 ) + 2 2
O
|u|r+1 + |v|r+1 dx ≤ c + |u|r+1 + |v|r+1 dx. O
This implies for correctly chosen constants C1 , C2 , η and γ that we can apply a Gronwall lemma argument (Theorem 1.2.24) such that u(t)2H 1 (O) + v(t)2H 1 (O) ≤ u(0)2H 1 (O) + v(0)2H 1 (O) e−γ t +C2 . Integrating the above differential inequality between t and t + 1, we obtain u(t)2H 1 (O) + v(t)2H 1 (O)
t+1 t+1 |u|r+1 + |v|r+1 d τ + κ + Δ (u − v)2 d τ Δ u2 + Δ v2 + O t t ≤ C1 u(0)2H 1 (O) + v(0)2H 1 (O) e−γ t + C˜2 , (1.4.18) where C1 , C˜2 and γ do not depend on κ. Using (1.4.15) and (1.4.17) in a similar way, we obtain u(t)2H 1 (O) + v(t)2H 1 (O) +
O
|u|r+1 + |v|r+1 dx t+1
ut 2 + vt 2 d τ t 2 2 ≤ C1 u(0)H 1 (O) + v(0)H 1 (O) + κ∇(u(0) − v(0))2 e−γ t +C2 . + κ∇(u(t) − v(t))2 +
(1.4.19)
1.4 Reaction–Diffusion Systems
63
In particular, (1.4.18) and (1.4.19) imply that for any trajectory (u(t), v(t)) from the attractor we have the following uniform bound u(t)2H 1 (O) + v(t)2H 1 (O) + κ∇(u(t) − v(t))2 +
t+1 t
ut 2 + vt 2 + Δ u2 + Δ v2 + κΔ (u − v)2 d τ ≤ C
for all t ∈ R, where C does not depend on κ. This observation allows us to prove the following assertion. Theorem 1.4.3 (Synchronization). Under the conditions above, the global attractor Aκ of the system φ generated in the space X 1/2 = H 1 (O) × H 1 (O) by the problem (1.4.12) possesses the asymptotic synchronization property dX 1/2 {Aκ , A∞ } → 0 when κ → +∞, where A∞ = {(w, w) : w ∈ A∗ } and A∗ is the global attractor of the system generated by wt −
ν1 + ν2 ∂ v 1 1 Δ w = λ w + ( f˜1 (w) + f˜2 (w)) = (g1 + g2 ), = 0. 2 2 2 ∂ n ∂O
If the subsystems in (1.4.12) are identical, then there exists κ∗ > 0 such that Aκ = A∞ for all κ ≥ κ∗ . Applying the ideas of the proof for Theorem 1.3.20 we obtain the conclusion.
1.4.4 Coupling in the Transmission of Nerve Impulses: Hodgkin–Huxley Model We consider two identical nerve impulse equations, coupled only through the electric potential of each cell. The type of coupling studied corresponds to an electrical synapse, where the ionic channels do not intervene directly. More precisely, we consider the two coupled identical systems of PDE proposed by Hodgkin/Huxley [94] to describe the mechanism of nerve impulse transmission: ut − d0 ∂x2 u + κ(u − v) + g(u, η1 , η2 , η3 ) = 0,
t > 0,
(1.4.20a)
j = 1, 2, 3
(1.4.20b)
vt − d0 ∂x2 v + κ(v − u) + g(v, ψ1 , ψ2 , ψ3 ) = 0,
(1.4.20c)
η jt − d j ∂x2 η j + k j (u) · (η j − h j (u)) =
0,
and
ψ jt − d j ∂x2 ψ j + k j (v) · (ψ j − h j (v)) =
0,
j = 1, 2, 3
(1.4.20d)
64
1 Synchronization of Global Attractors and Individual Trajectories
for x ∈ (0, L). The equations are equipped with the Neumann boundary conditions15
∂x u|x=0 = ∂x u|x=L = d j · ∂x η j |x=0 = d j · ∂x η j |x=L = 0, and
∂x v|x=0 = ∂x v|x=L = d j · ∂x ψ j |x=0 = d j · ∂x ψ j |x=L = 0. We equip the equations with initial data u|t=0 = u0 (x), v|t=0 = v0 (x),
η j |t=0 = η j0 (x), ψ j |t=0 = ψ j0 (x).
(1.4.20e)
Here, d0 > 0, d j ≥ 0 and g(u, η1 , η2 , η3 ) = −γ1 η13 η2 (δ1 − u) − γ2 η34 (δ2 − u) − γ3 (δ3 − u), where γ j > 0 and δ1 > δ3 > 0 > δ2 . Furthermore, we assume that k j (u) and h j (u) are given C1 functions satisfying k j (u) > 0 and 0 < h j (u) < 1, j = 1, 2, 3. In this model, u and v represent the electric potentials in two cells, whereas η j and ψ j represent the corresponding chemical concentrations that may range between 0 and 1. The uncoupled version (κ = 0) of the system in (1.4.20) is a set of two standard identical versions of the Hodgkin–Huxley equations. Thus, in this uncoupled case we can apply the result available from many sources (see, for example, Hassard/Kazarinoff/Wan [91], Henry [92], Smoller [156], Temam [161] and the literature quoted there). The numerical simulation presented in Hassard/Kazarinoff/Wan [91] shows that the long-time behavior of solutions can be rather complicated. We also note that in the ODE case (d j = 0 for for all j = 0, 1, 2, 3) it has been proved by Labouriau and Rodrigues [102] that for large enough positive values of the coupling constant κ, all the solutions asymptotically synchronize. We now describe some properties of solutions to (1.4.20). Below, we denote by U0 (x) = (u0 (x), η10 (x), η20 (x), η30 (x)) and V0 (x) = (v0 (x), ψ10 (x), ψ20 (x), ψ30 (x)) the set of initial data and by U(t) = (u(x,t), η1 (x,t), η2 (x,t), η3 (x,t)) and V (t) = (v(x,t), ψ1 (x,t), ψ2 (x,t), ψ3 (x,t)) the corresponding solutions to (1.4.20). We start with the following result on invariance.
We do not impose boundary conditions for the function η j (x,t) and ψ j (x,t) when the corresponding diffusion coefficients d j vanish for j = 1, 2, 3.
15
1.4 Reaction–Diffusion Systems
65
Proposition 1.4.4. Let D = {U := (u, ξ1 , ξ2 , ξ3 ) : δ2 ≤ u ≤ δ1 , 0 ≤ ξ j ≤ 1, j = 1, 2, 3} ⊂ R4 . Assume that initial data (1.4.20e) possess the property U0 (x) ∈ D and V0 (x) ∈ D for almost every x ∈ [0, L], Then the set D2 := D × D ⊂ R8 is a forward invariant domain for (1.4.20). This means that Y (t) = (U(t),V (t)) ∈ D2 for x ∈ [0, L] and for all t > 0 for which this solution Y exists. Proof. In the uncoupled case κ = 0, this result is reported in Smoller [156, p. 208]. The idea of the proof is based on checking that the vector field defined by the reaction terms point strictly into the corresponding domain. In the general case (κ > 0), this reaction vector field is perturbed by the term κK(u, v) = (κ(u − v), 0, 0, 0, κ(v − u), 0, 0, 0). One can see that for positive κ, this field κK(u, v) does not change the directional property of the field with κ = 0. Indeed, −κ(δ2 − v) ≥ 0,
−κ(δ1 − v) ≤ 0
for δ2 ≤ v ≤ δ1 and similar for the equation for v when we exchange the role of u. This implies the invariance of D2 for every κ > 0. Let H0 := [L2 (0, L)]8 be endowed with the standard product norm and H0 (D2 ) = {Y (x) ∈ H0 : Y (x) ∈ D2 , for almost all x ∈ (0, L)}. We also use the notations H1 = [V1 ]8 and H2 = [V2 ]8 , where V1 = H 1 (0, L) and V2 = {H 2 (0, L) : ∂x u|x=0,x=L = 0} Here and below H i (0, L) is the Sobolev–Slobodeckij space of order i on (0, L). We denote by · i the norm in H i (0, L): u2i = ∂xi u2 + u2 =
L 0
(|∂xi u(x)|2 + |u(x)|2 ) dx,
i = 1, 2, . . . ,
and by · and (·, ·) the norm and the inner product in L2 (0, L): u2 =
L 0
|u(x)|2 dx
L
and
(u, v) =
u(x)v(x) dx. 0
66
1 Synchronization of Global Attractors and Individual Trajectories
Theorem 1.4.5 (Well-Posedness). Let d j > 0 for all j = 1, 2, 3. Then, for every Y0 = (U0 ,V0 ) ∈ H0 (D2 ) the problem in (1.4.20) has a unique solution Y (t) = (U(t),V (t)) ∈ H0 (D2 ) for all t ≥ 0. This solution possesses the property Y ∈ C([0, T ], H0 (D2 )) ∩ L2 (0, T, H1 ) for any interval [0, T ]. If Y0 ∈ H0 (D2 ) ∩ H1 we additionally have Y ∈ C([0, T ], H0 (D2 ) ∩ H1 ) ∩ L2 (0, T, H2 ).
(1.4.21)
Thus, we can define the dynamical system φ in the space H0 (D2 ) ∩ H1 according to the formula φ (t,Y0 ) = Y (t), where Y (t) = (U(t),V (t)) is the solution to the problem in (1.4.20) with the initial data Y0 = (U0 ,V0 ). For the case d1 = d2 = d3 = 0, the corresponding dynamical system can be defined in the space V1 = H1 ∩ H0 (D2 ). In this case, for any interval [0, T ], we have
φ (t,Y0 ) = Y (t) ∈ C([0, T ], V1 ) ∩ L2 (0, T,V2 × (H 1 (0, L))3 ×V2 × (H 1 (0, L))3 ) (1.4.22) provided that Y0 ∈ V1 . Proof. The argument for the first statement is basically the same as in Henry [92] and Temam [161]. We do not repeat it here. The second statement can be easily obtained by the application of general methods presented in Henry [92]. Our main result in this section is the following assertion. Theorem 1.4.6. Let the situation of Theorem 1.4.5 be in force. Let
β j (A j + B j ) 2k∗j j=1 3
κ ∗ = ( δ1 − δ2 ) · ∑
(25)
with β1 = 3γ1 , β2 = γ1 , β3 = 4γ2 and A j = max{|∂u k j (u)| : δ2 ≤ u ≤ δ1 }, B j = max{|∂u (k j h j )(u)| : δ2 ≤ u ≤ δ1 }, k∗j = min{k j (u) : δ2 ≤ u ≤ δ1 }. Then, for every κ > κ∗ − γ3 /2, we observe asymptotic synchronization in System (1.4.20) in the sense that ! " lim eqt
t→∞
3
u(t) − v(t) 2 + ∑ η j (t) − ψ j (t) 2
=0
j=1
for some q > 0 and for every solution Y (t) = (U(t),V (t)) lying in D2 .
(1.4.23)
1.4 Reaction–Diffusion Systems
67
Theorem 1.4.6 means that the synchronization can be achieved by large coupling, which involves the electric potentials u(x,t) and v(x,t) only. Proof. Let Y (t) = (U(t),V (t)) be a solution to the problem in (1.4.20), possessing either property (1.4.21) with d j > 0 or property (1.4.22) with d j = 0, j = 1, 2, 3. It is clear that G(U,V ) :=g(u, η1 , η2 , η3 ) − g(v, ψ1 , ψ2 , ψ3 ) =(γ1 η13 η2 + γ2 η34 + γ3 )(u − v) + h(U,V ), where h(U,V ) = −γ1 (η13 η2 − ψ13 ψ2 )(δ1 − v) − γ2 (η34 − ψ34 )(δ2 − v). For the last term, we notice that
η34 − ψ34 = (η32 + ψ32 )(η3 + ψ3 )|η3 − ψ3 | ≤ 4|η3 − ψ3 |. On the other hand, the first bracket can be estimated by |η13 − ψ13 |η2 + ψ13 |η2 − ψ2 | ≤ 3|η1 − ψ1 | + |η2 − ψ2 |. Since U,V ∈ D, it is clear that |h(U,V )| ≤
3
∑
a j |η j − ψ j |,
j=1
where a j = (δ1 − δ2 ) · β j . Let w = u − v and ξ j = η j − ψ j , j = 1, 2, 3. From (1.4.20a) and (1.4.20c) we have
∂t w − d0 ∂x2 w + 2κw + G(U,V ) = 0.
(1.4.24)
Multiplying (1.4.24) by w ∈ L2 (0, L), we can easily find that 1 d w2 + d0 ∂x w2 + 2κw2 + γ3 w2 ≤ 2 dt
3
∑ a j ξ j · w.
(1.4.25)
j=1
From (1.4.20b) and (1.4.20d) we also obtain 1 d ξ j 2 + d j ∂x ξ j 2 + k∗j ξ j 2 ≤ (A j + B j )ξ j · w. 2 dt So, for any 0 < ε < 1 and for any θ j > 0, j = 1, 2, 3, we have 1d 2 dt
3
w2 + ∑ θ j ξ j 2 + d0 ∂x w2 + (γ3 + 2κ)w2 + ε j=1
3
∑ k∗j θ j ξ j 2
j+1
68
1 Synchronization of Global Attractors and Individual Trajectories 3 ≤ − ∑ (1 − ε )k∗j θ j ξ j 2 − [a j + θ j (A j + B j )]ξ j · w j=1
≤
[a j + θ j (A j + B j )]2 · w2 . 4(1 − ε )θ j k∗j j=1 3
∑
Consequently, (1.4.26) gives 3 1 d w2 + ∑ θ j ξ j 2 + ω (ε , θ )w2 + ε 2 dt j=1
(1.4.26)
3
∑ k∗j θ j ξ j 2 ≤ 0.
(1.4.27)
j+1
for any 0 < ε < 1, θ j > 0, where [a j + θ j (A j + B j )]2 . 4(1 − ε )θ j k∗j j=1 3
ω (ε , θ ) = γ3 + 2κ − ∑
If we choose θ j = a j · (A j + B j )−1 , we obtain
ω (ε , θ ) = γ3 + 2κ − 2κ∗ (1 − ε )−1 . It is easy to see that there exist 0 < ε < 1 and θ > 0 such that ω (ε , θ ) > 0 provided that κ > κ∗ − γ3 /2 holds. Therefore, (1.4.27) implies (1.4.23). This completes the proof of Theorem 1.4.6.
1.4.5 Coupling on the Boundary In a bounded smooth domain O ⊂ Rd we consider the following parabolic equations ut − Δ u + au = f˜1 (u, ∇u), vt − Δ v + av = f˜2 (v, ∇v) coupled via the boundary ∂u + κ(u − v) = 0, ∂n ∂O
(1.4.28)
∂v + κ(v − u) = 0, ∂n ∂O
and endowed with initial conditions ut=0 = u0 , vt=0 = v0 . Here, a > 0 and κ ≥ 0 are parameters and f˜i are functions as in Sect. 1.4.1. As we can see in this model, interaction between the components is possible on the boundary only.
1.4 Reaction–Diffusion Systems
69
We rewrite the problem in the form (1.3.37). For this we introduce the following Neumann operator N : L2 (∂ O) → L2 (O) by the formula ∂ u = ψ. N ψ = u if and only if − Δ u + au = 0 in O, ∂ n ∂ O It is known (see Lions/Magenes [111] or Lasiecka/Triggiani [104]) that for s ∈ R+ N : H s (∂ O) → H s+3/2 (O).
(1.4.29)
For the definition of the Sobolev–Slobodeckij spaces H s (∂ O), H s+3/2 (O) we refer the reader to Egorov/Shubin [70]. If we denote % ∂ u 2 ˜ ˜ A = −Δ + aI with D(A) = u ∈ H (O) : =0 , ∂ n ∂ O then by the Green formula we have (see, e.g., Lasiecka/Triggiani [104]) ˜ = h , h ∈ D(A). ˜ N ∗ Ah ∂O
This relation can be extended to the space H 1 (O) = D(A˜ 1/2 ). With these observations, the problem in (1.4.28) can be written in that form
ut + A˜ u − κN γ [u − v] = f˜1 (u, ∇u),
vt + A˜ v + κN γ [u − v] = f˜2 (v, ∇v), where γ : H s (O) → H s−1/2 (∂ O), s > 1/2 is the trace operator, γ [u] = u∂ O . Therefore, in the scale H s generated by the operator A˜ the problem can be written in the ˜ In this case, form of (1.3.37) with F˜i (u) = f˜i (u, ∇u) and A˜ 1 = A˜ 2 = A. ˜ γ [u]. K1 = K2 = κK0 with K0 u = AN We know from (1.4.29) and from the trace theorem (see Lions/Magenes [111]) that ˜ 1+s (O) ⊂ X 3/4−δ = AX ˜ −1/4−δ K0 : H s (O) → AH 1 1 for every 1/2 < s ≤ 1 and δ > 0. Since X1σ = H 2σ (O) for 0 ≤ σ < 3/4, we have −1/4−δ
K0 : X1σ → X1
, ∀ δ > 0, 1/4 < σ ≤ 1/2.
Applying (1.4.29), we have
(K0 u, v) =
∂O
1/2
γ [u] · γ [v]dΓ , u, v ∈ X1 .
(1.4.30)
70
1 Synchronization of Global Attractors and Individual Trajectories 1/2
Thus, K0 generates nonnegative symmetric form on X1 . This allows us to apply the previous abstract results (see Sect. 1.3.5) with α = 1/2 and β = 1/4 + δ with arbitrary small δ > 0. Theorem 1.4.7. Let the conditions (1.4.2) concerning f˜i (s, ξ ) be in force. Then the problem in (1.4.28) generates a dynamical system φ κ in the space X 1/2 = H 1 (O) × H 1 (O) for each κ ≥ 0. Moreover, for φ κ (t,U0 ) = (u(t), v(t)) the following energy balance relations hold:
t 1 u(t)2 + v(t)2 + A˜ κ (u(τ ), v(τ )) d τ 2 0 t 1 ( f˜1 (u, ∇u), u) + ( f˜2 (v, ∇v), v) d τ + u(0)2 + v(0)2 (1.4.31) = 2 0
and 1˜ Aκ (u(t), v(t)) + Π1 (u(t)) + Π2 (v(t)) 2 t t ut 2 + vt 2 d τ − ( fˇ1 (u, ∇u), ut ) + ( fˇ2 (v, ∇v), vt ) d τ + 0
0
1 = A˜ κ (u(0), v(0)) + Π1 (u(0)) + Π1 (v(0)) 2 where Πi (u) is given by −
u(x) O
f0i (ξ ) d ξ dx
0
and A˜ κ (u, v) = ∇u2 + ∇v2 + au2 + av2 + κ
where
∂O
|u − v|2 dΓ =
∂O
∂O
|u − v|2 dΓ
|γ [u] − γ [v]|2 dΓ
(see 1.4.30). We also have the following assertions. • There exists a global attractor Aκ of finite fractal dimension. • The attractors Aκ are upper semicontinuous for every κ ∈ [0, +∞). • Let f˜1 = f˜2 and %
2 2 2 1 |∇w| + a|w| dO + κ μκ = inf |w| dΓ : w ∈ H (O), w = 1 O
∂O
which is the lower bound of the symmetric form given by A˜ κ . Then, under the condition μκ > L f˜1 , where L f˜1 is such that ( f˜1 (u, ∇u) − f˜1 (v, ∇v), u − v) ≤ L f˜1 u − v∇u − ∇v ≤ L f˜1 u − v2H 1 (O)
1.5 A Case Study: Two-Layer Problem in Thin Domains
71
we observe synchronization for fixed κ, i.e., there exists κ∗ > 0 such that Aκ = {(w, w) : w ∈ A∗ } for all κ ≥ κ∗ , where A∗ is the global attractor of the system generated by
∂ w wt − Δ w + aw = f˜1 (w, ∇w), = 0. ∂ n ∂O Proof. The well-posedness statement follows from Theorem 1.3.11. Indeed, we have seen in Sect. 1.4 that the above nonlinearity can be embedded into the general situation formulated in 1.3.11. To prove the existence of a global finite-dimensional attractor we use Theorem 1.3.12. Proposition 1.3.14 yields the semicontinuity of Aκ for every fixed κ. The final statement on synchronization follows from Theorem 1.3.17 and Remark 1.3.18. We note that the condition concerning μκ requires this parameter to be large enough. This property can be achieved at the expense of the appropriate geometry of the domain. This effect can be seen in the following section in a reaction–diffusion model in a two-layer thin domain. Some results based on classical solutions and the maximum principle for the boundary synchronization in the problem in (1.4.28) can also be found in Carvalho/Primo [24].
1.5 A Case Study: Two-Layer Problem in Thin Domains In this section we study synchronization phenomena of reaction–diffusion processes in two-layer thin films interacting via surfaces. More precisely, we consider a semilinear parabolic equation on a union of two thin bounded tube domains joined at the common base Γ . Unknown functions are coupled by some interface condition on Γ . This problem can model a reaction–diffusion system of two components reacting at the interface. The reaction intensity k(x, ε ) depends on ε (i.e., on the domain’s thickness). We study the possibility of synchronization in the global attractor of the corresponding dynamical system, as ε → 0 (i.e., as the initial domain is getting thinner). The results depend crucially on the behavior of the intensity k(x, ε ) as ε → 0 and completely different scenarios are possible. Let O1,ε and O2,ε be bounded domains in R3 of the form O1,ε = Γ × (0, ε ),
O2,ε = Γ × (−ε , 0),
where 0 < ε ≤ 1 and Γ is a bounded C2 -domain in R2 . We do not distinguish notations of the sets Γ × {0} ⊂ R3 and Γ ⊂ R2 and use the notation x = (x , y) for x ∈ Oi,ε , where x ∈ Γ and y ∈ (0, ε ) or y ∈ (−ε , 0).
72
1 Synchronization of Global Attractors and Individual Trajectories
We deal with the following system of semilinear parabolic equations
∂t wi − νi Δ wi + awi = fi (wi ),
t > 0, x ∈ Oi,ε , i = 1, 2,
a>0
(1.5.1)
with the initial data wi (0, x) = wi,0 (x),
x ∈ Oi,ε , i = 1, 2.
We assume that w1 and w2 satisfy the Neumann boundary conditions (∇wi , n) = 0,
x ∈ ∂ Oi,ε \Γ ,
i = 1, 2,
on the external part of the boundary of the compound domain Oε = O1,ε ∪ O2,ε , where n is the outer normal to ∂ Oε , and a fitting condition on Γ of the form −ν1 ∂∂wy1 + k(x , ε )(w1 − w2 ) = 0, Γ ν2 ∂∂wy2 + k(x , ε )(w2 − w1 ) = 0. Γ
In (1.5.1), the parameters νi and a are positive numbers and f˜i (v) satisfies the conditions fi ∈ C2 (R),
| fi (s)| ≤ M(1 + |s|),
lim sup s→∞
and ∃ δ > 0,C ≥ 0 : fi (s)s − δ
s 0
fi (s) ≤ 0, s
i = 1, 2,
fi (ξ )d ξ ≤ C.
(1.5.2)
(1.5.3)
We also assume that k(x , ε ) ∈ L∞ (Γ ),
k(x , ε ) > 0 for x ∈ Γ , ε ∈ (0, 1].
(1.5.4)
The problem in (1.5.1) is a model for a reaction–diffusion system consisting of two components filling thin contacting layers O1,ε and O2,ε separated by a penetrable film Γ . A reaction of the components is possible on the surface Γ only. The reaction intensity depends on the thickness of the domains filled by the reactants and is described by the coefficient k(x , ε ). Our goal is to study limiting properties of the problem in (1.5.1) as the initial domains become thin, i.e., ε → 0. Our results depend crucially on the asymptotic properties of the intensity k(x , ε ). Below, we assume that there exists a measurable set Γ∗ in Γ such that ! k0 (x ), x ∈ Γ0 , weakly in L2 (Γ0 ), 1 lim k(x , ε ) = (1.5.5) ε →0 ε +∞, x ∈ Γ∗ , in Lebesgue measure,
1.5 A Case Study: Two-Layer Problem in Thin Domains
73
where Γ0 = Γ \ Γ∗ and k0 (x ) ≥ 0 is a bounded measurable function on Γ0 . The convergence in (Lebesgue) measure to infinity means that lim Leb x ∈ Γ∗ : ε −1 k(x , ε ) ≤ N = 0 for any N > 0. (1.5.6) ε →0
The main case that we keep in mind is k(x , ε ) = ε α k0 (x ), where k0 is a positive smooth function and α is a real number. We note that Hale/Raugel [87, 88] were the first to study the asymptotic dynamics of semilinear reaction–diffusion equations on thin domains. Some extensions of their results can also be found in Ciuperca [60] and Prizzi/Rybakowski [132]. In all these papers, a reaction–diffusion equation is endowed with homogeneous Neumann boundary conditions. The problem in (1.5.1) was considered in Rekalo [136] in the case when ν1 = ν2 and k(ε , x ) ≡ k(x ) does not depend on ε . (See also Chueshov/Rekalo [50, 137] for some other cases.)
1.5.1 Dynamics for the Fixed Thickness ε The problem in (1.5.1) can be written in the abstract form wt + Aε w = F(w), w|t=0 = w0 , in the space Xε = L2 (Oε ), where Aε is the positive operator with a discrete spectrum (see the definition in Assumption 1.3.9) generated by the following bilinear form 2
aε (u, v) =
∑
i=1
νi (∇x ui , ∇x vi )L2 (Oi,ε ) + a · (ui , vi )L2 (Oi,ε )
+
Γ
k(x , ε )(u1 (x , 0) − u2 (x , 0))(v1 (x , 0) − v2 (x , 0)) dx . 1/2
Here u = (u1 , u2 ) and v = (v1 , v2 ) belong to Xε := H 1 (O1,ε ) ⊕ H 1 (O2,ε ) ⊂ Xε and a > 0. We also suppose % F (u ) f1 (u1 ) ε > y > 0 : u1 ∈ H 1 (O1,ε ) 1/2 [F(u)](x) = 1 1 = , u= ∈ Xε . F2 (u2 ) f2 (u2 ) −ε < y < 0 : u2 ∈ H 1 (O2,ε ) Thus, F is a potential operator, F(u) = −Π (u), where
Π (u) = Π1 (u1 ) + Π2 (u2 ),
1/2 u = u1 ⊕ u2 ∈ Xε .
Πi (v) =
Oi,ε
−
0
v(x)
fi (ξ )d ξ dx.
74
1 Synchronization of Global Attractors and Individual Trajectories
Owing to (1.5.2) and (1.5.3) the relations in (1.3.27) and (1.3.26) are satisfied for this Π (u) for A˜ 1ε 0 Aε = 0 A˜ 2ε where A˜ iε is generated by −νi Δ . By the standard method we can prove the following assertion. 1/2
Proposition 1.5.1. In the space X_ε^{1/2}, the problem in (1.5.1) generates a dynamical system φ^ε possessing a global attractor A_ε, which belongs to the space H²(O_{1,ε}) ⊕ H²(O_{2,ε}). The semigroup φ^ε is defined by the formula φ^ε(t, w_{1,0}, w_{2,0}) = (w_1(t), w_2(t)), where the pair (w_1(t), w_2(t)) solves the problem (1.5.1). Moreover, on the attractor A_ε we have the following uniform estimate:
\[
\varepsilon^{-1}a_\varepsilon(u,u)+\|A_\varepsilon u\|^2\le R^2,\qquad \forall\,u\in A_\varepsilon, \tag{1.5.7}
\]
where R does not depend on ε.

Proof. With reference to Remark 1.3.13, we apply the results stated in Theorem 1.3.11 and Theorem 1.3.12. Indeed, we have a single-equation model as in (1.3.31). This implies well-posedness and the existence of a global attractor. To prove H²-regularity of the attractor, we use the following assertion, which is also important for our further analysis.

Proposition 1.5.2. Let U_ε(r) = {u ∈ X_ε^{1/2} : ε^{-1}a_ε(u,u) ≤ r}. Then, for any r > 0 and ε ∈ (0,1] we have the following relations:
\[
\sup_{t\ge 0}\ \sup\bigl\{\varepsilon^{-1}a_\varepsilon(\phi^\varepsilon(t,u),\phi^\varepsilon(t,u)):\ u\in U_\varepsilon(r)\bigr\}\le c(r), \tag{1.5.8}
\]
\[
\varepsilon^{-1/2}\,t^{1/2}\,\sup\bigl\{\|A_\varepsilon \phi^\varepsilon(t,u)\|:\ u\in U_\varepsilon(r)\bigr\}\le c(r)(1+t)^{1/2}, \tag{1.5.9}
\]
and
\[
\varepsilon^{-1}\int_0^t\|\partial_t \phi^\varepsilon(\tau,u)\|^2_{L^2(O_\varepsilon)}\,d\tau\le c(r),\qquad u\in U_\varepsilon(r),\ t>0. \tag{1.5.10}
\]
Moreover, there exist R > 0 (independent of r) and t(r) such that
\[
\sup\Bigl\{\frac1\varepsilon\Bigl[a_\varepsilon(\phi^\varepsilon(t,u),\phi^\varepsilon(t,u))+\int_t^{t+1}\|\partial_\tau \phi^\varepsilon(\tau,u)\|^2_{L^2(O_\varepsilon)}\,d\tau\Bigr]:\ u\in U_\varepsilon(r)\Bigr\}\le R^2 \tag{1.5.11}
\]
for all t ≥ t(r).

Proof. The argument involves the standard multipliers (see, for example, Babin/Vishik [7] or Temam [161]). We only sketch them (for the details of the problem under consideration we refer the reader to Chueshov/Rekalo [50, 137]).
First of all, we notice that every strong solution u^ε(t) satisfies for t > 0 the relations
\[
\frac12\,\partial_t\|u^\varepsilon(t)\|^2_{X_\varepsilon}+a_\varepsilon(u^\varepsilon(t),u^\varepsilon(t))=(F(u^\varepsilon(t)),u^\varepsilon(t))_{X_\varepsilon} \tag{1.5.12}
\]
and
\[
\partial_t E_\varepsilon(u^\varepsilon(t))+\|\partial_t u^\varepsilon(t)\|^2_{X_\varepsilon}=0, \tag{1.5.13}
\]
where
\[
E_\varepsilon(u)=\frac12\,a_\varepsilon(u,u)+\sum_{i=1}^{2}\Pi_i(u_i). \tag{1.5.14}
\]
Lemma 1.5.3. There are positive constants c_0, c_1 and K independent of ε ∈ (0,1] such that
\[
c_0\,a_\varepsilon(u,u)\ \le\ \frac12\|u\|^2_{X_\varepsilon}+E_\varepsilon(u)+\varepsilon K\ \le\ c_1\Bigl(\varepsilon+\frac1\varepsilon\,[a_\varepsilon(u,u)]^2\Bigr) \tag{1.5.15}
\]
for every u ∈ X_ε^{1/2}.

Proof. The left-hand estimate is obviously valid with appropriate c_0 and K. To show the right-hand bound, we note that the second relation in (1.5.2) implies that
\[
\int_{O_{i,\varepsilon}}\Bigl|\int_0^{u(x',y)}f_i(\xi)\,d\xi\Bigr|\,dx'dy\ \le\ c_1\int_{O_{i,\varepsilon}}|u|^4\,dx'dy+c_2\,\varepsilon.
\]
By a change of variables and the embedding H¹(O_{i,1}) ⊂ L⁴(O_{i,1}), one can also see that
\[
\int_{O_{i,\varepsilon}}|u(x',y)|^4\,dx'dy
=\varepsilon\int_{O_{i,1}}|u(x',\varepsilon z)|^4\,dx'dz
\le C\varepsilon\Bigl(\int_{O_{i,1}}\bigl[|u(x',\varepsilon z)|^2+|\nabla_{x'}u(x',\varepsilon z)|^2+\varepsilon^2|u_y(x',\varepsilon z)|^2\bigr]dx'dz\Bigr)^2
\le C\,\frac1\varepsilon\,[a_\varepsilon(u,u)]^2.
\]
This implies the right-hand inequality in (1.5.15).

Integrating (1.5.13) with respect to t and using Lemma 1.5.3, we obtain (1.5.8) and (1.5.10). Now we prove the dissipativity property in (1.5.11). By the last property in (1.5.2), for any ε̄ > 0 there exists C_{ε̄} such that
\[
f_i(s)\le \bar\varepsilon\,s+C_{\bar\varepsilon},\qquad s\ge 0.
\]
According to this inequality we can conclude for any ε̄ > 0 that
\[
(F(u),u)=\sum_{i=1,2}\int_{O_{i,\varepsilon}}f_i(u(x))\,u(x)\,dx
\le \delta\sum_{i=1,2}\Bigl[\int_{O_{i,\varepsilon}}\int_0^{u(x)}f_i(\xi)\,d\xi\,dx+C|O_{i,\varepsilon}|\Bigr]
\le \delta\sum_{i=1,2}\int_{O_{i,\varepsilon}}\int_0^{|u(x)|}\bigl(\bar\varepsilon\,\xi+C_{\bar\varepsilon}\bigr)\,d\xi\,dx+\varepsilon\,c_{\bar\varepsilon}
\le \delta\,\bar\varepsilon\,\|u\|^2_{X_\varepsilon}+\varepsilon\,C_{\bar\varepsilon,\delta}.
\]
Therefore, (1.5.12) and (1.5.13) imply that there exist constants β > 0 and C > 0 such that
\[
\partial_t\Psi_\varepsilon(u^\varepsilon(t))+\frac{\beta}{\varepsilon}\,a_\varepsilon(u^\varepsilon(t),u^\varepsilon(t))\le C,
\]
where Ψ_ε(u) = ε^{-1}( ½‖u‖²_{X_ε} + E_ε(u) ) + K. This allows us to apply Theorem 1.2.2 on dissipativity to obtain (1.5.11).

Now we prove (1.5.9). For this we first establish the following assertion.

Lemma 1.5.4. Under the conditions above we have
\[
|(F'(u)v,v)|\le \frac12\,a_\varepsilon(v,v)+C\|v\|^2_{X_\varepsilon}\Bigl(1+\Bigl[\frac1\varepsilon\,a_\varepsilon(u,u)\Bigr]^2\Bigr). \tag{1.5.16}
\]
Proof. We obviously have that
\[
|(F'(u)v,v)|\le c\|v\|^2_{X_\varepsilon}+c\int_{O_\varepsilon}|u|^2|v|^2\,dx'dy.
\]
One can see that
\[
\int_{O_\varepsilon}|u|^2|v|^2\,dx'dy
\le \|v\|_{L^6(O_\varepsilon)}\|u^2v\|_{L^{6/5}(O_\varepsilon)}
\le \|v\|_{L^6(O_\varepsilon)}\|u\|^2_{L^6(O_\varepsilon)}\|v\|_{L^2(O_\varepsilon)}
\le \delta\|v\|^2_{L^6(O_\varepsilon)}+\frac1\delta\,\|v\|^2_{X_\varepsilon}\|u\|^4_{L^6(O_\varepsilon)}. \tag{1.5.17}
\]
Using the embedding H¹(O_{i,1}) ⊂ L⁶(O_{i,1}) and the same change of variables as above, we have that
\[
\int_{O_{i,\varepsilon}}|u(x',y)|^6\,dx'dy
=\varepsilon\int_{O_{i,1}}|u(x',\varepsilon z)|^6\,dx'dz
\le C\varepsilon\Bigl(\int_{O_{i,1}}\bigl[|u(x',\varepsilon z)|^2+|\nabla_{x'}u(x',\varepsilon z)|^2+\varepsilon^2|u_y(x',\varepsilon z)|^2\bigr]dx'dz\Bigr)^3
\le C\,\varepsilon^{-2}\,[a_\varepsilon(u,u)]^3.
\]
Thus, taking δ = c_0 ε^{2/3} in (1.5.17) with an appropriate c_0, we obtain that
\[
\int_{O_\varepsilon}|u|^2|v|^2\,dx'dy\le \frac12\,a_\varepsilon(v,v)+C\|v\|^2_{X_\varepsilon}\Bigl[\frac1\varepsilon\,a_\varepsilon(u,u)\Bigr]^2. \tag{1.5.18}
\]
This implies the conclusion in (1.5.16).
The subsequent argument is formal. It can be made rigorous by invoking the corresponding Galerkin approximations. Let v^ε(t) = ∂_t u^ε(t). Then the function v^ε(t) satisfies the equation
\[
\partial_t v^\varepsilon(t)+A_\varepsilon v^\varepsilon(t)=F'(u^\varepsilon(t))\,v^\varepsilon(t). \tag{1.5.19}
\]
Using Lemma 1.5.4, we derive from (1.5.8) and (1.5.19) that
\[
\partial_t\|v^\varepsilon(t)\|^2_{X_\varepsilon}+a_\varepsilon(v^\varepsilon(t),v^\varepsilon(t))\le c(r)\,\|\partial_t u^\varepsilon(t)\|^2_{X_\varepsilon},\qquad t>0,
\]
provided that ε^{-1}a_ε(u_0,u_0) ≤ r. This inequality, together with (1.5.10), implies that
\[
t\,\|\partial_t u^\varepsilon(t)\|^2_{X_\varepsilon}+\int_0^t\tau\,a_\varepsilon(\partial_t u^\varepsilon(\tau),\partial_t u^\varepsilon(\tau))\,d\tau\le \varepsilon\,c(r)(1+t). \tag{1.5.20}
\]
Since, by (1.5.18) and (1.5.2),
\[
\|A_\varepsilon u^\varepsilon\|_{X_\varepsilon}-\|\partial_t u^\varepsilon\|_{X_\varepsilon}
\le \|F(u^\varepsilon)\|_{X_\varepsilon}
\le C\bigl(\varepsilon^{1/2}+\|u^\varepsilon\|^3_{L^6(O_\varepsilon)}\bigr)
\le C\varepsilon^{1/2}\bigl(1+\varepsilon^{-1}a_\varepsilon(u^\varepsilon,u^\varepsilon)\bigr)^{3/2},
\]
the relation in (1.5.20) implies (1.5.9). The estimates in Proposition 1.5.2 allow us to conclude the proof of Proposition 1.5.1.
1.5.2 Limiting Problem

To describe the limiting behavior of solutions to (1.5.1), we use the problem
\[
u_t+Au=F(u),\qquad u|_{t=0}=u_0, \tag{1.5.21}
\]
in the space H_{Γ_*} := {u = (u_1, u_2) : u_i ∈ L²(Γ), u_1 = u_2 on Γ_*}, where A is a positive operator generated by the following bilinear form:
\[
a(u,v)=\sum_{i=1}^{2}\int_\Gamma\bigl(\nu_i\,\nabla_{x'}u_i\cdot\nabla_{x'}v_i+a\,u_iv_i\bigr)\,dx'
+\int_{\Gamma\setminus\Gamma_*}k_0(x')\bigl(u_1(x')-u_2(x')\bigr)\bigl(v_1(x')-v_2(x')\bigr)\,dx',
\]
where u = (u_1, u_2) and v = (v_1, v_2) belong to V := H_{Γ_*} ∩ (H¹(Γ) ⊕ H¹(Γ)). The nonlinear mapping F(·) is given by F(u) = (f_1(u_1), f_2(u_2)), u = (u_1, u_2) ∈ V.
As above, by the standard method (see, for example, Babin/Vishik [7] or Chueshov [36]), we can prove the following assertion.

Proposition 1.5.5. The problem in (1.5.21) generates a compact dynamical system φ in the space V given by the formula
\[
\phi(t,u_0)=u(t)=(u_1(t),u_2(t)),\qquad u_0=(u_{10},u_{20})\in V,
\]
where u(t) is the solution to the problem in (1.5.21). This system possesses a global attractor A.

Proof. The argument is the same as in the proof of Proposition 1.5.1.

Remark 1.5.6. Assume that Γ_* is a subdomain of Γ with smooth boundary. Consider the triple (v_1, v_2, v_*), where v_i = u_i|_{Γ_0} for i = 1, 2 and v_* = u_1|_{Γ_*} = u_2|_{Γ_*}. Using this triple, we can formally rewrite the problem in (1.5.21) in the following way:
\[
\begin{cases}
\partial_t v_i-\nu_i\Delta v_i-(-1)^i k_0(x')(v_1-v_2)+a v_i=f_i(v_i), & x'\in\operatorname{int}\Gamma_0,\ t>0,\ i=1,2,\\[2pt]
\partial_t v_*-\nu\Delta v_*+a v_*=f(v_*), & x'\in\Gamma_*,\ t>0,
\end{cases} \tag{1.5.22a}
\]
where ν = (ν_1+ν_2)/2 and f(v) = (f_1(v)+f_2(v))/2, with the boundary conditions
\[
\begin{cases}
\dfrac{\partial v_i}{\partial n}\Big|_{\partial\Gamma\cap\overline{\Gamma}_0}=0,\ \ i=1,2,\qquad
\dfrac{\partial v_*}{\partial n}\Big|_{\partial\Gamma\cap\overline{\Gamma}_*}=0,\\[8pt]
\nu_1\dfrac{\partial v_1}{\partial n}+\nu_2\dfrac{\partial v_2}{\partial n}+(\nu_1+\nu_2)\dfrac{\partial v_*}{\partial n}=0\ \ \text{for }x'\in\partial^*\Gamma_*,\\[8pt]
v_1=v_2=v_*\ \ \text{for }x'\in\partial^*\Gamma_*,
\end{cases} \tag{1.5.22b}
\]
where n is the corresponding normal vector and ∂^*Γ_* = \overline{Γ}_* ∩ Γ_0.
1.5.3 Thin-Limit Behavior and Asymptotic Synchronization

To state our results, we need the averaging operators along the y-axis:
\[
[M_1^\varepsilon u_1](x')=\frac1\varepsilon\int_0^{\varepsilon}u_1(x',y)\,dy,
\qquad
[M_2^\varepsilon u_2](x')=\frac1\varepsilon\int_{-\varepsilon}^{0}u_2(x',y)\,dy.
\]
Theorem 1.5.7 (Convergence for Finite Time). Assume that the initial data wε0 = (wε1,0 , wε2,0 ) possess the properties (i) Miε wi,0 → ui,0 in the space L2 (Γ ) as ε → 0 and (ii) ε −1 aε (wε0 , wε0 ) ≤ c for ε ∈ (0, 1]. Then, under the conditions above, we have the
following relation:
\[
\lim_{\varepsilon\to 0}\ \sum_{i=1,2}\ \sup_{t\in[0,T]}\|M_i^\varepsilon w_i^\varepsilon(t)-u_i(t)\|^2_{H^s(\Gamma)}=0,\qquad 0\le s<1, \tag{1.5.23}
\]
where (w_1^ε, w_2^ε) is a solution to the problem in (1.5.1) and u = (u_1, u_2) solves the problem in (1.5.21) with the initial data u_0 = (u_{1,0}, u_{2,0}).

Proof. We first note that
\[
\|M_i^\varepsilon w_i\|^2_{L^2(\Gamma)}\le\frac1\varepsilon\|w_i\|^2_{L^2(O_{i,\varepsilon})},
\qquad
\|M_i^\varepsilon w_i\|^2_{H^1(\Gamma)}\le\frac1\varepsilon\|w_i\|^2_{H^1(O_{i,\varepsilon})}. \tag{1.5.24}
\]
From Proposition 1.5.2 we obtain
\[
\frac1\varepsilon\sum_{i=1,2}\|w_i^\varepsilon(t)\|^2_{H^1(O_{i,\varepsilon})}
+\frac1\varepsilon\int_\Gamma k(x',\varepsilon)\,|w_1^\varepsilon(t)-w_2^\varepsilon(t)|^2\,dx'\le C \tag{1.5.25}
\]
for all t ∈ [0, T] and ε ∈ (0, 1]. Therefore, using relation (1.5.10), we obtain, uniformly for ε ∈ (0, 1],
\[
\int_0^T\|\partial_t M_i^\varepsilon w_i\|^2_{L^2(\Gamma)}\,dt\le\frac1\varepsilon\int_0^T\|\partial_t w_i\|^2_{L^2(O_{i,\varepsilon})}\,dt\le C.
\]
Now, thanks to (1.5.24) and the Aubin–Dubinski–Lions compactness theorem (see, for example, Simon [154, Corollary 4]), the family (M_i^ε w_i)_{ε∈(0,1]} is relatively compact, so that there exist a pair of functions u_i(t) ∈ C([0,T], L²(Γ)) ∩ L^∞(0,T; H¹(Γ)), i = 1, 2, and a sequence ε_n → 0 such that
\[
\lim_{n\to\infty}\ \sum_{i=1,2}\ \sup_{t\in[0,T]}\|M_i^{\varepsilon_n}w_i^{\varepsilon_n}(t)-u_i(t)\|_{L^2(\Gamma)}=0. \tag{1.5.26}
\]
From (1.5.25) and (1.5.5) we derive that u_1(t) = u_2(t) on the set Γ_*. Considering a variational form of (1.5.1), one can show that the pair (u_1(t), u_2(t)) solves the problem in (1.5.21). By the uniqueness for (1.5.21), the relation in (1.5.26) is valid for every sequence ε_n converging to 0. Thus, by interpolation, (1.5.23) follows from the uniform bound (1.5.8) and from the convergence in (1.5.26).

We also have a result on the upper semicontinuity of the global attractors.

Theorem 1.5.8 (Upper Semi-continuity). Let A_ε be the global attractor of the problem in (1.5.1) and A be the global attractor of the problem in (1.5.21). Then
\[
\lim_{\varepsilon\to 0}\ \sup_{(w_1,w_2)\in A_\varepsilon}\ \inf\Bigl\{\sum_{i=1,2}\|M_i^\varepsilon w_i-v_i\|^2_{H^s(\Gamma)}:\ (v_1,v_2)\in A\Bigr\}=0,\qquad 0\le s<1. \tag{1.5.27}
\]
Proof. Basically, the argument is standard (see, for example, Hale/Raugel [87] or Kapitansky/Kostin [98]) and relies on the uniform estimates listed in Proposition 1.5.2; we repeat it in our case. Assume that (1.5.27) is not true. Then there exist η > 0, ε_n → 0, and w^n = (w_1^n, w_2^n) ∈ A_{ε_n} such that
\[
\inf\Bigl\{\sum_{i=1,2}\|M_i^{\varepsilon_n}w_i^n-v_i\|_{H^s(\Gamma)}:\ (v_1,v_2)\in A\Bigr\}>\eta. \tag{1.5.28}
\]
Let {w^n(t) : t ∈ ℝ} be a full trajectory in the attractor A_{ε_n} passing through w^n(0) =: w^n. We also denote z^n(t) = M^{ε_n}w^n(t). It follows from (1.5.7) and from Proposition 1.5.2 that
\[
\frac1\varepsilon\sum_{i=1,2}\Bigl[\|w_i^n(t)\|^2_{H^1(O_{i,\varepsilon})}+\int_t^{t+1}\|\partial_t w_i^n\|^2_{L^2(O_{i,\varepsilon})}\,d\tau\Bigr]
+\frac1\varepsilon\int_\Gamma k(x',\varepsilon)\,|w_1^n(t)-w_2^n(t)|^2\,dx'\le C
\]
for all t ∈ ℝ. From (1.5.24) we then have
\[
\sum_{i=1,2}\Bigl[\|z_i^n(t)\|^2_{H^1(\Gamma)}+\int_t^{t+1}\|\partial_t z^n\|^2_{L^2(\Gamma)}\,d\tau\Bigr]\le c\qquad\forall\,t\in\mathbb{R}.
\]
Thus, applying the Aubin–Dubinski–Lions theorem (see Simon [154, Corollary 4]) to (w_i^n), we can conclude that there exists v = (v_1, v_2) ∈ C_b(ℝ, H^s(Γ) × H^s(Γ)) ∩ L^∞(ℝ, H¹(Γ) × H¹(Γ)) with s < 1 such that, along a subsequence,
\[
\sup_{[a,b]}\|z_i^n-v_i\|_{H^s(\Gamma)}\to 0,\qquad n\to\infty,\quad \forall\,[a,b]\subset\mathbb{R}.
\]
We also have that
\[
\|M_1^\varepsilon w_1-w_1|_\Gamma\|^2_{L^2(\Gamma)}
\le\int_\Gamma\Bigl|\frac1\varepsilon\int_0^{\varepsilon}[w_1(x',\xi)-w_1(x',0)]\,d\xi\Bigr|^2dx'
\le\int_\Gamma\Bigl(\int_0^{\varepsilon}|w_{1y}(x',\xi)|\,d\xi\Bigr)^2dx'
\le C\varepsilon\int_{O_{1,\varepsilon}}|w_{1y}(x',y)|^2\,dx'dy
\le C\varepsilon\,a_\varepsilon(w,w).
\]
A similar estimate holds for the second component. Thus, we obtain
\[
\|z_i^n-w_i^n|_\Gamma\|^2_{L^2(\Gamma)}\le C_R\,\varepsilon_n^2.
\]
It is clear that v_1 = v_2 on Γ_*. From the above estimate we conclude that v = (v_1, v_2) solves the limiting equation. In addition, {v(t) : t ∈ ℝ} is a full trajectory that is
bounded in H_{Γ_*}. Owing to the smoothing effect, this trajectory is also bounded in V. Thus, {v(t) : t ∈ ℝ} belongs to the attractor A. Consequently, z^n(0) → v(0) ∈ A in H^s(Γ) × H^s(Γ), which contradicts (1.5.28).

There are several important cases of the possible limits in system (1.5.22).

(a) Γ_0 = ∅ (and Γ_* = Γ) in (1.5.5): we have a single limit equation, and the attractor A has the form A = {(v, v) : v ∈ A_0}, where A_0 is the attractor for the system generated by
\[
\partial_t v-\nu\Delta v+a v=f(v)\ \ \text{on }\Gamma,\ t>0,\qquad \frac{\partial v}{\partial n}\Big|_{\partial\Gamma}=0, \tag{1.5.29}
\]
with ν = (ν_1+ν_2)/2 and f(v) = (f_1(v)+f_2(v))/2. Thus, in this case we observe synchronization at the level of global attractors.

(b) Γ_0 ≠ ∅ and Γ_* ≠ ∅ in (1.5.5): in this case, for every element (v_1, v_2) from the attractor, we have v_1(x′) = v_2(x′) for x′ ∈ Γ_*, and thus we have a partial synchronization.

(c) Γ_* = ∅ in (1.5.5): in this case there is no synchronization at the level of global attractors. In the case k_0(x′) ≡ 0 we even have completely unsynchronized behavior, A = A_2 × A_2, where A_2 is the global attractor of each of the two identical subsystems (ν_1 = ν_2, f_1 = f_2), which are uncoupled.

The behavior described above can be illustrated by the following example, which was studied in detail in Chueshov/Rekalo [137].

Example 1.5.9. Let k(x′, ε) = ε^α k_0(x′), where k_0 is a positive smooth function and α is a real number. The application of Theorem 1.5.8 and an analysis of the corresponding equations (1.5.22) lead to the following scenarios:

(i) α > 1: there is no interaction between the limiting components u_1 and u_2;
(ii) α = 1: we obtain a coupled system of two parabolic equations on Γ;
(iii) α < 1: the limiting dynamics is described by a single parabolic equation with the averaged diffusion coefficient ν and nonlinearity f(u).

The case α < 1 corresponds to the synchronized limiting regime. In the case α ≥ 1, synchronization is absent.
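The coupled regime of case (ii) is easy to probe numerically. The following sketch is purely illustrative and not part of the original analysis: it integrates a one-dimensional caricature of the coupled equations in (1.5.22a) on Γ = (0, 1) with homogeneous Neumann boundary conditions, using an explicit finite-difference scheme; the diffusivities, the parameter a, the nonlinearities f_i(v) = v − v³, and the values of the coupling coefficient k_0 are all assumptions chosen for the experiment.

```python
import numpy as np

# Illustrative finite-difference integration of the coupled limiting system
#   dv_i/dt = nu_i v_i'' - (-1)^i k0 (v_1 - v_2) - a v_i + f_i(v_i)  on (0, 1),
# with homogeneous Neumann boundary conditions (a caricature of (1.5.22a)).
# All numerical values below are illustrative assumptions.

def f(v):                                     # assumed dissipative nonlinearity
    return v - v**3

def mismatch(k0, nu1=1.0, nu2=0.5, a=0.1, N=50, T=15.0):
    x = np.linspace(0.0, 1.0, N)
    h = x[1] - x[0]
    dt = 0.2 * h**2 / max(nu1, nu2)           # explicit stability restriction
    v1 = np.cos(np.pi * x)                    # different initial profiles
    v2 = 0.5 * np.sin(2.0 * np.pi * x)
    for _ in range(int(T / dt)):
        lap1 = np.empty_like(v1); lap2 = np.empty_like(v2)
        lap1[1:-1] = (v1[2:] - 2*v1[1:-1] + v1[:-2]) / h**2
        lap2[1:-1] = (v2[2:] - 2*v2[1:-1] + v2[:-2]) / h**2
        lap1[0], lap1[-1] = 2*(v1[1] - v1[0]) / h**2, 2*(v1[-2] - v1[-1]) / h**2
        lap2[0], lap2[-1] = 2*(v2[1] - v2[0]) / h**2, 2*(v2[-2] - v2[-1]) / h**2
        d1 = nu1*lap1 + k0*(v2 - v1) - a*v1 + f(v1)   # coupling sign from (1.5.22a)
        d2 = nu2*lap2 + k0*(v1 - v2) - a*v2 + f(v2)
        v1, v2 = v1 + dt*d1, v2 + dt*d2
    return np.sqrt(h) * np.linalg.norm(v1 - v2)       # L^2 mismatch at time T

for k0 in (0.0, 1.0, 10.0, 100.0):
    print(f"k0 = {k0:6.1f}   ||v1 - v2||_L2(T) = {mismatch(k0):.3e}")
```

As the coupling coefficient grows, the mismatch between the two components shrinks, in line with the partial/full synchronization scenarios discussed above.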
1.5.4 Synchronization for Fixed ε > 0

Now we consider the case of reagents with identical properties and show that, under some conditions on the intensity of the interaction, synchronization is possible for a finite thickness parameter ε. Namely, the following result holds.
Theorem 1.5.10. In addition to (1.5.2)–(1.5.4), assume that
\[
\nu_1=\nu_2\equiv\nu,\qquad f_1(u)=f_2(u)\equiv f(u),
\]
and also that
\[
k(x',\varepsilon)>k_\varepsilon\ \text{ for }x'\in\Gamma,\ \varepsilon\in(0,1],\qquad \lim_{\varepsilon\to 0}\varepsilon^{-1}k_\varepsilon=+\infty.
\]
Then there exists ε_0 > 0 such that for all ε ∈ (0, ε_0] the global attractor A_ε for (1.5.1) has the form A_ε = {(v, v) : v ∈ A_0}, where A_0 is the attractor for the system generated by (1.5.29).

To prove this theorem we apply the same argument as in Theorem 1.3.17, which, in the case considered, relies on the following lemma.

Lemma 1.5.11. Let ε^{-1}k_ε ≥ d for all ε ≤ ε_0. Then there exists a constant c_0 = c_0(d) > 0 such that
\[
\inf\Bigl\{\frac{a_\varepsilon(Qv,Qv)}{\|Qv\|^2_{X_\varepsilon}}:\ v\in X_\varepsilon^{1/2}\Bigr\}\ \ge\ a+\frac{c_0}{\varepsilon^2},\qquad \varepsilon<\varepsilon_0,
\]
where Q is the projector complementary to the vertical averaging, i.e.,
\[
Qu(x,y)=u(x,y)-M^\varepsilon u(x')=u(x,y)-\frac{1}{2\varepsilon}\int_{-\varepsilon}^{\varepsilon}u(x',\eta)\,d\eta.
\]
Proof. Basically, we use the same calculation of the spectrum of A_ε as in Chueshov/Raugel/Rekalo [58]. Since k(x′, ε) > k_ε, we have
\[
a_\varepsilon(v,v)\ \ge\ \tilde a_\varepsilon(v,v):=\nu\sum_{i=1}^{2}\|\nabla_{x'}v_i\|^2_{L^2(O_{i,\varepsilon})}+a\,\|v\|^2_{L^2(O_\varepsilon)}
+k_\varepsilon\int_\Gamma|v_1(x',0)-v_2(x',0)|^2\,dx',
\]
and hence we can consider the spectral boundary value problem corresponding to the form ã_ε:
\[
\begin{cases}
-\nu\Delta w_i=\lambda^2 w_i\ \ \text{in }O_{i,\varepsilon},\\[2pt]
\dfrac{\partial w_i}{\partial n}\Big|_{x'\in\partial\Gamma}=0,\qquad \partial_y w_1\big|_{y=\varepsilon}=\partial_y w_2\big|_{y=-\varepsilon}=0,\\[4pt]
\bigl(\nu_i\,\partial_y w_i-k_\varepsilon(w_1-w_2)\bigr)\big|_{y=0}=0,\qquad i=1,2.
\end{cases}
\]
After separation of variables it is clear that the first positive eigenvalue corresponds to an eigenfunction that is independent of x′. Thus, we arrive at the following problem:
\[
\begin{cases}
-\nu\,\partial_{yy}\psi(y)=\lambda^2\psi(y),\qquad y\in(-\varepsilon,0)\cup(0,\varepsilon),\\[2pt]
\partial_y\psi(-\varepsilon)=\partial_y\psi(\varepsilon)=0,\\[2pt]
-\nu\,\partial_y\psi(+0)+k_\varepsilon\bigl(\psi(+0)-\psi(-0)\bigr)=0,\\[2pt]
\ \ \nu\,\partial_y\psi(-0)-k_\varepsilon\bigl(\psi(+0)-\psi(-0)\bigr)=0.
\end{cases}
\]
It is easy to see that, in the case λ ≠ 0, solutions to this problem have the form
\[
\psi(y)=
\begin{cases}
b_1\cos\dfrac{\lambda}{\sqrt{\nu}}(\varepsilon-y), & y\in(0,\varepsilon),\\[8pt]
b_2\cos\dfrac{\lambda}{\sqrt{\nu}}(\varepsilon+y), & y\in(-\varepsilon,0),
\end{cases}
\]
where λ, b_1, b_2 satisfy the equations
\[
\begin{cases}
-\sqrt{\nu}\,\lambda\,b_1\sin\dfrac{\lambda\varepsilon}{\sqrt{\nu}}+k_\varepsilon(b_1-b_2)\cos\dfrac{\lambda\varepsilon}{\sqrt{\nu}}=0,\\[8pt]
-\sqrt{\nu}\,\lambda\,b_2\sin\dfrac{\lambda\varepsilon}{\sqrt{\nu}}-k_\varepsilon(b_1-b_2)\cos\dfrac{\lambda\varepsilon}{\sqrt{\nu}}=0.
\end{cases}
\]
This system has a nontrivial solution when
\[
\det\begin{pmatrix}
k_\varepsilon\cos\dfrac{\lambda\varepsilon}{\sqrt{\nu}}-\sqrt{\nu}\,\lambda\sin\dfrac{\lambda\varepsilon}{\sqrt{\nu}} & -k_\varepsilon\cos\dfrac{\lambda\varepsilon}{\sqrt{\nu}}\\[10pt]
-k_\varepsilon\cos\dfrac{\lambda\varepsilon}{\sqrt{\nu}} & k_\varepsilon\cos\dfrac{\lambda\varepsilon}{\sqrt{\nu}}-\sqrt{\nu}\,\lambda\sin\dfrac{\lambda\varepsilon}{\sqrt{\nu}}
\end{pmatrix}=0,
\]
which can be written as
\[
\sqrt{\nu}\,\lambda\,\sin^2\frac{\lambda\varepsilon}{\sqrt{\nu}}=2k_\varepsilon\sin\frac{\lambda\varepsilon}{\sqrt{\nu}}\cos\frac{\lambda\varepsilon}{\sqrt{\nu}}.
\]
The analysis of this equation shows that the minimal positive root λ_ε admits the estimate λ_ε ≥ c(d)ε^{-1}, provided that k_ε ε^{-1} ≥ d, for some c(d) > 0. This completes the proof of Lemma 1.5.11.

Completion of the Proof of Theorem 1.5.10. Since M^ε commutes with A_ε, in the same way as in Proposition 1.5.2 we obtain that
\[
\frac12\,\partial_t\|Qu^\varepsilon(t)\|^2_{X_\varepsilon}+a_\varepsilon(Qu^\varepsilon(t),Qu^\varepsilon(t))=(QF(u^\varepsilon(t)),Qu^\varepsilon(t))_{X_\varepsilon}. \tag{1.5.30}
\]
We also have, from (1.5.2), that
\[
\|QF(v)\|^2_{X_\varepsilon}=\int_{O_\varepsilon}\Bigl|\frac{1}{2\varepsilon}\int_{-\varepsilon}^{\varepsilon}\bigl[f(v(x',y))-f(v(x',\xi))\bigr]\,d\xi\Bigr|^2dx'dy
\le\frac{1}{2\varepsilon}\int_\Gamma dx'\int_{-\varepsilon}^{\varepsilon}dy\int_{-\varepsilon}^{\varepsilon}d\xi\,\bigl|f(v(x',y))-f(v(x',\xi))\bigr|^2
\]
\[
\le\frac{C}{\varepsilon}\int_\Gamma dx'\int_{-\varepsilon}^{\varepsilon}dy\int_{-\varepsilon}^{\varepsilon}d\xi\,\bigl(1+r^2(x',y,\xi)\bigr)\,|v(x',y)-v(x',\xi)|^2
\le\frac{2C}{\varepsilon}\int_\Gamma dx'\int_{-\varepsilon}^{\varepsilon}dy\int_{-\varepsilon}^{\varepsilon}d\xi\,\bigl(1+r^2(x',y,\xi)\bigr)\bigl(|Qv(x',y)|^2+|Qv(x',\xi)|^2\bigr),
\]
where r(x′, y, ξ) = |v(x′, y)|² + |v(x′, ξ)|². Therefore,
\[
\|QF(v)\|^2_{X_\varepsilon}
\le C\int_{O_\varepsilon}|Qv(x',y)|^2\,dx'dy
+C\int_{O_\varepsilon}|v(x',y)|^4|Qv(x',y)|^2\,dx'dy
+\frac{C}{\varepsilon}\int_\Gamma dx'\int_{-\varepsilon}^{\varepsilon}dy\int_{-\varepsilon}^{\varepsilon}d\xi\,|v(x',y)|^4|Qv(x',\xi)|^2
\le C\Bigl(\|Qv\|^2_{X_\varepsilon}+\|v\|^4_{L^6(O_\varepsilon)}\|Qv\|^2_{L^6(O_\varepsilon)}\Bigr).
\]
Thus, from (1.5.18) we obtain that
\[
\|QF(v)\|_{X_\varepsilon}\le C\Bigl(\|Qv\|_{X_\varepsilon}+\Bigl[\frac1\varepsilon\,a_\varepsilon(v,v)\Bigr]\bigl[a_\varepsilon(Qv,Qv)\bigr]^{1/2}\Bigr).
\]
Consequently, using Proposition 1.5.2, from (1.5.30) we obtain that
\[
\partial_t\|Qu^\varepsilon(t)\|^2_{X_\varepsilon}+a_\varepsilon(Qu^\varepsilon(t),Qu^\varepsilon(t))\le C_R\,\|Qu^\varepsilon(t)\|^2_{X_\varepsilon},\qquad t\ge t_r.
\]
Therefore, using Lemma 1.5.11, we can see that there exists ε_0 > 0 such that
\[
\|Qu^\varepsilon(t)\|_{X_\varepsilon}\le e^{-\gamma(t-t_r)}\,\|Qu^\varepsilon(t_r)\|_{X_\varepsilon},\qquad t\ge t_r,
\]
for some γ > 0. This means that individual trajectories are asymptotically synchronized and that there is synchronization at the level of global attractors.
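The minimal positive root λ_ε appearing in the proof of Lemma 1.5.11 can also be located numerically. The following sketch is illustrative only: it solves the transcendental equation √ν λ sin²(λε/√ν) = 2k_ε sin(λε/√ν)cos(λε/√ν) on its first nontrivial branch, which is equivalent to θ tan θ = 2k_ε ε/ν with θ = λε/√ν, for the sample choice k_ε = κ/ε; the values of ν and κ are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import brentq

# First positive root of  theta*tan(theta) = beta  on (0, pi/2), where
# theta = lambda*eps/sqrt(nu) and beta = 2*k_eps*eps/nu.  This is the branch
# tan(theta) = 2*k_eps/(sqrt(nu)*lambda) of the transcendental equation
# from the proof of Lemma 1.5.11.  Parameter values are illustrative.

def first_root(nu, eps, k_eps):
    beta = 2.0 * k_eps * eps / nu
    g = lambda th: th * np.tan(th) - beta        # strictly increasing on (0, pi/2)
    theta = brentq(g, 1e-12, np.pi / 2 - 1e-9)
    return np.sqrt(nu) * theta / eps             # lambda_eps

nu, kappa = 1.0, 5.0                             # assumed sample parameters
for eps in (0.1, 0.01, 0.001):
    lam = first_root(nu, eps, kappa / eps)       # k_eps = kappa/eps
    print(f"eps = {eps:7.3f}   lambda_eps = {lam:10.2f}   eps*lambda_eps = {eps*lam:.4f}")
```

For this choice of k_ε the product ε·λ_ε stays essentially constant, i.e., the first nontrivial eigenvalue grows like ε^{-1}, which is the mechanism behind the exponential decay of Qu^ε above.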
1.6 Synchronization in Elastic/Wave Structures

In this section, we study asymptotic synchronization at the level of global attractors in a class of coupled second order in time models that arises in dissipative wave and elastic structure dynamics. Under some conditions, we prove that this synchronization arises in the infinite coupling intensity limit and show that for identical subsystems this phenomenon appears for finite intensities. As in the parabolic case,
our main idea is related to uniform dissipativity. However, its realization for the second order in time models requires different tools.

The simplest finite-dimensional example for the models considered in this section is coupled second-order oscillators of the form
\[
u_i''+\sum_{j=1}^{2}\bigl(d_{ij}\,u_j'+k_{ij}\,u_j\bigr)=\tilde f_i(u_i),\qquad i=1,2, \tag{1.6.1}
\]
where {d_{ij}} and {k_{ij}} are nonnegative matrices and f̃_i is a smooth function with some dissipativity properties.

Our main results are presented in Theorems 1.6.16 and 1.6.18. In particular, Theorem 1.6.16 proves asymptotic synchronization in the limit of large coupling, and Theorem 1.6.18, dealing with the interaction of identical systems, shows that synchronization is possible for finite values of the coupling intensity parameter. We also discuss the possibility of synchronization in infinite-dimensional systems by means of finite-dimensional interaction operators. As applications of these results, we consider a range of nonlinear elastic plate models and wave dynamics of different types. Our presentation is based on the paper by Chueshov [41].
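Before turning to the abstract model, the synchronization effect for (1.6.1) itself is easy to observe numerically. The sketch below is purely illustrative: the damping matrix d = diag(d_1, d_2), the stiffness coupling k = κ·((1, −1), (−1, 1)), the nonlinearities f̃_i(u) = u − u³, and the initial data are all assumptions made for the experiment, not data from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative integration of the coupled oscillators (1.6.1):
#   u_i'' + sum_j (d_ij u_j' + k_ij u_j) = f_i(u_i),  i = 1, 2,
# with d = diag(d1, d2) and k = kappa*[[1, -1], [-1, 1]] (a nonnegative matrix),
# and the assumed dissipative nonlinearity f_i(u) = u - u^3.

def rhs(t, y, d, k, f):
    u, v = y[:2], y[2:]                        # displacements and velocities
    return np.concatenate([v, f(u) - d @ v - k @ u])

def mismatch(kappa, T=200.0):
    d = np.diag([0.5, 0.3])
    k = kappa * np.array([[1.0, -1.0], [-1.0, 1.0]])
    f = lambda u: u - u**3
    y0 = np.array([1.5, -0.7, 0.0, 0.4])       # different initial states
    sol = solve_ivp(rhs, (0.0, T), y0, args=(d, k, f), rtol=1e-8, atol=1e-10)
    return abs(sol.y[0, -1] - sol.y[1, -1])    # |u1 - u2| at the final time

for kappa in (0.0, 0.5, 5.0, 50.0):
    print(f"kappa = {kappa:6.1f}   |u1 - u2|(T) = {mismatch(kappa):.3e}")
```

For κ = 0 the two oscillators typically settle on different equilibria, while for large κ the mismatch becomes negligible, which is the qualitative picture established below for the infinite-dimensional model.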
1.6.1 The Abstract Model

We first describe the problem and state our basic notations and hypotheses. In a Hilbert space X = H × H we consider the following coupled equations
\[
\begin{aligned}
&u_{tt}+\nu_1\tilde B u+D_{11}u_t+D_{12}v_t+K_{11}u+K_{12}v=\tilde F_1(u),\\
&v_{tt}+\nu_2\tilde B v+D_{21}u_t+D_{22}v_t+K_{21}u+K_{22}v=\tilde F_2(v),
\end{aligned} \tag{1.6.2}
\]
equipped with initial data u(0) = u_0, u_t(0) = u_1, v(0) = v_0, v_t(0) = v_1, under the following set of hypotheses.

Assumption 1.6.1.
1. B̃ is a self-adjoint positive operator densely defined on a domain D(B̃) in a separable Hilbert space H, and ν_1, ν_2 > 0 are parameters. We assume that the resolvent of B̃ is compact in H. This implies that B̃ is an operator with a discrete spectrum, i.e., there is a complete orthonormal basis {e_k} in H consisting of the eigenvectors of the operator B̃:
\[
\tilde B e_k=\lambda_k e_k,\qquad 0<\lambda_1\le\lambda_2\le\cdots,\qquad \lim_{k\to\infty}\lambda_k=\infty.
\]
1 Synchronization of Global Attractors and Individual Trajectories
As above, we denote by · and (·, ·) the norm and the scalar product in H = H 0 . We also denote by H s (with s > 0) the domain D(B˜ s ) equipped with the graph norm · s = B˜ s · . H −s denotes the completion of H with respect to the norm · −s = B˜ −s · . The symbol (·, ·) denotes not only the scalar product but also the duality between H s and H −s . Below, we also use the notation X s = H s × H s . 2. The damping interaction operators Di j : H 1/2 → H −1/2 are linear mappings such that the matrix operator D11 D12 D= : X 1/2 → X −1/2 D21 D22 generates a symmetric nonnegative bilinear form on X 1/2 . 3. The elastic interaction operators Ki j : H 1/2 → H are linear and K11 K12 : X 1/2 = H 1/2 × H 1/2 → X = H × H K = K21 K22 generates a symmetric nonnegative bilinear form on X 1/2 . 4. The nonlinear operators F˜i : H 1/2 → H are locally Lipschitz, i.e., for every ρ > 0 there exists a constant L(ρ ) such that F˜i (u) − F˜i (v) ≤ L(ρ )u − v1/2 , i = 1, 2, for all u, v ∈ H 1/2 such that u1/2 , v1/2 ≤ ρ . In addition, we assume that F˜i (u) are potential operators. This means that F˜i (u) = −Πi (u), where Πi : H 1/2 → R is a Frech´et differentiable functional on H 1/2 , i.e., (1.3.18) holds. We also assume that Πi (u) = Π0i (u) + Π1i (u), where Π0i (u) is a nonnegative locally bounded functional on H 1/2 and
∀ η > 0 ∃Cη : |Π1i (u)| ≤ η B˜ 1/2 u2 + Π0i (u) +Cη , u ∈ H 1/2 . (1.6.3) The problem in (1.6.2) can be written as ˜ Utt + BU + DUt + K U = F(U), U(0) = U0 , Ut (0) = U1 ,
(1.6.4)
where the operators D and K are defined above and F˜ (u) u ν 0 ˜ ˜ B, F(U) = ˜1 U= , B= 1 . v F2 (v) 0 ν2 Our main motivation for (1.6.2) (and (1.6.4)) and also for the hypotheses in Assumption 1.6.1 is related to applications to coupled plate and wave systems (see Sect. 1.6.6). The ODE interpretation is obvious in the case of the model in (1.6.1). We also emphasize that in the model we include both couplings in velocities and displacements. The main reason is the desire to compare our results with the corresponding (finite-dimensional) statements in Afraimovich/Chow/Hale [2].
1.6 Synchronization in Elastic/Wave Structures
87
In contrast to the parabolic case our consideration of the second-order models is based on the notion of generalized solutions (see, for example, Showalter [153] and Chueshov/Lasiecka [46, 47]). This approach does not require semigroup generation properties in the explicit form. Here is an adaptation of the standard definition (see, for example, Chueshov/Lasiecka [46]) to our model. Definition 1.6.2. A function U(t) ∈ C([0, T ], X 1/2 ) ∩ C1 ([0, T ], X) possessing the properties U(0) = U0 and Ut (0) = U1 is said to be (S) a strong solution to the problem in (1.6.4) on the interval [0, T ], if • U ∈ W11 (a, b, X 1/2 ) and Ut ∈ W11 (a, b, X) for any 0 < a < b < T , where W11 (a, b, X) = f ∈ C([a, b], X) : f ∈ L1 (a, b, X) , • BU(t) + DUt (t) ∈ X for almost all t ∈ [0, T ]; • equation in (1.6.4) is satisfied in X for almost all t ∈ [0, T ]; (G) a generalized solution to problem (1.6.4) on the interval [0, T ], if there exists a sequence of strong solutions {Un (t)} with initial data (U0n ,U1n ) instead of (U0 ,U1 ) such that # $ lim max Ut (t) −Unt (t) + B1/2 (U(t) −Un (t)) = 0. n→∞ t∈[0,T ]
Application of Theorem 1.5 from Chueshov/Lasiecka [46] gives the following wellposedness result. Theorem 1.6.3. Let T > 0 be arbitrary. Under Assumption 1.6.1 the following statements hold. • Strong Solutions: For every (U0 ,U1 ) ∈ X 1/2 , such that BU0 + DU1 ∈ X there exists a unique strong solution to the problem in (1.6.4) on the interval [0, T ] such that (Ut ,Utt ) ∈ L∞ (0, T, X 1/2 × X), Utt ∈ Cr ([0, T ), X) and
Ut ∈ Cr ([0, T ), X 1/2 ),
BU(t) + DUt (t) ∈ Cr ([0, T ), X),
where we denote by Cr the space of right continuous functions. This solution satisfies the energy relation E (U(t),Ut (t)) +
t 0
(DUt (τ ),Ut (τ ))d τ = E (U0 ,U1 ),
where the energy E is defined by the relation E (U0 ,U1 ) = E1 (u0 , u1 ) + E2 (v0 , v1 ) + Eint (u0 , v0 ),
88
1 Synchronization of Global Attractors and Individual Trajectories
with U0 = (u0 , v0 ), U1 = (u1 , v1 ), Ei (u0 , u1 ) = Ei (u0 , u1 ) + Πi (u0 ) := and
1 u1 2 + νi B˜ 1/2 u0 2 + Πi (u0 ). 2
1 Eint (u0 , v0 ) = (K U0 ,U0 ). 2
• Generalized Solutions: For every (U0 ,U1 ) = X 1/2 × X there exists a unique generalized solution. This solution possesses the property D 1/2Ut ∈ L2 (0, T, X) and satisfies the energy inequality E (U(t),Ut (t)) +
t 0
(DUt (τ ),Ut (τ ))d τ ≤ E (U0 ,U1 ).
(1.6.5)
If U 1 and U 2 are generalized solutions with different initial data and Z = U 1 −U 2 , then Zt (t)2 + B1/2 Z(t)2 + K
1/2
Z(t)2
≤ Zt (0)2 + B1/2 Z(0)2 + K provided that Uti (0)2 + B1/2U i (0)2 + K tive constant depending on the radius R.
1/2U i (0)2
1/2
Z(0)2 eaR t
≤ R2 . Here, aR is a posi-
By Theorem 1.6.3 the problem (1.6.4) generates a dynamical system φ in the space X := X 1/2 × X = H 1/2 × H 1/2 × H × H with the semigroup defined by the relation
φ (t, (U0 ,U1 )) = (U(t),Ut (t)), where U(t) is a generalized solution to the problem in (1.6.4).
1.6.2 Global Attractors In this section, we prove the existence of a global attractor for the dynamical system φ and study its properties. Keeping in mind further application, we impose addi˜ tional hypotheses concerning the damping operator D and the source term F. Assumption 1.6.4. Let Assumption 1.6.1 be in force and 1. D is strictly positive, i.e., c0 > 0 exists such that (DW,W ) ≥ c0 W 2 , W ∈ X 1/2 ;
1.6 Synchronization in Elastic/Wave Structures
89
2. Either F˜i is subcritical, i.e., for every ρ > 0 there exists a constant L(ρ ) such that F˜i (u) − F˜i (v) ≤ L(ρ )u − v1/2−δ , i = 1, 2, δ > 0, for all u, v ∈ H 1/2 such that u1/2 , v1/2 ≤ ρ , or else the potential energies Πi are continuous on H 1/2−δ for some δ > 0 and the mapping u → B˜ −l F˜i (u) is continuous from H 1/2−δ into H −l for some δ , l > 0, i = 1, 2. The hypotheses concerning F˜i in Assumption 1.6.4 are purely infinite-dimensional and arise for a range of the second order in time models (see the survey in Chueshov/Lasiecka [46]). Proposition 1.6.5. Assume 1.6.4. Then the system φ generated by the problem in (1.6.4) is asymptotically compact (see Definition 1.2.4 in Sect. 1.2). Proof. We apply Theorem 3.26 and Proposition 3.36 from Chueshov/Lasiecka [46]. Since |(DV,W )| ≤[(DV,V )]1/2 [(DW,W )]1/2 ≤ C[(DV,V )]1/2 B1/2W ≤Cε (DV,V ) + ε B1/2W 2 , relation (3.60) in Chueshov/Lasiecka [46, p. 54] obviously holds in a simplified form. One can see from the energy inequality in (1.6.5) that the system φ is a gradient system with the full energy E (U0 ,U1 ) as a strict Lyapunov function (see Definition 1.2.8). Therefore, by Corollary 2.29 from Chueshov/Lasiecka [46] (see Theorem 1.2.12) to guarantee the existence of a global attractor we need the boundedness of equilibria. This follows from the next assertion. Theorem 1.6.6. Let Assumptions 1.6.4 be in force. Assume in addition that there exist ν < νi and C ≥ 0 such that
ν B˜ 1/2 u2 − (F˜i (u), u) +C ≥ 0 ,
u ∈ H 1/2 , i = 1, 2.
(1.6.6)
Then, the system φ generated by the problem in (1.6.4) possesses a global attractor. In the ODE case described in (1.6.1), the condition in (1.6.6) means that the function −ui f˜i (ui ) is bounded from below. Proof. Stationary solutions U = (u, v) ∈ X 1/2 solve the problem ˜ + K12 u + K12 v = F˜1 (u), ν1 Bu ˜ + K21 u + K22 v = F˜2 (v). ν2 Bv Using the multipliers u for the first equation and v for the second, as well as the positivity of the operator K , we obtain that
ν1 B˜ 1/2 u2 + ν2 B˜ 1/2 v2 ≤ (F˜1 (u), u) + (F˜2 (v), v).
90
1 Synchronization of Global Attractors and Individual Trajectories
By (1.6.6) this yields B˜ 1/2 u2 + B˜ 1/2 v2 ≤ C (with C independent of D and K ). Thus, the set of stationary solutions is bounded. As was already mentioned, for synchronization phenomena it is important to have bounds for the attractor independent of interaction operators D and K . In spite of the set of stationary solutions being uniformly bounded with respect to D and K , Theorem 1.6.6 does not provide appropriate bounds for the attractor. Below, we use an approach based on Lyapunov-type functions that allows us to prove the uniform dissipativity of the system φ . To simplify the argument, it is convenient to introduce intensity factors α and κ for interactions in velocities and displacements. Moreover, we assume a particular structure of D related to interaction operator K . Thus, instead of (1.6.4) we consider ˜ Utt + BU + (D0 + α K )Ut + κK U = F(U), U(0) = U0 , Ut (0) = U1 , (1.6.7) where α and κ are nonnegative parameters. In addition to Assumption 1.6.1, we impose the following hypotheses. Assumption 1.6.7. 1. The operator D0 is bounded from X 1/2 into X and there exist c0 > 0 and α¯ ≥ 0 such that ((D0 + α¯ K )W,W ) ≥ c0 W 2 , W ∈ X 1/2 ; 2. There exists η < 1 and δ ,C > 0 such that16 ˜ δ Π¯ 0 (U) + (F(U),U) ≤ η B1/2U2 +C ,
U ∈ X 1/2 ,
(1.6.8)
where Π¯ 0 (U) := Π01 (u) + Π02 (v) with U = (u, v). Theorem 1.6.8 (Uniform Dissipativity). Let Assumptions 1.6.1 and 1.6.7 be in force. Then, for every α ≥ α¯ and κ ≥ 0 the system φ on X generated by the problem in (1.6.7) is dissipative with an absorbing ball of the radius R independent of (α , κ) ∈ Λ := {α ≥ α¯ , κ ≥ 0}. More precisely, there exists R independent of (α , κ) ∈ Λ such that the set # $ B = (U0 ,U1 ) ∈ X : U1 2 + B1/2U0 2 + κK 1/2U0 2 ≤ R2 (1.6.9) is absorbing.
16
As was already mentioned in Sect. 1.3 the relation in (1.6.8) is the standard requirement in many evolutionary equations including second order in time models, see, for example, Chueshov/Lasiecka [46] and also the discussion in Sect. 1.4.1.
1.6 Synchronization in Elastic/Wave Structures
91
We note that the estimate in (1.6.9) improves the corresponding finite-dimensional statement in Afraimovich/Chow/Hale [2], which requires uniform boundedness of the ratio α /κ. As is shown in Proposition 1.6.11 we need the latter property for uniform quasi-stability only. Proof. We use a slight modification of the standard method (see, for example, Babin/Vishik [7], Chueshov [36], Temam [161]) based on Lyapunov-type functions. Let U(t) = (u(t), v(t)) be a strong solution, E(t) = E(U(t),Ut (t)) :=
1 Ut (t)2 + B1/2U(t)2 + Π¯ (U(t)) 2
with Π¯ (U) := Π1 (u)+ Π2 (v), and Φ = η (U,Ut )+ μ (K U,U), where η is a positive constant that will be chosen later and 2μ = κ + ηα . We consider the functional V = E + Φ . One can see that there exist 0 < η0 < 1 and βi > 0 independent of (α , κ) such that
β0 E∗ (t) + κK
U(t)2 − β1 ≤ V (t) ≤ β2 E∗ (t) + μ K
1/2
U(t)2 + β3 (1.6.10)
1/2
for all η ∈ (0, η0 ], where E∗ (t) = E∗ (U(t),Ut (t)) :=
1 Ut (t)2 + B1/2U(t)2 + Π¯ 0 (U(t)). 2
We consider only the right-hand side of the inequalities (1.6.10), which follows by (1.6.3). The left-hand side follows in a similar manner. We now have under the assumption 2μ = κ + ηα dV dE d Φ = + dt dt dt 1 = 2(Ut ,Utt ) + 2(B1/2U, B1/2Ut ) + Π¯ (U),Ut 2 + η (U,Utt ) + η (Ut ,Ut ) + μ (K U,Ut ) + μ (K Ut ,U). Plugging in for Utt the terms of (1.6.7), we can replace the right-hand side by = − (Ut , BU) − (D0 + α K )Ut ,Ut ) − (κK U,Ut ) − (Π¯ (U),Ut ) + (B1/2U, B1/2Ut ) + (Π¯ (U),Ut ) − η (U, BU) − η (U, D0Ut ) − η (U, α K Ut ) ˜ − η (U, κK U) + η (U, F(U)) + η Ut 2 + 2μ (K U,Ut ) and hence for strong solutions dV = − ((D0 + α K )Ut ,Ut ) dt
˜ + η Ut 2 − (D0Ut ,U) − (BU,U) − κ(K U,U) + (F(U),U) .
92
1 Synchronization of Global Attractors and Individual Trajectories
Since D0 is bounded from X 1/2 into X, we obtain that |(D0Ut ,U)| ≤ ε B1/2U2 +Cε −1 Ut ||2 , ∀ε > 0. Thus, by Assumption 1.6.7(2) there exist bi > 0 independent of (α , κ) such that
dV ≤ − ((D0 + α K )Ut ,Ut ) − b1 η Ut 2 − b2 η E∗ + κ(K U,U) + η b3 . dt This implies that there exists 0 < η∗ ≤ η0 independent of (α , κ) ∈ Λ such that
dVβ + b2 η E∗ + κ(K U,U) ≤ η b3 dt
(1.6.11)
for all (α , κ) ∈ Λ and η ∈ (0, η∗ ], where Vβ = V + β1 > 0. Now, we split the parametric region Λ into several subdomains. We start with the following case. Let κ∗ > 0 and α∗ > α¯ be fixed. We take ˜ η∗ . Then we take η = κ˜ α −1 . In this case, η ≤ η∗ 0 < κ˜ ≤ κ∗ such that α∗ > κ/ and 1 1 ˜ ≤ κ. μ = (κ + ηα ) = (κ + κ) 2 2 Thus, for all κ ≥ κ∗ and α ≥ α∗ we have that
dVβ + b2 η E∗ + μ (K U,U) ≤ η b3 . dt In particular, (1.6.10) yields dVβ β1 + β3 + b2 ηβ2−1Vβ ≤ η b4 with b4 = b3 + b2 , dt β2 where bi and βi do not depend on α and κ. This implies that −1
Vβ (t) ≤ Vβ (0)e−b2 ηβ2
t
+
b4 β2 . b2
¯ , we can conclude from Since the value b4 b−1 2 β2 is independent of κ ≥ 0 and α ≥ α the previous argument that the set B is absorbing for all κ > 0 and α > α¯ with R independent of κ and α . In the case when κ = 0 from (1.6.11) we obtain dVβ + b2 η E∗ ≤ η b3 . dt Now we take α ≥ η∗−1 and η = α −1 . In this case, η ≤ η∗ and μ = 1/2. Therefore, the conclusion follows from the estimate K 1/2U ≤ cB1/2U by the same argument as above. In the case κ = 0 and α¯ ≤ α ≤ η∗−1 the conclusion is obvious.
1.6 Synchronization in Elastic/Wave Structures
93
Thus, the remaining case is κ > 0 and α = α¯ . Now we can take η = min{η∗ , κ α¯ −1 } when α¯ > 0. It is clear that μ ≤ κ for this case. Thus, we can argue as above. In the case α¯ = 0 the relation μ ≤ κ/2 holds automatically. This completes the proof of Theorem 1.6.8. This theorem and Proposition 1.6.5 immediately imply the following result on the existence of a global attractor. Theorem 1.6.9 (Global Attractor). Let Assumption 1.6.1, 1.6.4 (2) and 1.6.7 be in force. Then for every (α , κ) ∈ Λ the system φ generated by the problem in (1.6.7) possesses a global attractor A. For every full trajectory Y = {(U(t),Ut (t)) : t ∈ R} from the attractor we have that # sup Ut (t)2 + B1/2U(t)2 + κK t∈R
+
∞ −∞
1/2
U(t)2
$
((D0 + α K )Ut (τ ),Ut (τ )) d τ ≤ R2
(1.6.12)
for some R independent of (α , κ) ∈ Λ . Proof. We first apply the standard result on the existence of a global attractor (see Theorem 1.2.6). This attractor belongs to the set B defined in (1.6.9). This implies a uniform bound for the supremum in (1.6.12). Using the energy relation in (1.6.5) we obtain the corresponding bound for the dissipation integral in (1.6.12).
1.6.3 Quasi-Stability The uniform bounds for the attractor given by Theorem 1.6.9 are not sufficient to perform the large coupling limit α → +∞ and/or κ → +∞ in the phase state of the system. In this section, we establish stronger uniform estimates for the attractor size. For this we apply the quasi-stability method (see the discussion in Sect. 1.2.3). To apply the quasi-stability method we need additional hypotheses concerning the nonlinear forces F˜i (u). Assumption 1.6.10. Assume that 1. F˜i (u) = −Πi (u) with the functional Πi : H 1/2 → R, which is a Fr´echet C3 mapping. (2) (3) 2. The second Πi (u) and the third Πi (u) Fr´echet derivatives of Πi (u) satisfy the conditions (2) (1.6.13) Πi (u), v, v ≤ Cρ B˜ σ v2 , v ∈ H 1/2 , for some σ < 1/2, and (3) Πi (u), v1 , v2 , v3 ≤ Cρ B˜ 1/2 v1 B˜ 1/2 v2 v3 ,
vi ∈ H 1/2 ,
(1.6.14)
94
1 Synchronization of Global Attractors and Individual Trajectories
for all u ∈ H 1/2 such that B˜ 1/2 u ≤ ρ , where ρ > 0 is arbitrary and Cρ is a (k)
positive constant. The expression Πi (u), v1 , . . . , vk denotes the value of the (k) derivative Πi (u) on elements v1 , . . . , vk . This assumption concerning nonlinear feedback forces F˜i (u) appeared earlier in the case of systems with nonlinear damping (see Chueshov/Lasiecka [46, p. 98] and also Chueshov/Kolbasin [43]) to cover the case of critical nonlinearities. We note that Assumption 1.6.10 holds in both cases of the von K´arm´an and Berger models (see Chueshov/Lasiecka [46] p. 156 and p. 160 respectively). Moreover, as is shown in Chueshov/Lasiecka [46, p. 137], this Assumption 1.6.10 is also true in the case of the coupled 3D wave equation in a bounded smooth domain O ⊂ R3 of the form utt + σ1 ut − Δ u + k1 (u − v) = f˜1 (u) + g1 (x), u∂ O = 0, (1.6.15) vtt + σ2 vt − Δ v + k2 (u − v) = f˜2 (u) + g2 (x), v = 0, ∂O
provided that f˜i ∈ C2 (R) possesses the property | f˜i (s)| ≤ C(1 + |s|) for all s ∈ R, and that the parameters σi and ki are nonnegative, gi ∈ L2 (O). Thus, our abstract model covers the case of 3D wave dynamics with a critical force term. We refer the reader to Sect. 1.6.6 for further discussion. Recall that the Fr´echet derivatives Π (k) (u) of the functional Π are symmetric k-linear continuous forms on H 1/2 (see, for example, Cartan [23]). Moreover, if ˜ Π ∈ C3 , then (F(u), v) := −Π (u), v is C2 -functional for every fixed v ∈ H 1/2 and the following Taylor expansion holds ˜ + w) − F(u), ˜ −(F(u v) = Π (2) (u); w, v +
1 0
(1 − λ )Π (3) (u + λ w); w, w, vd λ (1.6.16)
for any u, v ∈ H 1/2 (see Cartan [23]). Assume that u and z belong to C1 [a, b], D(B˜ 1/2 ) for some interval [a, b] ⊆ R. By the differentiation rule for the composition of mappings Cartan [23] and using the symmetry of the form Π (2) (u), we have that d Π (2) (u); z, z = Π (3) (u); ut , z, z + 2Π (2) (u); z, zt . dt Therefore, from (1.6.16) we obtain the following representation, which is important in our further considerations: d − (F˜i (u(t) + z(t)) − F˜i (u(t)), zt (t)) = Qi (t) + Ri (t), dt with
t ∈ [a, b] ⊆ R, (1.6.17)
1 (2) Qi (t) = Πi (u(t)); z(t), z(t) 2
(1.6.18)
and 1 (3) Ri (t) = − Πi (u); ut , z, z + 2
1 0
(3)
(1 − λ )Πi (u + λ z); z, z, zt d λ .
(1.6.19)
1.6 Synchronization in Elastic/Wave Structures
95
As we will see below, the representation in (1.6.17) and the hypotheses listed in Assumption 1.6.10 can be avoided if the nonlinear forces F˜i (u) are subcritical, i.e., ∃ σ0 < 1/2 : F˜i (u1 )− F˜i (u2 )| ≤ L(ρ )B˜ σ0 (u1 −u2 ),
∀B˜ 1/2 ui ≤ ρ . (1.6.20)
The following assertion, in fact, is proved in Chueshov/Lasiecka [46] (see (4.38), p. 99), but without control of the parameters α and κ. Proposition 1.6.11. Let Assumptions 1.6.1 and 1.6.7 be in force. In addition we suppose that either Assumption 1.6.10 or Relation (1.6.20) holds. Let M ⊂ X be a bounded forward invariant set and Y i (t) = (U i (t),Uti (t)) = φ (t,Y0i ) with Y0i (t) = (U0i ,U1i ), i = 1, 2, be two solutions to (1.6.7) with (different) initial data Y0i ∈ M. Let Z = U 1 − U 2 . Then, there exist C, γ > 0 such that EZ (t) ≤ CEZ (0)e−γ t +CmaxBσ Z(τ )2 , [0,t]
∀t > 0,
(1.6.21)
where 0 ≤ σ < 1/2 and EZ (t) =
1 Zt (t)2 + B1/2 Z(t)2 + κK 2
1/2
Z2 .
If M is a uniformly bounded set in X with respect to (α , κ) ∈ Λ and (α , κ) ∈ Λβ := {(α , κ) ∈ Λ : α ≤ β (1 + κ)}
(1.6.22)
for some β > 0, then the constants C, γ are independent of (α , κ), but may depend on β . Proof. We use the same line of argument as in Chueshov/Lasiecka [46] and start with the following relation (which follows from Lemma 3.23 in Chueshov/Lasiecka [46]): T T EZ (t)dt ≤c ((D0 + α K )Zt , Zt )dt T EZ (T ) + 0 0 % T 1 2 |((D0 + α K )Zt , Z)| dt + ΨT (U ,U ) + 0
for every T ≥ T0 ≥ 1, where c > 0 does not depend on α , κ, T and T 1 2 ΨT (U ,U ) = (G(τ ), Zt (τ ))d τ 0 T T T + (G(t), Z(t))dt + dt (G(τ ), Zt (τ ))d τ 0
0
t
96
1 Synchronization of Global Attractors and Individual Trajectories
with ˜ 2 (t)). ˜ 1 (t)) − F(U G(t) = F(U To obtain this estimate in one step we consider the time derivative of EZ (t) and integrate. We obtain T
EZ (T ) +
t
(D0U + α K )Zt , Zt )d τ = EZ (t) +
T t
(G(τ ), Zt )d τ .
In a second step, we multiply (1.6.7) by Z and apply d (Z, Zt ) = (Zt , Zt ) + (Ztt , Z). dt Since every point (α , κ) ∈ Λ belongs to Λβ for some β > 0, it is sufficient to consider the case when (α , κ) ∈ Λβ for some β . In this case we have that |((D0 + α K )Zt , Z)| ≤((D0 + α K )Zt , Zt )1/2 ((D0 + α K )Z, Z)1/2 ≤Cε ,β ((D0 + α K )Zt , Zt ) + ε EZ (t) for every ε > 0. Thus, choosing ε in an appropriate way we obtain that T
T EZ (T ) +
0
EZ (t)dt ≤ cβ
T 0
((D0 + α K )Zt , Zt )dt + c0ΨT (U 1 ,U 2 ). (1.6.23)
Under Assumption 1.6.10, we have from Proposition 4.13 in Chueshov/Lasiecka [46] that for any ε > 0 and T > 0 there exist a(ε , T ) = aM (ε , T ) and b(ε , T ) = bM (ε , T ) such that T sup (G(τ ),Zt (τ ))d τ ≤ ε t∈[0,T ]
t
+ a(ε , T )
T
T 0
Zt (τ )2 + B1/2 Z(τ )2 d τ
(1.6.24)
d(τ )B1/2 Z(τ )2 d τ + b(ε , T ) sup Bσ Z(τ )2 τ ∈[0,T ]
0
for all ε > 0, where σ < 1/2 and d(t) := d(t;U 1 ,U 2 ) = Ut1 (t)2 + Ut2 (t)2 . Recall that M is a forward invariant bounded set. Obviously, the same relation (1.6.24) (even with a(ε , T ) ≡ 0) remains true in the subcritical case (1.6.20). Thus,
ΨT (U 1 ,U 2 ) ≤ε
T 0
EZ (τ )d τ
+ a(ε , T )
T 0
d(τ )B1/2 Z(τ )2 d τ + b(ε , T ) sup Bσ Z(τ )2 τ ∈[0,T ]
1.6 Synchronization in Elastic/Wave Structures
97
for every ε > 0. From the energy relation we also obtain that T 0
((D0 + α K )Zt , Zt )dt ≤ EZ (0) − EZ (T ) + ΨT (U 1 ,U 2 ).
Plugging in (1.6.23) thus after appropriate choice of ε and T we arrive at the relation EZ (T ) ≤ qEZ (0) + a(T )
T 0
d(τ )B1/2 Z(τ )2 d τ + b(T ) sup Bσ Z(τ )2 , τ ∈[0,T ]
where q < 1 and all constants depend on β . This inequality allows us to apply the same procedure as in Chueshov/Lasiecka [46, p. 100] based on the Gronwall lemma to obtain (1.6.21). Now we are in position to obtain a result on the finiteness of the fractal dimension of the attractors and also additional bounds for trajectories from these attractors. Theorem 1.6.12. Let Assumption 1.6.1 and 1.6.7 be in force. In addition, we suppose that either Assumption 1.6.10 or relation (1.6.20) holds. Then for any (α , κ) ∈ Λ the following assertions hold: 1. The global attractor Aα ,κ of the system φ on X generated by (1.6.7) has a finite fractal dimension dim f Aα ,κ . 2. This attractor Aα ,κ lies in X 1 × X 1/2 and for every full trajectory Y = {(U(t),Ut (t)) : t ∈ R} from the attractor in addition to the bound in (1.6.12) we have that # $ sup Utt (t)2 + B1/2Ut (t)2 + κK 1/2Ut (t)2 ≤ R1 (β )
(1.6.25)
t∈R
for some R1 (β ) independent of (α , κ) ∈ Λβ , where β > 0 can be arbitrary and Λβ is given by (1.6.22). Proof. By Proposition 1.6.11 the system φ is quasi-stable on every bounded forward invariant set. Thus, we can apply Theorems 3.4.18 and 3.4.19 from Chueshov [40] to prove statements 1 and 2 (see also Theorem 1.2.22). Remark 1.6.13 (Regular Semicontinuity). Similar to the parabolic case (see Proposition 1.3.14) we prove that under the conditions of Theorem 1.6.12 the attractors Aα ,κ are upper semicontinuous at every point (α∗ , κ∗ ) ∈ Λ , i.e., lim dX (Aα
n→∞
n ,κ n
, Aα∗ ,κ∗ ) = 0
(1.6.26)
for every sequence {(α n , κ n )} ⊂ Λ such that (α n , κ n ) → (α∗ , κ∗ ) ∈ Λ as n → ∞. To see this, we can apply Theorem 1.2.14. Indeed, let (α n , κ n ) → (α∗ , κ∗ ) ∈ Λ as n → ∞. In particular, {(α n , κ n )} belongs to a bounded set in Λβ . Thus, it follows
98
1 Synchronization of Global Attractors and Individual Trajectories
from (1.6.25) that the attractor Aαn ,κn belongs to the set (U0 ,U1 ) : BU0 2 + B1/2U1 2 ≤ C2 , which is compact in X . Thus, we only need to show property (ii) in Theorem 1.2.14. Let (U0n ,U1n ) ∈ Aαn ,κn and (U0 ,U1 ) ∈ Aα∗ ,κ∗ . One can see that (Z(t), Zt (t)) = φ αn ,κn (t,U0n ,U1n ) − φ α∗ ,κ∗ (t,U0 ,U1 ) = (U n (t) −U(t),Utn (t) −Ut (t)) satisfies the equation Ztt + BZ + (D0 + α∗ K )Zt + κ∗ K Z = δ F n , where ˜ n ) − F(U). ˜ δ F n = −(αn − α∗ )K Utn − (κn − κ∗ )K U n + F(U On the attractors we obviously have that δ F n ≤ c1 (|αn − α∗ | + |κn − κ∗ |) + c2 B1/2 Z. Therefore, the standard energy-type calculations and the Gronwall lemma give the estimate φ αn ,κn (t,Y0n )− φ α∗ ,κ∗ (t,Y0 )X ≤ C(|αn − α∗ |+|κn −κ∗ |+Y0n −Y0 X )eat , t > 0, where Y0n = (U0n ,U1n ) and Y0 = (U0 ,U1 ). Thus, we can apply Theorem 1.2.14. Remark 1.6.14. The results stated in Theorems 1.6.3 and 1.6.12 deal with a general model of the form (1.6.4) or (1.6.7); thus, they can also be applied in the case of several interacting second order in time equations of the form N ˜ i + ∑ Di j utj + α Ki j utj + κKi j u j = F˜i (ui ), i = 1, . . . , N, utti + νi Bu
(1.6.27)
j=1
under obvious changes in the set of hypotheses concerning the operators in (1.6.27). The same is true concerning upper semicontinuity mentioned in Remark 1.6.13.
1.6.4 Asymptotic Synchronization Now we apply the results above to synchronization. We change on the interaction operators K of the standard form (see, for example, Hale [86] and the references therein) to the symmetric form. Namely, we assume that 1 −1 K = K, (1.6.28) −1 1
1.6 Synchronization in Elastic/Wave Structures
99
where K is a strictly positive operator in H with domain D(K) ⊇ H 1/2 . Thus, we consider the following problem ˜ + D011 ut + D012 vt + α K(ut − vt ) + κK(u − v) = F˜1 (u), utt + ν1 Bu ˜ + D021 ut + D022 vt + α K(vt − ut ) + κK(v − u) = F˜2 (v), vtt + ν2 Bv
(1.6.29)
with initial data u(0) = u0 , ut (0) = u1 , v(0) = v0 , vt (0) = v1 . All theorems stated above can be applied to this situation of a system with two equations. Our goal is to study asymptotic synchronization phenomena and we are interested in qualitative behavior of the system in the large coupling limit κ → ∞ (and/or α → ∞). It is clear from the bound of the attractor given in (1.6.12) that it is reasonable to assume that u = v is the limit. Therefore, we need to consider a limiting problem of the form ˜ + Dwt = F˜ m (w), w(0) = w0 , wt (0) = w1 . wtt + ν Bw where
ν=
(1.6.30)
1 2 ν1 + ν2 1 , D = ∑ D0i j , F˜ m (w) = (F˜1 (w) + F˜2 (w)). 2 2 i, j=1 2
Obviously, the argument above can be applied to system (1.6.30) provided that the damping operator D is not degenerate. In fact, we can easily prove the following assertion. Proposition 1.6.15. Let Assumption 1.6.1(1,4), and 1.6.7(2) be in force. Let D : H 1/2 → H be a strictly positive operator. In addition, assume that either Assumption 1.6.10 or the relation in (1.6.20) is valid. Then, the problem in (1.6.30) generates a dynamical system in the space H 1/2 × H possessing a global attractor of finite fractal dimension. This attractor is a bounded set in H 1 × H 1/2 . Below, we show that the attractors Aα ,κ for problems (1.6.29) in some sense converge to the attractor of the limiting system (1.6.30) when κ → +∞. Theorem 1.6.16. Let Assumption 1.6.1, 1.6.4(ii) and 1.6.7 be in force with K of the form given in (1.6.28). Then, for every (α , κ) ∈ Λ , the system φ on X generated by the problem in (1.6.29) possesses a global attractor Aα ,κ . For every full trajectory Y = {(U(t),Ut (t)) : t ∈ R} with U(t) = (u(t), v(t)) from the attractor sup Ut (t)2 + B1/2U(t)2 + κK 1/2 (u(t) − v(t))2 + t∈R
α for some R independent of (α , κ) ∈ Λ .
+∞ −∞
% Ut (τ )2 d τ
≤ R2
(1.6.31)
100
1 Synchronization of Global Attractors and Individual Trajectories
In addition assume that either Assumption 1.6.10 or the relation in (1.6.20) holds. Then, for any (α , κ) ∈ Λ , the following assertions hold: 1. The global attractor Aα ,κ of the system φ generated by (1.6.29) has a finite fractal dimension dim f Aα ,κ . 2. This attractor Aα ,κ lies in X 1 × X 1/2 and for every full trajectory Y = {(U(t),Ut (t)) : t ∈ R} from the attractor in addition to the bound in (1.6.31) we also have that # $ sup Utt (t)2 + B1/2Ut (t)2 + κK 1/2 (ut (t) − vt (t))2 ≤ R21 (β ) (1.6.32) t∈R
for some R1 (β ) independent of (α , κ) ∈ Λβ , where β > 0 can be arbitrary and Λβ is given by (1.6.11). 3. The attractors Aα ,κ are upper semicontinuous at every point (α∗ , κ∗ ) ∈ Λ , i.e., (1.6.26) is valid for every sequence {(α n , κ n )} ⊂ Λ such that (α n , κ n ) → (α∗ , κ∗ ) ∈ Λ as n → ∞. We recall that the set Λ is defined in the statement of Theorem 1.6.8. 4. Let u → F˜i (u) be weakly continuous from H 1/2 into some space H −l , l ≥ 0. Then, in the limit κ → ∞ we have that = 0, lim dXε (Aα ,κ , A)
κ→∞
where Xε = X 1/2−ε × X 1/2−ε and = (u0 , u0 , u1 , u1 ) : (u0 , u1 ) ∈ A . A
(1.6.33)
(1.6.34)
Here, A is the global attractor for the dynamical system generated by (1.6.30). Moreover, if instead of the weak continuity of F˜i we assume that ν1 = ν2 and K ˜ then the convergence in (1.6.33) holds in the space X 1/2 × commutes17 with B, 1/2− ε ⊂X. X Remark 1.6.17. • The result in (1.6.33) means that the components of the system become synchronized at the level of global attractors in the limit of a large intensity parameter κ with fixed or even absent interaction in velocities. In particular, this implies that every solution U(t) = (u(t), v(t)) to (1.6.29) demonstrates the following synchronization phenomenon: ∀ δ > 0 ∃ κ∗ : lim sup ut (t) − vt (t)2 + B˜ 1/2 (u(t) − v(t))2 ≤ δ , ∀ κ ≥ κ∗ . t→+∞
17
We can take K = B˜ σ with some 0 ≤ σ ≤ 1/2, for instance.
1.6 Synchronization in Elastic/Wave Structures
101
• To achieve a synchronized regime we can even assume the absence of damping in one of the equations in (1.6.29). For instance, the conclusions of Theorem 1.6.16 remain true if we take D11 = γ · id with γ > 0 and D12 = D21 = D22 = 0. • The same conclusion as in (1.6.33) can be obtained in the limit (α , κ) → +∞ inside of Λβ for some β . However, as one can see from (1.6.33) large α is not necessary for asymptotic synchronization. This observation improves the result established in Hale [86] for finite-dimensional systems, which requires for synchronization both parameters α and κ to be large. • The possibility to obtain synchronization for fixed small κ and large α is problematic. The point is that in the case κ = 0 under appropriate requirement on the nonlinear forces F˜i there are possible two different stationary solutions that demonstrate the absence of asymptotic synchronization. Proof of Theorem 1.6.16. All results except the last one easily follow from Theorem 1.6.12. Thus, we need to establish property (1.6.33) only. As in Hale/Lin/Raugel [89], Kapitansky/Kostin [98] (see also the proof of Theorem 1.5.8) we apply a contradiction argument. Assume that (1.6.33) is not true. Then there exist sequences {κn → ∞} and Y0n ∈ α A ,κn such that ≥ δ > 0, n = 1, 2, . . . dXε (Y0n , A) Since Y0n ∈ Aα ,κn , there exists a full trajectory Y n = {(U n (t),Utn (t)) : t ∈ R} from the attractor Aα ,κn such that Y n (0) = Y0n . It follows from (1.6.31) and (1.6.32) and also from the Aubin–Dubinski–Lions Theorem (see Simon [154], Corollary 4) that the sequence Y n is compact in C([a, b], Xε ) for every a < b and weak-star compact in L∞ (R, X 1/2 × X 1/2 ). Thus, there exists = (u(t), v(t)) ∈ C1 (R, X 1/2−ε ) ∩ L∞ (R, X 1/2 ) U(t) t ∈ L∞ (R, X 1/2 ) such that along a subsequence with U # $ n 2 t (t)2 lim sup Utn (t) − U 1/2−ε + U (t) − U(t)1/2−ε → 0 n→∞ t∈[a,b]
(1.6.35)
for all a < b, and U t (t)) weak star in L∞ (R, X 1/2 × X 1/2 ), n → ∞. (U n (t),Utn (t)) → (U(t), Since K is strictly positive, it follows from (1.6.31) and (1.6.32) that R2 → 0, n → ∞. sup utn (t) − vtn (t)2 + un (t) − vn (t)2 ≤ κn t∈R
102
1 Synchronization of Global Attractors and Individual Trajectories
By interpolation supB˜ 1/2−ε (utn (t) − vtn (t)) t∈R % 1−2ε 1/2 n 1/2 n n n 2ε ˜ ˜ ≤ C sup B ut (t) + B vt (t) ut (t) − vt (t) t∈R
≤ C sup utn (t) − vtn (t)2ε → 0, n → ∞. t∈R
Hence,
sup B˜ 1/2−ε (un (t) − vn (t)) → 0, n → ∞. t∈R
Since u → F˜i (u) is weakly continuous for i = 1, 2, these observations allow us to make a limit transition in the variational form of the sum of equations (1.6.29) and = (u(t), u(t)), where u(t) is a solution to (1.6.30). Moreover, conclude that U(t) (u(t), ut (t)) is a trajectory bounded in X 1/2 . Thus, it belongs to the attractor A. It is also clear from (1.6.35) that in Xε , t (0)) ∈ A Y0n = (U n (0),Utn (0)) → (U(0), U which is impossible. In the case when ν1 = ν2 and K commutes with B˜ taking the sum of equations (1.6.2) we can find that ˜ sup B(u(t) + v(t)) ≤ C(Rβ )
(1.6.36)
t∈R
for every trajectory (u(t), v(t), ut (t), vt (t)) from the attractor Aα ,κ with (α , κ) ∈ Λβ . Taking the difference of (1.6.2) we obtain that ˜ sup Bz(t) + 2κKz(t) ≤ C(Rβ ) with z(t) = u(t) − v(t)
(1.6.37)
t∈R
for fixed α . Since B˜ and K commutes, ˜ + 2κKz2 = Bz ˜ 2 + 4κ 2 Kz2 + 4κ(Bz, ˜ Kz) Bz ˜ 2 + 4κ 2 Kz2 + 4κB˜ 1/2 K 1/2 z2 ≥ Bz ˜ 2. = Bz Thus, (1.6.36) and (1.6.37) yield the following additional estimate on the attractor: ˜ ˜ sup Bu(t) + Bv(t) ≤C t∈R
with the constant C independent of κ. This provides us with compactness of U n (t) in the space C([a, b], X 1−ε ) for every ε > 0 and makes it possible to improve the statement in (1.6.33).
1.6 Synchronization in Elastic/Wave Structures
103
Now we consider the case of identical interacting subsystems, i.e., we assume that
ν1 = ν2 = ν , D011 = D022 = D, D012 = D021 = 0, F˜1 (w) = F˜2 (w).
(1.6.38)
In this case we observe asymptotic synchronization for finite values of κ. Theorem 1.6.18. Let the hypotheses of Theorem 1.6.16 and also relations (1.6.38) be in force. Assume that α ∈ [α¯ , α∗ ] for some fixed α∗ . Let ˜ w) + κ(Kw, w) : w ∈ H 1/2 , w = 1 . sκ = inf ν (Bw, There exists s∗ = s∗ (α¯ , α∗ ) and γ > 0 such that under the condition18 sκ ≥ s∗ the property of asymptotic exponential synchronization holds, i.e., # $ (1.6.39) lim eγ t ut (t) − vt (t)2 + B˜ 1/2 (u(t) − v(t))2 = 0 t→∞
for all κ for every solution U(t) = (u(t), v(t)) to (1.6.29). In this case, Aα ,κ = A such that sκ ≥ s∗ , A is given by (1.6.34). Proof. In the case considered, w = u − v satisfies the equation ˜ + Dwt + 2α Kwt + 2κKw = F˜1 (u) − F˜2 (v), w(0) = w0 , wt (0) = w1 , wtt + ν Bw where w0 = u0 − v0 and w1 = u1 − v1 . If we consider the case of the critical nonlinearity, the subcritical case is much simpler. In the former case we use the representation (1.6.17) with z = w. Since F˜1 = F˜2 , below we omit the subscript i. It follows from (1.6.13) and (1.6.14) that the variables Q1 = Q2 = Q and R1 = R2 = R defined in (1.6.18) and (1.6.19) admit the estimates |Q(t)| ≤ CR B˜ σ w(t)2 and |R(t)| ≤ CR (ut (t) + vt (t))B˜ 1/2 w(t)2 under the condition B˜ 1/2 u(t)2 + B˜ 1/2 v(t)2 ≤ R2
(1.6.40)
with R and thus CR independent of α and κ. We consider a Lyapunov type function of the form + Φ (t), Ψ (t) = E(t)
18
One can see that sκ ≥ κ · inf spec(K). Thus, if K is not degenerate, then sκ → +∞ as κ → +∞.
104
1 Synchronization of Global Attractors and Individual Trajectories
where
= 1 wt (t)2 + ν B˜ 1/2 w(t)2 + Q(t) E(t) 2
and
Φ (t) = η (w, wt ) + μ (Kw, w). Here, η is a positive constant that will be chosen later and 2μ = κ + ηα . By uniform dissipativity of the system φ α ,κ we can assume that (1.6.40) holds with the same R as in (1.6.9) for all t ≥ t∗ . We can see that there exists 0 < η0 < 1 and βi > 0 independent of (α , κ) such that
β0 E0 (t) + κK 1/2 w(t)2 − cR w(t)2
≤ Ψ (t) ≤ β2 E0 (t) + cR w(t)2 + μ K 1/2 w(t)2 ,
for all t ≥ t∗ and 0 < η < η0 , where E0 (t) =
1 wt (t)2 + ν B˜ 1/2 w(t)2 . 2
Now for strong solutions we calculate the derivative dΨ = − ((D + α K)wt , wt ) − R(t) dt
˜ w) − 2κ(Kw, w) + (F˜1 (u) − F˜1 (v), w) + η wt 2 − (Dwt , w) − ν (Bw, Since D is bounded from H 1/2 into H, we obtain that |(Dwt , w)| ≤ ε B˜ 1/2 w2 +Cε −1 wt ||2 , ∀ε > 0. Thus, there exist bi > 0 independent of (α , κ) such that
dΨ ≤ − ((D + α K)wt , wt ) − b1 η wt 2 +CR (ut + vt )B˜ 1/2 w2 dt
− b2 η E0 (t) + κK 1/2 w2 + η cR w(t)2 , Fixing α and taking sκ large enough we obtain that dΨ + γΨ (t) −CR (ut 2 + vt 2 )B˜ 1/2 w2 ≤ 0, t ≥ t∗ , dt for some γ ,C > 0. Using the finiteness of the dissipativity integrals: ∞ 0
(ut 2 + vt 2 )dt < ∞,
for the attractors follows from (1.6.39). we obtain (1.6.39). The equality Aα ,κ = A
1.6 Synchronization in Elastic/Wave Structures
105
If F˜ is critical, but does not satisfy the structural hypothesis in Assumption 1.6.10, we can still guarantee asymptotic exponential synchronization. However, in this case we need the additional condition that the damping parameter α is large enough and K is not degenerate. The role of large damping is the same as in the existence of global attractors in the case of critical forces, see, for example, Chueshov/Lasiecka [46, p. 85] or Chueshov [40, Section 6.3.3]. If F˜ are globally Lipschitz we can even avoid the requirement of dissipativity of the system. Remark 1.6.19 (Complete Replacement Synchronization). We can show that in some special cases the system in (1.6.29) demonstrates the complete replacement synchronization effect. For instance, we can consider the system ˜ + D1 ut + κK(u − v) = F˜1 (u), utt + ν1 Bu ˜ + D2 vt + κK(v − u) = F˜2 (v), vtt + ν2 Bv
(1.6.41)
under the following conditions • B˜ and F˜2 satisfy the corresponding conditions in Assumption 1.6.1; • Di : H 1/2 → H is strictly positive and K is a strictly positive operator in H with domain D(K) ⊇ H 1/2 ; • F˜1 (u) is globally Lipschitz and subcritical, i.e., ∃ σ0 < 1/2 : F˜1 (u) − F˜1 (u) ¯ ≤ LB˜ σ0 (u − u), ¯
∀u, u¯ ∈ H 1/2 .
The corresponding response equation has the form u¯tt + ν1 B˜ u¯ + D1 u¯t + κK u¯ = F˜1 (u) ¯ + κKv(t),
(1.6.42)
where v(t) is the v-component of the solution to (1.6.41). Using the same approach as in the proof Theorem 1.6.3 and the methods developed in Chueshov/Lasiecka [46] we can show the existence and uniqueness theorem for both systems (1.6.41) and (1.6.42). To obtain the results on complete replacement synchronization we need to consider the difference z = u − u, ¯ which solves the equation ˜ + D1 zt + κKz = F˜1 (u) − F˜1 (u). ¯ ztt + ν1 Bz We can use a Lyapunov-type functional
Ψ (z, zt ) =
1 zt 2 + ν1 B˜ 1/2 z2 + κz2 + η (z, zt ), 2
with κ large and with an appropriate choice of η > 0. The standard calculations lead to the relation # $ 2 lim eγ t ut (t) − u¯t (t)2 + B˜ 1/2 u(t) − u(t)) ¯ = 0, t→∞
106
1 Synchronization of Global Attractors and Individual Trajectories
where $\gamma > 0$, for every solution $U(t) = (u(t), v(t))$ to (1.6.41) and for every $\bar u(t)$ satisfying (1.6.42). This means the complete replacement synchronization with an exponential speed (see the general discussion in the Introduction).

Remark 1.6.20. Results similar to Theorems 1.6.16 and 1.6.18 can also be established for $N$ coupled second order in time equations of the form
\[
u^1_{tt} + \nu_1\tilde B u^1 + D_1 u^1_t + \alpha K(u^1_t - u^2_t) + \kappa K(u^1 - u^2) = \tilde F_1(u^1),
\]
\[
u^j_{tt} + \nu_j\tilde B u^j + D_j u^j_t - \alpha K(u^{j+1}_t - 2u^j_t + u^{j-1}_t) - \kappa K(u^{j+1} - 2u^j + u^{j-1}) = \tilde F_j(u^j), \quad j = 2,\dots,N-1,
\]
\[
u^N_{tt} + \nu_N\tilde B u^N + D_N u^N_t + \alpha K(u^N_t - u^{N-1}_t) + \kappa K(u^N - u^{N-1}) = \tilde F_N(u^N).
\]
This system can be reduced to (1.6.7) with
\[
D_0 = \begin{pmatrix} D_1 & 0 & 0 & \dots & 0\\ 0 & D_2 & 0 & \dots & 0\\ 0 & 0 & D_3 & \dots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \dots & D_N \end{pmatrix}, \qquad
\mathbf{K} = \begin{pmatrix} 1 & -1 & 0 & \dots & 0\\ -1 & 2 & -1 & \dots & 0\\ 0 & -1 & 2 & \dots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \dots & 1 \end{pmatrix} K.
\]
Thus, the general results of this section can be applied with the same hypotheses concerning the operators $\tilde B$, $D_i$, $K$ and $\tilde F_i$. We note that the energy for this $N$ coupled model has the form
\[
E = \sum_{j=1}^N\Bigl[\frac12\bigl(\|u^j_t\|^2 + \nu_j\|\tilde B^{1/2}u^j\|^2\bigr) + \Pi_j(u^j)\Bigr] + \frac{\kappa}{2}\sum_{j=1}^{N-1}\|K^{1/2}(u^{j+1} - u^j)\|^2.
\]
In the ODE case ($\nu_i = 0$, $K = \mathrm{id}$) synchronization for this model was considered in Hale [86] with the assumption that both $\alpha$ and $\kappa$ become large or even tend to infinity. Our approach allows us to observe asymptotic synchronization for fixed $\alpha$ and in the limit $\kappa\to+\infty$ (for identical subsystems it is sufficient to assume that $\kappa$ is large enough). The limiting (synchronized) regime is described by the problem in (1.6.30) with
\[
\nu = \frac1N\sum_{j=1}^N\nu_j, \qquad D = \frac1N\sum_{j=1}^N D_j, \qquad F^m(w) = \frac1N\sum_{j=1}^N\tilde F_j(w).
\]
In the case of a plate with the Berger nonlinearity, the same result was obtained in Naboka [122] with $D_{0j} = d_j\cdot\mathrm{id}$, $\alpha = 0$ and $K = \mathrm{id}$. We note that the models above admit other generalizations. For instance, instead of $-\tilde F_i(u) = \Pi_i'(u)$ we can consider non-conservative forces, which also depend on velocities. The corresponding theory of the second-order models was developed in Chueshov/Lasiecka [46].
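As a rough numerical illustration of the ODE case just mentioned ($\nu_j = 0$, $K = \mathrm{id}$), the following sketch integrates $N$ coupled oscillators $\ddot u^j + d_j\dot u^j + \kappa(\mathbf{K}u)^j = \sin u^j$ with the tridiagonal matrix $\mathbf{K}$ displayed above and monitors how the spread of the components shrinks as the intensity $\kappa$ grows. This is only an added illustration; the damping coefficients $d_j$, the choice $\tilde F_j(u) = \sin u$ and all numerical values are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 5
d = np.linspace(0.5, 1.5, N)                  # damping coefficients D_j (assumed)
K = np.diag([1.] + [2.]*(N - 2) + [1.])       # the tridiagonal coupling matrix from the remark
K += np.diag([-1.]*(N - 1), 1) + np.diag([-1.]*(N - 1), -1)

def rhs(t, y, kappa):
    u, v = y[:N], y[N:]
    return np.concatenate([v, np.sin(u) - d*v - kappa*(K @ u)])

y0 = np.random.default_rng(0).standard_normal(2*N)
for kappa in (0.1, 10.0, 100.0):
    sol = solve_ivp(rhs, (0.0, 50.0), y0, args=(kappa,), rtol=1e-8, atol=1e-10)
    u_end = sol.y[:N, -1]
    print(f"kappa={kappa:7.1f}   spread of u^j at t=50: {np.ptp(u_end):.3e}")
```

For identical subsystems the spread would vanish in the limit $\kappa\to+\infty$, in line with the remark above.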
1.6.5 On Synchronization by Means of Finite-Dimensional Coupling

One can see from the argument given in Theorem 1.6.18 that asymptotic synchronization can be achieved even with a finite-dimensional coupling operator. Indeed, the only condition we need is
\[
\nu(\tilde B w, w) + \kappa(Kw, w) \ge c\|w\|^2, \qquad \forall\, w\in H^{1/2},
\]
with appropriate $c > 0$ depending on the size of an absorbing ball. As was already mentioned in Sect. 1.3.7, if $K$ is a strictly positive operator, then the requirement $s_\kappa \ge s_*$ holds for a large intensity parameter $\kappa$. However, it is not necessary to assume non-degeneracy of the operator $K$ to guarantee large $s_\kappa$. For instance, by the same calculations as in Sect. 1.3.7, if $K = P_N$ is the orthoprojector onto $\mathrm{Span}\{e_k : k = 1,2,\dots,N\}$, then
\[
\nu(\tilde B w, w) + \kappa(Kw, w) \ge \min\{\nu\lambda_1 + \kappa,\ \nu\lambda_{N+1}\}\|w\|^2.
\]
Thus, if $\kappa \ge \nu(\lambda_{N+1} - \lambda_1)$, then we can guarantee large $s_\kappa$ by an appropriate choice of $N$. Then, in the same way as in Sect. 1.3.7, we arrive at the following situation. Consider the class $\mathcal{R}_{\mathcal{L}}$ of interpolation operators that are related to the set of functionals $\mathcal{L} = \{l_j\}$ on $H^{1/2}$. An operator $K$ belongs to $\mathcal{R}_{\mathcal{L}}$ if it has the form
\[
Kv = \sum_{j=1}^N l_j(v)\psi_j, \qquad \forall\, v\in H^{1/2}, \tag{1.6.43}
\]
where $\{\psi_j\}$ is an arbitrary finite set of elements from $H^{1/2}$. An operator $K\in\mathcal{R}_{\mathcal{L}}$ is called a Lagrange interpolation operator if it has the form (1.6.43) with $\{\psi_j\}$ such that $l_k(\psi_j) = \delta_{kj}$. In the case of Lagrange operators we have that $l_j(u - Ku) = 0$ and thus (1.3.77) with $\sigma = 1/2$ yields
\[
\|u - Ku\| \le \varepsilon_{\mathcal{L}}\|u\|_{1/2}, \qquad u\in H^{1/2},
\]
where $\varepsilon_{\mathcal{L}} = \varepsilon_{\mathcal{L}}^{1/2}$ is the completeness defect defined in (1.3.77). Unfortunately, in the general case an interpolation operator of the form (1.6.43) is not symmetric and positive. Therefore, in contrast to the parabolic case (see Sect. 1.3.7), we cannot apply the result on uniform dissipativity of $K$ of the form (1.6.43). The situation requires a separate consideration and possibly another set of hypotheses concerning the model. Here, we give only one particular result in this direction. We consider the following version of the equations (1.6.29):
\[
u_{tt} + \nu\tilde B u + D u_t + \kappa K(u - v) = \tilde F_1(u), \qquad
v_{tt} + \nu\tilde B v + D v_t + \kappa K(v - u) = \tilde F_1(v), \tag{1.6.44}
\]
with initial conditions u(0) = u0 , ut (0) = u1 , v(0) = v0 , vt (0) = v1 .
Theorem 1.6.21. Assume that $\tilde B$ satisfies Assumption 1.6.1(1) and $\tilde F_1(u)$ is globally Lipschitz and subcritical, i.e., there exists $\sigma_0 < 1/2$ such that
\[
\|\tilde F_1(u_1) - \tilde F_1(u_2)\| \le L_{\tilde F_1}\|\tilde B^{\sigma_0}(u_1 - u_2)\|, \qquad \forall\, u_i\in H^{1/2}.
\]
Let $D : H^{1/2}\to H$ be a strictly positive operator and $K \equiv K_{\mathcal{L}}$ be a Lagrange interpolation operator for some family $\mathcal{L}$ of linear continuous functionals $\{l_j : j = 1,\dots,N\}$ on $H^{1/2}$. Then, for every initial data $U_0 = (u_0, v_0)\in X^{1/2}$ and $U_1 = (u_1, v_1)\in X$, the problem in (1.6.44) has a unique generalized solution $U(t) = (u(t), v(t))$, and there exist $\kappa_* > 0$ and $\varepsilon_*(\kappa)$ such that for every $\kappa\ge\kappa_*$ and $\varepsilon_{\mathcal{L}}\le\varepsilon_*(\kappa)$ the solution $U(t) = (u(t), v(t))$ is asymptotically synchronized, i.e., the relation in (1.6.39) holds with some $\gamma > 0$.

Proof. This is the globally Lipschitz case and therefore the well-posedness easily follows from Chueshov/Lasiecka [46, Theorem 1.5]. Following the same idea as in Theorem 1.6.18 we find that the difference $w = u - v$ satisfies the equation
\[
w_{tt} + \nu\tilde B w + D w_t + 2\kappa w + G_{\kappa,\mathcal{L}}(u, v) = 0, \qquad w(0) = w_0,\ w_t(0) = w_1, \tag{1.6.45}
\]
where $w_0 = u_0 - v_0$, $w_1 = u_1 - v_1$ and
\[
G_{\kappa,\mathcal{L}}(u, v) = -2\kappa(\mathrm{id} - K_{\mathcal{L}})(u - v) - \bigl(\tilde F_1(u) - \tilde F_1(v)\bigr).
\]
We have the obvious estimate
\[
\|G_{\kappa,\mathcal{L}}(u, v)\| \le 2\kappa\varepsilon_{\mathcal{L}}\|\tilde B^{1/2}w\| + L_{\tilde F_1}\|\tilde B^{\sigma_0}w\| \le 2(\kappa\varepsilon_{\mathcal{L}} + \delta)\|\tilde B^{1/2}w\| + C_\delta\|w\|
\]
for all $\delta > 0$ by an interpolation argument. This implies
\[
|(G_{\kappa,\mathcal{L}}(u, v), w)| \le c(\kappa\varepsilon_{\mathcal{L}} + \delta)^2\|\tilde B^{1/2}w\|^2 + \bar C_\delta\|w\|^2, \qquad \forall\,\delta > 0,
\]
and
\[
|(G_{\kappa,\mathcal{L}}(u, v), w_t)| \le \mu\|w_t\|^2 + \frac{2(\kappa\varepsilon_{\mathcal{L}} + \delta)^2}{\mu}\|\tilde B^{1/2}w\|^2 + C_{\delta,\mu}\|w\|^2, \qquad \forall\,\mu,\delta > 0.
\]
Now, as in the proof of Theorem 1.6.18, we can use the Lyapunov-type functional
\[
\Psi(w, w_t) = \frac12\bigl(\|w_t\|^2 + \nu\|\tilde B^{1/2}w\|^2 + \kappa\|w\|^2\bigr) + \eta(w, w_t),
\]
with $\kappa$ large, $\kappa\varepsilon_{\mathcal{L}}$ small and with an appropriate choice of $\mu$ and $\delta$.
Remark 1.6.22 (Complete Replacement Synchronization). With reference to Remark 1.6.19 we note that in the situation of identical subsystems described in Theorem 1.6.21 the response system has the form
\[
\bar u_{tt} + \nu\tilde B\bar u + D\bar u_t + \kappa K\bar u = \tilde F_1(\bar u) + \kappa K v(t).
\]
Therefore, the difference $w = u - \bar u$ satisfies equation (1.6.45) with $\bar u$ instead of $v$ in the expression for $G_{\kappa,\mathcal{L}}(u, v)$ and with $\kappa$ instead of $2\kappa$ in all calculations. Therefore, under the conditions of Theorem 1.6.21 with appropriate $\kappa_*$ and $\varepsilon_*(\kappa)$ we observe, in system (1.6.44) with finite-dimensional coupling, complete replacement synchronization with $v$ as a synchronizing coordinate. Owing to the symmetry between $u$ and $v$ we can also choose $u$ as a synchronizing coordinate.
1.6.6 Applications In this section we outline possible applications of the general results presented above concerning the second order in time models.
Plate Models

We first consider plate models with coupling via elastic (Hooke-type) links. Namely, we consider the following PDEs
\[
u_{tt} + \gamma_1 u_t + \Delta^2 u + \kappa K(u - v) = \tilde f_1(u) + g_1 \quad\text{in } O\subset\mathbb{R}^2,
\]
\[
v_{tt} + \gamma_2 v_t + \Delta^2 v + \kappa K(v - u) = \tilde f_2(v) + g_2 \quad\text{in } O\subset\mathbb{R}^2, \tag{1.6.46}
\]
with the hinged boundary conditions
\[
u = \Delta u = 0, \qquad v = \Delta v = 0 \quad\text{on } \partial O,
\]
where $K$ is an operator that will be specified later and $O$ is assumed to be bounded and smooth. The nonlinear force term $\tilde f_i(u)$ can take one of the following forms:

• Kirchhoff model: $\tilde f_i\in\mathrm{Lip}_{loc}(\mathbb{R})$ fulfills the conditions
\[
\liminf_{|s|\to\infty}\frac{-\tilde f_i(s)}{s} > -\lambda_1^2, \qquad
-s\tilde f_i(s) \ge a\int_0^s\bigl(-\tilde f_i(\xi)\bigr)\,d\xi - b, \quad \forall\, s\in\mathbb{R}, \tag{1.6.47}
\]
where $\lambda_1$ is the first eigenvalue of the Laplacian with the Dirichlet boundary conditions, and $a$ and $b$ are positive constants. One can see that both conditions in (1.6.47) are in force if we assume that $\tilde f_i\in C^1(\mathbb{R})$ and
\[
\exists\, s_0\ge 0:\quad s\tilde f_i(s)\le 0 \ \text{ and }\ \tilde f_i'(s)\le 0 \ \text{ for all } |s|\ge s_0. \tag{1.6.48}
\]
• Von Kármán model: $-\tilde f_i(u) = [u, v(u) + F_0]$, where $F_0$ is a given function in $H^4(O)$ and the bracket $[u, v]$ is given by
\[
[u, v] = \partial^2_{x_1}u\cdot\partial^2_{x_2}v + \partial^2_{x_2}u\cdot\partial^2_{x_1}v - 2\,\partial^2_{x_1 x_2}u\cdot\partial^2_{x_1 x_2}v.
\]
The Airy stress function $v(u)$ solves the following elliptic problem
\[
\Delta^2 v(u) + [u, u] = 0 \ \text{ in } O, \qquad \frac{\partial v(u)}{\partial n} = v(u) = 0 \ \text{ on } \partial O.
\]
Von Kármán equations are well known in nonlinear elasticity and describe nonlinear oscillations of a plate accounting for large displacements (see Chueshov/Lasiecka [47], Lions/Magenes [111] and the references therein).

• Berger model: In this case, the feedback force has the form
\[
\tilde f_i(u) = \Bigl(a\int_O|\nabla u|^2\,dx - \Gamma\Bigr)\Delta u,
\]
where $a > 0$ and $\Gamma\in\mathbb{R}$ are parameters (for details and references see Chueshov [36, Chap. 4]).

In all these cases we have that $H = L^2(O)$ and
\[
\tilde B u = \Delta^2 u, \qquad u\in D(\tilde B) = \bigl\{u\in H^4(O) : u = \Delta u = 0 \text{ on } \partial O\bigr\}.
\]
It is clear that $\tilde B$ satisfies Assumption 1.6.1(i) and $D(\tilde B^{1/2}) = H^2(O)\cap H^1_0(O)$. The nonlinear force $-\tilde f_i$ in the Kirchhoff model is subcritical with respect to the energy space (it is locally Lipschitz from $H^{1+\delta}(O)$ into $L^2(O)$ for every $\delta > 0$). In contrast, the von Kármán and Berger nonlinearities are critical (they are locally Lipschitz mappings from $H^2(O)$ into $L^2(O)$, which are not compact on $H^2(O)$). Other requirements concerning the corresponding forcing terms $\tilde F_i$ can be verified in the standard way. For details we refer to Chueshov/Kolbasin [43] for the Kirchhoff forces, to Chueshov/Lasiecka [46, Chapter 6] for the von Kármán model, and to Chueshov/Lasiecka [46, Chapter 7] for the case of Berger plates. The interaction operator $K$ can be of the following forms: $K = \mathrm{id}$ and $K = -\Delta$, or even $K = \tilde B^\sigma$ with $0 < \sigma < 1/2$. In the purely Kirchhoff case with globally Lipschitz functions $\tilde f_i$ we can also use the Lagrange interpolation operator with respect to nodes, i.e., with respect to the family of functionals $l_j(w) = w(x_j)$, where $x_j\in O$, $j = 1,\dots,N$, with an appropriate¹⁹ choice of nodes $x_j$. This means that two Kirchhoff plates can be synchronized by a finite number of point links.
¹⁹ For details concerning the smallness of the corresponding completeness defect we refer to Chueshov [36, Chapter 5].
We note that in the case when both $\tilde f_1$ and $\tilde f_2$ are Berger nonlinearities (possibly with different parameters) the results on synchronization with $K = \mathrm{id}$ can be found in Naboka [121–123]. We also mention other plate models for which the abstract results established can be applied:

• First of all, we can consider the plates with other (self-adjoint) boundary conditions, such as clamped and free, and combinations of the two (for a discussion of these boundary conditions in the case of nonlinear plate models, we refer the reader to Chueshov/Lasiecka [47]).

• The plate models with rotational inertia can be included in the framework presented. Instead of (1.6.46), coupled dynamics in these models can be described by equations of the form
\[
(1 - \gamma\Delta)u_{tt} + \mu(1 - \gamma\Delta)u_t + \Delta^2 u + \kappa K(u - v) = \tilde f_1(u) + g_1,
\]
\[
(1 - \gamma\Delta)v_{tt} + \mu(1 - \gamma\Delta)v_t + \Delta^2 v + \kappa K(v - u) = \tilde f_2(v) + g_2 \tag{1.6.49}
\]
in a two-dimensional bounded domain $O$. Here, $\gamma$ is positive. It is convenient to rewrite (1.6.49) as equations in $H = H^1_0(O)$ (equipped with the inner product $(u, v)_H := ((1 - \gamma\Delta)u, v)_{L^2(O)}$) in the form (1.6.29) with the operator $\tilde B$ generated by the form $a(u, v) = (\Delta u, \Delta v)_{L^2(O)}$ on $H^2_0(O)$.

• We can also take into consideration plates with Kirchhoff–Boussinesq forces of the form
\[
\tilde f_i(u) = \mathrm{div}\bigl(|\nabla u|^2\nabla u\bigr) - a|u|^q u
\]
with $a, q\ge 0$; see Chueshov/Lasiecka [45, 46, 48] concerning models with this force.

Coupled Wave Equations

In the case of coupled wave equations (1.6.15), concerning the source terms $\tilde f_i\in C^2(\mathbb{R})$ we assume, in addition to (1.6.48), that in the 3D case
\[
|\tilde f_i''(s)| \le C(1 + |s|), \qquad s\in\mathbb{R}.
\]
This condition can be relaxed in several directions. We refer the reader to Chueshov/Lasiecka [46, Chapter 5] for details concerning wave dynamics. We can also consider several versions of damped sine-Gordon equations. These are used to model the dynamics of Josephson junctions driven by a source of current
(see, for example, Temam [161] for comments and references). For instance, we can consider the system²⁰
\[
u_{tt} + \gamma u_t - \Delta u + \beta u + \kappa(u - v) = -\lambda\sin u + g(x),
\]
\[
v_{tt} + \gamma v_t - \Delta v + \beta v + \kappa(v - u) = -\lambda\sin v + g(x), \tag{1.6.50}
\]
in a smooth domain $O\subset\mathbb{R}^d$ and equipped with the Neumann boundary conditions
\[
\frac{\partial u}{\partial n}\Big|_{\partial O} = 0, \qquad \frac{\partial v}{\partial n}\Big|_{\partial O} = 0.
\]
We assume that $\gamma > 0$ and $\kappa\ge 0$, $g\in L^2(O)$. It is easy to see that in the case of the Dirichlet boundary conditions the general theory developed above can be applied. The same is true when $\beta > 0$. In the case $\beta = 0$ the corresponding operator $\tilde B$ is $-\Delta$ on the domain
\[
D(\tilde B) = \Bigl\{u\in H^2(O) : \frac{\partial u}{\partial n} = 0 \text{ on } \partial O\Bigr\}
\]
and thus becomes degenerate. We concentrate on the case $\beta = 0$. As in the parabolic case (see Sect. 1.4.2), we introduce new variables
\[
w = \frac{u - v}{2} \quad\text{and}\quad z = \frac{u + v}{2}. \tag{1.6.51}
\]
In these variables, the problem in (1.6.50) with $\beta = 0$ can be written in the form
\[
w_{tt} + \gamma w_t - \Delta w + 2\kappa w = -\lambda\sin w\cos z,
\]
\[
z_{tt} + \gamma z_t - \Delta z = -\lambda\cos w\sin z + g(x), \tag{1.6.52}
\]
\[
\frac{\partial w}{\partial n}\Big|_{\partial O} = 0, \qquad \frac{\partial z}{\partial n}\Big|_{\partial O} = 0.
\]
The main linear part of the first equation of (1.6.52) is not degenerate when $\kappa > 0$. Therefore, the same calculations as in the proof of Theorem 1.6.18 show that there exists $\kappa_*$ such that
\[
\exists\,\eta > 0:\quad \|w(t)\|^2_{H^1(O)} + \|w_t(t)\|^2 \le C_B e^{-\eta t}, \quad t > 0,
\]
when $\kappa\ge\kappa_*$ for all initial data from a bounded set $B$ in $H^1(O)\times L^2(O)$. This means that every trajectory is asymptotically synchronized. Moreover, it follows from the reduction principle (see Chueshov [40, Section 2.3.3]) that the limiting (synchronized) dynamics is determined by the single equation
\[
z_{tt} + \gamma z_t - \Delta z = -\lambda\sin z + g(x), \qquad \frac{\partial z}{\partial n}\Big|_{\partial O} = 0. \tag{1.6.53}
\]
²⁰ For simplicity we discuss a symmetric coupling of identical systems only.
The long time dynamics of this equation is described in Temam [161, Chapter 4]. In particular, it is shown that there exists an invariant set $A\subset H^1(O)\times L^2(O)$ of the form
\[
A = \bigl\{(\phi_0 + \rho, \phi_1) : (\phi_0, \phi_1)\in G,\ \rho\in\mathbb{R}\bigr\},
\]
where $G$ is a closed bounded set in the space
\[
\Bigl\{(\phi_0, \phi_1)\in H^1(O)\times L^2(O) : \int_O\phi_0(x)\,dx = 0\Bigr\},
\]
such that for every solution $z(t)$ to (1.6.53) we have
\[
\inf_{(\psi_0,\psi_1)\in A,\,k\in\mathbb{Z}}\Bigl[\|\bar z(t) - \bar\psi_0\|_{H^1(O)} + \|z_t(t) - \psi_1\|_{L^2(O)} + |m(z(t) - \psi_0) - 2\pi k|\Bigr] \to 0
\]
when $t\to\infty$. Here, we use the notation
\[
m(\phi) = \frac{1}{|O|}\int_O\phi(x)\,dx, \qquad \bar\phi = \phi - m(\phi), \quad \forall\,\phi\in L^1(O).
\]
We also refer the reader to Leonov/Kuznetsov [107, Chapter 6], Leonov/Reitmann/Smirnova [109], Leonov/Smirnova [108] for studies on synchronization phenomena for (1.6.50) in the homogeneous (ODE) case.

Another coupled sine-Gordon system of interest is the following:
\[
u_{tt} + \gamma u_t - \Delta u = -\lambda\sin(u - v) + g_1(x),
\]
\[
v_{tt} + \gamma v_t - \Delta v = -\lambda\sin(v - u) + g_2(x),
\]
\[
u|_{\partial O} = 0, \qquad v|_{\partial O} = 0,
\]
(1.6.54)
where $\gamma, \lambda > 0$ and $g_i\in L^2(O)$. Formally, this model is outside the scope of the theory developed above. However, using the ideas presented we can answer some questions concerning synchronized regimes.²¹ In variables $w$ and $z$ given by (1.6.51), we have the equations
\[
w_{tt} + \gamma w_t - \Delta w = -\lambda\sin 2w + g(x), \qquad
z_{tt} + \gamma z_t - \Delta z = h(x), \tag{1.6.55}
\]
\[
w|_{\partial O} = 0, \qquad z|_{\partial O} = 0,
\]
where $g(x) = (g_1(x) - g_2(x))/2$ and $h(x) = (g_1(x) + g_2(x))/2$. One can see that
\[
\|z_t(t)\|^2 + \|\nabla(z(t) - z_*)\|^2 \le C\bigl[\|z_t(0)\|^2 + \|\nabla(z(0) - z_*)\|^2\bigr]e^{-\eta t}, \qquad t\ge 0,
\]
for some $C, \eta > 0$, where $z_*\in H^1(O)$ solves the Dirichlet problem $-\Delta z = h(x)$, $z|_{\partial O} = 0$. The problem given by the first equation of (1.6.55) equipped with the Dirichlet boundary conditions possesses a global attractor $A$ in the space $H^1_0(O)\times L^2(O)$, see Temam [161, Chapter 4]. Hence,
\[
\begin{pmatrix} u(t) - z_*\\ u_t(t)\\ v(t) - z_*\\ v_t(t)\end{pmatrix}
= \begin{pmatrix} w(t)\\ w_t(t)\\ -w(t)\\ -w_t(t)\end{pmatrix}
+ \begin{pmatrix} z(t) - z_*\\ z_t(t)\\ z(t) - z_*\\ z_t(t)\end{pmatrix}
\ \longrightarrow\ \Bigl\{\begin{pmatrix}\psi\\ -\psi\end{pmatrix} : \psi\in A\Bigr\} \quad\text{as } t\to+\infty
\]
in the space $H^1_0(O)\times L^2(O)\times H^1_0(O)\times L^2(O)$ uniformly with respect to initial data from bounded sets in $H^1_0(O)\times L^2(O)\times H^1_0(O)\times L^2(O)$. Moreover, since the first equation of (1.6.55) generates a gradient system (see Temam [161, Chapter 7]), Theorem 1.2.12 implies that
\[
\begin{pmatrix} u(t) - z_*\\ u_t(t)\\ v(t) - z_*\\ v_t(t)\end{pmatrix}
\ \longrightarrow\ \left\{\begin{pmatrix}\phi\\ 0\\ -\phi\\ 0\end{pmatrix} :
\ \phi\in H^2(O),\ -\Delta\phi + \lambda\sin 2\phi = g(x) \text{ in } O,\ \phi|_{\partial O} = 0\right\} \quad\text{as } t\to+\infty
\]
for every individual solution $(u(t), v(t))$ to (1.6.54). Thus, we observe some kind of shifted asymptotic anti-phase synchronization.

²¹ In the ODE case the synchronization phenomena in (1.6.54) were studied in Leonov/Reitmann/Smirnova [109] and Leonov/Smirnova [108].
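A quick way to see the shifted anti-phase effect numerically is to discretize (1.6.54) in one space dimension. In the sketch below (an added illustration; the forcings $g_1, g_2$, the parameters and the finite-difference discretization are all assumptions) the quantity $\|u + v - 2z_*\| = 2\|z - z_*\|$ is monitored; its decay reflects the convergence to the anti-phase regime described above.

```python
import numpy as np
from scipy.integrate import solve_ivp

n, gamma, lam = 64, 0.5, 1.0
x = np.linspace(0.0, np.pi, n + 2)[1:-1]
h = x[1] - x[0]
lap = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))/h**2

g1, g2 = np.sin(x), np.sin(2*x)               # assumed forcing terms
z_star = np.linalg.solve(-lap, 0.5*(g1 + g2)) # -z'' = h with Dirichlet conditions

def rhs(t, y):
    u, ut, v, vt = np.split(y, 4)
    utt = lap @ u - lam*np.sin(u - v) + g1 - gamma*ut
    vtt = lap @ v - lam*np.sin(v - u) + g2 - gamma*vt
    return np.concatenate([ut, utt, vt, vtt])

y0 = np.concatenate([np.sin(3*x), 0*x, x*(np.pi - x), 0*x])
sol = solve_ivp(rhs, (0.0, 40.0), y0, t_eval=[0.0, 10.0, 20.0, 40.0], rtol=1e-7)
for k, t in enumerate(sol.t):
    u, _, v, _ = np.split(sol.y[:, k], 4)
    print(f"t={t:5.1f}   ||u + v - 2 z_*|| = {np.sqrt(h)*np.linalg.norm(u + v - 2*z_star):.2e}")
```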
Chapter 2
Master–Slave Synchronization via Invariant Manifolds
2.1 Introduction In this chapter we deal with master–slave synchronization and apply the ideas from the theory of invariant and inertial manifolds. As was already mentioned in the Introduction, from a mathematical point of view, the synchronization phenomena for individual trajectories of the coupled system Ut =F1 (U,V ), t > 0, in X1 , Vt =F2 (U,V ), t > 0, in X2 ,
(2.1.1)
can be treated as the existence of an invariant manifold in the phase space of the coupled system of the form M = {(U,V ) ∈ X1 × X2 : U = m(V ) ∈ X1 , V ∈ X2 } ,
(2.1.2)
where m : X2 → X1 is a Lipschitz mapping. This observation leads to the following definition. Definition 2.1.1 (Master–Slave Synchronization). The first equation of (2.1.1) is said to be (asymptotically) synchronized by the second equation of (2.1.1), if there exists a Lipschitz mapping m : X2 → X1 such that the set M given by (2.1.2) is invariant and (2.1.3) lim U(t) − m(V (t))X1 = 0 t→+∞
for any solution (U(t),V (t)) to problem (2.1.1). In this case, the second equation in (2.1.1) is called the master equation and the first system in (2.1.1) is the slave system.
We can also introduce the notion of partial synchronization by saying that for a given solution $(U, V)$, we observe master–slave synchronization if there exists a function $m : X_2\to X_1$ such that $U(t) = m(V(t))$. However, below, we mainly deal with the global asymptotic synchronization properties, which means that the relation in (2.1.3) holds true for all solutions to (2.1.1) with a universal function $m$.

To illustrate the notions introduced above, we consider the following 2D ODE system
\[
\dot u = m'(v)f(u, v) - \mu\bigl(u - m(v)\bigr), \qquad \dot v = f(u, v), \tag{2.1.4}
\]
where $f(u, v)$ is a globally Lipschitz function on $\mathbb{R}^2$, $m$ is a smooth function on $\mathbb{R}$ with bounded derivative $m'$, and $\mu\ge 0$ is a parameter. We can see that (2.1.4) generates a dynamical system on $\mathbb{R}^2$. A simple calculation shows that
\[
\frac{d}{dt}\bigl(u(t) - m(v(t))\bigr) + \mu\bigl(u(t) - m(v(t))\bigr) = 0
\]
for every solution to (2.1.4). This implies that the curve $M = \{(m(v), v) : v\in\mathbb{R}\}\subset\mathbb{R}^2$ is invariant for every $\mu\ge 0$. In the case $\mu > 0$, we have that
\[
|u(t) - m(v(t))| \le e^{-\mu t}|u(0) - m(v(0))|, \qquad t\ge 0.
\]
Thus, we observe a global asymptotic (master–slave) synchronization in (2.1.4). If $\mu = 0$, then we have $u(t) = c_0 + m(v(t))$ for every solution $(u(t), v(t))$, where the constant $c_0$ is determined by the initial data of this solution, $c_0 = u(0) - m(v(0))$. Thus, in the case $\mu = 0$, every solution is synchronized, but with its own functional dependence.

Our approach in this chapter is based on the idea of inertial manifolds. We note that the concept of an inertial manifold (see Foias/Sell/Temam [79]) was motivated by a desire to reduce the study of the system to the investigation of properties of a finite-dimensional ODE system (see also Daletsky/Krein [67], Henry [92], Mitropolsky/Lykova [116] for a related idea of a central-stable manifold and reduction principle). These manifolds (see Foias/Sell/Temam [79], Chueshov [36], Constantin et al. [62], Temam [161] and the references therein) are finite-dimensional invariant manifolds that contain a global attractor and attract trajectories exponentially fast. Moreover, there is the possibility to reduce the study of limit regimes of the original infinite-dimensional system to solving similar problems for a class of ODEs.
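For the planar example (2.1.4) the identity $\frac{d}{dt}(u - m(v)) = -\mu(u - m(v))$ can be checked numerically. The sketch below is an added illustration with assumed choices $m(v) = \sin v$ and $f(u, v) = \cos u + \tfrac12\sin v$ (both globally Lipschitz); it compares $|u(t) - m(v(t))|$ with the predicted decay $e^{-\mu t}|u(0) - m(v(0))|$.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 0.7
m, mp = np.sin, np.cos                        # m and its derivative m'
f = lambda u, v: np.cos(u) + 0.5*np.sin(v)    # assumed globally Lipschitz f

def rhs(t, y):
    u, v = y
    return [mp(v)*f(u, v) - mu*(u - m(v)), f(u, v)]

u0, v0 = 2.0, -1.0
sol = solve_ivp(rhs, (0.0, 10.0), [u0, v0], t_eval=np.linspace(0, 10, 6), rtol=1e-10, atol=1e-12)
d0 = abs(u0 - m(v0))
for t, u, v in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t={t:4.1f}   |u - m(v)| = {abs(u - m(v)):.3e}   predicted = {d0*np.exp(-mu*t):.3e}")
```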
The theory of inertial manifolds has been developed and widely studied for deterministic1 systems by many authors (see, for example, Chow/Lu [31], Chueshov [32, 36], Constantin et al. [62], Foias/Sell/Temam [79], Mallet-Paret/Sell [112], Miklavˇciˇc [113], Mora [117], Temam [161] and the references therein). A typical condition required by all known results concerning the existence of inertial manifolds is some sort of a gap condition imposed on the spectrum of the linearized problem. In fact, we deal with a class of spectrally separated (in some sense) interacting systems. There are two approaches to constructing invariant manifolds: the Hadamard graph transform method (see, for example, Constantin et al. [62] and also Bates/Jones [10], Romanov [142]) and the Lyapunov–Perron method (see, for example, Chow/Lu [31], Chueshov [36], Foias/Sell/Temam [79], Latushkin/Layton [105], Miklavˇciˇc [113], Temam [161]). Below, we present and discuss both of them as applied to synchronization phenomena in coupled systems. We note that the idea of general invariant manifolds had already been used in the study of synchronization of ODE systems (see, for example, Josi´c [97], Chow/Liu [30], Sun/Bollt/Nishikawa [160]). Related to the master–slave effect discussed above, we also recall the socalled complete replacement (or drive-response) synchronization attributed to Pecora/Carroll [128] (see the discussion in the Introduction and in Sect. 1.3.4). Definition 2.1.2 (Drive-Response Synchronization). Let Y (t) = (U(t),V (t)) be a solution to (2.1.1) for some initial data Y0 = (U0 ,V0 ). Then we define the so-called (Y0 ,U)-subordinate (non-autonomous) system ¯ (t)), t > 0, in X1 , U¯t = F1 (U,V
(2.1.5)
with some initial data $\bar U_0$, where $V(t)$ is the second component of the reference solution $Y(t)$. According to Pecora/Carroll [128] the system in (2.1.1) demonstrates the drive-response (or complete replacement) synchronization if for every initial data $Y_0 = (U_0, V_0)$ and $\bar U_0$ we have that
\[
\lim_{t\to+\infty}\|U(t) - \bar U(t)\|_{X_1} = 0. \tag{2.1.6}
\]
In this case, the variable V is called the synchronizing coordinate. Moreover, in this situation, (2.1.1) is called the drive system and (2.1.5) is said to be the response system. It seems that the drive-response synchronization does not follow from the master– slave synchronization described above. Nevertheless, as we will see, conditions that we need for the master–slave synchronization usually imply the Pecora–Carroll phenomenon. Moreover, for all systems discussed in this chapter, the conditions that guarantee drive-response synchronization are weaker than those we need for the
¹ For a stochastic case we refer the reader to the discussion in Chap. 4.
master–slave effect via the inertial manifolds theory. We also note that in Chap. 1, drive-response synchronization was mainly referred to as complete replacement synchronization. This is due to some symmetry between U and V variables. In many cases, this allows us to choose either V or U. This is not true in the present chapter and therefore we prefer to emphasize the drive-response character of this phenomenon in order to make its relationship with master–slave synchronization more obvious. In this chapter we distinguish several types of models depending on properties of their main parts. In Sect. 2.2 we consider the semilinear case when the main part is linear and generates a strongly continuous semigroup with appropriate dichotomy estimates. We also assume that nonlinearities are sufficiently smooth and globally Lipschitz (the later requirement is standard in most global considerations of invariant manifolds). Then in Sect. 2.3 we switch to the models with a nonlinear main part. We introduce the notion of nonlinear dichotomy which allows us to avoid a global Lipschitz assumption concerning nonlinearity in the model and obtain a result on synchronization. Our applications include systems consisting of (i) parabolic and hyperbolic equations, (ii) two hyperbolic equations, and (iii) Klein– Gordon and Schr¨odinger equations. However, in both cases mentioned we need some regularity hypotheses concerning either the nonlinear terms or the dynamics of the system. We relax these hypotheses in Sect. 2.4 where we concentrate on coupled parabolic–hyperbolic systems, which may contain nonlinear singular terms. Main motivating examples are related to thermoelastic phenomena in a continuum medium. Parabolic–hyperbolic systems coupled on the boundary are also included in this scheme.
2.2 Semilinear Case (Linear Dichotomy) In this section, we present and apply the standard idea of inertial manifolds theory to obtain the assumptions of master–slave synchronization. We follow the idea of the Lyapunov–Perron method in the form presented in Miklavˇciˇc [113] and consider the simplest semilinear case. Our goal is to present clearly the main idea of the Miklavˇciˇc method.
2.2.1 Main Hypotheses and Generation of a Dynamical System We consider the following system of differential equations Ut +A˜ 1U = F1 (U,V ), t > 0, in X1 , Vt +A˜ 2V = F2 (U,V ), t > 0, in X2 , and assume the following hypotheses.
(2.2.1)
Assumption 2.2.1. Let X1 and X2 be two Banach spaces. We assume that 1. A˜ 1 is a generator of a linear strongly continuous semigroup S1 on X1 , which satisfies the estimate S1 (t) := S1 (t)L(X1 ) ≤ M1 exp{−γ1t}, t ≥ 02
(2.2.2)
for some constants M1 ≥ 1, γ1 > 0. Similarly, A˜ 2 is the generator of a linear strongly continuous group S2 on X2 satisfying the estimate S2 (t)L(X2 ) ≤ M2 exp{−γ2t}, t ≤ 0,
(2.2.3)
for some constant M2 ≥ 1, γ2 ≥ 0. 2. F1 and F2 are nonlinear mappings, F1 : X1 × X2 → X1 ,
F2 : X1 × X2 → X2 ,
and there exist constants L1 and L2 such that 1/2 F1 (U1 ,V1 ) − F1 (U2 ,V2 )X1 ≤ L1 U1 −U2 2X1 + V1 −V2 2X2
(2.2.4)
1/2 F2 (U1 ,V1 ) − F2 (U2 ,V2 )X2 ≤ L2 U1 −U2 2X1 + V1 −V2 2X2 .
(2.2.5)
and
Below we consider the space X = X1 × X2 equipped with the norm 1/2 Y X = U2X1 + V 2X2 ,
Y = (U,V ),
and denote by Q and P the orthoprojections on X onto the first and second components, i.e., Q(U,V ) = (U, 0) U ∈ X1
and
P(U,V ) = (0,V ) V ∈ X2
(2.2.6)
for (U,V ) ∈ X. We will not distinguish the notations for the elements Q(U,V ) = (U, 0) and U, i.e., we consider U as an element from X1 or as an element from the corresponding subspace of X depending on the context. The same is true for P(U,V ) and V and also for (S1 (t)U, 0), S1 (t)U and (0, S2 (t)V ), S2 (t)V . Remark 2.2.2. If γ1 > γ2 , then the properties (2.2.2) and (2.2.3) ensure the so-called dichotomy estimates for the linear strongly continuous semigroup S on X given by 1 S (t) 0 S(t) = , t ≥ 0, (2.2.7) 0 S2 (t) in the space X with respect to the pair of projectors Q and P given by (2.2.6). 2
Here and below we denote by · L(X) the operator norm of linear operators on X.
Since S2 is a strongly continuous (semi)group, we can guarantee (see Pazy [127, p. 4]) the existence of constants M˜ 2 ≥ 1 and γ˜2 ≥ 0 such that S2 (t) ≤ M˜ 2 exp{γ˜2t}, t ≥ 0. These constants M˜ 2 and γ˜2 play some auxiliary rˆole and do not enter our main results. The assumption γ2 ≥ 0 could also be relaxed. However, it seems the case that γ2 < 0 has no substantial physical meaning. The corresponding analysis requires some special considerations. This is why we assume that γ2 ≥ 0 from the very beginning. We also note that the requirement that S2 is a group cannot be relaxed in the approach developed. This means that we are not able to apply our basic result (see Theorem 2.2.6 below) in the case of two coupled parabolic equation. The main reason is that the backward time estimate in (2.2.2) cannot be obtained in the case when the master equation is parabolic. In the purely parabolic case, the approach to synchronization was presented in the previous chapter. Our approach here is alternative in some sense and covers another, in comparison with Chap. 1, kind of problem. The models in this section assume that the nonlinearities are globally Lipschitz. In the case of deterministic dissipative systems this requirement can be avoided if we restrict ourselves by dynamics in the forward invariant absorbing ball within which the nonlinearities can be truncated (see, for example, Temam [161] for some examples of this procedure). With this Assumption 2.2.1, we can rewrite system (2.2.1) as a single first-order equation in the space X = X1 × X2 on the interval [0, ∞) Yt + AY = F(Y ), t > 0, where Y (t) = (U(t),V (t)) and A˜ 1 0 A= , 0 A˜ 2
Y (0) = Y0 ∈ X,
F(Y ) =
(2.2.8)
F1 (U,V ) . F2 (U,V )
Obviously, the operator $A$ is the generator of the linear strongly continuous semigroup (2.2.7) on $X$. As above, we denote by $C([a,b], X)$ the space of continuous functions on $[a,b]$ with values in $X$.

Definition 2.2.3. Let Assumption 2.2.1 be in force. Assume that $T > 0$, $Y_0\in X$. A function $Y(t) = Y(t, Y_0)$, which belongs to the space $C([0,T], X)$, is said to be a mild solution to problem (2.2.8) on the interval $[0,T]$ if $Y(0) = Y_0\in X$ and
\[
Y(t) = S(t)Y_0 + \int_0^t S(t - \tau)F(Y(\tau))\,d\tau \tag{2.2.9}
\]
for all $t\in[0,T]$.
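For readers who prefer to see the mild formulation (2.2.9) at work, the following sketch performs Picard iterations for a two-dimensional toy problem; the choices $\tilde A_1 = 1$, $\tilde A_2 = 0$ and $F(Y) = 0.3\sin Y$ are assumptions, and the fixed point is compared with a reference ODE solution. It is an added illustration, not part of the original exposition.

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

A = np.diag([1.0, 0.0])
F = lambda Y: 0.3*np.sin(Y)
S = lambda t: np.diag(np.exp(-np.diag(A)*t))     # the semigroup e^{-At}

T, n = 2.0, 400
ts = np.linspace(0.0, T, n + 1)
Y0 = np.array([1.0, -0.5])

Y = np.tile(Y0, (n + 1, 1))                      # initial guess Y(t) = Y0
for it in range(8):                              # Picard iterations for (2.2.9)
    new = np.empty_like(Y)
    for k, t in enumerate(ts):
        vals = np.array([S(t - s) @ F(Y[j]) for j, s in enumerate(ts[:k + 1])])
        integral = trapezoid(vals, ts[:k + 1], axis=0) if k > 0 else np.zeros(2)
        new[k] = S(t) @ Y0 + integral
    Y = new

ref = solve_ivp(lambda t, y: -A @ y + F(y), (0, T), Y0, t_eval=ts, rtol=1e-10)
print("max deviation of the Picard fixed point from the ODE solution:", np.max(np.abs(Y - ref.y.T)))
```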
We have the following result about the existence and uniqueness of mild solutions to (2.2.8). Theorem 2.2.4. Let Assumption 2.2.1 be in force. Then, for every Y0 ∈ X and T > 0 the problem in (2.2.8) has a unique mild solution Y (t) on the interval [0, T ]. Then we can define the family of mappings φ (t) : X → X for t ∈ R+ by the formula φ (t,Y0 ) =: Y (t,Y0 ). These mappings generate a (continuous) dynamical system φ on X. Proof. As in the proof of Theorem 1.3.3 we apply the standard fixed point argument to prove the existence of a unique mild solution on small intervals and then extend a solution step by step on any interval [0, T ]. The global Lipschitz conditions imposed on F1 and F2 allow us to make this extension. For some detail, see Pazy [127, Theorem 1.2, p. 184]. The structure of the system in (2.2.1) and also the well-posedness result in Theorem 2.2.4 make it possible to give conditions that guarantee drive-response synchronization (see Definition 2.1.2) in the system. Namely, we have the following assertion. Proposition 2.2.5 (Drive-Response Synchronization). Assume that Assumption 2.2.1 is in force and
γ1 > M1 L1 .
(2.2.10)
Then, in system (2.2.1) we observe the drive-response synchronization. More precisely, for every mild solution Y (t) = (U(t),V (t)) to (2.2.1) with some initial data Y0 = (U0 ,V0 ), the following properties hold: • The non-autonomous problem ¯ (t)), t > 0, U(0) ¯ U¯ t + A˜ 1U¯ = F1 (U,V = U¯ 0 , in X1 ,
(2.2.11)
has a unique mild solution for every U¯ 0 ∈ X1 ; ¯ • The solution U(t) is exponentially close to the U-component of the solution Y (t) = (U(t),V (t)) in the sense that −(γ1 −M1 L1 )t ¯ U0 − U¯ 0 X1 , t > 0, U(t) − U(t)) X1 ≤ M1 e
for an arbitrary choice of the initial data Y0 = (U0 ,V0 ) ∈ X and U¯ 0 ∈ X1 . ¯ := F1 (U,V ¯ (t)) is globally Lipschitz on X1 uniformly in t, the Proof. Since F¯1 (U,t) integral equation ¯ = S1 (t)U¯ 0 + U(t)
t 0
¯ τ ),V (τ ))d τ S1 (t − τ )F1 (U(
has a unique solution from the class C([0, T ], X1 ) for every interval [0, T ]. It follows ¯ from (2.2.9) that Z(t) = U(t) − U(t) solves the equation Z(t) = S1 (t)[U0 − U¯ 0 ] +
t 0
¯ τ ),V (τ )) d τ . S1 (t − τ ) F1 (U(τ ),V (τ )) − F1 (U(
Therefore, using (2.2.2) and (2.2.4), we obtain that Z(t)X1 ≤ M1 e−γ1 t U0 − U¯ 0 X1 + M1 L1
t 0
e−γ1 (t−τ ) Z(τ )X1 d τ .
Thus, Gronwall's argument (see Theorem 1.2.24) gives the conclusion in Proposition 2.2.5.

As we can see from the proof of Proposition 2.2.5, the particular properties of the second equation are not really important for these calculations. Instead of (2.2.1) we can even consider a system of the form
\[
U_t + \tilde A_1 U = F_1(U, V), \quad t > 0, \ \text{in } X_1, \qquad
V_t = F_2(U, V), \quad t > 0, \ \text{in } X_2,
\]
(2.2.12)
where A˜ 1 and F1 satisfy the hypotheses in Assumption 2.2.1 and F2 : X1 × X2 → X2 is a mapping such that problem (2.2.12) is well-posed in some sense, which allows us to understand the first equation of (2.2.12) in the mild form.
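Before turning to the invariant-manifold construction, the drive-response mechanism of Proposition 2.2.5 can be illustrated with a scalar toy version of (2.2.1) and (2.2.11). In the sketch below the choices $\tilde A_1 = a > 0$, $\tilde A_2 = 0$, $F_1(U,V) = l_1\sin(U+V)$ and $F_2(U,V) = \cos U - 0.3V$ are assumptions; with respect to $U$ the nonlinearity $F_1$ has Lipschitz constant $l_1$, so the difference between the drive and the response components should decay at least like $e^{-(a - l_1)t}$.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, l1 = 2.0, 0.5

def drive(t, y):                               # the coupled (drive) system
    U, V = y
    return [-a*U + l1*np.sin(U + V), np.cos(U) - 0.3*V]

sol = solve_ivp(drive, (0.0, 15.0), [1.0, -2.0], dense_output=True, rtol=1e-9)

def response(t, Ub):                           # the subordinate system (2.2.11), driven by V(t)
    V = sol.sol(t)[1]
    return -a*Ub + l1*np.sin(Ub + V)

rsp = solve_ivp(response, (0.0, 15.0), [-3.0], t_eval=np.linspace(0, 15, 6), rtol=1e-9)
for t, Ub in zip(rsp.t, rsp.y[0]):
    U = sol.sol(t)[0]
    print(f"t={t:4.1f}   |U - Ubar| = {abs(U - Ub):.3e}   e^(-(a-l1)t)*|U0-Ubar0| = {4.0*np.exp(-(a - l1)*t):.3e}")
```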
2.2.2 The Basic Idea of the Lyapunov–Perron Method Now we describe the main idea behind the Lyapunov–Perron method of construction of invariant manifolds for coupled systems like (2.2.1). Our first task is to obtain the main equations of the method at the formal level. Assume that the system generated by (2.2.1) possesses an invariant manifold M of the form (2.1.2) with a bounded smooth function m : X2 → X1 . Then, the function Y (t) = (m(V (t)),V (t)) is a mild solution to (2.2.8), i.e., it satisfies the equation t
Y (t) = S(t)Y (s) + s
S(t − τ )F(Y (τ ))d τ
(2.2.13)
for all t ≥ s. Indeed, instead of considering this equation on [0, T ], we can assume that t ∈ [s, T ]. Taking the projection on the first component we obtain that m(V (t)) = S1 (t − s)m(V (s)) +
t s
S1 (t − τ )F1 (Y (τ ))d τ .
Since S1 is exponentially stable (see (2.2.2)) and m(V ) is bounded, in the limit s → −∞ we obtain that t
m(V (t)) =
−∞
S1 (t − τ )F1 (Y (τ ))d τ
for every t ∈ R. This implies that 0
m(V0 ) =
−∞
S1 (−τ )F1 (Y (τ ))d τ , ∀V0 ∈ X2 ,
(2.2.14)
where Y (t) satisfies (2.2.13) for t < 0 with Y (0) = (m(V0 ),V0 ) and with arbitrary s < t. To simplify the equation for Y it is also convenient to make the limit transition s → −∞ in (2.2.13). However, we cannot do this directly because there is no exponential stability in the second variable. This difficulty can be overcome in the following way, which relies on the possibility to solve the equation in the space X2 in the backward direction. We first note that Y (t) = QY (t) + PY (t) = (U(t),V (t)), where Q and P have been defined in (2.2.6). It is clear that U(t) = S1 (t − s)m(V (s)) +
t s
S1 (t − τ )F1 (Y (τ ))d τ .
Therefore, as above, in the limit s → −∞ we obtain t
U(t) =
−∞
S1 (t − τ )F1 (Y (τ ))d τ , ∀t < 0.
The equation for the V component leads to the relation V (0) = S2 (−t)V (t) +
0 t
S2 (−τ )F2 (Y (τ ))d τ , ∀t < 0.
This formula expresses the solution at the moment 0 via the data at the moment t < 0. Since S2 is a group, we obtain that V (t) = S2 (t)V0 −
0 t
S2 (t − τ )F2 (Y (τ ))d τ , ∀t < 0.
These observations concerning U and V show that the function Y (t) = (U(t),V (t)) on the semi-axis (−∞, 0] solves the integral equation Y (t) = S2 (t)V0 −
0 t
PS(t − τ )F(Y (τ ))d τ +
t −∞
QS(t − τ )F(Y (τ ))d τ
(2.2.15)
for all t < 0. Thus, to obtain the function m determining the invariant manifold in (2.1.2) we should first solve (2.2.15) and then put the solution Y (t) into the formula in (2.2.14). This is the essence of the Lyapunov–Perron method. In the subsequent considerations we explain how to solve (2.2.15) and justify the formal scheme described above using the Miklavˇciˇc idea.
2.2.3 Existence of a Synchronization (Invariant) Manifold

In this section, we use the Lyapunov–Perron method to prove that under some conditions the dynamical system generated by (2.2.1) has an invariant exponentially attracting manifold of the type (2.1.2). The key result is the following assertion.

Theorem 2.2.6. Suppose that Assumption 2.2.1 holds and
\[
\gamma_1 - \gamma_2 > \Bigl(\sqrt{M_2 L_2} + \sqrt{M_1 L_1}\Bigr)^2. \tag{2.2.16}
\]
Then, there exists a mapping $m(\cdot) : X_2\to X_1$ such that
\[
\|m(V_1) - m(V_2)\|_{X_1} \le C\|V_1 - V_2\|_{X_2} \tag{2.2.17}
\]
for all $V_1, V_2\in X_2$ and some constant $C > 0$. Moreover, the manifold
\[
\mathcal{M} = \{(m(V), V) : V\in X_2\}\subset X = X_1\times X_2
\]
is (strictly) invariant with respect to $\phi$ ($\phi(t, \mathcal{M}) = \mathcal{M}$) and exponentially attracting in the following sense: there exists a constant $R_0 > 0$ such that for any $Y_0\in X$ we can find $Y^*\in\mathcal{M}$ such that
\[
\Bigl(\int_0^\infty e^{2\mu t}\|\phi(t, Y_0) - \phi(t, Y^*)\|_X^2\,dt\Bigr)^{1/2} \le R_0\bigl(1 + \|Y_0\|_X\bigr) \tag{2.2.18}
\]
and also
\[
\|\phi(t, Y_0) - \phi(t, Y^*)\|_X \le R_0 e^{-\mu t}\bigl(1 + \|Y_0\|_X\bigr), \qquad t > 0. \tag{2.2.19}
\]
Here, we suppose
\[
\mu = \frac{\sqrt{M_2 L_2}\,\gamma_1 + \sqrt{M_1 L_1}\,\gamma_2}{\sqrt{M_2 L_2} + \sqrt{M_1 L_1}}. \tag{2.2.20}
\]
Moreover, for every $(U(t), V(t))$ solving (2.2.1), we have for some positive $R_0$ the estimate
\[
\|U(t) - m(V(t))\|_{X_1} \le R_0 e^{-\mu t}\bigl(1 + \|U(0)\|_{X_1} + \|V(0)\|_{X_2}\bigr), \qquad t > 0. \tag{2.2.21}
\]
We note that the relation in (2.2.21) means that we observe synchronization of variables U and V with exponential speed for every solution Y = (U,V ). Moreover, the dichotomy gap condition in (2.2.16) implies the relation (2.2.10). Thus, under Assumption 2.2.1 by Proposition 2.2.5, we also have the drive-response synchronization in the system in (2.2.1) with V as a synchronizing coordinate. Therefore, in our case, the master–slave synchronization in the sense of Definition 2.1.1 requires stronger hypotheses concerning the system than the drive-response synchronization in the sense of Definition 2.1.2. To prove Theorem 2.2.6, we proceed in several steps.
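The quantities entering Theorem 2.2.6 are easy to tabulate. The helper below is an added sketch (the sample constants are assumptions); it checks the gap condition (2.2.16), evaluates $\mu$ from (2.2.20) and the contraction constant $\kappa(\mu) = \frac{M_2L_2}{\mu - \gamma_2} + \frac{M_1L_1}{\gamma_1 - \mu}$ used later in the proof.

```python
import math

def gap_data(M1, M2, L1, L2, gamma1, gamma2):
    gap_ok = gamma1 - gamma2 > (math.sqrt(M2*L2) + math.sqrt(M1*L1))**2     # condition (2.2.16)
    mu = (math.sqrt(M2*L2)*gamma1 + math.sqrt(M1*L1)*gamma2)/(math.sqrt(M2*L2) + math.sqrt(M1*L1))
    kappa = M2*L2/(mu - gamma2) + M1*L1/(gamma1 - mu)                       # contraction constant
    return gap_ok, mu, kappa

for L in (0.05, 0.2, 0.5):
    ok, mu, kappa = gap_data(M1=1.0, M2=1.0, L1=L, L2=L, gamma1=2.0, gamma2=0.0)
    print(f"L1=L2={L:4.2f}:  gap condition {ok},  mu={mu:.3f},  kappa(mu)={kappa:.3f}")
```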
Construction of the Inertial Manifold We apply the Lyapunov–Perron procedure in the form suggested in Miklavˇciˇc [113]. We consider the space X = Y : eμ t Y (t) ∈ L2 (−∞, 0, X) , where μ ∈ (γ2 , γ1 ) (below, we choose μ according to (2.2.20)). On this space, we introduce the norm |Y |X :=
0
−∞
e2μ t Y (t)2X dt
1/2 .
In order to construct an invariant manifold using the Lyapunov–Perron method, we first solve the integral equation Y = TV0 [Y ]
on X ,
(2.2.22)
where V0 ∈ X2 and TV0 [Y ] := LV0 [F(Y )]. Here, LV0 [Y ] is defined on X by LV0 [Y ](σ ) = S2 (σ )V0 − +
σ
−∞
0 σ
S2 (σ − τ )PY (τ )d τ (2.2.23)
S (σ − τ )QY (τ )d τ , ∀ σ ∈ (−∞, 0]. 1
The operator V0 → LV0 [Y ](σ ) is affine and we obviously have that LV0 [Y ](σ ) = S2 (σ )V0 + L0 [Y ](σ ). To solve (2.2.22) we use the fixed point method based on the following assertion. Proposition 2.2.7. Let γ2 < μ < γ1 . Then, for every V0 ∈ X2 the operator TV0 [·] maps X onto itself and |TV01 [Y1 ] − TV02 [Y2 ]|X ≤ cV01 −V02 X2 + κ (μ ) · |Y1 −Y2 |X , for every V01 ,V02 ∈ X2 and Y1 ,Y2 ∈ X , where M2 L2 M1 L1 + . μ − γ2 γ 1 − μ
κ (μ ) =
To prove this proposition we need Lemma 2.2.8 and Proposition 2.2.9. Lemma 2.2.8. Let f ∈ L2 (R) and δ > 0. Then, I1 ( f )(t) :=
∞ t
t
I2 ( f )(t) :=
−∞
eδ (t−τ ) f (τ )d τ ∈ L2 (R),
e−δ (t−τ ) f (τ )d τ ∈ L2 (R),
(2.2.24)
and R
|Ii ( f )(t)|2 dt ≤
1 δ2
R
| f (t)|2 dt,
i = 1, 2.
(2.2.25)
Proof. We can see that I2 ( f )(t) = I1 ( f− )(−t), where f− (t) = f (−t). Therefore, it is sufficient to deal with I1 only. Let 0, if t > 0; e(t) = exp{δ t}, if t ≤ 0. Then, I1 ( f )(t) =
R e(t − τ ) f (τ )d τ
I 1 ( f )(ω ) =
:= (e ∗ f )(t) is the convolution and, therefore,
√ 2π · e( ˆ ω ) · fˆ(ω ) = (δ − iω )−1 · fˆ(ω ),
ω ∈ R,
ˆ ω ) = √1 where h( e−iω t h(t)dt is the Fourier transformation of h. Hence, the 2π R Plancherel formula implies (2.2.25). In particular, we have 1 2 1 1 δ − iω = δ 2 + ω 2 ≤ δ 2 for all ω ∈ R. Proposition 2.2.9. For every V0 ∈ X2 the operator LV0 given by (2.2.23) is a continuous mapping from X onto itself and M2 · |PY1 − PY2 |X for any Y1 ,Y2 ∈ X μ − γ2
|LV0 [PY1 ] − LV0 [PY2 ]|X ≤
(2.2.26)
and |LV0 [QY1 ] − LV0 [QY2 ]|X ≤
M1 · |QY1 − QY2 |X for any Y1 ,Y2 ∈ X . (2.2.27) γ1 − μ
Proof. Since LV0 [Y1 ] − LV0 [Y2 ] = L0 [Y1 −Y2 ], to obtain (2.2.26) and (2.2.27), we need only to estimate |L0 [PY ]|X and |L0 [QY ]|X for any Y ∈ X . We can see that eμσ L0 [PY ](σ )X ≤ M2
0 σ
e(μ −γ2 )(σ −τ ) · eμτ PY (τ )X d τ , σ ≤ 0.
(2.2.28)
Therefore, applying the estimate given for I1 in Lemma 2.2.8 with δ = μ − γ2 and f (t) defined by the relation: f (t) = eμ t Y (t)X for t ≤ 0 and f (t) = 0 for t > 0, we obtain that |L0 [PY ]|X ≤
M2 · |PY |X μ − γ2
for any Y ∈ X .
(2.2.29)
In a similar way, Lemma 2.2.8 for I2 yields that |L0 [QY ]|X ≤
M1 · |QY |X γ1 − μ
for any Y ∈ X .
(2.2.30)
Relations (2.2.29) and (2.2.30) imply (2.2.26) and (2.2.27). The continuity of the mapping LV0 follows from (2.2.26) and (2.2.27) and from the relation |LV0 [Y1 ] − LV0 [Y2 ]|2X = |LV0 [PY1 ] − LV0 [PY2 ]|2X + |LV0 [QY1 ] − LV0 [QY2 ]|2X . Let μ be given by (2.2.20), which minimizes (2.2.24). In this case, √ 2 √ M2 L2 + M1 L1 κ (μ ) = γ 1 − γ2 such that κ (μ ) < 1 under condition (2.2.16). Then, TV0 [·] is a contraction in X and hence (2.2.22) has a unique solution YV0 in the space X for every V0 ∈ X2 . Using (2.2.22), one can show that this solution YV0 possesses the properties YV0 ∈ C((−∞, 0], X), and sup eμ t YV01 (t) −YV02 (t)X ≤ CV01 −V02 X2
(2.2.31)
t≤0
for any V01 ,V02 ∈ X2 , where C is a positive constant. We define m : X2 → X1 as 0
m(V0 ) :=
−∞
S1 (−τ )F1 (YV0 (τ ))d τ = QTV0 [Y ](0).
(2.2.32)
Now we prove that M is forward invariant, i.e., φ (t, M ) ⊂ M . To see this, we define for t ≥ 0 YV0 (σ + t) : σ < −t Y˜ (σ ,t) := Y (σ + t,V0 + m(V0 )) : σ ∈ [−t, 0]. The last expression is the solution to the initial value problem (2.1.1) with initial condition YV0 (0) = V0 + m(V0 ). We can prove Y˜ (·,t) is a fixed point in the sense of (2.2.22) for the initial state PY (t,V0 + m(V0 )). Since t → Y (t,YV0 (0)) is a solution to (2.2.1) with value YV0 (0) at zero we have QY (t ,YV0 (0)) =S1 (t ) t
+ 0
0 −∞
S1 (−τ )F1 (YV0 (τ ))d τ
S1 (t − τ )F1 (Y (τ ,YV0 (0)))d τ .
For σ = t − t, σ ≥ −t we obtain that QY˜ (σ ,t) = QY (σ + t,YV0 (0)) =
σ −∞
S1 (σ − τ )F1 (Y˜ (τ ,t))d τ .
For σ < −t, by the fixed point relation of YV0 , we have σ −∞
S1 (σ − τ )F1 (Y˜ (τ ,t))d τ =
σ +t −∞
S1 (σ + t − τ )F1 (YV0 (τ ))d τ = QYV0 (σ + t) = QY˜ (σ ,t).
Hence, by (2.2.23) we have the fixed point property. The P-component of TPY (t,YV (0)) can be studied similarly (see Bessaih et al. [14]). The relation in 0 (2.2.31) implies the Lipschitz property (2.2.17).
Tracking Property We use the same idea in Miklavˇciˇc [113] to prove the tracking property in (2.2.18), (2.2.19) and (2.2.21). Let Y0 = (U0 ,V0 ) ∈ X. We consider the following Banach space % ∞ e2μ t Z(t)2X dt < ∞ Z := Z(·) : R → X : |Z|2Z := −∞
for μ given in (2.2.20). Now let Y (t) = Y (t,Y0 ) be the solution to (2.2.8) for t ≥ 0 and Y (t) = Y0 ∈ X for t ≤ 0. For every Y0 ∈ X, define the function ⎧ for t ≤ 0; ⎨ −Y0 + TPY0 [Y ](t), Z0 (t) = ⎩ S(t)(−Y0 + TPY0 [Y ](0)), for t > 0, where TPY0 is the same as in (2.2.22). One can see that the function Z0 (t) belongs to Z and that there exists a constant R such that |Z0 |Z ≤ R(1 +CY0 X )
(2.2.33)
sup eμ t Z0 (t)X ≤ R(1 +CY0 X ).
(2.2.34)
and also t∈R
Indeed, for t ≥ 0 −Y0 + TPY0 [Y ](0) = Yˆ ∈ X1
and then ∞ 0
S(t)Yˆ 2X e2μ t dt ≤ M12
∞ 0
e−2(γ1 −μ )t dtYˆ 2X ≤ c(1 + Y0 2X ) < ∞.
For t ≤ 0 we note that Y0 and TPY0 [Y ](·) is in X . Estimating the supremum in (2.2.34) at first for R− , we can apply (2.2.28) in X2 and do similar for X1 . The estimate of the supremum with respect to R+ follows in a straightforward manner, because γ1 > μ . We define an integral operator R : Z → Z by the formula R[Z](t) := Z0 (t) + −
∞ t
t −∞
S1 (t − τ )Q [F(Z(τ ) +Y (τ )) − F(Y (τ ))] d τ
S2 (t − τ )P [F(Z(τ ) +Y (τ )) − F(Y (τ ))] d τ .
Let us prove that R is a contraction in Z . By (2.2.5) and (2.2.3), we have that eμ t P(R[Z1 ](t) − R[Z2 ](t))X2 ≤M2 L2
∞ t
e(μ −γ2 )(t−τ ) eμτ Z1 (τ ) − Z2 (τ )X d τ .
(2.2.35)
By Lemma 2.2.8 with δ := μ − γ2 and f (t) = M2 L2 eμ t Z1 (t) − Z2 (t)X we obtain that M2 L2 · |Z1 − Z2 |Z . |P (R[Z1 ] − R[Z2 ]) |Z ≤ μ − γ2 Similarly, (2.2.4) and (2.2.2) yields eμ t Q (R[Z1 ](t) − R[Z2 ](t)) X ≤ M1 L1
t
−∞
e(μ −γ1 )(t−τ ) eμτ Z1 (τ ) − Z2 (τ )X1 d τ
and thus applying Lemma 2.2.8 again, we have that |Q (R[Z1 ] − R[Z2 ]) |Z ≤
M1 L1 · |Z1 − Z2 |Z . γ1 − μ
If μ is given by (2.2.20), we can write |R[Z1 ] − R[Z2 ]|Z ≤ κ (μ ) · |Z1 − Z2 |Z
for every
Z1 , Z2 ∈ Z ,
(2.2.36)
where κ (μ ) < 1. Thus, by the contraction principle there exists a unique solution Z ∈ Z to the equation Z = R[Z].
We can conclude that the function Y (t) = Z(t) + Y (t), where Z ∈ Z solves the equation Z = R[Z], satisfies the relation Y (t) =
⎧ ⎨ TPY(0) [Y ](t), if t ≤ 0; ⎩
φ (t, Y (0)),
if t > 0.
Indeed, for t ≤ 0 we have Y˜ (t) =Z(t) +Y (t) = −Y0 + S2 (t)PY0 − t
+ −
−∞ 0 t
0 t
S2 (t − τ )F2 (Y0 )d τ +
t −∞
S1 (t − τ )F1 (Y0 )d τ
S1 (t − τ )(F1 (Z(τ ) +Y0 ) − F1 (Y0 ))d τ
S2 (t − τ )(F2 (Z(τ ) +Y0 ) − F2 (Y0 ))d τ
− S2 (t)
∞ 0
=S2 (t)PY0 + − S2 (t)
t
∞ 0
S2 (−τ )(F2 (Z(τ ) +Y (τ )) − F2 (Y (τ )))d τ +Y0 −∞
S1 (t − τ )F1 (Z(τ ) +Y0 )d τ −
0 t
S2 (t − τ )F2 (Z(τ ) +Y0 )d τ
S2 (−τ )(F2 (Z(τ ) +Y (τ )) − F2 (Y (τ )))d τ = TPY(0) [Y ](t).
In particular, Y (0) = TPY(0) [Y ](0) and, therefore, by the definition of the operator TPY(0) , we obtain that PY (0) =PZ0 (0) + PY0 − =PY0 − =PY0 −
∞ 0
∞ 0
∞ 0
S2 (−τ )(F2 (Y˜ (τ )) − F2 (Y (τ )))d τ
S2 (−τ )(F2 (Y˜ (τ )) − F2 (Y (τ )))d τ S2 (−τ )(F2 (Y (τ ) + Z(τ )) − F2 (Y (τ )))d τ
since PZ0 (0) = 0. Therefore, Y˜ restricted to R− is a fixed point of TPY(0) and hence Y˜ (0) ∈ M . For t > 0, we can conclude similarly Y˜ (t) =Z(t) +Y (t) = R[Z](t) +Y (t) = − S(t)Y0 + 0
+
−∞ t
+ 0
0
−∞
S1 (t − τ )F1 (Y0 )d τ + S(t)PY0
S1 (t − τ )(F1 (Z(τ ) +Y0 ) − F1 (Y0 ))d τ
S1 (t − τ )(F1 (Z(τ ) +Y (τ )) − F1 (Y (τ )))d τ
−
∞ 0
t
+ 0
S2 (t − τ )(F2 (Z(τ ) +Y (τ )) − F2 (Y (τ )))d τ
S2 (t − τ )(F2 (Z(τ ) +Y (τ )) − F2 (Y (τ )))d τ t
+ S(t)Y0 + S(t − τ )F(Y (τ ))d τ 0 0 =S(t) PY0 + S1 (−τ )F1 (Z(τ ) +Y0 )d τ −
∞ 0
t
+ 0
−∞
S (−τ )(F2 (Z(τ ) +Y (τ )) − F2 (Y (τ )))d τ 2
S(t − τ )F(Z(τ ) +Y (τ ))d τ = S(t)Y˜ (0) +
t 0
S(t − τ )F(Y˜ (τ ))d τ .
where we apply QY˜ (0) =QZ0 (0) + = − QY0 + 0
+ 0
=
−∞
−∞
0
−∞ 0
−∞
S1 (−τ )(F1 (Z(τ ) +Y0 ) − F1 (Y0 ))d τ + QY0
S1 (−τ )F1 (Y0 )d τ
S1 (−τ )(F1 (Z(τ ) +Y0 ) − F1 (Y0 ))d τ + QY0
S1 (−τ )F1 (Z(τ ) +Y0 )d τ .
We know that Y˜ is a fixed point of TPY˜0 , hence Y (0) = m(PY (0)), PY (0) ∈ M . Therefore, Y (t) = φ (t, Y (0)) ∈ M for t ≥ 0. Thus, to complete the proof of the tracking property in (2.2.18) and (2.2.19) we only need to establish appropriate estimates for Z(t). Since Z(t) = R[Z](t) = Z0 (t) + R[Z](t) − R[0](t),
(2.2.37)
from (2.2.33) and (2.2.36), we obtain the relation |Z|Z ≤ (1 − κ (μ ))−1 · |Z0 |Z ≤ (1 − κ (μ ))−1 · (R +CY0 X ) ,
(2.2.38)
which implies (2.2.18). Now, we prove (2.2.19). From (2.2.35), we have that for t ∈ R μt
∞
e P (R[Z](t) − R[0](t)) X2 ≤ M2 L2 e(μ −γ2 )(t−τ ) · eμτ Z(τ )X d τ t ∞
1/2 M2 L2 · |Z|Z . ≤ M2 L2 e2(μ −γ2 )(t−τ ) d τ · |Z|Z = t 2(μ − γ2 )
Thus, M2 L2 · |Z|Z . sup eμ t P (R[Z](t) − R[0](t)) X2 ≤ 2( μ − γ2 ) t∈R
(2.2.39)
Similarly, we have that eμ t Q (R[Z](t) − R[0](t)) X ≤ M1 L1
t
−∞
e−γ1 (t−τ ) eμτ Z(τ )X d τ ≤
M1 L1 · |Z|Z . 2(γ1 − μ )
(2.2.40)
Consequently, using relations (2.2.37), (2.2.39), (2.2.34) and(2.2.40) we obtain that for appropriate c1 > 0, c2 > 0 sup eμ t Z(t)X ≤ c1 + c2 |Z|Z + c3 Y0 X . t∈R
Thus, by (2.2.38), we have sup eμ t Z(t)X ≤ c4 + c5 Y0 X . t∈R
for appropriate constants c3 and c4 . This implies (2.2.19). The strict invariance of M follows from Proposition 2.2.10 given below. To prove the master–slave synchronization (2.2.21) we note that for Y (t) = (U(t),V (t)) we have that U(t) − m(V (t)) = Q[Y (t) − φ (t,Y ∗ )] − [m(PY (t)) − m(Pφ (t,Y ∗ ))], where Y ∗ is in M (see (2.2.19)). Thus, by the Lipschitz continuity of m and the positive invariance of M U(t) − m(V (t))X1 ≤ CY (t) − φ (t,Y ∗ )X ,
t > 0.
Hence, (2.2.21) follows from (2.2.19). This completes the proof of Theorem 2.2.6.
The Reduced System and Strict Invariance Assume the hypotheses of Theorem 2.2.6. Let m be given by (2.2.32). Consider the problem Vt + A2V = F2 (m(V ),V ), t > 0, in X2 , V (0) = V0 , and consider its mild solution on the interval [0, T ] as a function V (t) = V (t,V0 ) ∈ C([0, T ], X2 )
(2.2.41)
such that V (t) = S2 (t)V0 +
t 0
S2 (t − τ )F2 (m(V (τ )),V (τ ))d τ
(2.2.42)
for all t ∈ [0, T ]. Proposition 2.2.10. Under the conditions of Theorem 2.2.6, the problem in (2.2.41) has a mild solution on any interval [0, T ] for any V0 ∈ X2 . This solution is unique and any mild solution V to problem (2.2.41) generates a mild solution to problem (2.2.1) with initial condition (m(V0 ),V0 ) by the formula Y (t) = (U(t),V (t)) = (m(V (t)),V (t)).
(2.2.43)
Moreover, in this case, the manifold $\mathcal{M}$ is strictly invariant with respect to the evolution operator $\phi$ generated by (2.2.1).

Proof. The existence and uniqueness of a solution to (2.2.41) follows from the global Lipschitz continuity of $m$. Since $S_2$ is a group, we can solve (2.2.42) backward in time and, hence, we can prove that $\mathcal{M}$ is strictly invariant.

Observe now that Theorem 2.2.6 implies that for any mild solution $Y$ to problem (2.2.1) with initial data $Y_0\in X$, there exists a mild solution $V(t)$ to the reduced problem in (2.2.41) such that
\[
\|V(t) - PY(t)\|_{X_2}^2 + \|m(V(t)) - QY(t)\|_{X_1}^2 \to 0
\]
as t → ∞
exponentially fast (in the sense of (2.2.19) and (2.2.21)). Thus, under the conditions of Theorem 2.2.6, the limiting synchronized long-time behavior of solutions to (2.2.1) can be described completely by solutions to problem (2.2.41). Moreover, owing to relation (2.2.43), every limiting regime of the reduced system (2.2.41) is realized in the coupled system (2.2.1).
2.2.4 Coupled Parabolic–Hyperbolic System These systems arise in the study of wave phenomena, which are heat generating or temperature related (see, for example, Chueshov [38], Leung [110] and the references therein). Similar models can also be found in thermoelasticity (see the discussion in Sect. 2.4). We also refer the reader to Chap. 4 for stochastic perturbations of these models. Let O be a bounded domain in Rd , Γ := ∂ O a C1 -manifold. Let {ai j }di, j=1 and {bi j }di, j=1 be symmetric matrices of measurable functions such that c0 |ξ |2 ≤
d
∑
ai j (x)ξ j ξi ≤ c1 |ξ |2 ,
i, j=1
c0 |ξ |2 ≤
d
∑
i, j=1
bi j (x)ξ j ξi ≤ c1 |ξ |2 ,
ξ = (ξ1 , . . . , ξd ) ∈ Rd ,
for some positive constants $c_0$, $c_1$ and $x\in O$. Let $\Gamma = \Gamma_0\cup\Gamma_1$, where $\Gamma_0$ and $\Gamma_1$ are (relatively) open subsets of $\Gamma$ such that $\Gamma_0\cap\Gamma_1 = \emptyset$ ($\Gamma_0 = \emptyset$ or $\Gamma_0 = \Gamma$ is allowed). Let $a_0$ and $b_0$ be nonnegative parameters and $a_{\Gamma_0}$ a positive function in $L^\infty(\Gamma_1)$. We consider the following coupled system consisting of the parabolic–hyperbolic problem
\[
u_t - \sum_{i,j=1}^d\partial_i\bigl[a_{ij}(x)\partial_j u\bigr] + a_0 u = f_1(u, v, v_t),
\qquad u = 0 \ \text{on } \Gamma_0, \quad \sum_{i,j=1}^d n_i a_{ij}\partial_j u + a_{\Gamma_0}(x)u = 0 \ \text{on } \Gamma_1,
\]
\[
v_{tt} - \sum_{i,j=1}^d\partial_j\bigl[b_{ij}(x)\partial_j v\bigr] + b_0 v = f_2(u, v, v_t),
\qquad v = 0 \ \text{on } \Gamma, \tag{2.2.44}
\]
where $n = (n_1,\dots,n_d)$ is the outer normal vector of $\Gamma$. We assume that the functions $f_1 : \mathbb{R}^3\to\mathbb{R}$
and
f2 : R3 → R
possess the properties: | f1 (w) − f1 (w∗ )| ≤ l1 |w − w∗ |R3 . and | f2 (w) − f2 (w∗ )| ≤ l2 |w − w∗ |R3 for all w, w∗ ∈ R3 . Let A˜ 1 be a positive self-adjoint operator on X1 = L2 (O) generated by the bilinear form (A˜ 1 u, u∗ ) =
d
∑
i, j=1 O
ai j ∂ j u∂i u∗ dx + a0
O
uu∗ dx +
Γ1
aΓ0 uu∗ dΓ .
This operator has a compact inverse and generates a strongly continuous semigroup −λ t S1 . We have that S1 (t)X1 ≤ e A˜ 1 for t ≥ 0, where λA˜ 1 := inf spec(A˜ 1 ). We note that λA˜ 1 > 0 provided that Γ0 = 0, / or a0 > 0, or aΓ0 > 0. Let us write the wave equation in (2.2.44) as a first-order system: d v0 v0 0 −id 0 , (2.2.45) + ˜ = B 0 v1 f2 (u, v0 , v1 ) dt v1
where B˜ is a positive self-adjoint operator defined by ˜ =− Bv
d
∑
∂ j [bi j (x)∂ j v] + b0 v,
˜ := H 2 (O) ∩ H01 (O). v ∈ D(B)
i, j=1
We set λB˜ := inf spec B˜ > 0. We denote by 0 −id A˜ 2 = ˜ B 0 the generator of the unitary strongly continuous group S2 (t) generated by the linear part of (2.2.45) on X2 = H01 (O) × L2 (O). We equip the space X2 with the energy type norm (2.2.46) V X2 = (v, v1 )2X2 = |B˜ 1/2 v0 |2 + |v1 |2 dx, V = (v0 , v1 ). O
We have S2 (t)X2 ≤ e−γ2 t for t ∈ R− with γ2 = 0. We also define F1 (U,V )[x] = f1 (u(x), v(x), vt (x)),
U = u, V = (v, vt ),
and F2 (U,V )[x] = (0, f2 (u(x), v(x), vt (x))), which are Lipschitz continuous operators from X = X1 × X2 into Xi for i = 1, 2 respectively. One can see that Assumption 2.2.1 is valid for the case considered. Thus, we obtain that the mild solution of (2.2.44) generates a dynamical system. In addition, we have M1 = M2 = 1, γ1 = λA1 ≥ a0 , γ2 = 0 Indeed, we have v2L2 (O) ≤
1 v2H 1 (O) . 0 λB˜
and also for the Lipschitz constants of F1 , F2 : ! ! " " 1 1 , L2 = l2 max 1, . L1 = l1 max 1, λB˜ λB˜ Thus, by Theorem 2.2.6 under the condition
λA˜ 1 >
! " 2 1 l1 + l2 max 1, λB˜
the fourth equation of (2.2.44) synchronizes the first one in the master–slave sense. In contrast, according to Proposition 2.2.5, we have the drive-response synchronization in (2.2.44) under the condition
\[
\lambda_{\tilde A_1} > l_1\max\Bigl\{1, \frac{1}{\lambda_{\tilde B}}\Bigr\}.
\]
Remark 2.2.11. We note that it is not important that $O$ is a bounded domain and that Dirichlet boundary conditions for $v$ hold. The only facts that we use in the proof are that (i) $\tilde B$ is a self-adjoint operator with $\inf\mathrm{spec}(\tilde B) > 0$, and that (ii) $\tilde A_1$ generates an exponentially stable, strongly continuous semigroup. Thus, we can consider unbounded domains and equip the corresponding differential operation with other (self-adjoint) boundary conditions. We will use this observation in our subsequent applications. For further generalization of the model in (2.2.44) we refer the reader to Sect. 1.4.5.
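For a concrete feeling of these thresholds, take $a_{ij} = b_{ij} = \delta_{ij}$, $O = (0,1)^2$ and Dirichlet conditions for both components ($\Gamma_0 = \Gamma$), so that $\lambda_{\tilde A_1} = 2\pi^2 + a_0$ and $\lambda_{\tilde B} = 2\pi^2 + b_0$. The sketch below is added here only as an illustration; $a_0$, $b_0$, $l_1$, $l_2$ are assumed sample values.

```python
import math

a0, b0, l1, l2 = 0.0, 0.0, 3.0, 5.0
lamA1 = 2*math.pi**2 + a0                     # first eigenvalue of -Delta + a0, Dirichlet, unit square
lamB = 2*math.pi**2 + b0
L1 = l1*max(1.0, 1.0/lamB)
L2 = l2*max(1.0, 1.0/lamB)

print("master-slave condition  :", lamA1 > (math.sqrt(L1) + math.sqrt(L2))**2)
print("drive-response condition:", lamA1 > L1)
```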
2.2.5 Coupled PDE/ODE Systems This kind of coupled problem arises in biology. For instance, the well-known Hodgkin–Huxley system belongs to this class (see discussion in Sect. 1.4.4 and also Chueshov [36], Henry [92], Smoller [156] and the references therein). Let f1 : R1+m → R, f2 : R1+m → Rm be globally Lipschitz functions: | fi (w) − fi (w∗ )| ≤ li |w − w∗ |R1+m
for all w, w∗ ∈ R1+m .
In a bounded domain O ⊂ Rd we consider the following parabolic equation ut − Δ u = f1 (u, v), x ∈ O, t > 0, u|∂ O = 0,
(2.2.47)
coupled with the ordinary differential equation in Rm : vt = f2 (u, v), t > 0.
(2.2.48)
In (2.2.48) t → v(t) is a function with values in [L2 (O)]m , which satisfies ODE with respect to t (the variable x is present as a parameter). So, for the fixed u ∈ L2 (O) (and x ∈ O), we can solve (2.2.48) as an equation in [L2 (O)]m . The problem (2.2.47) can be embedded in our framework with the spaces X1 = L2 (O) and X2 = [L2 (O)]m and operators A˜ 1 = −Δ in X1 with the domain H 2 (O) ∩ H01 (O) and A˜ 2 = 0 in X2 . The semigroup S1 generated by A˜ 1 is the same as in the previous example and S2 ≡ id. It is clear that the dichotomy properties hold with γ1 = λA˜ 1 := inf spec(A˜ 1 ) > 0 and γ2 = 0. We also have that M1 = M2 = 1. Thus,
under the condition
λA˜ 1 >
l1 +
2 l2
we observe the master–slave synchronization phenomenon. For the drive-response synchronization we need only the condition $\lambda_{\tilde A_1} > l_1$. We can also consider the case when the ODE component is the slave. For instance, it is easy to analyze conditions for synchronization in the system
\[
u_t + au = f_1(u, v), \qquad v_{tt} - \Delta v = f_2(u, v), \qquad x\in O,\ t > 0, \qquad v|_{\partial O} = 0,
\]
where $a$ is a positive parameter.
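A method-of-lines experiment makes the drive-response effect in the PDE/ODE pair (2.2.47)–(2.2.48) visible. In the sketch below ($d = m = 1$, $O = (0,1)$; the nonlinearities $f_1(u,v) = \tfrac12\sin v - \tfrac15 u$ and $f_2(u,v) = \cos u - v$ and all numerical values are assumptions) the coupled system is solved first, and then the parabolic equation is re-solved with the recorded $v(t)$ and a different initial state; the difference decays as predicted.

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 30
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
h = x[1] - x[0]
lap = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))/h**2

f1 = lambda u, v: 0.5*np.sin(v) - 0.2*u       # assumed globally Lipschitz nonlinearities
f2 = lambda u, v: np.cos(u) - v

def coupled(t, y):
    u, v = y[:n], y[n:]
    return np.concatenate([lap @ u + f1(u, v), f2(u, v)])

y0 = np.concatenate([np.sin(np.pi*x), np.cos(np.pi*x)])
sol = solve_ivp(coupled, (0.0, 10.0), y0, method="BDF", dense_output=True, rtol=1e-8)

def response(t, ub):                           # parabolic equation driven by the recorded v(t)
    return lap @ ub + f1(ub, sol.sol(t)[n:])

rsp = solve_ivp(response, (0.0, 10.0), 0*x + 2.0, method="BDF", t_eval=[0.0, 1.0, 3.0, 10.0], rtol=1e-8)
for k, t in enumerate(rsp.t):
    diff = np.sqrt(h)*np.linalg.norm(sol.sol(t)[:n] - rsp.y[:, k])
    print(f"t={t:5.1f}   ||u - u_response|| = {diff:.2e}")
```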
2.2.6 Two Coupled Hyperbolic Systems In a smooth bounded domain O ⊆ Rd we consider two coupled wave equations for scalar functions u and v: utt + ν ut −
d
∑
∂i [bi j (x)∂ j u] + b0 u = f1 (u, v, vt ), u = 0 on Γ
i, j=1
vtt −
d
∑
∂ j b¯ i j (x)∂ j v + b¯ 0 v = f2 (u, v, vt ), v = 0 on Γ ,
(2.2.49)
i, j=1
b0 , b¯ 0 ≥ 0. We emphasize that we can also include in the functions fi dependence on ut and obtain conditions for synchronization. However, in this case, owing to the structure of the norm (see (2.2.51) below) in the slave space the calculation is not so direct and the Lipschitz constants for the corresponding nonlinearity may depend on ν . We do not give these calculations here. In the same way as in Sect. 2.2.4 the linear part of the second equation generates a unitary strongly continuous group S2 on X2 = H01 (O) × L2 (O) with norm (2.2.46). Let us rewrite the first equation of (2.2.49) as d u 0 −id u 0 , (2.2.50) + ˜ = B1 ν ut f1 (u, v, vt ) dt ut where B˜ 1 is a positive self-adjoint operator defined by B˜ 1 v = −
d
∑
i, j=1
∂ j [bi j (x)∂ j v] + b0 v,
v ∈ D(B˜ 1 ) = H 2 (O) ∩ H01 (O).
138
2 Master–Slave Synchronization via Invariant Manifolds
The second equation can be written in a similar manner where the differential operator second order is denoted by B˜ 2 . Then, the linear part of equation (2.2.50) generates a strongly continuous semigroup S1 on the phase space 1/2 X1 = D(B˜ 1 ) × L2 (O),
1/2 U2X1 = B˜ 1 u0 2 + u1 + 2ε u0 2 ,
(2.2.51)
where U = (u0 , u1 ) and · is the norm of L2 (O). This linear part then gives the generator A˜ 1 of S1 . The parameter ε > 0 is chosen below. This choice of the norm is motivated by the proof of the following Lemma. Similarly, the linear part B¯ 2 of the second equation of (2.2.49) is a positive self-adjoint operator giving A˜ 2 , which is the generator of a C0 semigroup S2 . Lemma 2.2.12. Let B˜ 1 be a positive self-adjoint operator and λB˜1 = inf spec(B˜ 1 ). Assume that U(t) = (u(t), ut (t)) is a solution to utt + ν ut + B˜ 1 u = 0. % 3λB˜ If we choose ε := min ν4 , 8ν1 in definition (2.2.51) of the norm in X1 , then
U(t)2X1 ≤ e−ε t U(0))2X1
for every t > 0.
Proof. We use the same idea as in Proposition IV.1.2 in Temam [161]. Let $v(t)=u_t(t)+2\varepsilon u(t)$ and $\Psi(t)=\|\tilde B_1^{1/2}u(t)\|^2+\|v(t)\|^2$. Then,
$$
\begin{aligned}
\frac{d}{dt}\Psi(t)&=2(u_t,\tilde B_1u)+2(-\tilde B_1u-\nu u_t+2\varepsilon u_t,v)\\
&=-4\varepsilon\|\tilde B_1^{1/2}u\|^2-2(\nu-2\varepsilon)(u_t,v)\\
&=-2\Big[2\varepsilon\|\tilde B_1^{1/2}u\|^2+(\nu-2\varepsilon)\|v\|^2-2\varepsilon(\nu-2\varepsilon)(u,v)\Big]\\
&\le-2\Big[2\varepsilon\|\tilde B_1^{1/2}u\|^2+(\nu-2\varepsilon)\|v\|^2\Big]
+\frac{4\varepsilon(\nu-2\varepsilon)}{\sqrt{\lambda_{\tilde B_1}}}\,\|\tilde B_1^{1/2}u\|\,\|v\|.
\end{aligned}
$$
Since $0<\varepsilon\le\nu/4$, we have that
$$
\frac{d}{dt}\Psi(t)\le-2\Big[2\varepsilon\|\tilde B_1^{1/2}u\|^2+\frac{\nu}{2}\|v\|^2\Big]
+\frac{4\varepsilon\nu}{\sqrt{\lambda_{\tilde B_1}}}\,\|\tilde B_1^{1/2}u\|\,\|v\|.
$$
Estimating
$$
\frac{4\varepsilon\nu}{\sqrt{\lambda_{\tilde B_1}}}\,\|\tilde B_1^{1/2}u\|\,\|v\|
\le2\varepsilon\|\tilde B_1^{1/2}u\|^2+\frac{2\varepsilon\nu^2}{\lambda_{\tilde B_1}}\|v\|^2,
$$
we obtain
$$
\frac{d}{dt}\Psi(t)\le-2\Big[\varepsilon\|\tilde B_1^{1/2}u\|^2
+\frac{\nu}{4}\Big(2-\frac{4\varepsilon\nu}{\lambda_{\tilde B_1}}\Big)\|v\|^2\Big].
\tag{2.2.52}
$$
Then, by the assumptions,
$$
\frac{\nu}{4}\Big(2-\frac{4\varepsilon\nu}{\lambda_{\tilde B_1}}\Big)\ge\frac{\nu}{8}\ge\frac{\varepsilon}{2},
$$
so we can estimate the right-hand side of (2.2.52) by $-\varepsilon\Psi$. This implies the conclusion.

We assume that $f_i:\mathbb{R}^3\to\mathbb{R}$, $i=1,2$, in (2.2.49) are globally Lipschitz functions:
$$
|f_i(w)-f_i(w^*)|\le l_i|w-w^*|_{\mathbb{R}^3}\qquad\text{for all }w,w^*\in\mathbb{R}^3.
$$
Then, for the parameters of the linear semigroups we obviously have that
$$
M_1=M_2=1,\qquad\gamma_2=0,\qquad\gamma_1=\varepsilon/2.
$$
Simple calculations show that for the corresponding nonlinear operators $F_i$
$$
L_1=l_1\max\Big\{1,\frac{1}{\sqrt{\lambda_{\tilde B_1}}},\frac{1}{\sqrt{\lambda_{\tilde B_2}}}\Big\},
\qquad
L_2=l_2\max\Big\{1,\frac{1}{\sqrt{\lambda_{\tilde B_1}}},\frac{1}{\sqrt{\lambda_{\tilde B_2}}}\Big\}.
$$
Indeed, for $U_1=(u_1,u_{1t}),U_2=(u_2,u_{2t})\in X_1$ and $V_1=(v_1,v_{1t}),V_2=(v_2,v_{2t})\in X_2$ we have
$$
\begin{aligned}
\|F_1(U_1,V_1)-F_1(U_2,V_2)\|^2_{X_1}
&=\int_O|f_1(u_1(x),v_1(x),v_{1t}(x))-f_1(u_2(x),v_2(x),v_{2t}(x))|^2\,dx\\
&\le l_1^2\int_O\Big[|u_1(x)-u_2(x)|^2+|v_1(x)-v_2(x)|^2+|v_{1t}(x)-v_{2t}(x)|^2\Big]dx\\
&\le l_1^2\int_O\Big[\frac{1}{\lambda_{\tilde B_1}}|\tilde B_1^{1/2}(u_1(x)-u_2(x))|^2
+\frac{1}{\lambda_{\tilde B_2}}|\tilde B_2^{1/2}(v_1(x)-v_2(x))|^2+|v_{1t}(x)-v_{2t}(x)|^2\Big]dx\\
&\le l_1^2\max\Big\{1,\frac{1}{\lambda_{\tilde B_1}},\frac{1}{\lambda_{\tilde B_2}}\Big\}
\big(\|U_1-U_2\|^2_{X_1}+\|V_1-V_2\|^2_{X_2}\big),
\end{aligned}
$$
and similarly for $F_2$. Thus, under the condition
$$
\gamma_1-\gamma_2=\frac12\min\Big\{\frac{\nu}{4},\frac{3\lambda_{\tilde B_1}}{8\nu}\Big\}
>\big(l_1+\sqrt{2}\,l_2\big)\max\Big\{1,\frac{1}{\sqrt{\lambda_{\tilde B_1}}},\frac{1}{\sqrt{\lambda_{\tilde B_2}}}\Big\}
$$
there exists an exponentially attracting invariant manifold. In particular, by Theorem 2.2.6 the second equation in (2.2.49) synchronizes the dynamics governed by
the first equation in (2.2.49). We mention that by Proposition 2.2.5 we also observe the drive-response synchronization in system (2.2.49). As an illustration we can consider a coupled sine-Gordon system of the form
$$
\begin{aligned}
& u_{tt}+\nu u_t-\Delta u=-l\sin(u-v)+\tilde g_1(x), && u|_{\partial O}=0,\\
& v_{tt}-\Delta v=-l\sin(v-u)+\tilde g_2(x), && v|_{\partial O}=0,
\end{aligned}
$$
where $\tilde g_i\in L^2(O)$. Under some relations between $\nu>0$ and $l\ge0$ we can observe master–slave synchronization in this system. It is different from the asymptotic antiphase synchronization, which was observed in Sect. 1.6.6.
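To see the mechanism at work numerically, one can discretize the coupled sine-Gordon system above on an interval and check the drive-response property: two copies of the damped (slave) equation driven by the same master trajectory should converge to each other. The following sketch is our own illustration, not part of the book's argument; it uses a simple finite-difference scheme on $O=(0,\pi)$ with arbitrary illustrative values of $\nu$, $l$ and the forcing terms.

```python
import numpy as np

# Dirichlet grid on O = (0, pi)
N = 128
x = np.linspace(0.0, np.pi, N + 2)[1:-1]
h = x[1] - x[0]
dt = 0.2 * h                                   # explicit scheme, CFL-safe

nu, l = 4.0, 0.5                               # illustrative damping / coupling
g1, g2 = 0.1 * np.sin(x), 0.1 * np.sin(2 * x)  # arbitrary smooth forcing

def lap(w):                                    # 1D Laplacian with zero boundary values
    wp = np.concatenate(([0.0], w, [0.0]))
    return (wp[2:] - 2.0 * wp[1:-1] + wp[:-2]) / h**2

# coupled trajectory (u, v) and a second response copy u2 driven by the same v
u,  ut  = np.sin(x), np.zeros(N)
u2, u2t = -np.sin(2 * x), np.zeros(N)          # different initial data for the response
v,  vt  = 0.5 * np.sin(3 * x), np.zeros(N)

for n in range(int(30.0 / dt)):
    acc_u  = lap(u)  - nu * ut  - l * np.sin(u  - v) + g1   # slave equation
    acc_u2 = lap(u2) - nu * u2t - l * np.sin(u2 - v) + g1   # response driven by the same v
    acc_v  = lap(v)             - l * np.sin(v  - u) + g2   # master equation
    ut  += dt * acc_u;   u  += dt * ut
    u2t += dt * acc_u2;  u2 += dt * u2t
    vt  += dt * acc_v;   v  += dt * vt
    if n % int(5.0 / dt) == 0:
        print(f"t = {n*dt:5.1f}   ||u - u2||_L2 = {np.sqrt(h*np.sum((u-u2)**2)):.3e}")
```

With these parameters the printed difference decays by several orders of magnitude, in line with the drive-response synchronization discussed above.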
2.2.7 Coupled Klein–Gordon–Schrödinger System

The following coupled model arises in quantum physics (see, for example, Biler [15] and the references therein):
$$
\begin{aligned}
& u_{tt}+\nu u_t-\Delta u+m^2u=f_1(u,v) && \text{in}\ \mathbb{R}^d,\\
& iv_t+\Delta v=f_2(u,v) && \text{in}\ \mathbb{R}^d,
\end{aligned}
\tag{2.2.53}
$$
where $\nu,m>0$. Here, $u$ is real and $v$ is a complex function. In contrast to the previous examples, here we concentrate on the case $O=\mathbb{R}^d$. In the case when $O$ is a domain in $\mathbb{R}^d$ we need to impose some (self-adjoint) boundary conditions. We assume that the functions
$$
f_1:\mathbb{R}\times\mathbb{C}\to\mathbb{R}\qquad\text{and}\qquad f_2:\mathbb{R}\times\mathbb{C}\to\mathbb{C}
$$
are globally Lipschitz, i.e.,
$$
|f_1(w)-f_1(w^*)|\le l_1|w-w^*|_{\mathbb{R}\times\mathbb{C}},\qquad
|f_2(w)-f_2(w^*)|\le l_2|w-w^*|_{\mathbb{R}\times\mathbb{C}},\qquad w,w^*\in\mathbb{R}\times\mathbb{C}.
$$
To apply Theorem 2.2.6 we rewrite (2.2.53) in the form (2.2.1) with $U=(u,u_t)$ and $V=v$. The corresponding phase spaces are
$$
X_1=H^1(\mathbb{R}^d)\times L^2(\mathbb{R}^d),\qquad X_2=L^2_{\mathbb{C}}(\mathbb{R}^d),
$$
where $L^2_{\mathbb{C}}(\mathbb{R}^d)$ is the space of square integrable complex functions. We consider in $L^2(\mathbb{R}^d)$ the operator $\tilde B=-\Delta+m^2$ with the domain $D(\tilde B)=H^2(\mathbb{R}^d)$. It is clear from Fourier analysis that $\tilde B$ is a positive self-adjoint operator with $\lambda_{\tilde B}:=\inf\operatorname{spec}(\tilde B)=m^2$. We equip $X_1$ with the norm given in (2.2.51) with this operator $\tilde B$. Thus, by Lemma 2.2.12, the linear part of the first equation of (2.2.53) generates a strongly continuous semigroup $S_1(t)$ for which we have $M_1=1$ and $\gamma_1=\min\big\{\tfrac{\nu}{8},\tfrac{3m^2}{16\nu}\big\}$. Since the linear part of the second equation of (2.2.53) generates a unitary group (this again follows from Fourier analysis), we also have that $M_2=1$ and $\gamma_2=0$. A calculation as in the previous examples gives us that
$$
L_1=l_1\max\Big\{1,\frac1m\Big\},\qquad L_2=l_2\max\Big\{1,\frac1m\Big\}.
$$
Thus, under the condition
$$
\frac12\min\Big\{\frac{\nu}{4},\frac{3m^2}{8\nu}\Big\}>\big(l_1+\sqrt{2}\,l_2\big)\max\Big\{1,\frac1m\Big\}
$$
the second equation in (2.2.53) synchronizes the dynamics in the first equation of (2.2.53). For the drive-response synchronization we need only the condition
$$
\frac12\min\Big\{\frac{\nu}{4},\frac{3m^2}{8\nu}\Big\}>l_1\max\Big\{1,\frac1m\Big\}.
$$
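As a quick numerical illustration of these conditions (with arbitrarily chosen constants), take $m=\nu=1$, so that $\max\{1,1/m\}=1$ and
$$
\gamma_1=\min\Big\{\frac18,\frac3{16}\Big\}=\frac18.
$$
Hence, with the gap condition in the form stated above, the coupled system (2.2.53) exhibits master–slave synchronization whenever $l_1+\sqrt{2}\,l_2<1/8$, while drive-response synchronization already holds whenever $l_1<1/8$.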
2.3 Quasilinear Case (Nonlinear Dichotomy)

The main goal in this section is to show that the method of invariant manifolds can be applied to coupled systems with a nonlinear main part. For this we develop some ideas related to the Hadamard graph transform method (see, for example, Constantin et al. [62] and also Bates/Jones [10], Romanov [142]). We assume a kind of generalized dichotomy between the (nonlinear) equations (in contrast to the standard dichotomy hypotheses concerning the corresponding linear parts, see, for example, (2.2.2) and (2.2.3)). The generalized dichotomy allows us to avoid the global Lipschitz assumption concerning the nonlinearity in the model.
2.3.1 Statement of Main Result

Let $X_1$ and $X_2$ be (infinite-dimensional) Banach spaces. The main object is now the following system of differential equations
$$
U_t=F_1(U,V),\quad t>0,\ \text{in}\ X_1,\qquad
V_t=F_2(U,V),\quad t>0,\ \text{in}\ X_2,
\tag{2.3.1}
$$
where $F_1$ and $F_2$ are nonlinear mappings,
$$
F_1:X_1\times X_2\to X_1,\qquad F_2:X_1\times X_2\to X_2.
$$
Assumption 2.3.1. We impose the following hypotheses:

1. For any pair of initial data $(U(0),V(0))\in X_1\times X_2$ the problem in (2.3.1) possesses a unique solution $(U(t),V(t))$ from the class $C(\mathbb{R}_+,X_1\times X_2)$.
2. Problem (2.3.1) has a unique stationary solution. For the sake of simplicity we assume that this solution is zero, i.e., $F_1(0,0)=0$ and $F_2(0,0)=0$.
3. For every interval $[a,b]$, any datum $V_b\in X_2$ and any function $U(t)$ from the class $C([a,b],X_1)$ there exists a solution $V(t)\in C([a,b],X_2)$ to the problem
$$
V_t=F_2(U,V),\quad t\in(a,b),\ \text{in}\ X_2,\qquad V\big|_{t=b}=V_b.
\tag{2.3.2}
$$
Moreover, there exist constants $\nu\ge0$ and $L_2>0$ such that
$$
\|V_1(t)-V_2(t)\|_{X_2}\le e^{\nu(b-t)}\|V_1(b)-V_2(b)\|_{X_2}
+L_2\int_t^be^{\nu(\tau-t)}\|U_1(\tau)-U_2(\tau)\|_{X_1}\,d\tau
\tag{2.3.3}
$$
for every $t\in[a,b]$ and $U_i(t)\in C([a,b],X_1)$, where $V_i(t)$ is a solution to (2.3.2) with $U(t)=U_i(t)$, $i=1,2$.
4. For every interval $[a,b]$, any datum $U_a\in X_1$ and any function $V(t)$ from the class $C([a,b],X_2)$ there exists a solution $U(t)\in C([a,b],X_1)$ to the problem
$$
U_t=F_1(U,V),\quad t\in(a,b),\ \text{in}\ X_1,\qquad U\big|_{t=a}=U_a.
\tag{2.3.4}
$$
Moreover, there exist constants $\beta>0$ and $L_1>0$ such that
$$
\|U_1(t)-U_2(t)\|_{X_1}\le e^{-\beta(t-a)}\|U_1(a)-U_2(a)\|_{X_1}
+L_1\int_a^te^{-\beta(t-\tau)}\|V_1(\tau)-V_2(\tau)\|_{X_2}\,d\tau
\tag{2.3.5}
$$
for every $t\in[a,b]$ and $V_i(t)\in C([a,b],X_2)$, where $U_i(t)$ is a solution to (2.3.4) with $V(t)=V_i(t)$, $i=1,2$.

Remark 2.3.2. 1. We prefer not to introduce an exact definition of a solution to the problem in (2.3.1) (and also (2.3.2) and (2.3.4)) because this is not the main objective here. We only note that solutions can be generalized, weak or mild (see, for example, Henry [92], Pazy [127], Showalter [153]), depending on the problem we deal with. In our example, we describe a solution in a more precise way. At this level of generality we need only the fact that the problems mentioned generate dynamical systems.
2. In the case when $L_1=L_2=0$, relations (2.3.3) and (2.3.5) turn into the standard dichotomy estimates (see Remark 2.2.2). The estimates (2.3.3) and (2.3.5) with nonzero $L_1$ and $L_2$ may arise in the case of nonlinear perturbations. Therefore, it is natural to call (2.3.3) and (2.3.5) generalized (nonlinear) dichotomy estimates. We also note that in the case of the standard linear dichotomy and globally Lipschitz nonlinearities (see Sect. 2.2) the constants $L_1$ and $L_2$ in (2.3.3) and (2.3.5) are the corresponding Lipschitz constants of the nonlinearities.
3. It follows directly from (2.3.5) that under Assumption 2.3.1(1,4) we observe in the system in (2.3.1) the exponentially fast drive-response synchronization with $V$ as a synchronizing coordinate (see Definition 2.1.2). In fact, via relation (2.3.5), Assumption 2.3.1 contains the hypothesis on a drive-response synchronization in some strong form.

Our main result concerning problem (2.3.1) is the following theorem.

Theorem 2.3.3. Assume that Assumption 2.3.1 is in force and that the following gap condition is satisfied:
$$
\beta-\nu>2\sqrt{L_1L_2}.
\tag{2.3.6}
$$
Then there exists a unique mapping $m:X_2\to X_1$ possessing the following properties.

1. $m(0)=0$ and $\|m(\eta_1)-m(\eta_2)\|_{X_1}\le\xi\|\eta_1-\eta_2\|_{X_2}$, where $\xi=\sqrt{L_1L_2^{-1}}$.
2. The manifold
$$
M=\{(m(V),V):V\in X_2\}\subset X_1\times X_2
\tag{2.3.7}
$$
is forward invariant, i.e., for any initial data $(U(0),V(0))$ from $M$ the solution $(U(t),V(t))$ belongs to $M$ for each $t\ge0$.
3. The manifold $M$ is exponentially attracting in the sense that
$$
\|U(t)-m(V(t))\|_{X_1}\le e^{-\lambda t}\|U(0)-m(V(0))\|_{X_1},\qquad t\ge0,
\tag{2.3.8}
$$
for any solution $(U(t),V(t))$ to problem (2.3.1), where $\lambda$ is any number possessing the property
$$
\nu+\sqrt{L_1L_2}<\lambda<\beta-\sqrt{L_1L_2}.
\tag{2.3.9}
$$

The relation in (2.3.8) means that under the conditions of Theorem 2.3.3 the first equation in (2.3.1) is asymptotically synchronized with the second one in (2.3.1) with an exponential rate. The limit regimes of the synchronized coupled system can be described by the inertial form
$$
V_t=F_2(m(V),V),\quad t>0,\ \text{in}\ X_2,\qquad V\big|_{t=0}=V_0\in X_2.
$$
We will see that $m$ is the fixed point $\gamma^*$ of a graph transform mapping.
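Before turning to the proof, it may help to see the statement in the simplest possible setting. The following sketch is our own illustration (not from the text): it takes a linear two-dimensional system $U_t=-\beta U+L_1V$, $V_t=L_2U$, which satisfies Assumption 2.3.1 with $\nu=0$. Here the invariant graph can be computed by hand, $m(V)=cV$ with $L_2c^2+\beta c-L_1=0$, and the script checks the attraction estimate (2.3.8) with a rate $\lambda$ admissible by (2.3.9); all numerical values are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, L1, L2 = 5.0, 1.0, 1.0           # gap condition: beta - nu > 2*sqrt(L1*L2), here nu = 0
c = (-beta + np.sqrt(beta**2 + 4 * L1 * L2)) / (2 * L2)   # slope of the graph m(V) = c*V
lam = beta - np.sqrt(L1 * L2)           # an admissible rate from (2.3.9)

def rhs(t, y):
    U, V = y
    return [-beta * U + L1 * V, L2 * U]

sol = solve_ivp(rhs, (0.0, 5.0), [3.0, -1.0], dense_output=True, rtol=1e-10, atol=1e-12)
t = np.linspace(0.0, 5.0, 100)
U, V = sol.sol(t)
dist = np.abs(U - c * V)                # distance to the manifold M = {(c*V, V)}
print("max of exp(lam*t)*dist / dist(0):", np.max(np.exp(lam * t) * dist) / dist[0])
```

The printed value stays below 1, i.e., the distance to the graph decays at least like $e^{-\lambda t}$, as (2.3.8) predicts.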
2.3.2 Hadamard Graph Transform Method

Our proof of Theorem 2.3.3 is based on some ideas developed in the framework of the Hadamard graph transform method (see, for example, Constantin et al. [62]) in the form closely related to Bates/Jones [10] and Romanov [142]. The main idea of the Hadamard method is based on the following observation. Assume that there exists an invariant manifold $M$ of the form (2.3.7) having the attracting property (2.3.8). Let $\phi(t,\cdot):X=X_1\times X_2\to X_1\times X_2=X$ be the semigroup for (2.3.1) and let $Q$ and $P$ be the projectors on $X_1$ and $X_2$, respectively (see (2.2.6) in Sect. 2.2). In this case, (2.3.8) implies that
$$
\big\|Q\phi(t,P\hat V)-m\big(P\phi(t,P\hat V)\big)\big\|_{X_1}\to0\quad\text{as }t\to\infty,
$$
where $\hat V\in X_2$ and $P\hat V=(0,\hat V)\in X_1\times X_2$. As was explained in Sect. 2.2, we can consider $\hat V$ and $P\hat V$ as the same element. If we assume that the mapping $g_t:X_2\to X_2$ given by $\hat V\mapsto g_t(\hat V)=P\phi(t,P\hat V)$, $\hat V\in X_2$, is invertible, then, at least formally, we can state that
$$
\big\|Q\phi\big(t,Pg_t^{-1}(V)\big)-m(V)\big\|_{X_1}\to0\quad\text{as }t\to\infty,
$$
for every $V\in X_2$. Thus, we arrive at the following formal formula
$$
m(V)=\lim_{t\to+\infty}Q\phi\big(t,Pg_t^{-1}(V)\big).
$$
This is the main point of the Hadamard method. The main task is to show that the mapping $g_t$ is invertible and that $g_t^{-1}$ possesses appropriate properties. To execute this task we use the cone invariance property, which follows from the generalized dichotomy estimates (2.3.3) and (2.3.5).

Let us also give a heuristic interpretation of the system of equations (2.3.2) and (2.3.4). The idea is to transform a domain graph in $X_1\times X_2$ into an image graph in $X_1\times X_2$ under the dynamical system $\phi$. In particular, we are looking for a graph that is invariant under the dynamical system $\phi$, which gives us the invariant manifold. When transforming the domain graph, we need technical conditions that ensure that the image set is again a graph. Roughly speaking, by (2.3.2) we determine the $X_2$-component $V_a$ of the domain element, which will later be transformed to an element of the image graph with $X_2$-component $V_b$. Then, consider the solution of (2.3.4) at time $b$, which gives the element $(U_b,V_b)$ when the element $(U_a,V_a)$ of the domain graph is mapped by the dynamical system $\phi$:
$$
\phi\big(b-a,(U_a,V_a)\big)=(U_b,V_b).
$$
Cone Property

The main consequence of the generalized dichotomy estimates (2.3.3) and (2.3.5) is the following assertion, which is related to the so-called cone invariance property (see, for example, Bates/Jones [10], Constantin et al. [62], Romanov [142], Temam [161]). This property is an important tool in many constructions of inertial manifolds (see, for example, Constantin et al. [62], Romanov [142] and the references therein).

Proposition 2.3.4. Let $\xi>0$. Assume that the following gap condition holds:
$$
\beta-\nu\ge\xi^{-1}L_1+\xi L_2.
\tag{2.3.10}
$$
Then, for any two solutions $(U_1(t),V_1(t))$ and $(U_2(t),V_2(t))$ to problem (2.3.1), we have the relation
$$
e^{\lambda t}V_\xi(t)\le e^{\lambda s}V_\xi(s)\qquad\text{for all }t\ge s,
\tag{2.3.11}
$$
where $\lambda$ is any number possessing the property
$$
\nu+\xi^{-1}L_1\le\lambda\le\beta-\xi L_2
\tag{2.3.12}
$$
and $V_\xi(t)=\|U_1(t)-U_2(t)\|_{X_1}-\xi\|V_1(t)-V_2(t)\|_{X_2}$.

Proof. Using the notation $\psi_1(t)=\|U_1(t)-U_2(t)\|_{X_1}$ and $\psi_2(t)=\|V_1(t)-V_2(t)\|_{X_2}$ we can rewrite relations (2.3.5) and (2.3.3) in the form
$$
\psi_1(t)\le e^{-\beta(t-s)}\psi_1(s)+L_1\int_s^te^{-\beta(t-\tau)}\psi_2(\tau)\,d\tau,\qquad t\ge s,
$$
and
$$
\psi_2(t)\ge e^{-\nu(t-s)}\psi_2(s)-L_2\int_s^te^{-\nu(t-\tau)}\psi_1(\tau)\,d\tau,\qquad t\ge s.
$$
Since $V_\xi(t)=\psi_1(t)-\xi\psi_2(t)$, one can see that
$$
\begin{aligned}
e^{\lambda t}V_\xi(t)-e^{\lambda s}V_\xi(s)
&\le\big[e^{\lambda t}e^{-\beta(t-s)}-e^{\lambda s}\big]\psi_1(s)
-\xi\big[e^{\lambda t}e^{-\nu(t-s)}-e^{\lambda s}\big]\psi_2(s)\\
&\quad+L_1e^{\lambda t}\int_s^te^{-\beta(t-\tau)}\psi_2(\tau)\,d\tau
+\xi L_2e^{\lambda t}\int_s^te^{-\nu(t-\tau)}\psi_1(\tau)\,d\tau.
\end{aligned}
$$
Let $t_*\in\mathbb{R}$, $t>s$, and $t,s\to t_*$. Then, after obvious calculations, we obtain that
$$
D^+\big[e^{\lambda t_*}V_\xi(t_*)\big]:=\limsup_{t,s\to t_*,\,t>s}\frac{1}{t-s}\Big(e^{\lambda t}V_\xi(t)-e^{\lambda s}V_\xi(s)\Big)
\le\big[\lambda-\beta+\xi L_2\big]e^{\lambda t_*}\psi_1(t_*)+\big[L_1-\xi(\lambda-\nu)\big]e^{\lambda t_*}\psi_2(t_*).
$$
Since $\lambda$ possesses property (2.3.12) and $\psi_1(t_*),\psi_2(t_*)\ge0$, we can see that $D^+\big[e^{\lambda t_*}V_\xi(t_*)\big]\le0$ for every $t_*\in\mathbb{R}$. By the contradiction argument, this implies that $e^{\lambda t}V_\xi(t)$ is a non-increasing function and thus (2.3.11) holds.

Below, we choose $\xi=\sqrt{L_1L_2^{-1}}$. Indeed, with this choice $\xi^{-1}L_1+\xi L_2=2\sqrt{L_1L_2}$, so the gap condition in (2.3.10) turns into relation (2.3.6).
Construction of Invariant Manifold

Let $L_\xi:=\mathrm{Lip}^0_\xi(X_2,X_1)$ be the class of mappings (generating graphs) $\gamma:X_2\to X_1$ possessing the properties
$$
\gamma(0)=0,\qquad\|\gamma(\eta_1)-\gamma(\eta_2)\|_{X_1}\le\xi\|\eta_1-\eta_2\|_{X_2}.
$$
We equip this class with the norm
$$
|\gamma|_L=\sup_{\eta\in X_2,\,\eta\ne0}\frac{\|\gamma(\eta)\|_{X_1}}{\|\eta\|_{X_2}},
$$
which makes $L_\xi$ a complete metric space. For every $\gamma\in L_\xi$ and every interval $[a,b]$, we consider the problem
$$
\begin{aligned}
& U_t=F_1(U,V),\quad t\in(a,b),\qquad U(a)=\gamma(V(a)),\\
& V_t=F_2(U,V),\quad t\in(a,b),\qquad V(b)=\eta\in X_2.
\end{aligned}
\tag{2.3.13}
$$
More precisely, for fixed $\gamma\in L_\xi$ and $\eta\in X_2$, we are looking for a pair $(U,V)$ from the space $C([a,b],X_1\times X_2)$ satisfying (2.3.13).

Proposition 2.3.5. Assume that
$$
q(\xi,b-a):=(b-a)\cdot\max\big\{(1+\xi)L_2e^{\nu(b-a)},\,L_1\big\}<1.
\tag{2.3.14}
$$
Then, for every fixed $\gamma\in L_\xi$ and $\eta\in X_2$ problem (2.3.13) has a unique solution.

Proof. We apply a fixed point argument in the space $Y=C([a,b],X_1\times X_2)$. For this, we define the mapping $T$ in $Y$ by the formula $T(U,V)=(U^*,V^*)$, where
$(U^*,V^*)\in C([a,b],X_1\times X_2)$ solves the problem
$$
\begin{cases}
U^*_t=F_1(U^*,V),\quad t\in(a,b),\qquad U^*(a)=\gamma(V^*(a)),\\[2pt]
V^*_t=F_2(U,V^*),\quad t\in(a,b),\qquad V^*(b)=\eta\in X_2.
\end{cases}
\tag{2.3.15}
$$
To obtain $(U^*,V^*)$, we first solve the second equation in (2.3.15), finding $V^*(t)$. Then we construct the initial data for the first equation. It is obvious from Assumption 2.3.1(3,4) that $T$ is a mapping in $C([a,b],X_1\times X_2)$. Now we prove that it is a contraction. Let $T(U_i,V_i)=(U_i^*,V_i^*)=:W_i^*$, $i=1,2$. It follows from (2.3.5) that
$$
\begin{aligned}
\max_{t\in[a,b]}\|U_1^*(t)-U_2^*(t)\|_{X_1}
&\le\|\gamma(V_1^*(a))-\gamma(V_2^*(a))\|_{X_1}+L_1(b-a)\max_{t\in[a,b]}\|V_1(t)-V_2(t)\|_{X_2}\\
&\le\xi\|V_1^*(a)-V_2^*(a)\|_{X_2}+L_1(b-a)\max_{t\in[a,b]}\|V_1(t)-V_2(t)\|_{X_2}.
\end{aligned}
$$
By (2.3.3) we have that
$$
\max_{t\in[a,b]}\|V_1^*(t)-V_2^*(t)\|_{X_2}\le L_2(b-a)e^{\nu(b-a)}\max_{t\in[a,b]}\|U_1(t)-U_2(t)\|_{X_1}.
$$
If in the space $Y=C([a,b],X_1\times X_2)$ we define a norm by the formula
$$
|W|_Y=\max_{t\in[a,b]}\|U(t)\|_{X_1}+\max_{t\in[a,b]}\|V(t)\|_{X_2},\qquad W(t)=(U(t),V(t))\in Y,
$$
then the above inequalities imply that
$$
|W_1^*-W_2^*|_Y\le q(\xi,b-a)\,|W_1-W_2|_Y,
$$
where $q(\xi,b-a)$ is given by (2.3.14) and $W_i=(U_i,V_i)$. Thus, $T$ is a contraction. This completes the proof of Proposition 2.3.5.

Let (2.3.14) and (2.3.10) be in force. We define the mapping $K_{a,b}$ on $L_\xi$ by the formula
$$
[K_{a,b}\gamma](\eta)=U(b)=:\mu(\eta),\qquad\gamma\in L_\xi,
$$
where $(U,V)$ is the solution to problem (2.3.13).

Proposition 2.3.6. Let (2.3.10) and (2.3.14) be in force. The mapping $K_{a,b}$ possesses the following properties:
• $K_{a,b}L_\xi\subset L_\xi$;
• If $\xi=\sqrt{L_1L_2^{-1}}$, then $K_{a,b}$ is a contraction on $L_\xi$;
• In the latter case, the unique fixed point $\gamma^*$ does not depend on $a$ and $b$, and $U(a)=\gamma^*(V(a))$ implies $U(t)=\gamma^*(V(t))$ for all $t\ge a$ for every solution $(U(t),V(t))$ to problem (2.3.1) on $[a,+\infty)$.

Proof. 1. If $\eta=0$ and $\gamma(0)=0$, then $(U(t),V(t))\equiv(0,0)$ solves (2.3.13). By the uniqueness statement in Proposition 2.3.5 we have that $[K_{a,b}\gamma](0)=0$. Let us prove that $K_{a,b}\gamma$ is Lipschitz with the same constant as $\gamma$. It follows from Proposition 2.3.4, applied to the pair of solutions $(U_i,V_i)$ to problem (2.3.13) with $\eta=\eta_i$, $i=1,2$, that $e^{\lambda b}V_\xi(b)\le e^{\lambda a}V_\xi(a)$, where in our case
$$
V_\xi(b)=\|U_1(b)-U_2(b)\|_{X_1}-\xi\|\eta_1-\eta_2\|_{X_2}
$$
and
$$
V_\xi(a)=\|\gamma(V_1(a))-\gamma(V_2(a))\|_{X_1}-\xi\|V_1(a)-V_2(a)\|_{X_2}.
$$
As $\gamma\in L_\xi$, we have that $V_\xi(a)\le0$ and hence $V_\xi(b)\le0$. This implies that $K_{a,b}\gamma$ is Lipschitz with the same constant $\xi$. Thus, $K_{a,b}\gamma\in L_\xi$.

2. Now we prove that $K_{a,b}$ is a contraction on $L_\xi$ for $\xi=\sqrt{L_1L_2^{-1}}$. Let $(U_i,V_i)$ be solutions to problem (2.3.13) with $\gamma_i\in L_\xi$, $i=1,2$, and with the same $\eta$. Using Proposition 2.3.4 we have that
$$
\begin{aligned}
e^{\lambda b}\|[K_{a,b}\gamma_1](\eta)-[K_{a,b}\gamma_2](\eta)\|_{X_1}
&=e^{\lambda b}\|U_1(b)-U_2(b)\|_{X_1}\\
&\le e^{\lambda a}\big(\|\gamma_1(V_1(a))-\gamma_2(V_2(a))\|_{X_1}-\xi\|V_1(a)-V_2(a)\|_{X_2}\big)\\
&\le e^{\lambda a}\|\gamma_1(V_1(a))-\gamma_2(V_1(a))\|_{X_1}+\Sigma,
\end{aligned}
$$
where
$$
\Sigma:=e^{\lambda a}\big(\|\gamma_2(V_1(a))-\gamma_2(V_2(a))\|_{X_1}-\xi\|V_1(a)-V_2(a)\|_{X_2}\big)\le0
$$
because $\gamma_2\in L_\xi$. Thus we obtain that
$$
\|[K_{a,b}\gamma_1](\eta)-[K_{a,b}\gamma_2](\eta)\|_{X_1}\le e^{-\lambda(b-a)}|\gamma_1-\gamma_2|_L\,\|V_1(a)\|_{X_2}.
\tag{2.3.16}
$$
Applying estimate (2.3.3) to the pair of solutions $(U_1(t),V_1(t))$ and $(0,0)$ yields
$$
\|V_1(t)\|_{X_2}\le e^{\nu(b-t)}\|\eta\|_{X_2}+L_2\int_t^be^{\nu(\tau-t)}\|U_1(\tau)\|_{X_1}\,d\tau,\qquad t\in[a,b].
$$
On the other hand, Proposition 2.3.4 applied to the same couple yields
$$
e^{\lambda\tau}\big(\|U_1(\tau)\|_{X_1}-\xi\|V_1(\tau)\|_{X_2}\big)
\le e^{\lambda a}\big(\|\gamma(V_1(a))\|_{X_1}-\xi\|V_1(a)\|_{X_2}\big)\le0.
$$
Thus, $\|U_1(\tau)\|_{X_1}\le\xi\|V_1(\tau)\|_{X_2}$ and hence
$$
\|V_1(t)\|_{X_2}\le e^{\nu(b-t)}\|\eta\|_{X_2}+\xi L_2\int_t^be^{\nu(\tau-t)}\|V_1(\tau)\|_{X_2}\,d\tau,\qquad t\in[a,b].
$$
By Gronwall's lemma we obtain that
$$
\|V_1(t)\|_{X_2}\le e^{(\nu+\xi L_2)(b-t)}\|\eta\|_{X_2},\qquad t\in[a,b].
$$
Thus, from (2.3.16) we get that
$$
\|[K_{a,b}\gamma_1](\eta)-[K_{a,b}\gamma_2](\eta)\|_{X_1}
\le e^{-(\lambda-\nu-\xi L_2)(b-a)}|\gamma_1-\gamma_2|_L\,\|\eta\|_{X_2}.
$$
We recall that $\xi=\sqrt{L_1L_2^{-1}}$. Therefore, by (2.3.9) $K_{a,b}$ is a contraction on $L_\xi$.

3. We already know that for any interval $[a,b]$ satisfying (2.3.14) there exists a unique $\gamma^*_{a,b}\in L_\xi$ such that
$$
\begin{aligned}
& U_t=F_1(U,V),\quad t\in(a,b),\qquad U(a)=\gamma^*_{a,b}(V(a)),\quad U(b)=\gamma^*_{a,b}(V(b)),\\
& V_t=F_2(U,V),\quad t\in(a,b),\qquad V(b)=\eta\in X_2,
\end{aligned}
\tag{2.3.17}
$$
for some solution $(U(t),V(t))$. Since problem (2.3.1) is autonomous, owing to the uniqueness stated in Proposition 2.3.5 we can see that $\gamma^*_{a,b}$ depends on the difference $b-a$ only, i.e., $\gamma^*_{a,b}=\gamma^*_{0,b-a}$. In particular,
$$
\gamma^*_{\frac{a+b}{2},b}=\gamma^*_{0,\frac{b-a}{2}}=\gamma^*_{a,\frac{a+b}{2}}.
$$
Therefore, using (2.3.17) written for the intervals $[(a+b)/2,b]$ and $[a,(a+b)/2]$, we can construct (by gluing) a solution to (2.3.17) on the interval $[a,b]$ with $\gamma^*_{a,b}:=\gamma^*_{0,\frac{b-a}{2}}$. Thus, by the uniqueness of the fixed point stated in Proposition 2.3.6 we obtain
$$
\gamma^*_{\frac{a+b}{2},b}=\gamma^*_{a,\frac{a+b}{2}}=\gamma^*_{a,b}.
$$
Continuing this division procedure we can conclude that there exists $\gamma^*\in L_\xi$ such that the property $U(a)=\gamma^*(V(a))$ implies that $U(t_{i,k})=\gamma^*(V(t_{i,k}))$ for all
$$
t_{i,k}=a+\frac{i}{2^k}(b-a),\qquad i=0,\dots,2^k,\ k=1,2,\dots.
$$
Since both $(U(t),V(t))$ and $\gamma^*$ are continuous, we can conclude that
$$
U(a)=\gamma^*(V(a))\ \Longrightarrow\ U(t)=\gamma^*(V(t))\quad\forall t\in[a,b].
$$
Now, we can easily extend this relation from the interval $[a,b]$ to the semi-axis $[a,+\infty)$. In particular, Proposition 2.3.6 implies that there exists $m=\gamma^*\in L_\xi$ with $\xi=\sqrt{L_1L_2^{-1}}$ such that the manifold (2.3.7) is invariant.

Remark 2.3.7. We note for further use that the argument given in the proof of the last part of Proposition 2.3.6 shows that the problem in (2.3.17) has a unique solution for every interval $[a,b]$.
Exponential Attraction

Now, we prove the attraction property in (2.3.8). Let $(U,V)$ be a solution to the problem in (2.3.1) on $[0,+\infty)$. By Remark 2.3.7, for every $t_0>0$ there exists a solution $(\widehat U,\widehat V)$ on $[0,t_0]$ such that
$$
\widehat V(t_0)=V(t_0),\qquad\widehat U(t)=m(\widehat V(t)),\quad t\in[0,t_0].
$$
Applying Proposition 2.3.4 to the couple $(U,V)$ and $(\widehat U,\widehat V)$ yields
$$
e^{\lambda t_0}V_\xi(t_0)\le V_\xi(0),
$$
where in our case $V_\xi(t_0)=\|U(t_0)-\widehat U(t_0)\|_{X_1}$ and
$$
V_\xi(0)=\|U(0)-m(\widehat V(0))\|_{X_1}-\xi\|V(0)-\widehat V(0)\|_{X_2}
\le\|U(0)-m(V(0))\|_{X_1}+\|m(V(0))-m(\widehat V(0))\|_{X_1}-\xi\|V(0)-\widehat V(0)\|_{X_2}.
$$
Since $m\in L_\xi$, we have that $V_\xi(0)\le\|U(0)-m(V(0))\|_{X_1}$. Therefore, we obtain that
$$
\|U(t_0)-\widehat U(t_0)\|_{X_1}\le e^{-\lambda t_0}\|U(0)-m(V(0))\|_{X_1}
$$
for every $t_0>0$. This implies (2.3.8). The proof of Theorem 2.3.3 is complete.
2.3.3 Application: Coupled Parabolic–Hyperbolic System Revisited

In principle, Theorem 2.3.3 can be applied to all PDE models considered in Sect. 2.2. We do not give details of this analysis and consider only the situation when the hypotheses of Theorem 2.2.6 are not assumed. More precisely, we consider only an application of Theorem 2.3.3 to a model with a nonlinear main part.

Let $O$ be a bounded domain in $\mathbb{R}^d$, $\Gamma=\partial O$ a $C^1$-manifold, and $2\le p<\infty$. Suppose that for each integer $j=0,1,\dots,d$ we are given a continuous function $a_j:\mathbb{R}\to\mathbb{R}$ such that
$$
[a_j(\xi_1)-a_j(\xi_2)](\xi_1-\xi_2)\ge\rho|\xi_1-\xi_2|^2,\qquad\xi_1,\xi_2\in\mathbb{R},
\tag{2.3.18}
$$
where $\rho>0$, and
$$
c_0|\xi|^p-k\le a_j(\xi)\xi\le c_1|\xi|^p+k,\qquad\xi\in\mathbb{R},
$$
for some positive constants $c_0$, $c_1$ and $k$. Let $\Gamma_0$ be a measurable subset of $\Gamma$ ($\Gamma_0=\emptyset$ or $\Gamma_0=\Gamma$ are allowed) and $\Gamma_1=\Gamma\setminus\Gamma_0$. Let $\beta_0$ be a positive parameter. We consider the following coupled system consisting of the parabolic problem
$$
\begin{aligned}
& u_t-\sum_{j=1}^d\partial_j[a_j(\partial_j u)]+\beta_0u=f_1(u,w,w_t),\qquad u=0\ \text{on}\ \Gamma_0,\\
& \sum_{j=1}^d a_j(\partial_j u)\,n_j+a_0(u)=0\ \text{on}\ \Gamma_1,
\end{aligned}
\tag{2.3.19}
$$
where $n=(n_1,\dots,n_d)$ is the outer normal vector, and the hyperbolic one
$$
w_{tt}+\tilde Bw=\kappa f_2(u,w,w_t),
\tag{2.3.20}
$$
where $w=(w_1,\dots,w_m)\in\mathbb{R}^m$ is a vector function ($m\ge1$) and $\tilde B$ is a uniformly elliptic operator of the form
$$
\tilde B=-\sum_{i,j=1}^d\partial_i\big(b_{ij}(x)\partial_j\,\cdot\,\big)+b_0(x)
\tag{2.3.21}
$$
subjected to the Dirichlet boundary conditions, with $L^\infty$-coefficients $b_{ij}=b_{ji}$ and $b_0$. We assume that $\lambda_{\tilde B}:=\inf\operatorname{spec}(\tilde B)>0$. For some reasons, which will become clear later, we also put the parameter $\kappa>0$ in (2.3.20). The functions
$$
f_1:\mathbb{R}^{2m+1}\to\mathbb{R}\qquad\text{and}\qquad f_2:\mathbb{R}^{2m+1}\to\mathbb{R}^m
$$
are zero at zero, globally Lipschitz functions and possess the properties
$$
[f_1(u,\zeta)-f_1(u^*,\zeta^*)](u-u^*)\le\big(l_0^1|u-u^*|+l_1^1\|\zeta-\zeta^*\|_{\mathbb{R}^{2m}}\big)|u-u^*|
\tag{2.3.22}
$$
for all $u,u^*\in\mathbb{R}$ and $\zeta,\zeta^*\in\mathbb{R}^{2m}$, and
$$
\|f_2(u,\zeta)-f_2(u^*,\zeta^*)\|_{\mathbb{R}^m}\le l_0^2|u-u^*|+l_1^2\|\zeta-\zeta^*\|_{\mathbb{R}^{2m}}
\tag{2.3.23}
$$
for all $u,u^*\in\mathbb{R}$ and $\zeta,\zeta^*\in\mathbb{R}^{2m}$.
We denote $V=(w,w_t)^T$ and rewrite the (master) equation in (2.3.20) as a first-order equation of the form
$$
V_t+\begin{pmatrix}0 & -\mathrm{id}\\ \tilde B & 0\end{pmatrix}V
=\kappa\begin{pmatrix}0\\ f_2(U,V)\end{pmatrix}
\tag{2.3.24}
$$
in the space $X_2=\big[H^1_0(O)\big]^m\times[L^2(O)]^m$. The slave equation in (2.3.19) for $U=u$ can be considered in $X_1=L^2(O)$ and has the form $U_t+\tilde A_1(U)=F_1(U,V)$, where $\tilde A_1$ is the monotone operator defined by the Green formula
$$
(\tilde A_1(u),u^*)=\sum_{j=1}^d\int_O a_j(\partial_j u)\,\partial_j u^*\,dx
+\beta_0\int_O uu^*\,dx+\int_{\Gamma_1}a_0(u)u^*\,d\Gamma
$$
and by the boundary conditions in (2.3.19), where $u^*$ satisfies the boundary conditions. The nonlinear mapping $F_1$ is the Nemytskii operator corresponding to the function $f_1(u,w,w_t)$. Thus, the coupled system (2.3.19), (2.3.20) can be written as an equation in $X_1\times X_2$ with a maximal monotone operator perturbed by a globally Lipschitz term, and hence we can use the results from Showalter [153, Chap. III, IV] to check Assumption 2.3.1(1). Here, we deal with so-called generalized solutions (see Showalter [153, p. 183] for the definition), which can be approximated by functions satisfying (2.3.19) for almost all $x\in O$ and $t>0$. The latter fact is important for the proof of the generalized dichotomy estimates in the case considered. Obviously, Assumption 2.3.1(2) is also in force in our case.

To establish Assumption 2.3.1(3), in the space $X_2=\big[H^1_0(O)\big]^m\times[L^2(O)]^m$ we consider the norm
$$
\|V\|^2_{X_2}=\int_O\big(|\tilde B^{1/2}w_0|^2+|w_1|^2\big)dx,\qquad V=(w_0,w_1)^T.
$$
We first prove the following assertion.

Lemma 2.3.8. Let $u_1,u_2\in C([a,b],X_1)$ be fixed and let $V_i(t)=(w^i(t),w^i_t(t))$ be a solution to (2.3.24) with $U=u=u_i$, $i=1,2$. Then,
$$
\|V_1(t)-V_2(t)\|_{X_2}\le e^{\nu(b-t)}\|V_1(b)-V_2(b)\|_{X_2}
+L_2\int_t^be^{\nu(\tau-t)}\|U_1(\tau)-U_2(\tau)\|_{L^2(O)}\,d\tau
\tag{2.3.25}
$$
for every $t\in[a,b]$, with
$$
\nu=\kappa\,l_1^2\max\big\{1,1/\sqrt{\lambda_{\tilde B}}\big\},\qquad L_2=\kappa\,l_0^2,
$$
where $\tilde B$ is the self-adjoint operator in $[L^2(O)]^m$ given by (2.3.21) with homogeneous Dirichlet boundary conditions.

Proof. The difference $w=w^1-w^2$ satisfies the equation
$$
w_{tt}+\tilde Bw=\kappa\big[f_2(u_1,w^1,w^1_t)-f_2(u_2,w^2,w^2_t)\big].
$$
Therefore, using (2.3.23) and the standard energy argument, we obtain that
$$
\frac12\frac{d}{dt}\|V_1-V_2\|^2_{X_2}
\ge-\Big(\kappa l_0^2\|U_1-U_2\|_{L^2(O)}+\kappa l_1^2\max\big\{1,1/\sqrt{\lambda_{\tilde B}}\big\}\|V_1-V_2\|_{X_2}\Big)\|w_t\|_{L^2(O)},
$$
where we have also used the obvious relation $\|w\|_{L^2(O)}\le\lambda_{\tilde B}^{-1/2}\|\tilde B^{1/2}w\|_{L^2(O)}$. This implies that
$$
\frac{d}{dt}\|V_1-V_2\|_{X_2}+\nu\|V_1-V_2\|_{X_2}\ge-L_2\|U_1-U_2\|_{L^2(O)}
$$
for almost all $t\in[a,b]$. Multiplying by $e^{\nu t}$ and integrating over the interval $[t,b]$ we get (2.3.25): indeed,
$$
\frac{d}{dt}\big(e^{\nu t}\|V_1(t)-V_2(t)\|_{X_2}\big)
=e^{\nu t}\frac{d}{dt}\|V_1(t)-V_2(t)\|_{X_2}+\nu e^{\nu t}\|V_1(t)-V_2(t)\|_{X_2}
\ge-L_2\|U_1(t)-U_2(t)\|_{L^2(O)}e^{\nu t}
$$
and therefore
$$
e^{\nu b}\|V_1(b)-V_2(b)\|_{X_2}-e^{\nu t}\|V_1(t)-V_2(t)\|_{X_2}
\ge-L_2\int_t^b\|U_1(\tau)-U_2(\tau)\|_{L^2(O)}e^{\nu\tau}\,d\tau.
$$
Remark 2.3.9. As in the case of linear dichotomy (see Remark 2.2.11), we note that it is not important in Lemma 2.3.8 that $O$ is a bounded domain and that Dirichlet boundary conditions hold for $w$. The only facts that we use in the proof are that (i) $\tilde B$ is a self-adjoint operator and that (ii) $\inf\operatorname{spec}(\tilde B)>0$. Thus, we can consider unbounded domains and equip the differential operation in (2.3.21) with other (self-adjoint) boundary conditions.

Now, we establish Assumption 2.3.1(4). Let $V_i=(w_0^i,w_1^i)\in C([a,b],X_2)$ be given and let $U_i$ be the corresponding solution to (2.3.19), $i=1,2$. Let $Z=U_1-U_2$. Using (2.3.22) and the monotonicity in (2.3.18) we have that
$$
\frac12\frac{d}{dt}\|Z\|^2_{L^2(O)}+\beta_0\|Z\|^2_{L^2(O)}
\le\Big(l_0^1\|Z\|_{L^2(O)}+l_1^1\max\big\{1,1/\sqrt{\lambda_{\tilde B}}\big\}\|V_1-V_2\|_{X_2}\Big)\|Z\|_{L^2(O)}.
$$
This implies (for $Z\ne0$, and hence in general) that
$$
\frac{d}{dt}\|Z\|_{L^2(O)}+(\beta_0-l_0^1)\|Z\|_{L^2(O)}
\le l_1^1\max\big\{1,1/\sqrt{\lambda_{\tilde B}}\big\}\|V_1-V_2\|_{X_2}
$$
for almost all $t\in[a,b]$, where $\lambda_{\tilde B}$ is the same as in Lemma 2.3.8. Therefore, (2.3.5) holds with
$$
\beta=\beta_0-l_0^1,\qquad L_1=l_1^1\max\big\{1,1/\sqrt{\lambda_{\tilde B}}\big\}.
$$
This establishes Assumption 2.3.1(4). Lemma 2.3.8 gives us Assumption 2.3.1(3). Thus, we can apply Theorem 2.3.3 to the coupled problem (2.3.19), (2.3.20) under the condition
$$
\beta_0-l_0^1-\kappa\,l_1^2\max\big\{1,1/\sqrt{\lambda_{\tilde B}}\big\}
>2\Big(\kappa\,l_0^2\,l_1^1\max\big\{1,1/\sqrt{\lambda_{\tilde B}}\big\}\Big)^{1/2}.
$$
In particular, if $\beta_0-l_0^1>0$, then there exists $\kappa_0>0$ such that for every $0\le\kappa\le\kappa_0$ there exists an exponentially attracting manifold of the form (2.3.7), and thus system (2.3.20) synchronizes (2.3.19). As was already mentioned, coupled models like (2.3.19) arise in the study of wave phenomena that are heat-generating or temperature related. In contrast to the results known from the previous section, our main achievement is that in our approach we can treat the case of a nonlinear elliptic part and nonlinear boundary conditions in the slave problem in (2.3.19). We can also treat the case of unbounded domains and other boundary conditions. Since the assumption in (2.3.22) is one-sided, we can also add to the first line of (2.2.44) a nonlinear term $f_0(u)$ which is monotone but not globally Lipschitz.
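The threshold $\kappa_0$ mentioned above can be made explicit by viewing the gap condition as a quadratic inequality in $\sqrt{\kappa}$. The following small helper is our own illustration (the parameter names mirror the constants above, and the sample values are arbitrary); it is not part of the text.

```python
import numpy as np

def kappa_threshold(beta0, l01, l02, l11, l12, lam_B):
    """Largest kappa0 such that
       beta0 - l01 - kappa*l12*M > 2*sqrt(kappa*l02*l11*M),  M = max(1, 1/sqrt(lam_B)),
       holds for all 0 <= kappa < kappa0 (illustrative helper, not from the text)."""
    M = max(1.0, 1.0 / np.sqrt(lam_B))
    if beta0 - l01 <= 0:
        return 0.0
    a, b, c = l12 * M, 2.0 * np.sqrt(l02 * l11 * M), -(beta0 - l01)  # a*s^2 + b*s + c < 0, s = sqrt(kappa)
    if a == 0:
        return np.inf if b == 0 else ((beta0 - l01) / b) ** 2
    s_star = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return s_star ** 2

# sample constants: the condition holds for kappa < 1.0 in this case
print(kappa_threshold(beta0=3.0, l01=1.0, l02=0.5, l11=0.5, l12=1.0, lam_B=1.0))
```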
2.4 Parabolic–Hyperbolic Systems with Singular Terms and Thermoelasticity

Our main motivating example in this section is the following PDE model for thermoelasticity (see, for example, Jiang/Racke [96]):
$$
\begin{aligned}
& \theta_t-\nu\Delta\theta=-\delta\,\operatorname{div}w_t+G(\theta,\nabla\theta), && t>0,\ x\in O,\\
& w_{tt}+\gamma w_t-\mu\Delta w-(\mu+\lambda)\nabla\operatorname{div}w=-\kappa\nabla\theta+F(w,\nabla w), && t>0,\ x\in O.
\end{aligned}
\tag{2.4.1}
$$
Here, $O$ is a bounded $C^1$-smooth domain in $\mathbb{R}^d$, $d=1,2,3$, $w=w(x,t)\in\mathbb{R}^d$ is the displacement vector, $\theta=\theta(x,t)$ is the temperature, and $\mu,\lambda,\kappa,\nu,\delta$ are positive constants, where $\mu$ and $\lambda$ are the Lamé moduli. The parameter $\gamma\ge0$ describes resistance forces. The functions $F$ and $G$ are responsible for the nonlinearity of the medium. We also need to equip equations (2.4.1) with boundary conditions. For example, we can consider the Dirichlet-type boundary conditions
$$
w=0,\quad\theta=0\qquad\text{for}\ t>0,\ x\in\partial O.
$$
The system in (2.4.1) describes thermoelastic phenomena in a continuum medium. We refer to Jiang/Racke [96], Muñoz Rivera/Barreto [119], Muñoz Rivera/Racke [120] and to the literature quoted there for a mathematical analysis of thermoelastic systems such as (2.4.1). In this section we prove the existence of an exponentially attracting invariant manifold for the coupled system and show that this system can be reduced to a single hyperbolic equation with modified nonlinearity. This means that the temperature is a slave variable with respect to the displacement and thus can be excluded from the model. The structure of the system in (2.4.1) does not allow us to apply the results established in Sects. 2.2 and 2.3. To cover this case, we need some modification of the scheme presented in Sect. 2.2. The approach developed in this section also makes it possible to consider the case of a nonlinear heat transfer on the free boundary of the body. This means that we can deal with nonlinear boundary conditions of the form
$$
\mu\frac{\partial w}{\partial n}+(\mu+\lambda)\operatorname{div}w\cdot n=0,
\qquad
\frac{\partial\theta}{\partial n}=h(w,\theta)\qquad\text{for}\ t>0,\ x\in\partial O,
$$
where $n$ is the outer unit normal vector and $h$ is a Lipschitz function. This section is partially based on the paper Chueshov [38].
2.4.1 Abstract Form of the Model

Thermoelastic systems such as (2.4.1) can be written in the following abstract form. Let $X_1$ and $X_2$ be infinite-dimensional real Hilbert spaces. Our main object is the following system of differential equations
$$
\begin{aligned}
& U_t+\nu\tilde A_1U=F_{11}(U,V)+F_{12}(V), && t>0,\ \text{in}\ X_1,\\
& V_t+\tilde A_2V=F_2(U,V), && t>0,\ \text{in}\ X_2,
\end{aligned}
\tag{2.4.2}
$$
where $\nu>0$ is a parameter. We impose the following hypotheses.

Assumption 2.4.1. We suppose that

1. $\tilde A_1$ is a positive linear self-adjoint operator on $X_1$ with domain $D(\tilde A_1)$ generating the strongly continuous semigroup $S_1$; $\tilde A_2$ is the generator of a strongly continuous group $S_2$ on $X_2$ possessing the properties
$$
\|S_2(t)V\|_{X_2}\le e^{-\gamma t}\|V\|_{X_2},\ t\le0,
\qquad\text{and}\qquad
\|S_2(t)V\|_{X_2}\le\|V\|_{X_2},\ t\ge0,
\tag{2.4.3}
$$
for every $V\in X_2$ and some $\gamma\ge0$. Below, as in Chap. 1, we use the notation $X_1^\sigma=D(\tilde A_1^\sigma)$ for $\sigma>0$. If $\sigma<0$, then $X_1^\sigma$ is the completion of $X_1$ with respect to the norm $\|\tilde A_1^\sigma\cdot\|$.
2. $F_{11}$ and $F_2$ are nonlinear mappings, $F_{11}:X_1^\alpha\times X_2\to X_1$, $F_2:X_1^\alpha\times X_2\to X_2$, where $\alpha\in[0,1)$, and there exist constants $L_{11}$ and $L_2$ such that
$$
\|F_{11}(U,V)-F_{11}(\hat U,\hat V)\|_{X_1}
\le L_{11}\Big(\|\tilde A_1^\alpha(U-\hat U)\|^2_{X_1}+\|V-\hat V\|^2_{X_2}\Big)^{1/2}
$$
and
$$
\|F_2(U,V)-F_2(\hat U,\hat V)\|_{X_2}
\le L_2\Big(\|\tilde A_1^\alpha(U-\hat U)\|^2_{X_1}+\|V-\hat V\|^2_{X_2}\Big)^{1/2},
\tag{2.4.4}
$$
for $U,\hat U\in X_1^\alpha$ and $V,\hat V\in X_2$.
3. The mapping $F_{12}:X_2\to X_1^{-\beta}$ possesses the property
$$
\big\|\tilde A_1^{-\beta}\big(F_{12}(V)-F_{12}(\hat V)\big)\big\|_{X_1}\le L_{12}\|V-\hat V\|_{X_2}
$$
for some $0\le\beta\le1-\alpha$, where $L_{12}$ is a positive constant.

We note that to cover the thermoelasticity model above we need to include second-order evolution equations in the framework. We can do this in the same way as in Sect. 2.2.4. For this we suppose $X_2=D(\tilde B^{1/2})\times W_0$, where $W_0$ is a Hilbert space and $\tilde B$ is a positive linear self-adjoint operator on $W_0$ with domain $D(\tilde B)$. In this case, we can take for the second equation of (2.4.2) an equation with the coefficients
$$
\tilde A_2=\begin{pmatrix}0 & -\mathrm{id}\\ \tilde B & \gamma\end{pmatrix},\qquad
F_2(U,V)=\begin{pmatrix}0\\ f_2(u,w,w_t)\end{pmatrix},\qquad\gamma\ge0,
$$
i.e., the second equation of (2.4.2) is the first-order form of the hyperbolic equation
$$
w_{tt}+\gamma w_t+\tilde Bw=f_2(u,w,w_t),\quad t>0,\ \text{in}\ W_0,
\tag{2.4.5}
$$
where $\gamma\ge0$ is a parameter. The first equation of (2.4.2) together with (2.4.5) is an abstract model for the thermoelasticity in (2.4.1). We note that if $F_{11}$ and $F_2$ are defined by globally Lipschitz functions of their arguments (for instance, $f_{11}=G$ as a function of $(\theta,\nabla\theta)\in\mathbb{R}^{1+d}$), then Assumption 2.4.1 holds for problem (2.4.1) with $W_0=[L^2(O)]^d$, $X_1=L^2(O)$ and $\alpha=\beta=1/2$ ($O=(0,1)$ if $d=1$). The operators $\tilde B$ and $\tilde A_1$ are given by the differential operations $-\mu\Delta-(\mu+\lambda)\nabla\operatorname{div}$ (or $-\mu\partial_{xx}$) and $-\Delta$ with homogeneous Dirichlet boundary conditions. Since we do not assume any compactness properties concerning the resolvents of the operators $\tilde A_i$, problems such as (2.4.1) on unbounded domains can also be included within the scope of our theory after an appropriate redetermination of linear and nonlinear terms in the equations.
Our main goal is to show the possibility of master–slave synchronization in the problem in (2.4.2), which allows us to exclude a parabolic-type equation from our considerations. We prove that for $\nu$ large enough there exists, in the phase space $X=X_1\times X_2$ of the dynamical system generated by (2.4.2), an invariant exponentially attracting manifold of the form
$$
M=\{(m(V),V):V\in X_2\}\subset X,
$$
where $m:X_2\to X_1$ is a Lipschitz mapping (for the precise statement see Theorem 2.4.9 below). Thus, we observe master–slave synchronization in (2.4.1) with the temperature as a slave variable. We also note that in the case $\alpha=\beta=0$ we can apply Theorem 2.2.6. However, to cover our main case $\alpha+\beta=1$ we have to modify the scheme of the Lyapunov–Perron method presented in Sect. 2.2.
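The largeness requirement on $\nu$ will be made explicit below as the threshold $\nu_0$ in (2.4.27). The following small helper is our own illustration (not part of the text); the parameter names mirror the constants of Assumption 2.4.1 and the sample values are arbitrary.

```python
import numpy as np

def nu_threshold(gamma, lam1, L11, L2, L12, alpha, beta):
    """nu0 from (2.4.27) below: the damping nu of the parabolic component must exceed
       this value for the invariant manifold of Theorem 2.4.9 to exist.
       Illustrative helper with our own parameter names, not from the text."""
    return gamma / lam1 + (2.0 / lam1) * (L2 + lam1**alpha * L11 + lam1**(alpha + beta) * L12)

# arbitrary sample constants (thermoelastic-type case alpha = beta = 1/2)
print(nu_threshold(gamma=0.5, lam1=1.0, L11=1.0, L2=1.0, L12=1.0, alpha=0.5, beta=0.5))
```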
2.4.2 Generation of a Dynamical System

We start with the well-posedness of the system in (2.4.2) and rewrite it as a first-order differential equation of the form
$$
\frac{d}{dt}Y+AY=F(Y),\quad t>0,\qquad Y(0)=Y_0,
\tag{2.4.6}
$$
where $Y=Y(t)=(U(t),V(t))$ and
$$
A=\begin{pmatrix}\nu\tilde A_1 & 0\\ 0 & \tilde A_2\end{pmatrix},\qquad
F(Y)=\begin{pmatrix}F_{11}(U,V)+F_{12}(V)\\ F_2(U,V)\end{pmatrix}.
$$
We consider the problem in (2.4.6) in the scale of spaces $X^\sigma=X_1^\sigma\times X_2$, $\sigma\in\mathbb{R}$. We equip the spaces $X^\sigma$ with the norms
$$
|Y|_\sigma=\Big(\|\tilde A_1^\sigma U\|^2_{X_1}+\|V\|^2_{X_2}\Big)^{1/2},\qquad Y=(U,V).
$$

The Linear System

The homogeneous linear problem
$$
\frac{d}{dt}Y+AY=0
$$
generates the strongly continuous semigroup S(t) in the spaces X σ for every σ . It is clear that 1 S (t) 0 S(t) = , (2.4.7) 0 S2 (t) where S2 (t) is a strongly continuous group in X2 generated by the equation Wt + A˜ 2W = 0, t > 0, in X2 .
ν A˜ 1 generates the strongly continuous semigroup S1 . We have the following dichotomy estimates. Lemma 2.4.2. Let Q be the orthoprojector in the space X σ on the first component, i.e., Q(U,V ) = (U, 0) and P = id − Q. Then, | S(t)PY |σ ≤ e−γ t |PY |σ ,
t ≤ 0, Y ∈ X σ ,
| S(t)PY |σ ≤ |PY |σ , t ≥ 0, Y ∈ X σ ,
σ σ | S(t)QY |σ ≤ + λ1σ e−νλ1 t |QY |0 , t > 0, Y ∈ X σ , σ > 0, νt
(2.4.8) (2.4.9) (2.4.10)
where λ1 > 0 is the minimal point of the spectrum of the operator A˜ 1 . Proof. By (2.4.7) S(t)PY = (0, S2 (t)V ), where Y = (U,V ). Hence, the relations in (2.4.8) and (2.4.9) follow by (2.4.3). Since S(t)QY = (S1 (t)u, 0) for Y = (U,V ), using the standard argument (see Temam [161, p. 511] or Chueshov [36, Lemma 2.1.1]) we obtain (2.4.10). We need the following operator analog of Lemma 2.2.8. Lemma 2.4.3. Let SL (t) be a strongly continuous semigroup in some Hilbert space H with the generator L = L ∗ > 0. Let λmin > 0 be the minimal point of the spectrum of L . Then, for any 0 ≤ β ≤ 1 and μ ≥ 0 the mapping f (t) → I β [ f ](t) :=
t −∞
(L + μ )β SL (t − τ ) f (τ )d τ
(2.4.11)
is continuous from L2 (R, H) into L2 (R, D(L 1−β )) and the estimate R
(L + μ )α I β [ f ](t)2H dt ≤
(λmin + μ )2(α +β ) 2 λmin
R
f (t)2H dt
(2.4.12)
holds for any 0 ≤ β ≤ 1, −β ≤ α ≤ 1 − β and μ ≥ 0. Proof. By a density argument it is sufficient to establish the estimate in (2.4.12) for smooth functions f . We assume that f ∈ C0∞ (R, D(L )). Then, the integral in (2.4.11) exists and it is a continuous function with values in H. A straightforward
calculation gives for all ω ∈ R, √ β [ f ](ω ) = 2π · I
= =
∞
t
dt −∞ ∞ τ
−∞
∞ −∞
e−iω t I β [ f ](t)dt
SL (t − τ )e−iω (t−τ ) e−iωτ (L + μ )β f (τ )d τ
−iω (t−τ ) SL (t − τ )e dt e−iωτ (L + μ )β f (τ )d τ
−∞ ∞
∞
= (L + iω )−1 e−iωτ (L + μ )β f (τ )d τ −∞ √ = 2π (L + iω )−1 (L + μ )β fˆ(ω ). For the fourth equality, see Pazy [127]. Consequently, β [ f ](ω ) = (L + iω )−1 (L + μ )α +β fˆ(ω ) (L + μ )α I H H (λ + μ )α +β (λmin + μ )α +β ˆ ˆ ≤ sup f (ω )H f (ω )H ≤ λ + iω λmin λ ≥λmin Indeed, the function x →
(x + μ )α +β x
is decreasing for x > 0 for the parameters given. Therefore, using the Plancherel formula, we obtain (2.4.12). As a corollary of Lemma 2.4.3, we obtain the following assertion. Lemma 2.4.4. Assume the hypotheses of Lemma 2.4.3 hold. Let f ∈ L2 (0, T, H) for some T > 0. Then, for the semigroup SL generated by L β
I0 [ f ](t) =
t 0
SL (t − τ )L β f (τ )d τ ∈ L2 (0, T, D(L 1−β ))
(2.4.13)
and the estimate T 0
β
L α I0 [ f ](t)2H dt ≤ (2T )2−2(α +β )
T 0
f (t)2H dt
(2.4.14)
holds for any 0 ≤ β ≤ 1, −β ≤ α ≤ 1 − β . β Proof. It is clear that I0 [ f ](t) = I β [ f](t) for t ∈ [0, T ] and μ = 0, where f(t) = f (t) for t ∈ [0, T ] and f(t) = 0 when t ∈ [0, T ]. Therefore, Lemma 2.4.3 implies (2.4.14) with α = 1 − β .
Consider the case α = −β . It is clear that u(t) = L −β I0 [ f ](t) solves the problem ut + L u = f ,
u(0) = 0.
Indeed, L
−β
β I0 [ f ](t) =
t 0
L
−β
β
SL (t − τ )L f (τ )d τ =
t 0
SL (t − τ ) f (τ )d τ
where the right-hand side solves the above equation. Therefore, u(t)2H + 2
t 0
(L u, u)H d τ = 2
hence, u(t)2H ≤ 2
t
t 0
( f , u)H d τ ,
f (τ )H u(τ )H d τ ,
0
t ∈ [0, T ];
t ∈ [0, T ]
and thus T 0
u(τ )2H d τ ≤2
T t 0
≤(2T )2
0
f (τ )H u(τ )H d τ dt ≤ 2
1 T
2
0
f (τ )2H d τ +
1 T
2
0
T 0
f (τ )H u(τ )H
T τ
dtd τ
u(τ )2H d τ .
This implies the relation in (2.4.14) for α = −β . Now, using the interpolation estimate 1−α −β
L α vH ≤ cL −β vH
α +β
L 1−β vH
,
−β ≤ α ≤ 1 − β ,
we finally obtain (2.4.14) for any α ∈ [−β , 1 − β ].
Mild Solutions to the Nonlinear Problem The definition of mild solution given below is different from the notion introduced in Chap. 1 (see also Definition 2.2.3). Our intention to cover the critical case α + β = 1 enforces us to deal with L2 -type spaces L2 (0, T, X α ) instead of spaces of continuous functions. Definition 2.4.5. A function Y (t) ∈ L2 (0, T, X α ) is said to be a mild solution to the problem in (2.4.6) on the interval [0, T ] if Y (0) = Y0 and the relation Y (t) = R[Y ](t) := S(t)Y0 +
t 0
holds for almost all t ∈ [0, T ]. We have the following assertion.
S(t − τ )F(Y (τ ))d τ := S(t)Y0 + R0 [Y ](t) (2.4.15)
Theorem 2.4.6. Let Y0 ∈ X α −1/2 . The Cauchy problem (2.4.6) has a unique mild solution Y (t) on any interval [0, T ] provided that either α + β < 1 or α + β = 1 and ν > L12 , where L12 is the constant from Assumption 2.4.1(3). Moreover, if α −1/2 ≤ σ0 ≤ σ < min(1 − β , 1/2), then Y (t) ∈ C((0, T ], X σ ) and
|Y (t)|σ ≤ CT · t −σ +σ0 , t ∈ (0, T ],
(2.4.16)
for Y0 ∈ X σ0 , where CT is a constant. If σ0 = σ , then Y (t) ∈ C([0, T ], X σ ). Proof. We will use a fixed point argument. 1. We first prove that the equation in (2.4.15) has a unique solution in the space L2 (0, T, X α ) for T > 0 small enough. By the energy inequality of the Q-component of the linear problem to (2.4.15) 2ν
t 0
1/2
A˜ 1 S1 (τ )u0 2X1 d τ ≤ u0 2X1 ,
u0 ∈ X1 ,
it is easy to see that S(t)Y0 ∈ L2 (0, T, X α ) provided that Y0 ∈ X α −1/2 . Let P and Q be the same projections as in Lemma 2.4.2. It follows from Lemma 2.4.2 and from (2.4.4) that |PR[Y1 ](t) − PR[Y2 ](t)|α ≤ L2
t 0
|Y1 (τ ) −Y2 (τ )|α d τ
for any Y1 ,Y2 ∈ L2 (0, T, X α ). Therefore, |PR[Y1 ] − PR[Y2 ]|L2 (0,T,X α ) ≤ L2 · T · |Y1 −Y2 |L2 (0,T,X α ) .
(2.4.17)
We also have that QR[Y1 ] − QR[Y2 ] = I00 [F1 (Y1 ) − F1 (Y2 )] +
1 β β I [Δ F12 ], νβ 0
β
where I0 is given by (2.4.13) with L = ν A˜ 1 and H = X1 , Yi = (Ui ,Vi ) are elements from L2 (0, T, X α ), and β
−β
Δ F12 := A˜ 1 (F12 (V1 ) − F12 (V2 )) . Consequently, from Lemma 2.4.4 and from the hypotheses in Assumption 2.4.1(2,3), we obtain that |QR[Y1 ] − QR[Y2 ]|L2 (0,T,X α ) ≤ q(T, ν ) · |Y1 −Y2 |L2 (0,T,X α ) , where q(T, ν ) =
L11 (2T )1−α L12 (2T )1−(α +β ) + . να ν α +β
(2.4.18)
Therefore, it follows from (2.4.17) and (2.4.18) that |R[Y1 ] − R[Y2 ]|L2 (0,T,X α ) ≤ (L2 T + q(T, ν )) · |Y1 −Y2 |L2 (0,T,X α ) . Thus, in the case α + β < 1 for every ν > 0 we can choose T0 independently of Y0 such that l := L2 T0 + q(T0 , ν ) < 1. If α + β = 1, then we can make this choice only if ν > L12 . In any case, the equation in (2.4.15) has a unique solution Y (t) on the interval [0, T0 ] from the class L2 (0, T0 , X α ). 2. We claim that Y (t) satisfies (2.4.16) with T = T0 . Indeed, it is sufficient to prove that R0 [Y ](t) ∈ C([0, T0 ], X σ )
(2.4.19)
with σ < min(1 − β , 1/2). We have that, for t1 > t2 , R0 [Y ](t1 ) − R0 [Y ](t2 ) =
t1 t2
+
S(t1 − τ )F(Y (τ ))d τ
t2 0
(2.4.20) S(t2 − τ ) [S(t1 − t2 ) − id] F(Y (τ ))d τ .
Let P and Q be the same as in Lemma 2.4.2. Then, by (2.4.9) we obtain that |PR0 [Y ](t1 ) − PR0 [Y ](t2 )|1 ≤ C
t1
+
t2
t2 0
(1 + |Y (τ )|α ) d τ |[S(t1 − t2 ) − id] PF(Y (τ ))|0 d τ
because P is a projection on X2 . Therefore, the Lebesgue convergence theorem implies that PR0 [Y ](t) ∈ C([0, T0 ], X 1 ).
(2.4.21)
Consequently, using (2.4.7) it is easy to see that PY (t) ∈ C([0, T0 ], X 1 )
and
max |PY (t)|1 ≤ CT .
t∈[0,T0 ]
(2.4.22)
Thus, we need only to check the continuity of the following functions Q1 (t) =
t 0
S(t − τ )F11 (Y (τ ))d τ ,
Q2 (t) =
t 0
S(t − τ )F12 (V (τ ))d τ .
Here, as above, Y = (U,V ). Using a representation similar to the relation in (2.4.20) and Lemma 2.4.2, we obtain by Chueshov [34] that t 1 σ σ ˜ ˜ A1 (Q1 (t1 ) − Q1 (t2 ))X1 = A1 S(t1 − τ )F11 (Y (τ ))d τ t2
X
1 t 2 σ +ε −ε ˜ ˜ [S(t1 − t2 ) − id] S(t2 − τ )F11 (Y (τ ))d τ A A + 1 0 1
X1
t2
dτ ≤ C1 (1 + |Y (τ )|α ) |t1 − τ |σ t1 ε · +C2 A˜ − [S(t − t ) − id] 1 2 1 X 1
0
t2
(1 + |Y (τ )|α )
(2.4.23)
dτ |t2 − τ |σ +ε
for any 0 ≤ t2 < t1 , σ < 1/2 and 0 < ε < 1/2 − σ . Therefore, since A˜ σ1 S(t) ≤ ct −σ , ε A˜ − 1 (id − S(t)) ≤ sup
λ >0
% 1 − e−νλ t (ν t)ε = cε (ν t)ε (νλ t)ε
for all t > 0, from the H¨older inequality we obtain that Q1 ∈ C([0, T0 ], X1σ ). Similarly, since by (2.4.22) and Assumption 2.4.1(3) −β
A˜ 1 F12 (V (τ ))X1 ≤ C
for all t ∈ [0, T0 ],
we conclude that t1
dτ |t1 − τ |σ +β ε 1 · S +C2 A˜ − (t − t ) − id 1 2 1 X
A˜ σ1 (Q2 (t1 ) − Q2 (t2 ))X1 ≤ C1
t2
1
0
t2
dτ . |t2 − τ |σ +β +ε
Thus, Q2 ∈ C([0, T0 ], X1σ ) for any σ < 1 − β and ε > 0 sufficiently small. Since QR0 [Y ](t) = Q1 (t) + Q2 (t), using (2.4.21) we obtain (2.4.19). Thus, the unique mild solution to problem (2.4.6) possesses property (2.4.16) on the existence interval [0, T0 ]. 3. Since T0 does not depend on the initial data Y0 , we can repeat the same procedure on the interval [T0 , 2T0 ], and so on. This implies the conclusion of Theorem 2.4.6. Remark 2.4.7. If α < 1 − β , then we can prove that Y (t) ∈ C((0, T ], X σ )
for any
σ < 1−β.
(2.4.24)
The point is that in this case using a Gronwall-type argument (see, for example, Sect. 1.2.4 or Henry [92, Sect.7.1]) we can prove that |Y (t)|α ≤ CT · t −1/2 ,
Y0 ∈ X α −1/2 , t ∈ (0, T ].
Using this relation we obtain from (2.4.23) that Q1 ∈ C([0, T0 ], X1σ ) for σ < 1. Thus, (2.4.24) holds. Theorem 2.4.6 implies that for any α − 1/2 ≤ σ < min(1 − β , 1/2) the problem in (2.4.6) generates a dynamical system φ in the space X σ = X1σ × X2 by the formula
φ (t,Y ) = Y (t) = (U(t),V (t)),
(2.4.25)
where (U(t),V (t)) is a mild solution to (2.4.2) with the initial data Y = (U0 ,V0 ). Remark 2.4.8 (Drive-Response Synchronization). As in a more regular case (see Proposition 2.2.5) we can give conditions that guarantee the drive-response synchronization in the system in (2.4.2) (in the sense of Definition 2.1.2). In this case, a drive-response system has the form ¯ (t)) = f (t) := F12 (V (t)), t > 0, in X1 , U¯t + ν A˜ 1U¯ − F11 (U,V
(2.4.26)
where V (t) is the V -component of a solution Y = (U,V ) to (2.4.2). By Theorem 2.4.6 for Y (0) ∈ X α the function V (t) belongs to C(R+ , X2 ) and the right-hand −β side f lies in C(R+ , X1 ). This allows us to apply the same idea as in the proof of ¯ Theorem 2.4.6 to show that for any given initial data U(0) ∈ X1α problem (2.4.26) α has a unique mild solution from the class L2 (0, T, X1 ) for every T > 0. Now we ¯ This difference is a mild solution to the problem consider the difference z = U − U. ¯ (t)), t > 0, in X1 , zt + ν A˜ 1 z = F11 (U,V (t)) − F11 (U,V Since the singular terms F12 (V (t)) are canceled, we can show that z(t) belongs to the class C([0, T ], X1α ) for every T > 0. This allows us to use the regularizing formula in (2.4.10) to the decaying estimate for z(t). Indeed, from (2.4.10) and the mild form of the problem we have that
α t α − νλ t − νλ (t− τ ) α 1 1 z(t)X1α ≤ e z(0)X1α + L11 e + λ1 z(τ )X1α d τ . ν (t − τ ) 0 Let 0 < μ < νλ1 . Then, eμ t z(t)X1α ≤e−(νλ1 −μ )t z(0)X1α
α α e−(νλ1 −μ )(t−τ ) + λ1α eμτ z(τ )X1α d τ ν (t − τ ) 0 # $ μτ ≤z(0)X1α + b(t) sup e z(τ )X1α , t
+ L11
0≤τ ≤t
where t
b(t) = L11
−(νλ1 −μ )(t−τ )
e 0
α ν (t − τ )
α
+ λ1α
dτ .
The standard calculations (see, for example, Chueshov [36, 40] or Temam [161] show that
b(t) ≤ L11 κα ν −α (νλ1 − μ )−1+α + λ1 (νλ1 − μ )−1 ,
where κα = α α 0∞ τ −α e−τ d τ for 0 < α < 1 and κ0 = 0. Thus, if this expression is smaller than 1 then we can choose a positive μ such that sup eμτ z(τ )X1α ≤ Cz(0)X1α .
τ ∈R+
This implies the drive-response synchronization in system (2.4.2) with V as a synchronizing coordinate.
2.4.3 Invariant Manifold

Now we are in a position to prove the main result of this section.

Theorem 2.4.9. Let Assumption 2.4.1 be in force and $\nu>\nu_0$, where
$$
\nu_0=\frac{\gamma}{\lambda_1}+\frac{2}{\lambda_1}\Big(L_2+\lambda_1^{\alpha}L_{11}+\lambda_1^{\alpha+\beta}L_{12}\Big)
\tag{2.4.27}
$$
and $\lambda_1>0$ is the minimal point of the spectrum of the operator $\tilde A_1$. Then there exists a mapping $m:X_2\to X_1$ such that
$$
\|\tilde A_1^\sigma(m(V_1)-m(V_2))\|_{X_1}\le C_\sigma\|V_1-V_2\|_{X_2}
\tag{2.4.28}
$$
for all $V_1,V_2\in X_2$ and for any $\sigma$ satisfying the inequality
$$
\alpha-1/2<\sigma<\min(1-\beta,1/2),
\tag{2.4.29}
$$
where $C_\sigma>0$ is a constant. Moreover, the manifold
$$
M=\{(m(V),V):V\in X_2\}\subset X^\sigma
\tag{2.4.30}
$$
is invariant with respect to the semigroup $\phi$ given by (2.4.25) in the space $X^\sigma=X_1^\sigma\times X_2$, i.e., $\phi(t,M)\subseteq M$. This manifold $M$ is exponentially attracting in the following sense: there exists $C>0$ such that for any mild solution $Y$ to problem (2.4.6) there exists $Y^*\in M$ such that
$$
\int_0^\infty e^{2\mu t}|Y(t)-\phi(t,Y^*)|_\alpha^2\,dt<C\big(1+|Y(0)|_\sigma^2\big)
\tag{2.4.31}
$$
and also
$$
|Y(t)-\phi(t,Y^*)|_\sigma<Ce^{-\mu t}\big(1+|Y(0)|_\sigma\big),\qquad t>0,
\tag{2.4.32}
$$
where $\mu=(\gamma+\nu\lambda_1)/2$. Moreover, we observe exponential master–slave synchronization in the form
$$
\|U(t)-m(V(t))\|_{X_1}<Ce^{-\mu t}\big(1+\|\tilde A_1^\sigma U(0)\|_{X_1}+\|V(0)\|_{X_2}\big),\qquad t>0,
\tag{2.4.33}
$$
for every solution $(U,V)$ to problem (2.4.2).

The rest of this subsection is devoted to the proof of Theorem 2.4.9.

Construction of M

As in Sect. 2.2, to construct an invariant manifold we should first solve the integral equation
$$
Y(t)=T_W[Y](t),\qquad t\le0,
\tag{2.4.34}
$$
where TW [Y ] = IW [F(Y )]. Here, IW [Y ] is given by IW [Y ](t) = S(t)W −
0 t
S(t − τ )PY (τ )d τ +
t −∞
S(t − τ )QY (τ )d τ
and W ∈ PX α = X2 , where P = id − Q and Q is defined in the statement of Lemma 2.4.2. We consider the equation in (2.4.34) and the operators TW and IW in the spaces Yα = Y : eμ t Y ∈ L2 (−∞, 0, X α ) , where μ ∈ (γ , νλ1 ) will be chosen later, with the norm |Y |Yα =
0 −∞
e2μ t |Y (t)|2α dt
1/2 .
Proposition 2.4.10. For every W ∈ PX α the operator IW is a continuous mapping from Y0 into Y1 and for any Y1 ,Y2 ∈ Y0 and σ ∈ [0, 1] we have that |IW [PY1 ] − IW [PY2 ]|Yσ ≤ and |IW [QY1 ] − IW [QY2 ]|Yσ ≤
1 · |PY1 − PY2 |Y0 μ −γ
λ1σ · |QY1 − QY2 |Y0 . νλ1 − μ
(2.4.35)
(2.4.36)
Proof. We use the same idea as in Proposition 2.2.9. Since IW [Y1 ] − IW [Y2 ] = I0 [Y1 −Y2 ], to obtain (2.4.35) and (2.4.36) we need only to estimate the values |I0 [PY ]|Yσ and |I0 [QY ]|Yσ for any Y ∈ Y0 and σ ∈ [0, 1]. Since |PY |σ = |PY |0 , using Lemma 2.4.2, we have that eμ t |I0 [PY ](t)|σ ≤
0 t
e(μ −γ )(t−τ ) · eμτ |PY (τ )|0 d τ ,
t ≤ 0.
Therefore, applying Lemma 2.2.8 with δ = μ − γ and f (t) defined by the relations: f (t) = eμ t |PY (t)|0 for t ≤ 0 and f (t) = 0 for t > 0, we obtain that |I0 [PY ]|2Yσ ≤
1 · |PY |2Y0 ( μ − γ )2
for any Y ∈ Y0 , σ ∈ [0, 1].
(2.4.37)
In a similar way, using Lemma 2.4.3 with β = 0, α = σ and L = ν A˜ 1 − μ , it is easy to see that |I0 [QY ]|2Yσ ≤
λ12σ · |QY |2Y0 (νλ1 − μ )2
for any Y ∈ Y0 , σ ∈ [0, 1].
(2.4.38)
Relations (2.4.37) and (2.4.38) imply (2.4.35) and (2.4.36). The continuity of the mapping IW from Y0 into Y1 follows from (2.4.35) and (2.4.36) and the relation |IW [Y1 ] − IW [Y2 ]|2Yσ = |IW [PY1 ] − IW [PY2 ]|2Yσ + |IW [QY1 ] − IW [QY2 ]|2Yσ .
Now we consider the integral operator TW defined in the relation in (2.4.34). ˜ )] for Y ∈ Yα . Then, Proposition 2.4.11. Let γ < μ < νλ1 and TW [Y ] = IW [F(Y for every W ∈ PX α = X2 , the operator TW is continuous from Yα into itself and |TW1 [Y1 ] − TW2 [Y2 ]|Yα ≤ W1 −W2 X2 + ηα (ν , μ ) · |Y1 −Y2 |Yα
(2.4.39)
for every W1 ,W2 ∈ PX α and Y1 ,Y2 ∈ Yα , where
ηα (ν , μ ) =
α +β
λ α L11 + λ1 L12 L2 + 1 . μ −γ νλ1 − μ
Proof. We rewrite TW in the form TW [Y ] = IW [F0 (Y )] +
1 −μ t β e I [ f (Y )], 0 , νβ
(2.4.40)
where Y = (U,V ) ∈ Yα , F0 (Y ) = (F11 (Y ), F2 (Y )), I β is given by (2.4.11) with L = ν A − μ and −β f (Y,t) := A˜ 1 eμ t F12 (V (t)). Since IW1 [F0 (Y1 )] − IW2 [F0 (Y2 )] = S(·)(W1 −W2 ) + I0 [F0 (Y1 ) − F0 (Y1 )], using (2.4.35) and (2.4.36) we obtain that
λ1α L11 L2 |IW1 [F0 (Y1 )] − IW2 [F0 (Y2 )]|Yα ≤ W1 −W2 X2 + |Y1 −Y2 |Yα . + μ − γ νλ1 − μ It follows from Lemma 2.4.3 with L = ν A − μ and from Assumption 2.4.1(3) that
1 ν 2β
2 1/2 λ α +β L 12 ≤ 1 · |Y1 −Y2 |Yα . I β [ f (Y1 ) − f (Y2 )](t) dt νλ1 − μ α −∞
0
These two inequalities imply (2.4.39). Proposition 2.4.12. Assume that ν > ν0 , where ν0 is given by (2.4.27). Let μ = (γ + νλ1 )/2. Then, for any W ∈ X2 the problem (2.4.34) has a unique solution Y (·) = Y (·,W ) in the space Yα . This solution possesses the properties Y ∈ C((−∞, 0], X σ ),
σ < min(1 − β , 1/2),
(2.4.41)
and sup eμ t |Y (t,W1 ) −Y (t,W2 )|σ ≤ Cσ |W1 −W2 |0
(2.4.42)
t≤0
for any W1 ,W2 ∈ PX α , where Cσ is a positive constant. Moreover, for every s ∈ (−∞, 0) and for almost all t ∈ [s, 0] the function Y satisfies the relation Y (t) = S(t − s)Y (s) +
t s
S(t − τ )F(Y (τ ))d τ .
(2.4.43)
Proof. By Proposition 2.4.11 we have that |TW [Y1 ] − TW [Y2 ]|Yα ≤ q · |Y1 −Y2 |Yα for every Y1 ,Y2 ∈ Yα , where q := ηα (ν , (γ + νλ1 )/2) < 1. Thus, by the contraction principle, the equation in (2.4.34) has a unique solution from Yα . To establish (2.4.41) we use an argument similar to one given in the proof of Theorem 2.4.6. Relation (2.4.43) can be easily obtained by direct calculation. Thus, we need only to prove estimate (2.4.42). Let Yi (t) = Y (t,Wi ), i = 1, 2. Since Y1 (t) −Y2 (t) = TW1 [Y1 ](t) − TW2 [Y2 ](t) = S(t)(W1 −W2 ) + [T0 [Y1 ](t) − T0 [Y2 ](t)] ,
we obtain that |PY1 (t) − PY2 (t)|σ ≤ e−γ t |Y1 −Y2 |0 + L2
0 t
e−γ (t−τ ) |Y1 (τ ) −Y2 (τ )|α d τ .
Therefore, eμ t |PY1 (t) − PY2 (t)|σ ≤ |W1 −W2 |0 0
+L2
t
e(μ −γ )(t−τ ) · eμτ |Y1 (τ ) −Y2 (τ )|α d τ .
From the H¨older inequality we obtain μt
e |PY1 (t) − PY2 (t)|σ ≤ |W1 −W2 |0 + L2
0
2(μ −γ )(t−τ )
e t
1/2 dτ
|Y1 −Y2 |Yα .
This implies that sup eμ t |PY1 (t) − PY2 (t)|σ ≤ |W1 −W2 |0 + t≤0
L2 · |Y1 −Y2 |Yα . (2.4.44) (νλ1 − γ )1/2
Similarly, |QY1 (t) − QY2 (t)|σ ≤ L11
t −∞ t
+ L12
−∞
A˜ σ1 S(t − τ ) · |Y1 (τ ) −Y2 (τ )|α d τ σ +β
A˜ 1
S(t − τ ) · |PY1 (τ ) − PY2 (τ )|0 d τ .
Therefore, eμ t |QY1 (t) − QY2 (t)|σ ≤ a1 (t) · |Y1 −Y2 |Yα + a2 (t) · sup eμ t |PY1 (t) − PY2 (t)|σ , t≤0
where a1 (t) = L11
a2 (t) = L12
t
−∞ t −∞
A˜ σ1 S(t − τ )2 d τ σ +β
A˜ 1
1/2 ,
S(t − τ )d τ .
Using Lemma 2.4.2, we can see that a1 (t) ≤ a1 =
1/2 2κ2σ L11 2(νλ1 )2σ · + , νσ (νλ1 − γ )1−2σ νλ1 − γ
and a2 (t) ≤ a2 =
2L12 σ +β σ +β , κ ( νλ − γ ) + ( νλ ) · 1 1 σ +β ν σ +β (νλ1 − γ )
where κs = ss 0∞ τ −s e−τ d τ for 0 < s < 1 and κ0 = 0. Consequently, from (2.4.44) we find that sup eμ t |Y1 (t) −Y2 (t)|σ ≤ b0 |W1 −W2 |0 + b1 · |Y1 −Y2 |Yα , (2.4.45) t≤0
where b0 = 1 + a2 and b1 = a1 + (1 + a2 )(νλ1 − γ )−1/2 . From (2.4.39) it is easy to see that |Y1 −Y2 |Yα ≤ (1 − q)−1 |W1 −W2 |0 . Therefore, (2.4.45) implies (2.4.42). Now we define 0
m(W ) =
−∞
S(τ ) F˜11 (Y (τ )) + F12 (V (τ )) d τ = Y (0) −W,
(2.4.46)
where Y = (U,V ) solves the integral equation in (2.4.34). It is clear that m : X2 → D(Aσ ) for σ < min(1 − β , 1/2). Proposition 2.4.12 implies that Aσ (m(W1 ) − m(W2 ))X1 ≤ CW1 −W2 X2 . As in Sect. 2.2, we can conclude that the manifold M generated by this m according to (2.4.30) is forward invariant. Thus, to complete the proof of Theorem 2.4.9, we need only to prove the tracking properties (2.4.31), (2.4.32), and (2.4.33).
Tracking Properties We use the same method as in Sect. 2.2 for proof of the tracking properties. Let Y0 = (U0 ,V0 ) ∈ X σ , where σ satisfies (2.4.29), and let Y (t) be a mild solution to (2.4.6). We extend Y (t) on the semi-axis (−∞, 0] by the formula Y (t) = ((1 + |t|A)−1U0 ,V0 ). It is easy to see that Y |(−∞,0] ∈ C((−∞, 0], X σ )
and
|Y |2Yα =
0 −∞
e2μ t |Y (t)|2α dt ≤ C|Y0 |2σ .
In the space Zα = Z(t) : |Z|2Zα :=
∞ −∞
2μ t
e
|Z(t)|2α dt
% 0 where TW is defined in (2.4.34). We have that Z0 ∈ Zα and |Z0 |Zα ≤ C (1 + |Y (0)|σ ) and sup eμ t |Z0 (t)|σ ≤ C (1 + |Y (0)|σ )
(2.4.47)
t∈R
for any σ satisfying (2.4.29). To see that the second part of (2.4.47) holds recall that 0 < μ < νλ1 . We define an integral operator R : Zα → Zα by the formula t
R[Z](t) =Z0 (t) + −
∞ t
−∞
S(t − τ )Q [F(Z(τ ) +Y (τ )) − F(Y (τ ))] d τ
S(t − τ )P [F(Z(τ ) +Y (τ )) − F(Y (τ ))] d τ .
To state the tracking property, we first prove two lemmas. We start with Lemma 2.4.13. R is a contraction in Zα . Proof. Since by (2.4.4) and (2.4.8) we have for Z1 , Z2 ∈ Zα |P (R[Z1 ](t) − R[Z2 ](t)) |α ≤ L2 e−μ t
∞ t
e(μ −γ )(t−τ ) · eμτ |Z1 (τ ) − Z2 (τ )|α d τ ,
using Lemma 2.2.8 we obtain that |P (R[Z1 ] − R[Z2 ]) |Zα ≤
L2 · |Z1 − Z2 |Zα . μ −γ
Similarly, Q (R[Z1 ](t) − R[Z2 ](t)) = Q1 (t) + Q2 (t), where Q1 (t) = Q2 (t) =
t −∞
t
−∞
S(t − τ ) [F11 (Z1 (τ ) +Y (τ )) − F11 (Z2 (τ ) +Y (τ ))] d τ , β −β S(t − τ )A˜ 1 Q A˜ 1 ΔY F12 (Z1 (τ ), Z2 (τ )) d τ
with
ΔY F12 (Z1 (τ ), Z2 (τ )) = F12 (P[Z1 (τ ) +Y (τ )]) − F12 (P[Z2 (τ ) +Y (τ )]).
Consequently, it follows from Lemma 2.4.3 with L = ν A˜ 1 − μ that |Q1 |Zα ≤
λ1α L11 |Z1 − Z2 |Zα νλ1 − μ
and
α +β
|Q2 |Zα ≤
λ1 L12 |Z1 − Z2 |Zα . νλ1 − μ
for every
Z1 , Z2 ∈ Zα .
Since μ = (γ + νλ1 )/2, we obtain that |R[Z1 ] − R[Z2 ]|Zα ≤ q · |Z1 − Z2 |Zα
(2.4.48)
Here, q = ηα (ν , (γ + νλ1 )/2) < 1 under the condition ν > ν0 , where ηα and ν0 are given by (2.4.40) and (2.4.27). Thus, by the contraction principle, there exists a unique solution Z ∈ Zα to the equation Z = R[Z]. Since Z(t) = R[Z](t) = Z0 (t) + R[Z](t) − R[0](t), we obtain from (2.4.47) and (2.4.48) that |Z|Zα ≤ (1 − q)−1 · |Z0 |Zα ≤ C (1 + |Y (0)|σ ) .
(2.4.49)
Using the same method as in the proof of Proposition 2.4.12 and relations (2.4.47) and (2.4.48), we can prove that Z(t) ∈ C(R, X σ ) and sup eμ t |Z(t)|σ ≤ C (1 + |Y (0)|σ ) (2.4.50) t∈R
for any σ satisfying (2.4.29). The following assertion implies that Z(t) +Y (t) is the desired induced trajectory for Y (t). Lemma 2.4.14. Let Y (t) = Z(t) + Y (t), where Z ∈ Zα solves the equation Z = R[Z]. Then Y (t) =
⎧ ⎨ TPY(0) [Y ](t), if t ≤ 0; ⎩
φ (t, Y (0)),
(2.4.51) if t > 0.
The proof follows Sect. 2.2.3, where the tracking property is written in detail. Now we are in position to complete the proof of the tracking property: We have the graph m of M by (2.4.46). By Lemma 2.4.14 we have that Y (t) = φ (t, Y (0)) ∈ M for t ≥ 0. Therefore, since Y (t) − Y (t) = Z(t), the relations in (2.4.31) and (2.4.32) follow from (2.4.49) and (2.4.50). The master–slave inequality in (2.4.33) follows from (2.4.32) in the same manner as in Theorem 2.2.6.
2.4.4 Reduced System

Theorem 2.4.9 allows us to introduce the reduced (inertial) form, which describes the limiting synchronized regime. Let the hypotheses of Theorem 2.4.9 hold and let m be given by (2.4.46). Consider the problem

V_t + Ã_2 V = F_2(m(V), V), t > 0, in X_2,  V|_{t=0} = V_0,   (2.4.52)

and define its mild solution on the interval [0, T] as a function

V ∈ C([0, T], X_2)   (2.4.53)

such that

∫_0^T ‖Ã_1^{α} m(V(t))‖²_{X_1} dt < ∞   (2.4.54)

and

V(t) = S_2(t) V_0 + ∫_0^t S_2(t − τ) F_2(m(V(τ)), V(τ)) dτ

for almost all t ∈ [0, T], where S_2 is the evolution group generated by −Ã_2 in (2.4.2).

Proposition 2.4.15. Let V_0 ∈ X_2. Then, under the conditions of Theorem 2.4.9, the problem in (2.4.52) has a mild solution on any interval [0, T]. If α < min(1 − β, 1/2), then this solution is unique and generates a mild solution to the problem in (2.4.2) by the formula

(U(t), V(t)) = (m(V(t)), V(t)).   (2.4.55)
Moreover, in this case, the manifold M is strictly invariant with respect to the semigroup S generated by (2.4.2).

Proof. Let Y(t) = (U(t), V(t)) be a mild solution to the problem in (2.4.6) with the initial data Y_0 = (m(V_0), V_0). Since M given by (2.4.30) is invariant, we have that

PY(t) = V(t)  and  QY(t) = m(V(t)).

By Theorem 2.4.6, V(t) possesses property (2.4.53). We also have that

∫_0^T |QY(t)|²_α dt ≤ ∫_0^T |Y(t)|²_α dt < ∞.

Thus, (2.4.54) holds. Consequently, V is a mild solution to (2.4.52).
If α < min(1 − β, 1/2), then by Theorem 2.4.9 with σ = α we have that

‖A^{α}(m(V_1) − m(V_2))‖_{X_1} ≤ C ‖V_1 − V_2‖_{X_2},  V_i ∈ X_2.

This implies that the function

V ↦ F_m(V) := P F̃(m(V), V),  V ∈ X_2,

is globally Lipschitz, i.e.,

‖F_m(V_1) − F_m(V_2)‖_{X_2} ≤ C ‖V_1 − V_2‖_{X_2},  V_i ∈ X_2.   (2.4.56)

Therefore, a Gronwall-type argument gives us the uniqueness of solutions to (2.4.52). Relation (2.4.55) easily follows from the uniqueness theorem for (2.4.52). The property in (2.4.56) also makes it possible to solve (2.4.52) in a backward direction and, hence, to prove that M is strictly invariant with respect to S.

Theorem 2.4.9 implies that for any mild solution Y = (U, V) to the problem in (2.4.2) with initial data Y_0 ∈ X^σ, where σ satisfies (2.4.29), there exists a mild solution V̂(t) to the reduced problem in (2.4.52) such that

‖V(t) − V̂(t)‖²_{X_2} + ‖U(t) − m(V̂(t))‖²_{X_1} ≤ C e^{−μ t} for any t ≥ 0

with positive constants C and μ. Thus, under the conditions of Theorem 2.4.9, the long-time behavior of solutions to (2.4.2) can be described completely by solutions to the problem in (2.4.52). Moreover, under the condition α < min(1 − β, 1/2), because of the relation in (2.4.55), every limiting regime of the reduced system in (2.4.52) is realized in the coupled system in (2.4.2).

Another form of the reduction principle can be established for global attractors (see Definition 1.2.5). In the case when the second equation in (2.4.2) has a wave form like (2.4.5), the existence of a global attractor can be guaranteed if we assume that γ > 0 and the nonlinear terms F_{11}, F_2 and F_{12} satisfy some (structural) conditions.

Theorem 2.4.16 (Reduction Principle). Let the hypotheses of Theorem 2.4.9 hold and α < min(1 − β, 1/2). Assume that for some σ satisfying (2.4.29) the dynamical system φ on X^σ generated by the problem in (2.4.2) possesses a global attractor A. Then the reduced system φ^r on X_2 given by (2.4.52) also has a global attractor A_r in X_2 and

A = {Y = (m(V_0), V_0) : V_0 ∈ A_r}.   (2.4.57)

Conversely, if the dynamical system φ^r generated by (2.4.52) has a global attractor A_r, then the relation in (2.4.57) defines a global attractor for the system φ.

Proof. It follows from (2.4.32) that A ⊂ M. Therefore, (2.4.57) holds with

A_r = {V_0 ∈ X_2 : (U_0, V_0) ∈ A}.

Let us prove that A_r is a global attractor for φ^r.
It is clear that A_r is compact. Since φ(t, Y_0) = (m(φ^r(t, V_0)), φ^r(t, V_0)) for any V_0 ∈ X_2, where Y_0 = (m(V_0), V_0), we can see that A_r is invariant. The relation d_{X_2}(φ^r(t, V_0), A_r) ≤ d_{X^σ}(φ(t, Y_0), A) implies that

sup_{V_0 ∈ D} d_{X_2}(φ^r(t, V_0), A_r) → 0 as t → ∞

for every bounded set D ⊂ X_2. Thus, A_r is a global attractor for φ^r.

To prove the second statement of Theorem 2.4.16, we note that if A_r is a global attractor for φ^r, then A given by (2.4.57) is a compact and invariant (with respect to φ) set. Moreover, A is a global attractor for the restriction (M, φ) of the system φ on the invariant manifold M. Therefore, (2.4.32) implies that A is a global attractor for the system φ on X^σ.
2.5 Synchronization in Higher Modes and Inertial Manifold The result stated in Theorem 2.4.9 can be applied in the following situation. In a separable Hilbert space X, consider the problem du + Au = F(u), u(0) = u0 , dt
(2.5.1)
where 1. A is a positive linear symmetric operator in X with the discrete spectrum, i.e., there exists an orthonormal basis {ek } of H such that Aek = λk ek
with 0 < λ1 ≤ λ2 ≤ . . . ,
lim λk = ∞.
k→∞
2. F is a nonlinear locally Lipschitz mapping from X α into X for some 0 < α < 1/2, i.e., there exists a constant LF such that F(u) − F(v) ≤ LF u − vα , ∀ u, v ∈ X α . Here, X α = D(Aα ) endowed with the graph norm.
One can see that under the conditions above, for every u_0 ∈ X^α the problem in (2.5.1) has a unique mild solution in the class C(R_+, X^α). Let P_N be the orthogonal projector in X onto Span{e_1, ..., e_N} and Q_N = id − P_N. We rewrite the problem in (2.5.1) in the form

dU/dt + Ã_1 U = Q_N F(U + V) = F_{11}(U, V) in Q_N X,
dV/dt + Ã_2 V = P_N F(U + V) = F_2(U, V) in P_N X,

where U(t) = Q_N u(t) and V(t) = P_N u(t), A Q_N =: Ã_1 and A P_N =: Ã_2. Straightforwardly, Ã_2 generates a group on X_2. We can consider this system as a master–slave pair with V as the master variable. This allows us to apply Theorem 2.4.9 in the spaces X_1 = Q_N X^α and X_2 = P_N X with the following choice of parameters:

L_{11} = L_F, L_2 = λ_N^{α} L_F, γ = λ_N, L_{12} = 0, β = 0, ν = 1.

The minimal point of the spectrum of A in X_1 = Q_N X is λ_{N+1}. Therefore, the parameter ν_0 in (2.4.27) has the form

ν_0 = λ_N / λ_{N+1} + (2 L_F / λ_{N+1}) (λ_N^{α} + λ_{N+1}^{α}).
Thus, Theorem 2.4.9 implies the following assertion on the synchronization of the higher modes by the lower ones. This assertion is well known in the literature as the theory of inertial manifolds (see, for example, Chueshov [36], Foias/Sell/Temam [79], Miklavčič [113], Temam [161] and the references therein).

Theorem 2.5.1. Assume that the spectral gap condition

λ_{N+1} − λ_N ≥ 2 L_F (λ_N^{α} + λ_{N+1}^{α})

holds for some N. Then there exists a mapping m : P_N X → Q_N X^α such that

‖A^{α}(m(p_1) − m(p_2))‖ ≤ C ‖p_1 − p_2‖_X, ∀ p_1, p_2 ∈ X_2,

and the manifold

M = {p + m(p) : p ∈ X_2}

is invariant with respect to the semigroup φ generated by (2.5.1) in the space X^α = D(A^α), i.e., φ(t, M) ⊆ M. This manifold M is exponentially attracting in the sense that for every u_0 ∈ X^α there exists u_0^* ∈ M such that

∫_0^∞ e^{2μ t} ‖φ(t, u_0) − φ(t, u_0^*)‖²_α dt < C (1 + ‖u_0‖²_α),
and also

‖φ(t, u_0) − φ(t, u_0^*)‖_α < C e^{−μ t} (1 + ‖u_0‖_α),  t > 0,   (2.5.2)

where μ = (λ_N + λ_{N+1})/2. As in Theorem 2.4.9, equation (2.5.2) implies exponential master–slave synchronization in the form

‖Q_N u(t) − m(P_N u(t))‖_α < C e^{−μ t} (1 + ‖u_0‖_α),  t > 0,

for every solution u to problem (2.5.1).
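As a quick illustration of how the spectral gap condition of Theorem 2.5.1 can be checked in practice, the following small Python sketch tests the inequality λ_{N+1} − λ_N ≥ 2L_F(λ_N^α + λ_{N+1}^α) for the model spectrum λ_k = k²; the values of L_F and α are illustrative assumptions only.

```python
# Illustrative check of the spectral gap condition in Theorem 2.5.1
# for the model spectrum lambda_k = k^2; L_F and alpha are assumed values.

def spectral_gap_holds(lam, N, L_F, alpha):
    """True iff lambda_{N+1} - lambda_N >= 2*L_F*(lambda_N^alpha + lambda_{N+1}^alpha)."""
    gap = lam(N + 1) - lam(N)
    return gap >= 2.0 * L_F * (lam(N) ** alpha + lam(N + 1) ** alpha)

lam = lambda k: float(k * k)      # model eigenvalues of A
L_F, alpha = 5.0, 0.25            # assumed Lipschitz constant and exponent

# Find the first N for which the gap condition holds.
N = next(n for n in range(1, 10**6) if spectral_gap_holds(lam, n, L_F, alpha))
print("spectral gap condition first holds for N =", N)
```

For this quadratic spectrum the gaps λ_{N+1} − λ_N = 2N + 1 grow without bound, so a suitable N always exists; for spectra with bounded gaps the condition may fail for every N.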
Part II
Stochastic Systems
Chapter 3
Stochastic Synchronization of Random Pullback Attractors
In this and in the next chapter we deal with the synchronization of random dynamical systems (RDS). The concept of RDS (see Arnold [4] and the literature cited therein) covers the most important families of dynamical systems with randomness, including random and stochastic ordinary and partial differential equations and random difference equations. This concept makes it possible to study randomness in the framework of classical dynamical systems theory with all its powerful machinery. Randomness could describe environmental or parametric perturbations, internal fluctuations, measurement errors, or just a lack of knowledge. The theory of RDS has been developed intensively in recent decades and now provides an important tool for the study of qualitative properties of stochastic differential equations.

This chapter is a random analog of Chap. 1. One of the goals here is to detect how the presence of noise can affect the synchronization of infinite-dimensional RDS at the level of global random attractors. On the one hand, synchronization has been shown to persist in the presence of environmental noise, provided appropriate concepts of random attractors and stochastic stationary solutions are used instead of their deterministic counterparts (see Caraballo/Kloeden [19]). On the other hand, in the case of parabolic systems, we can use the monotonicity method to show that the presence of a particular additive noise can lead to a strengthening of the synchronization. This means we have synchronization at the level of trajectories rather than attractors, which does not occur in the absence of noise. We start with some basic properties of random processes.
3.1 Basic Stochastics

In this section we collect some preliminary material on the dynamics of stochastic perturbations of finite- or infinite-dimensional systems. Let (Ω, F, P) be a probability space with expectation E and let (E, E) be a measurable space. A measurable mapping X : (Ω, F, P) → (E, E) is called a random variable. In the following we will assume that E is a Polish space and E is its Borel σ-algebra B(E). Let T be a nonempty subset of R. We only consider the case that T is R or R_+ or a subinterval of these sets. A family of random variables X = X(t) in (E, E) for t ∈ T is called a random process. If T consists of a single element, a random process is just a random variable. The mappings

T ∋ t ↦ X(t, ω),  ω ∈ Ω,

are called the paths (or trajectories) of the random process X. In a similar way, we can define a random field: let Y be some nonempty set. A family of random variables X = X(y) for y ∈ Y is called a random field. A random process is a special random field. Let X, X′ be two random processes with values in (E, E). We say X′ is a version of X (or X is a version of X′) if

P({ω ∈ Ω : X(t, ω) = X′(t, ω)}) := P(X(t) = X′(t)) = 1 for all t ∈ T.

These random processes are called indistinguishable if

P({ω ∈ Ω : X(t, ω) = X′(t, ω) for all t ∈ T}) =: P(X(t) = X′(t) for all t ∈ T) = 1,
provided the set inside the probability is measurable. Let X, X′ be versions of each other with right (left) continuous paths; then they are indistinguishable. A random process is called measurable, continuous, or locally Hölder continuous if the paths of X are measurable, continuous, or locally Hölder continuous, respectively. A function f : R → X is called locally Hölder continuous if for every T ∈ R there exists a neighborhood of T such that f has a finite Hölder norm on this neighborhood. Then we can conclude that the Hölder norm on any compact interval is bounded. In particular, we define the β-Hölder norm (β ∈ (0, 1)) for an interval [T_1, T_2] as follows:

‖f‖_{β,T_1,T_2} := ‖f‖_{∞,T_1,T_2} + |||f|||_{β,T_1,T_2} = ‖f‖_∞ + |||f|||_β,

|||f|||_{β,T_1,T_2} := |||f|||_β := sup_{T_1 ≤ p < q ≤ T_2} ‖f(p) − f(q)‖ / |q − p|^β,

and ‖f‖_{∞,T_1,T_2} denotes the supremum norm.

For a tempered random variable X and every ε > 0 there is a constant C_ε(ω) such that

|X(θ_t ω)| ≤ C_ε(ω) e^{ε|t|} for t ∈ R.
Proof. There exists a constant T = T(ε, ω) > 0 such that log|X(θ_t ω)| ≤ ε|t| for |t| ≥ T. On the other hand,

c_ε(ω) := sup_{t∈[−T,T]} log|X(θ_t ω)| < ∞.

Taking now the exponential of log|X(θ_t ω)|, we obtain the conclusion with C_ε(ω) = exp(c_ε(ω)).

Remark 3.1.8. Let X be a random variable. Suppose that Y(ω) = sup_{r∈[0,1]} |X(θ_r ω)| is a random variable. Then X is tempered if and only if Y is tempered.
The main example of an MDS for our purposes is the Brownian motion (C_0(R, E), B(C_0(R, E)), P, θ), where the probability space (C_0(R, E), B(C_0(R, E)), P) has been introduced in the last section. Here θ is the Wiener shift:

θ_t ω(·) = ω(· + t) − ω(t),  ω ∈ C_0(R, E) = Ω.   (3.1.3)
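The Wiener shift is easy to realize on sampled paths. The following minimal Python sketch (all step sizes and horizons are illustrative assumptions) builds a two-sided Brownian path on a grid and checks the flow property θ_{t_1} ∘ θ_{t_2} = θ_{t_1+t_2} at the grid points.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 4000                       # illustrative grid step and half-width
# Two-sided Brownian path omega on the grid {-n*dt, ..., n*dt} with omega(0) = 0.
inc = rng.normal(0.0, np.sqrt(dt), size=2 * n)
omega = np.zeros(2 * n + 1)
omega[n + 1:] = np.cumsum(inc[n:])                      # path for positive times
omega[:n] = -np.cumsum(inc[:n][::-1])[::-1]             # path for negative times

def wiener_shift(path, k):
    """Discrete Wiener shift by t = k*dt: (theta_t omega)(s) = omega(s + t) - omega(t)."""
    return path[k:] - path[k]

k1, k2 = 150, 250
# Flow property theta_{t1} o theta_{t2} = theta_{t1 + t2} on the common part of the grid:
lhs = wiener_shift(wiener_shift(omega, k2), k1)
rhs = wiener_shift(omega, k1 + k2)
print(np.allclose(lhs, rhs), wiener_shift(omega, k1)[0])   # True, and the shifted path starts at 0
```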
Note that θ is a measurable flow (see Arnold [4], Appendix). The stationarity of P follows from the fact that a Brownian motion has stationary increments. This MDS is ergodic (see Boxler [16]).

Let (F_t)_{t∈R} be a family of sub-σ-algebras of F such that F_{t_1} ⊂ F_{t_2} for t_1 < t_2. This family is called a filtration. A random process t ↦ Y(t) is called adapted if for all t the random variable Y(t) is F_t-measurable. An MDS θ is called filtered with respect to a filtration (F_t)_{t∈R} if

θ_u^{−1} F_t = θ_{−u} F_t = F_{t+u} for t, u ∈ R.

For the Brownian motion we can define

F_t = σ{ω(u) : u ≤ t},   (3.1.4)
then the MDS of a Brownian motion defines a filtered MDS (see Arnold [4, p. 546]).

Lemma 3.1.9. Let ω be the canonical Brownian motion introduced above. Then the random variable

X(ω) = sup_{t∈[0,1]} ‖ω(t)‖_E

is sublinearly growing:

lim_{t→±∞} X(θ_t ω)/|t| = 0

on a θ-invariant set of full measure.

Proof. Similar to the condition in (3.1.2) ensuring temperedness, we have to show that

E sup_{τ∈[0,1]} X(θ_τ ω) < ∞

(see Arnold [4, p. 165]). But

sup_{τ∈[0,1]} X(θ_τ ω) ≤ 2 sup_{s∈[0,2]} ‖ω(s)‖_E,
where the right-hand side has a finite expectation by Kunita [101, Theorem 1.4.1]. This statement remains true if we replace the interval [0, 1] by any other compact interval.

Lemma 3.1.10. Let ω be the canonical Brownian motion. The random process

Y(t, ω) = sup_{s∈[0,t]} ‖ω(s)‖_E

is subexponentially growing for t → ±∞ on a θ-invariant set of full measure.

Proof. We consider only the case t → +∞. The other case works similarly. We have

ω(t) = ω(t) − ω(⌊t⌋) + ω(⌊t⌋) − ω(⌊t⌋ − 1) + ω(⌊t⌋ − 1) ∓ ··· − ω(0)

and hence

sup_{s∈[0,t]} ‖ω(s)‖_E ≤ ∑_{i=0}^{⌊t⌋} sup_{τ∈[0,1]} ‖θ_i ω(τ)‖_E.
Here ⌊·⌋ is the truncation operator. According to Lemma 3.1.9 the terms under the sum are sublinearly growing, and hence for 0 < ε < 1 these terms are bounded by εi for i > T(ω, ε), so that Y has an asymptotic growth bounded by |t|².

Now we generalize the notion of a dynamical system introduced in Chap. 1 to the case where noise influences the dynamics. The noise is given by an MDS (Ω, F, P, θ). For our applications it will be sufficient to assume that the state space X is Polish.

Definition 3.1.11. Let X be a Polish space. A mapping
φ : R_+ × Ω × X → X,

which is B(R_+) ⊗ F ⊗ B(X), B(X)-measurable, is called a random dynamical system (RDS) if the cocycle property holds:

φ(t, θ_τ ω, ·) ∘ φ(τ, ω, ·) = φ(t + τ, ω, ·) for all t, τ ∈ R_+, ω ∈ Ω,
φ(0, ω, ·) = id_X for all ω ∈ Ω.

If the mapping x ↦ φ(t, ω, x) is continuous for all ω ∈ Ω and t ≥ 0, then we call the RDS continuous.

The cocycle property generalizes the semigroup property, which is easily seen when we delete all ω's in the above formula. In probability theory one often finds the phrase almost surely. We emphasize that this phrase is not allowed in the definition of an RDS when the exceptional sets related to almost surely depend on the parameters x, t, τ. If the above relation holds only almost surely, where the exceptional set for the cocycle relation depends on τ, then we call this object a crude
RDS. Here, we do not deal with crude RDS. Suppose now that the cocycle property holds only for ω ∈ Ω′ ∈ F, where Ω′ has full measure and is θ-invariant. Then we can construct a new MDS (Ω′, F ∩ Ω′, P(· ∩ Ω′)). Note that θ restricted to R × Ω′ is still measurable with respect to the new σ-algebra B(R) ⊗ F ∩ (R × Ω′) (see also the more general situation in Remark 3.1.6). If P is θ-ergodic, this remains true for the restrictions. Often, we make this restriction automatically without comment and use the old symbol for the new MDS.
3.1.2 Random Attractors

The term attractor plays an important rôle in the theory of dynamical systems. Here, we give an appropriate definition of an attractor when the system is influenced by random excitations, i.e., the system is an RDS. For basic results about this object we refer to Crauel/Flandoli [63], Flandoli/Schmalfuss [75], Schmalfuss [144], [145] or Gess [81]. A random attractor will be a random set. We start by mentioning some basic properties of random sets. Recall that (X, d_X) is a Polish space.

Definition 3.1.12. Consider a multifunction D : Ω ∋ ω ↦ D(ω) ∈ P \ {∅}, where P denotes the power set of X. Suppose that {ω ∈ Ω : O ∩ D(ω) ≠ ∅} ∈ F for all open sets O ⊂ X; then D is called a random set. This random set is called closed if all image elements D(ω) are closed sets.

For a multifunction D with closed and nonempty image elements we have the following criterion to decide whether D is a random set.

Lemma 3.1.13. A multifunction D with closed image elements is a closed random set if and only if
ω ↦ inf_{x∈D(ω)} d_X(y, x) =: d_X(y, D(ω))
X
{Zn (ω )} .
n∈N
190
3 Stochastic Synchronization of Random Pullback Attractors
In particular, there then exists a random variable Z in X such that Z(ω ) ∈ D(ω ) for all ω ∈ Ω . A random attractor will attract particular random sets. The collection of these sets is called a universe of attraction. We refer to Flandoli/Schmalfuss [75] and Schmalfuss [147]. Here, we only consider one particular universe consisting of the tempered closed random sets. A closed random set D is called tempered if the random variable
ω → sup dX (x, 0) x∈D(ω )
is a tempered random variable. The measurability of the supremum follows by Theorem 3.1.14 (ii). In particular, we apply this definition when X is a separable Banach space such that dX (x, 0) = x. We will denote the system of tempered closed random sets by D. Now we are in a position to define the term random attractor. Definition 3.1.15. Let D be the universe of tempered closed random sets. An element A ∈ D is called random attractor for the RDS φ if A(ω ) is compact. In addition, A has the property:
φ (t, ω , A(ω )) = A(θt ω ),
for all t ≥ 0 and ω ∈ Ω ,
(3.1.5)
and A is attracting with respect to the convergence in probability: X
lim P({ω ∈ Ω : dX (φ (t, ω , D(ω )) , A(θt ω )) > ε }) = 0 for all ε > 0
t→∞
where dX (A, B) := sup inf dX (a, b) = sup dX (a, B) a∈A b∈B
a∈A
which is nothing but the Hausdorff seminorm, see Ochs [124]. The usual way to prove the existence of a random attractor is to show the existence of a random pullback attractor. Pullback attractors are defined for more general systems than RDS. But here we restrict ourselves to this special kind of a nonautonomous dynamical system. Let φ be a cocycle representing this nonautonomous dynamical system with respect to the system of nonautonomous perturbations (Ω , θ ). A multifunction A = (A(ω ))ω ∈Ω such that A(ω ) is compact and nonempty is called a pullback attractor of the nonautonomous dynamical system φ if we have (3.1.5) and the pullback convergence property lim dX (φ (t, θ−t ω , D(θ−t ω )), A(ω )) = 0
t→∞
for all ω ∈ Ω , where D is an element of a given nonautonomous universe D, see Schmalfuss [147].
3.1 Basic Stochastics
191
An MDS (Ω , F , P, θ ) is a nonautonomous system of perturbations with a measurable and stationary structure. Now, it is easily seen that for D the system of random tempered sets we have X
P lim dX (φ (t, θ−t ω , D(θ−t ω )), A(ω )) = P lim dX (φ (t, θ−t ω , D(θ−t ω )) , A(ω )) t→∞
t→∞
X
= P lim dX (φ (t, ω , D(ω )) , A(θt ω )) = P lim dX (φ (t, ω , D(ω )), A(θt ω )) = 0 t→∞
t→∞
by the θ invariance of P. We consider here the closure of φ (t, ω , D(ω )) to have a closed random set, which follows by Theorem 3.1.14 (ii). In addition we have X
dX (φ (t, ω , D(ω )) , A(θt ω )) = dX (φ (t, ω , D(ω )), A(θt ω )) and the right-hand side term is measurable, deleting the ω ’s in the above definition and replacing the RDS φ by an (autonomous) dynamical system. Then, the definition of a pullback attractor applied to autonomous dynamical systems is the global attractor. For D we can choose the universe of attraction of ω independent bounded sets. In the following, we give criteria for the existence of random attractors. We consider here only the universe D consisting of tempered closed random sets (see above). We need the following definitions: Definition 3.1.16. let B ∈ D. We call B compact if B(ω ) = 0/ is compact for ω ∈ Ω . B is called forward (positively) invariant for the RDS φ if
φ (t, ω , B(ω )) ⊂ B(θt ω ). If we have
φ (t, ω , B(ω )) = B(θt ω ). then B is called invariant for φ . Suppose that we have a B ∈ D such that for all D ∈ D we have lim dX (φ (t, θ−t ω , D(θ−t ω )), B(ω )) = 0,
t→∞
for all ω ∈ Ω .
Then, B is called pullback attracting for the RDS φ . If for all ω ∈ Ω and D ∈ D there exists a tD (ω ) such that
φ (t, θ−t ω , D(θ−t ω )) ⊂ B(ω ) for all t ≥ tD (ω ) then B is called pullback absorbing for the RDS φ . A random pullback attractor is a compact invariant pullback attracting set in D for the RDS φ . Straightforwardly, if B is pullback absorbing, then B is pullback attracting.
Now we are in a position to formulate conditions for the existence of a random attractor. For the following theorem, we refer to Keller/Schmalfuss [99]. Theorem 3.1.17. Let D be the universe of tempered closed random sets and suppose that the continuous RDS φ has a compact pullback attracting set C in D. Then, φ has a unique pullback attractor A in D, which is a random attractor. A is given by A(ω ) =
X
φ (t, θ−t ω ,C(θ−t ω )) .
s>0 t≥s
Proof. 1. Consider the pullback omega limit set of D ∈ D:
ΓD (ω ) :=
τ ≥0 t≥τ
X
φ (t, θ−t ω , D(θ−t ω )) .
Before we prove the existence of a random pullback attractor we show that ΓD (ω ) is nonempty and compact, and that ΓD (ω ) ⊂ C(ω ) holds for all D ∈ D. First of all, we show that ΓD (ω ) is nonempty. Let sequences tn → ∞ and bn ∈ D(θ−tn ω ) be given. Thus, by the attracting property, there exists a sequence cn ∈ C(ω ) such that lim dX (φ (tn , θ−tn ω , bn ), cn ) = 0.
tn →∞
(3.1.6)
Since C(ω ) is by assumption compact, there exists a converging subsequence cnk → c ∈ C(ω ). Hence, φ (tnk , θ−tnk ω , bnk ) also converges to c ∈ ΓD (ω ). Now we show that ΓD (ω ) ⊂ C(ω ). Since ΓD (ω ) is nonempty, there exist a y ∈ ΓD (ω ) and a sequence tn → ∞ and bn ∈ D(θ−tn ω ) such that y = lim φ (tn , θ−tn ω , bn ). n→∞
Then, by the pullback attracting property and the compactness of C(ω ), it follows that y ∈ C(ω ). Consequently, we have proved ΓD (ω ) ⊂ C(ω ). Since ΓD (ω ) ⊂ C(ω ) is a subset of a compact set and it is closed because it is an intersection of closed sets, ΓD (ω ) is compact itself. 2. In this part we prove that A(ω ) := ΓC (ω ) is a pullback attractor of φ . Being an omega-limit set, A(ω ) is nonempty, compact, and A(ω ) ⊂ C(ω ). The last inclusion implies in particular that A ∈ D. To prove the invariance of A(ω ), we first show A(θτ ω ) ⊂ φ (τ , ω , A(ω )) for a fixed τ > 0. For this purpose, we take y ∈ A(θτ ω ). Set ω˜ = θτ ω . Since A(ω ) is an omega-limit set, there exist sequences tn → ∞ with tn > τ and xn ∈ C(θ−tn ω˜ ) with limtn →∞ φ (tn , θ−tn ω˜ , xn ) = y. Setting t˜n = tn − τ , we obtain by the cocycle property yn := φ (tn , θ−tn ω˜ , xn ) = φ (τ , ω , φ (t˜n , θ−t˜n ω , xn )),
where xn ∈ C(θ−t˜n ω ). Denoting zn := φ (t˜n , θ−t˜n ω , xn ) we know that yn = φ (τ , ω , zn ). The attracting property and equation (3.1.6) imply the existence of a subsequence of (zn ) converging to a c ∈ C(ω ). We also conclude that c ∈ ΓC (ω ) = A(ω ). Using the continuity of the cocycle φ and the cocycle property, we deduce y = φ (τ , ω , c); hence, we obtain A(θτ ω ) ⊂ φ (τ , ω , A(ω )).
(3.1.7)
Note that this inclusion holds for every omega-limit set. We now prove the forward invariance, i.e., φ (τ , ω , A(ω )) ⊂ A(θτ ω ) for a fixed τ > 0. For this purpose we take a y ∈ φ (τ , ω , A(ω )) and a sequence yn ∈ φ (τ , ω , A(ω )) with limn→∞ yn = y. Since A(ω ) = A(θtn θ−tn ω ) for all n ≥ 0, we obtain using equation (3.1.7) and setting ω˜ = θτ ω and t˜n = tn + τ yn ∈ φ (τ , ω , A(ω )) ⊂ φ (τ , ω , φ (tn , θ−tn ω , A(θ−tn ω ))) = φ (t˜n , θ−t˜n ω˜ , A(θ−t˜n ω˜ )))
for all n ≥ 0. Since A(θ−t˜n ω˜ ) ⊂ C(θ−t˜n ω˜ ) there exists a sequence xn ∈ C(θ−t˜n ω˜ ) such that yn = φ (t˜n , θ−t˜n ω˜ , xn ). Similar to the calculations above we have that y ∈ ΓC (ω˜ ) = A(θτ ω ). This completes the proof of the invariance of the set A(ω ). The measurability of the attractor follows from the measurability of the omegalimit set of C. The latter one can be proved similar as in Remark 3.6 in Flandoli/Schmalfuss [75]. 3. Before we prove the attraction property, we first show ΓD (ω ) ⊂ A(ω ) for all D ∈ D. The invariance of the omega-limit sets and ΓD (ω ) ⊂ C(ω ) imply that
ΓD (ω ) = φ (τ , θ−τ ω , ΓD (θ−τ ω )) ⊂ φ (τ , θ−τ ω , C(θ−τ ω )) for all τ ≥ 0. Therefore, we have
ΓD (ω ) ⊂
φ (τ , θ−τ ω , C(θ−τ ω )) ⊂ ΓC (ω ) = A(ω ).
τ ≥0
We now prove the attraction property by contradiction. We suppose that there exists a random set D ∈ D such that for some ε > 0 lim sup dX (φ (t, θ−t ω , D(θ−t ω )), A(ω )) ≥ 2 ε . t→∞
Then there exist sequences tn → ∞ and xn ∈ D(θ−tn ω ) such that dX (φ (tn , θ−tn ω , xn ), A(ω )) ≥ ε .
(3.1.8)
Using equation (3.1.6), there exists a converging subsequence of yn = φ (tn , θ−tn ω , xn ) (again denoted by yn ) such that lim yn = c ∈ C(ω )
tn →∞
and c ∈ ΓD (ω ). This contradicts equation (3.1.8), because we have already shown that ΓD (ω ) ⊂ A(ω ). Thus, the attraction property of A(ω ) is proved. 4. To complete the proof we show the uniqueness of the attractor. Assume that there exist two attractors A1 and A2 . Since attractors are invariant and A1 ∈ D the attracting property implies dX (A1 (ω ), A2 (ω )) = lim dX (φ (t, θ−t ω , A1 (θ−t ω )), A2 (ω )) = 0. t→∞
Therefore, A1 (ω ) ⊂ A2 (ω ) for all ω ∈ Ω . The converse inclusion follows in a similar manner. Corollary 3.1.18. Suppose that we have the assumptions of the last theorem if B is a compact pullback absorbing set for D. Then, φ has a unique random pullback attractor A in D, which is a random attractor. We now describe the relation between two RDS φ and ψ . Consider a mapping T : Ω ×X → X which is F ⊗ B(X), B(X) measurable. Suppose that the partial mapping x → T (ω , x) is a bijection on X. For ω fixed let T −1 (ω , x) be the inverse mapping. Assume that (ω , x) → T −1 (ω , x) is F ⊗ B(X), B(X) measurable too. Definition 3.1.19. Given two RDS φ , ψ with state space X over the same MDS, φ , ψ are called conjugated if there exists a T given as above such that
ψ (t, ω , x) = T (θt ω , ·) ◦ φ (t, ω , ·) ◦ T −1 (ω , x)
(3.1.9)
for all t ≥ 0, ω ∈ Ω and x ∈ X. In particular, given an RDS φ and a mapping T with the above properties, we can define by (3.1.9) a new RDS ψ . If T, T −1 are Carath´eodory maps in X and φ is continuous, so is ψ . Now, we deal with pullback attractors and random attractors for conjugated RDS. Theorem 3.1.20. Suppose that the mappings T , T −1 defined above are Carath´eodory maps and let φ be an RDS with pullback attractor A. Suppose that for all D ∈ D we have that (T (ω , D(ω )))ω ∈Ω ,
(T −1 (ω , D(ω )))ω ∈Ω ∈ D.
Then, A (ω ) = T (ω , A(ω )) is a pullback attractor and hence a random attractor for ψ given in (3.1.9). Proof. Straightforwardly, A is in D and invariant for ψ and compact. Suppose that lim dX (ψ (t, θ−t ω , D(θ−t ω )), A (ω )) = 0
t→∞
for some D ∈ D, ω ∈ Ω . Then there exist a sequence (tn )n∈N of positive numbers tending to infinite and a sequence ( fn )n∈N , where fn ∈ ψ (tn , θ−tn ω , D(θ−tn ω )) such that lim sup dX ( fn , A (ω )) > 0. n→∞
In particular, we can assume that we have a subsequence also denoted by ( fn ) such that lim dX ( fn , A (ω )) ∈ R+ \ {0} ∪ {+∞}.
n→∞
Define for an ω ∈ Ω fn := T −1 (ω , fn ) ∈ φ (tn , θ−tn ω , T −1 (θ−tn ω , D(θ−tn ω ))) and an by dX ( fn , an ) = min dX ( fn , b), b∈A(ω )
which exists by the compactness of A(ω ). By selecting a subsequence (an ) again, we can assume that this subsequence converges to a ∈ A(ω ) and hence lim dX ( fn , a) = 0.
n →∞
Then, by continuity of T (ω , ·), lim dX (T (ω , fn ), T (ω , a)) = lim dX ( fn , a ) = 0,
n →∞
n →∞
a := T (ω , a) ∈ A (ω ),
which is a contradiction. Definition 3.1.21. A random variable Z ∈ X is called stationary point or random fixed point for the RDS φ if
φ (t, ω , Z(ω )) = Z(θt ω ) for t ≥ 0, ω ∈ Ω .
Often, these stationary points present a special form of random attractors. Indeed, {Z} can be considered a singleton invariant random set. RDS are generated by the solution of random/stochastic differential equations with solution v(t, ω ). Then, the random process (t, ω ) → Z(θt ω ) = v(t, ω ) represents a stationary solution of the differential equation with initial state v(0, ω ) = Z(ω ). Example 3.1.22. Consider the one-dimensional random differential equation dv + a(θt ω )v = f (θt ω ), dt
v(0) = v0 ∈ R
(3.1.10)
where a and f are random variables and θ forms together with a probability space an ergodic MDS. Then, the solution v generates an RDS. In particular, we have Theorem 3.1.23. Suppose that Ea > 0 and f is tempered. Then, the RDS φ generated by v has a tempered and pullback (and forward) exponentially attracting stationary point Z. If f is positive, then so is Z. Proof. Consider Z(ω ) =
0 −∞
e−
0 r
a(θq ω )dq
f (θr ω )dr,
which exists by the ergodic theorem and the fact that f is tempered. Indeed, there exists a θ invariant set Ω¯ ∈ F of full measure such that −
0 r
a(θq ω )dq ∼ rEa,
log+ | f (θr ω )| ≤
Ea |r| 2
if −r > 0 is chosen sufficiently large. Hence, Z(ω ) exists for ω ∈ Ω¯ . Note that Z is tempered. To see this we consider for a positive δ and t < 0 e−δ |t| |Z(θt ω )| =eδ t |Z(θt ω )| ≤
0
eδ t exp Ear −
0
(a(θq ω ) − Ea)dq + −∞ r+t − Ea)dq + log+ | f (θr+t ω )| dr.
0 t
(a(θq ω )
Again, by the ergodic theorem and the property of a tempered random variable we have the following bound of the last expression: for t ≤ T (ω , ε0 ) < 0 and ε < ε0 we have eδ t
0
−∞
exp(Ear + 3ε |t + r|)dr.
If now 3ε0 < δ and 3ε0 < Ea then this expression converges to zero. By the variation of constants formula it is easy to check that t → Z(θt ω )
solves on Ω¯ (3.1.10) with v0 = Z(ω ). Let v(t, ω ) be the solution of (3.1.10) with initial condition v0 . Then we have |Z(θt ω ) − v(t, ω )| ≤ |Z(ω ) − v0 |e−
t
0 a(θr ω )dr
where the right-hand side tends exponentially fast to zero. Replacing ω by θ−t ω we have |Z(ω ) − v(t, θ−t ω )| ≤ |Z(θ−t ω ) − v0 |e−
t
0 a(θr−t ω )dr
= |Z(θ−t ω ) − v0 |e−
0
−t a(θr ω )dr
,
which converges to zero for t → ∞ exponentially fast by the temperedness of |Z|. Remark 3.1.24. Assume that v0 is a tempered random variable, then Z is still pullback attracting. The same is true when we consider v0 ∈ D(θ−t ω ), where D is a tempered random set: lim
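The pullback convergence in Theorem 3.1.23 is easy to observe numerically. The following Python sketch is a toy illustration only: the constant coefficient a ≡ 1 (so Ea > 0) and the frozen noisy forcing sample standing in for f(θ_tω) are assumptions made just for the experiment. It integrates (3.1.10) with the Euler scheme from time −T up to 0 and shows that the value at time 0 forgets the initial condition as T grows, which is the pullback attraction towards the stationary point Z(ω).

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.001, 30.0
n = int(T / dt)
t = -T + dt * np.arange(n + 1)                      # grid on [-T, 0]

a = 1.0                                             # assumed coefficient with Ea > 0
f = np.sin(t) + 0.5 * rng.standard_normal(n + 1)    # one frozen sample of the forcing

def solve_pullback(v0, start):
    """Euler scheme for dv/dt = -a v + f(theta_t omega) from time t[start] up to 0."""
    v = v0
    for i in range(start, n):
        v += dt * (-a * v + f[i])
    return v

# Start earlier and earlier with very different initial data and compare at time 0.
for back in (5.0, 10.0, 20.0, 30.0):
    start = n - int(back / dt)
    print(back, solve_pullback(-50.0, start), solve_pullback(+50.0, start))
# The two columns agree up to 100*exp(-a*back): the value at time 0
# becomes independent of v0 and approximates the stationary point Z(omega).
```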
sup
t→∞ v ∈D(θ ω ) −t 0
|Z(ω ) − v(t, θ−t ω , v0 )| = 0
with exponential speed. This property of a one-dimensional RDS will be frequently used in the following of this chapter. Finally, we give conditions on the upper semicontinuity of attractors on parameters. Theorem 3.1.25. Let (Λ , dΛ ) be a complete metric space. For every λ ∈ Λ there exists a continuous RDS φ λ with the same state Polish state space X. Suppose that there exists a random set B ∈ D that is compact and pullback absorbing for all φ λ , λ ∈ Λ such that every φ λ has the random pullback attractor Aλ . Suppose that for yn → y0 and λn → λ0 we have a subsequence n¯ such that lim φ λn¯ (t, ω , yn¯ ) = φ λ0 (t, ω , y0 )
n→∞ ¯
for all t ∈ R+ , ω ∈ Ω . Then, lim dX (Aλ (ω ), Aλ0 (ω )) = 0 for all ω ∈ Ω .
λ →λ0
(3.1.11)
Proof. By the construction of a pullback attractor in Theorem 3.1.17 we have Aλ (ω ) ⊂ B(ω ). Suppose that we do not have the convergence of (3.1.11). Then there exist ε > 0, a sequence (λn , xn )
with
lim dΛ (λn , λ0 ) = 0,
n→∞
xn ∈ Aλn (ω )
198
3 Stochastic Synchronization of Random Pullback Attractors
such that dX (xn , Aλ0 (ω )) > ε . We choose a sufficiently large t:
ε dX (φ λ0 (t, θ−t ω , B(θ−t ω )), Aλ0 (ω )) < . 2 By Aλn (ω ) = φ λn (t, θ−t ω , Aλn (θ−t ω )) ⊂ φ λn (t, θ−t ω , B(θ−t ω )) we have elements xtn ∈ B(θ−t ω ):
φ λn (t, θ−t ω , xtn ) = xn . Hence, by the compactness of B(θ−t ω ), we have a subsequence (n ) and an element xt0 ∈ B(θ−t ω ) with lim xtn = xt0 n→∞
and thus by the assumptions for another subsequence (n) ¯ ⊂ (n ) we obtain lim φ λn¯ (t, θ−t ω , xtn¯ ) = φ λ0 (t, θ−t ω , xt0 )
n →∞
and hence
ε dX (φ λ0 (t, θ−t ω , xt0 ), Aλ0 (ω )) > . 2 This is a contradiction.
3.1.3 Stochastic Convolution In this section we are going to find stationary solutions of the stochastic evolution equation dZ + AZdt = Bd ω ,
Z(0) = x ∈ X,
(3.1.12)
where X is a separable Hilbert space. ω is a Brownian motion in a separable Hilbert space with covariance K of finite trace. Suppose that A is the generator of a strongly continuous semigroup S. Then we have constants M ≥ 1 and λ such that S(t) = S(t)L(X) ≤ Meλ t
for
t ≥ 0.
If λ can be chosen negative, we call S an exponentially stable semigroup. The meaning of a stationary solution to (3.1.12) is that there is a random variable Z ∈ X such that the random process (t, ω ) → Z(θt ω )
3.1 Basic Stochastics
199
solves (3.1.12). In other words, Z is a stationary point for the RDS generated by (3.1.12). Here, θ is the Wiener shift introduced in (3.1.3). In particular, we assume that ω is a canonical Brownian motion with a path in C0 (R, E), where E is some separable Hilbert space. Let B ∈ L(E, X). We also know that Bω is a Brownian motion in X with covariance BKB∗ , which has finite trace. Similar to Da Prato/Zabczyk [65, Chapter 5] let us assume for simplicity that E ⊂ X and B is the embedding operator, which is assumed to be continuous. We do not write this operator. Lemma 3.1.26. Let A be the generator of a strongly continuous semigroup S. Let ω be a canonical Brownian motion in E = D(A) ⊂ X with covariance of finite trace with respect to E. Then, the stochastic integral t 0
S(t − r)d ω (r)
(3.1.13)
has a continuous version in X given by
ω (t) − A
t 0
S(t − r)ω (r)dr.
(3.1.14)
Proof. Equation (3.1.14) can be motivated by a finite-dimensional integration by parts formula for stochastic integrals (see Øksendal [125, Theorem 4.1.5]). We can replace the strongly continuous semigroup S by the uniformly continuous semigroup Sε of the Yoshida approximation Aε of A and apply for this semigroup the integration by parts formula. Later, we consider the limit for ε → 0 (see Da Prato and Zabczyk [65, Lemma 5.13]). We use the same argument as in Pazy [127, Section 4.2] and the fact that t
A 0
S(t − s)ω (s)ds =
t 0
S(t − s)Aω (s)ds,
which is well defined because ω is continuous in D(A). Now we are in a position to introduce a stationary solution to (3.1.12). Theorem 3.1.27. In addition to the assumption of Lemma 3.1.26, we suppose that S is exponentially stable. Then there exists a random variable Z in X such that the continuous random process (t, ω ) → Z(θt ω ) ∈ X solves (3.1.12). The mapping t → Z(θt ω ) is continuous. The random variable Z is tempered. The random variable Z is sometimes called a stationary Ornstein–Uhlenbeck process.
200
3 Stochastic Synchronization of Random Pullback Attractors
Proof. We define Z(ω ) by −A
0 −∞
S(−r)ω (r)dr
(3.1.15)
for the full set Ω of all ω such that there exists a C λ (ω ) 2
λ
Aω (t) ≤ C λ (ω )e 2 |t| 2
for all t ∈ R
(see Lemma 3.1.9). For ω outside this θ invariant set, we set Z(ω ) ≡ 0. Now, we have on Ω and for t ≥ 0 S(t)Z(ω ) + ω (t) − =− =−
t −∞ 0 −∞
t 0
S(t − r)Aω (r)dr
S(t − r)Aω (r)dr + ω (t) = − S(−r)Aθt ω (r)dr −
0 −∞
0 −∞
S(−r)Aω (r + t)dr + ω (t)
S(−r)Aω (t)dr + ω (t) = Z(θt ω ).
Indeed, by Pazy [127, (I.2.4)] 0 −∞
S(−r)Aω (t)dr = lim
0
s→∞ −s
S(−r)Aω (t)dr = lim
s
= lim A s→∞
0
s
s→∞ 0
S(s − r)Aω (t)dr
S(s − r)ω (t)dr = − lim S(s)ω (t) + ω (t) = ω (t). s→∞
Now, the continuity of t → Z(θt ω ) follows easily by the continuity of t → Aθt ω (r) for r ∈ R− and the existence of a majorant with respect to a sequence tn → t mt (r, ω ) := 2Meλ r sup (Aω (r + t + τ ) + Aω (t + τ )), τ ∈[−ε ,ε ]
r≤0
for S(−r)Aθt ω (r), see Lemma 3.1.10. The temperedness follow similarly, excluding an infinite exponential growth rate. Since Z(θt ω ) = ω (t) − A
t −∞
S(t − r)ω (r)dr
we see that Z(θt ω ) is Ft measurable where these σ algebras have been introduced in (3.1.4). Hence, (t, ω ) → Z(θt ω ) is a (Ft )t∈R adapted process. Sufficient for the existence of Z is that ω (t) ∈ D(A). In particular, when A generates the wave semigroup some problems appear. The assumption that we have a
3.1 Basic Stochastics
201
Brownian motion in D(A) is sometimes very strong for the definition of a stationary process Z(θt ω ). In addition, it is possible to define an Ito integral t s
S(t − r)d ω (r)
for −∞ < s < t and ω a two-sided trace class Brownian motion on some separable Hilbert space X. For the existence of this stochastic integral, we refer the reader to Da Prato/Zabczyk [65, Theorem 5.2]. However, Ito integrals are only defined almost surely where the exceptional set may depend on the time parameters. Let S be a strongly continuous exponentially stable semigroup on the separable Hilbert space X and ω a two-sided trace class Brownian motion on with values in X such that for the covariance K of the Brownian motion we have trX K < ∞. Then the random process in X Yn (t, ω ) =
t
−n
S(t − r)d ω (r),
t ≥ −n ∈ Z+
is well defined on the probability space (Ω , F , P). In particular, by Da Prato/Zabczyk [65, Theorem 5.9 and Remark 5.10] we can consider a continuous version for Yn . It also makes sense to define in the L2 sense the following integral t
lim
n→∞ −n
S(t − r)d ω (r) = Y (t, ω ) :=
t
−∞
S(t − r)d ω (r).
For those ω s for which this limit expression is not well defined, we set Y (t, ω ) ≡ 0 This random process has a continuous version in X. We can consider a continuous version of that process. Indeed S(t + n)
−n −∞
S(−n − r)d ω +Yn (t, ω )
is a continuous process on (−n, ∞). Since this is true for every n ∈ Z, we obtain a continuous version of Y . In addition, for every t ∈ R we have Y (t, ω ) =
0 −∞
S(t − (r + t))d ω (r + t) = Y (0, θt ω )
almost surely.
(3.1.16)
The following theorem shows that for any continuous Y satisfying (3.1.16), there exists a random variable Z such that t → Y (t, ω ) and t → Z(θt ω ) are indistinguishable. The random process (t, ω ) → Z(θt ω ) is called a perfect version of Y . This theorem can be found in Lederer [106, Section 2.3.2]. Theorem 3.1.28. Let Y be a continuous process on R with values in a Polish space H defined on a metric dynamical system (Ω , F , P, θ ). We assume that Y (t, ω ) = Y (0, θt ω ) almost surely for any t ∈ R. Then there exists a random process Zˆ in H
202
3 Stochastic Synchronization of Random Pullback Attractors
defined by a random variable Z such that ˆ ω ) = Z(θt ω ), Z(t,
for all t ∈ R,
ω ∈Ω
(3.1.17)
and ˆ = Y (·)) = 1. P(Z(·) In particular, Z and Zˆ are indistinguishable. Proof. 1. Let ν be a probability measure on B(R) equivalent to the Lebesgue measure. We define the following sets:
Ω0 ={ω ∈ Ω : there exists an N0 ∈ B(R) with ν (N0 ) = 0, Y (t, ω ) = Y (0, θt ω ) for t ∈ R \ N0 }, Ω1 ={ω ∈ Ω : there exists an N1 ∈ B(R) with ν (N1 ) = 0, θτ ω ∈ Ω0 for τ ∈ R \ N1 }. We have that Ω0 ∈ F with full measure. By the regularity of the paths of Y we obtain A = {(t, ω ) ∈ R × Ω : Y (t, ω ) = Y (0, θt ω )} ∈ B(R) ⊗ F (see Lemma 3.1.2). Now we have for the cross sections Aω = {t : (t, ω ) ∈ A} ∈ B(R) that Eλ (Aω ) =
R Ω
1A (t, ω )dPd λ =
R
P(Y (t, ω ) = Y (0, θt ω ))d λ = 0
and hence λ (Aω ) = ν (Aω ) = 0 almost surely such that P(Ω0 ) = 1. Consider now B = {(t, ω ) ∈ R × Ω : θt ω ∈ Ω0 } ∈ B(R) ⊗ F . To see that P(Ω1 ) = 1, we note for every fixed t P(θt ω ∈ Ω0 ) = P(Y (s, θt ω ) = Y (0, θs θt ω ) for ν − almost every s ∈ R) = 1. (3.1.18) Thus, we obtain Eλ (Bω ) =
R Ω
1B (t, ω )dPd λ =
R
P(θt ω ∈ Ω0 )d λ = 0.
The conclusion follows similar to the last part of the proof.
2. Choose an x0 ∈ H and set for an s ∈ R such that θs ω ∈ Ω0 Y (t − s, θs ω ) : if ω ∈ Ω1 , ˆ Z(t, ω ) := : if ω ∈ Ω1c . x0 We show that this definition is independent of s. Consider ω ∈ Ω1 and s1 , s2 ∈ R such that θsi ω ∈ Ω1 . Then there exist sets Bi ∈ B(R) of full ν measure such that for u ∈ Bi Y (u, θsi ω ) = Y (0, θu+si ω ),
u ∈ Bi .
Consider now any sequence (un )n∈N , un ∈ (B1 + s1 ) ∩ (B2 + s2 ) such that un → t. Then, by the continuity of Y , we can conclude Y (t − s1 , θs1 ω ) = lim Y (un − s1 , θs1 ω ) n→∞
= lim Y (0, θun ω ) = lim Y (un − s2 , θs2 ω ) = Y (t − s2 , θs2 ω ). n→∞
n→∞
We show ˆ ω ) = Z(0, ˆ θt ω ) Z(t, for t ∈ R and ω ∈ Ω . This is trivial for ω ∈ Ω1c . Let now ω ∈ Ω1 . Then, we have sets C1 , C2 of full ν measure such that ˆ ω ) =Y (t − s, θs ω ), Z(t, ˆ θt ω ) =Y (−r, θr+t ω ), Z(0,
s ∈ C1 r ∈ C2 .
Now choose τ ∈ C1 ∩ (C2 + t) such that ˆ ω ) =Y (t − τ , θτ ω ) = Y (−(τ − t), θ(τ −t)+t ω ) = Z(0, ˆ θt ω ). Z(t, 3. By the definition of Zˆ this random process has continuous paths. Zˆ is a version of Y that follows by the following chain of equalities, which holds for every t almost surely: ˆ ω ) = Y (t − s, θs ω ) = Y (0, θt−s+s ω ) = Y (t, ω ) Z(t, such that θs ω ∈ Ω0 . Since Zˆ and Y are continuous processes, we get the last ˆ ω ). statement of the assertions if define Z(ω ) by Z(0, Example 3.1.29. Let A be a positive symmetric operator in a separable Hilbert space H with compact inverse. Then, A has a positive discrete spectrum 0 < λ1 ≤ λ2 ≤ · · ·
of finite multiplicity tending to infinite. The related eigenelements form a complete orthonormal system. Let S be the semigroup on H generated by A. We consider a Brownian motion ω with trace class covariance K. Letting (ki j )i, j∈N be a representation of K is the complete orthonormal basis given by A. We assume trX (Aε K) = ∑ λiε kii < ∞ i
for some positive ε > 0. In particular, we can assume that Aε /2 ω is of trace class where ω has the covariance K. We note that according to Theorem 3.1.28 there exists a random variable Z ∈ D(A1/2 ) defining a stationary solution to (3.1.12). In the basis, we can represent S(t) by (e−λi t (ei ⊗ ei ))i∈N where ei is the eigenelement for λi . Let us consider the Gauß process Y (t, ω ) =
t −∞
S(t − r)d ω (r).
We have for s ≤ t E(Y (t),Y (s))D(A1/2 ) =
s −∞
trX (AS(t − r)KS(s − r))dr
= ∑ e−λi (t−s) i
0 −∞
λi e2λi r kii dr =
1 e−λi (t−s) kii . 2∑ i
and hence for ε ∈ (0, 1) 1 − e−τ τε τ >0
EY (t) −Y (s)2D(A1/2 ) = ∑(1 − e−λi (t−s) )kii ≤ (t − s)ε sup i
∑ λiε kii i
ε
≤c(t − s) . Since Y (t) − Y (s) is a Gauß random variable we can apply (3.1.1) and Theorem 3.1.1. Now we can apply Theorem 3.1.28 to obtain a random variable Z ∈ D(A1/2 ) such that the random process t → Z(θt ω ) provides a continuous version of Y. We can generalize the above consideration. Suppose that trX (A2α −1+ε K) < ∞ for some ε ∈ (0, 1), then Z has a continuous version in D(Aα ) = X α (see Chueshov and Scheutzow [51]). In particular, if K = id is the covariance of a cylindric Brownian motion, then Z has a continuous version in X α if trX A2α −1+ε < ∞ for α ≥ 0. The following theorem provides a similar regularity result as we considered in Example 3.1.29. These results are roughly based on analytical properties of semigroups. This is pointing in the direction of defining stochastic convolution for other noises, for instance, for convolutions driven by a fractional Brownian motion with Hurst parameter H = 1/2.
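A mode-wise simulation makes the construction in Example 3.1.29 concrete. The sketch below is illustrative only: the eigenvalues λ_i = i² and the covariance entries k_ii = i^{−2} are assumptions chosen so that tr(A^ε K) < ∞ for ε < 1/2. It generates the first modes of the stationary process Y(t) = ∫_{−∞}^t S(t−r) dω(r) by exact one-dimensional Ornstein–Uhlenbeck updates and compares the mean squared D(A^{1/2})-norm with the value computed in the example.

```python
import numpy as np

rng = np.random.default_rng(2)
M, dt, steps = 50, 0.01, 1000            # number of modes, step size, horizon (assumed)
lam = np.arange(1, M + 1) ** 2           # model eigenvalues lambda_i = i^2
k = 1.0 / np.arange(1, M + 1) ** 2       # assumed covariance k_ii with finite trace

# Exact one-dimensional Ornstein-Uhlenbeck update for each mode:
# Y_i(t+dt) = e^{-lam_i dt} Y_i(t) + sqrt(k_i (1 - e^{-2 lam_i dt}) / (2 lam_i)) * xi
decay = np.exp(-lam * dt)
noise_std = np.sqrt(k * (1.0 - decay**2) / (2.0 * lam))

Y = rng.normal(0.0, np.sqrt(k / (2.0 * lam)))        # start in the stationary law
norms = np.empty(steps)
for s in range(steps):
    Y = decay * Y + noise_std * rng.standard_normal(M)
    norms[s] = np.sqrt(np.sum(lam * Y**2))           # norm of Y(t) in D(A^{1/2})

print("empirical E||Y||^2 in D(A^{1/2}):", (norms**2).mean(), "theory:", 0.5 * k.sum())
```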
Now, we deal with semigroups S generated by symmetric positive operator A with compact inverse. We can then assume that for β > 0 we have Aβ S(t) ≤
cβ , tβ
t > 0,
(3.1.19)
and S(t2 ) − S(t1 ) ≤ dβ (t2 − t1 )β Aβ S(t1 ),
0 ≤ t1 < t 2 ,
β ∈ (0, 1], (3.1.20)
say, for t ∈ (0, 1] (see Chueshov [34, Section 2.1]). This is fulfilled for the positive generator of a strongly continuous analytic semigroup. Theorem 3.1.30. Consider the MDS (C0 (R, X), B(C0 (R, X)), P, θ ) given by the canonical Brownian motion with covariance K of trace class in X and the strongly continuous exponentially stable semigroup S, which satisfies (3.1.19), (3.1.20). Then, for the random variable Z in X defined in (3.1.15), the continuous random process t → Z(θt ω ) ∈ X is β H¨older continuous for every β < 1/2. In addition, t → Aβ Z(θt ω ) is continuous. The random variable Aβ Z is tempered. Proof. It is enough to investigate the H¨older continuity for positive t. If we would like to check the H¨older continuity on [T1 , T2 ] of a T1 < 0 we simply consider Z(θt ω ) = Z(θt−T1 θT1 ω ). Assuming now that t1 , t2 ∈ [T1 , T2 ], T1 ≥ 0. Then, we have t t2 1 S(t1 − r)ω (r)dr − A S(t2 − r)ω (r)dr Z(θt1 ω ) − Z(θt2 ω ) ≤ A 0 0 0 + ω (t1 ) − ω (t2 ) + S(−r)ω (r)dr (S(t1 ) − S(t2 ))A . −∞
According to Pazy [127, Theorem 4.3.5 (iii)], the first norm on the right-hand side can be estimated by C|t2 − t1 |β |||ω |||β ,0,T2 . An estimate for the second norm is straightforward. The estimate of the last norm gives us 1+β 0 A S(−r) ω (r)dr dβ |t1 − t2 |β −∞ by (3.1.20). We will see the finiteness of the last factor in following part of the proof.
Now, we consider the continuity in D(Aβ ). It is enough to prove continuity for t = 0. We have A1+β
0
∞
−∞
S(−r)(θt ω (r) − ω (r))dr = ∑
−i
i=0 −i−1
A1+β S(−r)(θt ω (r) − ω (r))dr. (3.1.21)
We can now estimate the terms under the sum: 0 1+β A S(−r)( θ ω (r) − ω (r))dr t −1 ≤c
0
−1
(−r)−1−β |||θt ω − ω |||γ ,−1,0 rγ dr ≤ cβ ,γ |||θt ω − ω |||γ ,−1,0
for β < γ < 1/2. Here, we use the γ H¨older continuity of ω : θt ω (r) − ω (r) ≤ ω (t + r) − ω (t) + ω (r) − ω (0) ≤ c(ω )rγ . In a similar manner, we can estimate the other integrals for i = 1, 2, · · · 1+β −i A S(−r)(θt ω (r) − ω (r))dr −1−i 1 1+β =A S(−r + i + 1)(θt ω (r − i − 1) − ω (r − i − 1))dr 0 1 1+β ≤ S(−r + i + 1)(θt θ−i−1 ω (r) − θ−i−1 ω (r))dr A 0 1 1+β + A S(−r + i + 1)(θt ω (−i − 1) − ω (−i − 1))dr 0
−λ i
≤cβ ,γ Me
|||θt θ−i−1 ω − θ−i−1 ω |||γ ,0,1
−λ (i−1)
+ cβ ,γ Me
θt ω (−i − 1) − ω (−i − 1).
Choose a γ ∈ (γ , 1/2). Then, we know that |||θt ω |||γ ,0,1 is bounded when t is in a bounded set and then we have for ε > 0 θt ω (r) − ω (r) − (θt ω (q) − ω (q)) (q − r)γ r 0, which generates an RDS φ on X α . By concatenation, we can see that the mapping φ has
the cocycle property
φ (t + τ , ω , v0 ) =S(t + τ )v0 +
t+τ
=S(t)(S(τ )v0 + +
t+τ τ
0
τ 0
S(t + τ − r)F(v(r, ω ) + Z(θr ω ))dr S(τ − r)F(v(r, ω ) + Z(θr ω ))dr)
S(t + τ − r)F(v(r, ω ) + Z(θr ω ))dr
=S(t)φ (τ , ω , v0 ) +
t 0
S(t − r)F(v(r + τ , ω ) + Z(θr+t ω ))dr
=φ (t, θτ ω , φ (τ , ω , v0 )). In particular, r → v(r + τ ) satisfies (3.1.24) with initial condition φ (τ , ω , v0 ) when we replace ω by θτ ω . Let λ1 > 0 be the smallest eigenvalue of A. Then, we have S(t) ≤ exp(−λ1t). From Chueshov [34, Lemma II.1.1], we know that for 0 ≤ ν < λ1 we have α α α αα λ1 Aα S(t) ≤ + λ1α e−λ1 t = α e−λ1 t + t α e−ν t e(−λ1 +ν )t t t t α α + mν (−λ1 +ν )t ≤ e , t > 0, tα where
mν := sup λ1α t α e−ν t . t>0
In addition, we set mν ,α v(t)α ≤ v0 α e−λ1 t +
= αα t 0
+ mν . Then, we have
mν ,α −(λ1 −ν )(t−r) e (lv(r)α + lZ(θr ω )α + cα )dr. (t − r)α
We apply the singular Gronwall lemma (see Lemma 1.2.23) to w(t) := v(t)α e(λ1 −ν )t . Hence, t
mν ,α (λ1 −ν )r e (lZ(θr ω )α + cα )dr (t − r)α t + l E1− (l (t − r)) v0 α e−ν r α 0 r mν ,α (λ1 −ν )q + e (lZ( θ ω ) + c )dq dr q α α α 0 (r − q)
w(t) ≤v0 α e−ν t +
0
(3.1.25)
where
l = (lmν ,α Γ (1 − α ))1/(1−α ) ,
where l depends increasingly on l. Taking into account that v(t) = e−(λ1 −ν )t w(t) we have to consider several terms from (3.1.25). Since we are interested in pullback absorption, we have to replace ω by θ−t ω . We have for D ∈ D lim
sup
t→∞ v ∈D(θ ω ) −t 0
v0 α e−λ1 t = 0.
Here, D denotes the tempered closed random sets in X α . The expression lim
t mν ,α e−(λ1 −ν )(t−r)
(t − r)α
t→∞ 0
= lim
(lZ(θ−t+r ω )α + cα )dr
0 mν ,α e(λ1 −ν )r
(−r)α
t→∞ −t
(lZ(θr ω )α + cα )dr.
has a limit for all ω contained in a θ invariant set of full measure. This follows again similar to the proof of Theorem 3.1.23 owing to the temperedness of Zα . We denote this limit by R0 (ω ). R0 is a tempered random variable that follows similar to the proof of Theorem 3.1.23. By the asymptotic behavior of E (z) and for z → ∞ for l ≥ 0 sufficiently small e−(λ1 −ν )t l
t 0
E1− α (l (t − r))
=e−(λ1 −ν )t l −(λ1 −ν )t
≤e
0 −t
sup v0 ∈D(θ−t ω )
E1− α (l (−r))
sup v0 ∈D(θ−t ω )
v0 α e−ν r dr sup
v0 ∈D(θ−t ω )
v0 α eν (−t−r) dr
v0 α (E1−α (l t) − E1−α (0))
tends toward zero for t → ∞ uniformly for v0 ∈ D(θ−t ω ) and for all D ∈ D, where D is the universe of attraction of the random closed tempered sets in X α when l is sufficiently small. This follows by the properties if the Mittag-Leffler function (see Henry [92, Lemma 7.1.1]). Now we consider e−(λ1 −ν )t l
t 0
E1− α (l (t − r))
=e−(λ1 −ν )t l =l
0
t 0
r mν ,α e(λ1 −ν )q 0
(r − q)α
(lZ(θ−t+q ω )α + cα )dqdr
mν ,α e(λ1 −ν )q (lZ(θ−t+q ω )α + cα )
e(λ1 −ν )q (lZ(θq ω )α + cα )
t E1−α (l (t − r))
(r − q)α
q
t E1− α (l (t − r))
drdq
α q+t (r − q − t) 0 E1−α (−l r) e(λ1 −ν )q (lZ(θq ω )α + cα ) drdq. =l (r − q)α −t q −t 0
drdq
The last integral has exponential growth for q → −∞ for a sufficiently small rate when l is small. Hence, the last expression converges to a random variable R1 (ω ). R1 is tempered, which follows again by the proof of Theorem 3.1.23. We can then conclude that the closed ball B(ω ) in X α with center zero and radius R(ω ) = 2(R0 (ω ) + R1 (ω )) is an absorbing set in D. Let us consider C(ω ) := φ (tB (ω ) + 1, θ−tB (ω )−1 ω , B(θ−tB (ω )−1 ω ))
Xα
⊂ B(ω )
where tD (ω ) is the absorption time for D ∈ D. In particular, for D we have:
φ (t, θ−t ω , D(θ−t ω )) ⊂ B(ω ) for t ≥ tD (ω ). C(ω ) is contained in B(ω ); hence, it is an element of D. Applying again the singular Gronwall lemma as above, we see that Xα
B (θ−1 ω ) := φ (tB (ω ), θ−tB (ω )−1 ω , B(θ−tB (ω )−1 ω )) is a bounded set in X α and b (θ−1 ω ) = sup
sup
t∈[0,1] x∈B(θ−1 ω )
φ (t, θ−1 ω , x)α < ∞.1
Hence, we have for α ∈ (α , 1) sup
x∈B (θ
φ (1, θ−1 ω , x)α ≤ (α − α )α −α
−1 ω )
1
+ 0
sup
x∈B (θ−1 ω )
xα
α α (lb (θ−1 ω ) + Z(θr−1 ω )α + cα )dr (t − r)α
(3.1.26)
which is finite (see Chueshov [34, (II.1.17)]). The compact embedding X α ⊂ X α shows that C(ω ) is compact. C is absorbing because
φ (t, θ−t ω , D(θ−t ω )) ⊂ C(ω ) for t ≥ tD (θ−tB (ω )−1 ω ) +tB (ω ) + 1, which follows by the cocycle property. Straightforwardly, x → φ (t, ω , x) is continuous on X α such that we can apply Corollary 3.1.18 to find the random pullback attractor A ∈ D for the RDS generated by (3.1.24). Define now T (ω , x) = x + Z(ω ),
1
T −1 (ω , x) = x − Z(ω )
We can define B (θ−1 ω ), b (θ−1 ω ) because θ−1 is a bijection on Ω .
(3.1.27)
which are Carath´eodory mappings on Ω × X α such that we can define a new RDS ψ by (3.1.9). This is a solution version to (3.1.22). In particular, ψ (t, ω , u0 ) for u0 ∈ X α is (Ft )t∈R+ adapted. Now, Theorem 3.1.20 shows the existence of a random attractor for ψ .
3.1.4 Order-Preserving RDS Assume now that V is a nonempty set in a real Banach space X with a closed convex cone X+ ⊂ X such that X+ ∩ (−X+ ) = {0}. This cone defines a partial order relation on V via x ≤ y if y−x ∈ X+ . We write x < y when x ≤ y and x = y. We assume that the order relation and the topology of V are compatible in the sense that for any bounded set B ⊂ V , there exist a, b ∈ V such that a ≤ b and B lies in the interval [a, b] := {x ∈ V : a ≤ x ≤ b}. If X+ has a nonempty interior int X+ , we say that the cone X+ is solid and X is strongly ordered. A cone X+ is said to be normal if the norm · in X is semimonotone, i.e., there exists a constant c such that the property 0 ≤ x ≤ y implies that x ≤ c · y. We refer the reader to Krasnoselskij/Lifshits/Sobolev [100, Sect. 6] for other properties related to ordered spaces. The cones of nonnegative elements in Rd and in C(D), where D is compact in d R , are solid and normal. This cone in L p (D) is not solid, but it is normal. We call a nonempty subset V of X admissible if V is a Polish space with respect to the induced topology and if for any compact set K in V there exist a, b ∈ V, a ≤ b such that K ⊂ intV ([a, b] ∩V ), where intV refers to the induced topology. Any closed subset of an interval [a, b] is admissible. For other examples we refer the reader to Chueshov/Scheutzow [52]. Definition 3.1.36 (Order-Preserving RDS). An RDS φ is said to be orderpreserving if x ≤ y implies φ (t, ω , x) ≤ φ (t, ω , y) for all t ≥ 0 and ω ∈ Ω . We refer the reader to Chueshov [37] for a general theory on order-preserving RDS. The assertion below provides us with some information on the structure of a global attractor of order-preserving RDS (for the proof see Arnold/Chueshov [5] or Chueshov [37]). Theorem 3.1.37. Assume that the order-preserving RDS φ on V has a random pullback attractor A and that this attractor is bounded in the sense that there exists a random interval [b, c](ω ) = {u ∈ V : b(ω ) ≤ u ≤ c(ω )} such that [b, c](ω ) ∈ D ¯ ω ) in and A(ω ) ⊂ [b, c](ω ). Then there exist two stationary points u(ω ) and u( ¯ ω ) and the attractor A(ω ) lies in the interval [u, u]( ¯ ω ). A(ω ) such that u(ω ) ≤ u( ¯ ω ) belong to the attractor and are from below These stationary points u(ω ) and u( and from above respectively, in the pullback sense: lim φ (t, θ−t ω , w(θ−t ω )) = u(ω )
t→+∞
and
lim φ (t, θ−t ω , v(θ−t ω )) = u( ¯ ω)
t→+∞
¯ ω ) such that {w(ω )} and {v(ω )} belong to D. for all w(ω ) ≤ u(ω ) and v(ω ) ≥ u(
The following theorem describes the situation when the random attractor can be trivial. Theorem 3.1.38. Let ϕ be an order-preserving RDS defined on some admissible subset V of a real separable Banach space X with cone X+ . Assume that there exists a probability measure π on the space (V, B(V )) such that the law L (φ (t, ω , x)) converges weakly to π for each x ∈ V , i.e., E f (ϕ (t, ·, x)) →
V
f (y)π (dy) as t → ∞,
(3.1.28)
for any x ∈ V and for any bounded continuous function f on V . Then there exists a unique invariant measure μ for the RDS φ : μ is defined on F ⊗ B(V )
π1 μ = P,
and φ (t, ·, ·)μ = (id, θt )μ .
This measure μ is a random Dirac measure, i.e., there is a random variable v in V such that μ = δv and π = Eδν . If X+ is normal, then for any δ > 0 and any f ≤ g, f , g ∈ V we have ! " lim P
t→∞
ω :
sup
φ (t, ω , x) − v(θt ω ) ≥ δ
= 0.
x∈[ f ,g]∩V
In particular, ν is a stationary point: we have that for a θ invariant full set Ω ∈ F
φ (t, ω , v(ω )) = v(θt ω ) for all t ∈ R+ and for all ω ∈ Ω . Then, A(ω ) = {v(ω )} is a random attractor of φ under the condition2 that every bounded subset of V is contained in a set of the form [ f , g] ∩V with f , g ∈ V , This theorem was recently improved in Flandoli/Gess/Scheutzow [76], [77]. For a more special method for obtaining single-point random attractors, see also Gess [81] or Schmalfuss [146]. Example 3.1.39 (Parabolic Semilinear Stochastic PDE). We consider the following PDE du = (Δ u + f (u))dt + d ω , (3.1.29) ∂u ∂ n = 0 on ∂ O, u(0) = u0 (x) , where O ⊂ Rd is a C∞ smooth bounded domain. We assume that f : R → R is a C1 -function possessing the properties u f (u) ≤ −lu2 +C, | f (u)| ≤ C(1 + |u| p ), where l, p and C are positive constants. 2
This condition is fulfilled, if, for example, V = [a, b], or if the cone X+ is solid and V is either X, a + X+ , or a − X+ .
The operator Aη = −Δ + η for η > 0 with a homogeneous Neumann boundary is a positive symmetric operator, which generates a strongly continuous semigroup Sη in C(O) (see Stewart [158]) and in Lq (O) (see Pazy [127]) for any η ∈ R and q ∈ (1, ∞). If we assume that ω is a Brownian motion in a sufficiently regular space such that for some η ∈ (0, α ). Then, following Sect. 3.1.3, we can derive the random variable Z(ω ) :=
0 −∞
Sη (τ )d ω .
If the noise is sufficiently regular in the sense that the covariance K provides that ¯ belongs almost surely to the space Z(ω ) ∈ H 2s (O), s > d/4 then t → Z(θt ω ) ∈ C(O) C(0, T,C(O)) for every T > 0 and EZ(θt ω )C(O) < ∞. Under particular conditions, there exists a unique invariant measure π on the measurable space (C(O), B(C(O))) such that property (3.1.28) holds. Conditions ensuring the uniqueness of an invariant measure for similar problems can be found in Butkovsky/Scheutzow [18], Butkovsky/Kulik/Scheutzow [17] Glatt-Holtz/Mattingle/Richards [83], Hairer/ Mattingly [84]. They consider stochastic parabolic problems with additive noise. The noise term is developed in linear combinations of one-dimensional Brownian motions and smooth functions defined ¯ on O. In particular, when we choose η > 0 sufficiently small, we obtain that % t 1 1/2 Aη u(r)2 dr t 0 t>0 is bounded allowing us to conclude the existence of an invariant measure for the Markov semigroup. Suppose that the number of eigenelements presenting the noise is sufficiently large that it gives us a unique invariant measure that is based on [83]. Then, the covariance of ω has a form allowing that Z has the regularity we assume above. Then, under the monotonicity conditions, the RDS generated by (3.1.29) has a singleton random pullback attractor and the convergence (3.1.28) follows directly. Equation (3.1.29) generates a monotone RDS in C(O). To see this we can make the change of variable v(t) := u(t) − Z(θt ω ) and note that v(t) solves the random parabolic equation vt = (Δ − η )v + f (v + Z(θt ω )) + η v + η Z(θt ω ), (3.1.30) ∂u ∂ n = 0 on ∂ O, u(0) = u0 (x). Applying the existence and uniqueness theorem and the parabolic comparison principle (see, for example, Smith [93]) to the random parabolic problem (3.1.30), we can construct an order-preserving cocycle φ in C(O). Thus, Theorem 3.1.38 im-
3.2 Synchronization in Coupled Parabolic Models: Abstract Scheme
215
plies that the RDS generated by (3.1.29) in C(O) possesses a single-point random attractor. We note that under the same assumptions as above, the RDS ψ in the space L2 (O) possesses a random pullback attractor in the universe of tempered sets D. By Theorem 3.1.37, this attractor belongs to some interval [u(ω ), u(ω )], where u(ω ) ≤ u(ω ) are stationary points. Under the condition that ω is a Brownian motion in the Sobolev space H 2s (O) for s > d/4 (satisfying the Neumann boundary conditions, if d ≥ 3, see Da Prato and Zabczyk [65, (A59)]), we can prove in the same way as in Chueshov [35] that all elements of the attractor belong to C(O). Thus, by Theorem 3.1.38, the pullback attractor of the RDS generated by (3.1.29) in L2 (O) also consists of a stationary point.
3.2 Synchronization in Coupled Parabolic Models: Abstract Scheme This is the stochastic counterpart of Sect. 2.2. We consider a system of two stochastic evolution equations and we show the existence of synchronization. We start to deal with attractors for this kind evolution equation. Later, we will consider linear coupling on the level of random attractors of stochastic PDEs and random PDEs. In the following, we will consider the synchronization of stochastic parabolic differential equations ˜ + κ(K11U − K12V )dt = F˜1 (U)dt + d ω1 , t > 0, in X1 , dU + ν1 AUdt ˜ dt − κ(K21U − K22V )dt = F˜2 (V )dt + d ω2 , t > 0, in X2 . dV + ν2 AV
(3.2.1)
The properties of the coefficients are given below. We consider random pullback attractors for this system. Later, we deal with synchronization on the level of attractors when the two equations of the system are identical. In addition, we study the conjugated version of this system providing a random evolution system. For this system, we obtain synchronization at the level of attractors. We then can derive synchronization for the stochastic version of this random system with two equations having different nonlinearities. We here assume that X1 = X2 are separable Hilbert spaces. Let X = X1 × X2 = X1 × X1 and X1α = D(A˜ α ). The norm in X is denoted by · . We consider the space X α = X1α × X1α equipped with the norm 1/2 Y α = U2α + V 2α ,
Y = (U,V ) ∈ X α ,
α ≥ 0.
In the following, we only deal with the case ν1 = ν2 , ω1 = ω2 , but we study both cases F1 = F2 and F1 = F2 . Assumption 3.2.1. Let X1 be a separable Hilbert space. We assume that
216
3 Stochastic Synchronization of Random Pullback Attractors
˜ ⊂ X1 having 1. A˜ is a linear positive symmetric operator in X1 with domain D(A) ˜ > 0. ν1 is a a compact inverse such that we have for the spectrum inf spec (A) positive number. 2. F˜i are nonlinear mappings 1/2 F˜i : X1 → X1 ,
so that there exist constants Li such that F˜i (U1 ) − F˜i (U2 ) ≤ Li U1 −U2 1/2
(3.2.2)
and satisfy linear growth conditions F˜i (U) ≤ li U1/2 + c1/2 . 3. Let ω1 be a Brownian motion with covariance Q such that trX1 QA˜ 2ε < ∞ for ε > 0. 4. The operator matrix K is given by 1 −1 K = K0 : X 1/2 → X −1 1
(3.2.3)
(3.2.4)
1/2 ˜ where K0 is a positive symmetric operator form X1 → X1 commuting with A. In particular, η0 u2 ≤ (K0 u, u), K0 u ≤ η u1/2
where η , η0 > 0. Let κ be a positive parameter. The operator
ν1 0 ˜ A=A 0 ν1
(3.2.5)
generates a strongly continuous semigroup on X denoted by S. Let us denote the nonlinearity by F˜1 (U) ˜ F(Y ) = ˜ . F2 (V ) Then we can write for the system (3.2.1) ˜ ))dt + d ω dY + AY dt = (−κK Y + F(Y
(3.2.6)
for ω = (ω1 , ω1 ). This equation should be interpreted in a mild sense. Now, we can study the existence of a random pullback attractor for (3.2.1) under the settings given above. However, we deal at first with the conjugated RDS given by a random evolution equation.
3.2 Synchronization in Coupled Parabolic Models: Abstract Scheme
217
Theorem 3.2.2. Suppose that Assumption 3.2.1 for (3.2.6) are in force. Let sκ be the infimum of the spectrum of Aκ = A + κK , which is positive. Suppose that the linear growth constants l1 , l2 are sufficiently small with respect to sκ . Then, the RDS generated by (3.2.1) has a random pullback attractor. Indeed, supposing Assumption 3.2.1. To obtain an RDS we can apply the results of Example 3.1.34 with α = 1/2 and Brownian motion ω1 on D(A˜ ε ), ε > 0 with respect to a filtration (Ft )t∈R . With this setting, we give to (3.2.6) the interpretation of (3.1.23). In particular, we consider the following conjugated mild equation Y˜ (t) = S(t)Y˜0 +
t 0
˜ Y˜ (r) + Z(θr ω1 )))dr S(t − r)(−κK Y˜ (r) + F(
(3.2.7)
where Y˜0 ∈ D(A1/2 ) and Z(ω1 ) ∈ D(A1/2 ) (see Theorem 3.1.30 and Remark 3.1.31). More precisely, we can set Z(ω1 ) =
0 −∞
S(−r)d ω1 (r)
is given by the version
0 −ν1 A˜ −∞ S (−r)ω1 (r)dr 0 1 ˜ −ν1 A −∞ S1 (−r)ω1 (r)dr
where the semigroup S1 is generated by ν1 A˜ (see Theorem 3.1.30). According to Example 3.1.34 equation (3.2.7) has a unique mild solution generating an RDS φκ in D(A1/2 ). Under Assumption 3.2.1, we obtain that the RDS has a random pullback attractor if the constants li are sufficiently small with respect to the infimum of the spectrum ˜ Indeed, we can consider as in Theorem 1.3.12 the Galerkin of the operator ν1 A. approximations to generate appropriate estimates allowing us to conclude that our system has an absorbing set and is smoothing. For our purpose of investigating synchronization we need more estimates of the solution. In particular, we do not assume that the constant η in Assumption 3.2.1 (4) is small. The following argument is again formal. It makes sense to derive the following a priori estimates for the Galerkin approximations. We do not write here the finite-dimensional projections Pm transforming (3.2.7) into an ODE. But we refer for this method to Chueshov [34, Theorem 2.2.2]. We have 1 dY˜ (t)2 1/2 + A1/2Y˜ (t)2 + κK0 (U(t) −V (t))2 2 dt ˜ Y˜ (t) + Z(θt ω1 )), Y˜ (t) = F( and
218
3 Stochastic Synchronization of Random Pullback Attractors
d dt
2 d 1 1/2 ˜ κ 1/2 2 2 ˜ A Y (t) + K0 (U(t) −V (t)) + Y (t) 2 2 dt ˜ Y˜ (t) + Z(θt ω1 )), d Y˜ (t) . = F( dt
(3.2.8)
Adding these two equalities together for the mapping 1 1/2 Vκ (Y ) = (A1/2Y 2 + κK0 (U −V )2 + Y 2 ), 2
Y ∈ X 1/2
(3.2.9)
we obtain the inequality d Vκ (Y˜ (t)) + c1Vκ (Y˜ (t)) ≤ c2 + c3 Z(θt ω1 )21/2 dt if the linear growth constants of F˜1 , F˜2 are sufficiently small. ci , i = 1, · · · , 3 are three positive constants independent of κ. Consider instead of this inequality the differential equation d v(t) + c1 v(t) = c2 + c3 Z(θt ω1 )21/2 dt
(3.2.10)
with the solution v. Then we can apply Example 3.1.22 to obtain a random tempered fixed point R20 (ω1 ) for the RDS generated by this differential equation, which can be chosen independently of κ so that t → R20 (θt ω1 ) solves (3.2.10). Because of Vκ (t, ω1 , Y˜0 ) ≤ v(t, ω1 ,Vκ (Y˜0 )) we have that X 1/2
B0 (ω1 ) = {Y ∈ X 1/2 : Vκ (Y ) ≤ 2R20 (ω1 )}
.
B0 is a forward invariant and pullback absorbing set for the RDS φκ . The mapping t → R0 (θt ω1 ) is continuous. In addition, we have 1 1/2 2 1 A Y = Y 21/2 ≤ Vκ (Y ) 2 2 so that
ω1 →
sup Y ∈B0 (ω1 )
Y 1/2
is a tempered random variable for every κ > 0. We emphasize that these estimates are derived for the Galerkin approximations but they remain true for the original equation such that we have an absorbing set for φκ . A compact absorbing set C can be constructed as in the proof of Theorem 3.1.35 applied to the unique mild solution of (3.2.7). In particular, we obtain that C(ω1 ) is bounded in X α for α ∈ (1/2, 1). However such an absorbing set could strongly depend on κ. To avoid this dependence, we consider the mild solution to
3.2 Synchronization in Coupled Parabolic Models: Abstract Scheme
Y˜ (t) = Sκ (t)Y˜0 +
t 0
˜ Y˜ (r) + Z(θr ω1 ))dr Sκ (t − r)F(
219
(3.2.11)
where the strongly continuous semigroup Sκ is defined in Lemma 1.3.19. The unique solutions to (3.2.7) and (3.2.11) are the same because they generate the same Galerkin approximations. Hence, the RDS generated by these equations are identical. However, here, according to the assumptions of Lemma 1.3.19, we suppose that K0 and A˜ commutes. Finally, we obtain a compact absorbing set independent of ˜ κ . Applying the conjugacy techκ. Hence, φκ has the random pullback attractor A nique from Theorem 3.1.35, then the original system (3.2.6) under the conditions of Theorem 3.2.2 generates an RDS ψκ , which has a random pullback attractor Aκ . Now we are in a position to formulate the first main theorem of this section. Theorem 3.2.3. Suppose that Assumption 3.2.1 holds. In addition, we suppose that F˜1 = F˜2 . Then, for sufficiently large κ∗ we have for κ > κ∗ a γ¯κ > 0 such that lim eγ¯κ t U(t) −V (t)1/2 = 0,
t→∞
Y = (U,V ),
where U, V are the Q, P components of the RDS ψκ . In addition, the random pullback attractor Aκ of the RDS generated by (3.2.6) is given by Aκ (ω1 ) = {(u, u), u ∈ A(ω1 )} 1/2
and A(ω1 ) ⊂ X1
is the random pullback attractor of the RDS generated by ˜ dt = F˜1 (u)dt + d ω1 . du + ν1 Au
(3.2.12)
Proof. We start with the conjugate system. Let ˜ Qφκ (t, ω1 , Y˜0 ) =: U(t),
Pφκ (t, ω1 , Y˜0 ) =: V˜ (t)
be the first and second components of φκ (t, ω1 , Y˜0 ). We note that U − V = U˜ − V˜ such that it is sufficient to deal with the conjugated RDS φκ . Let W = U˜ − V˜ such that dW ˜ + 2κK0W = F˜1 (U˜ + Z1 (θt ω1 )) − F˜1 (V˜ + Z1 (θt ω1 )). + ν1 AW dt and similar to the proof of Theorem 1.3.17 d 1/2 W 2 + 2ν1 λ1 W 2 + 4κK0 W 2 ≤ 2L1 W 2 . dt ˜ which is positive. If now λ1 ν1 + Here, λ1 denotes the smallest eigenvalue of A, 2κ η0 − L1 = γκ > 0, we have the desired convergence. The same convergence holds for the RDS ψκ if κ > κ∗ where κ∗ is sufficiently large. ˜ κ (ω1 ), where A ˜ κ (ω1 ) is the ran˜ V˜ ) ∈ A Now let for κ > κ∗ the element Y˜ = (U, dom pullback attractor for φκ . According to Theorem 3.2.2, such a random pullback
220
3 Stochastic Synchronization of Random Pullback Attractors
attractor exists. Then, we have by the invariance of the random pullback attractor for ˜ κ (θ−t ω1 ) so that any t ≥ 0 and ω1 a Y˜−t (ω1 ) ∈ A Y˜ = φκ (t, θ−t ω1 , Y˜−t (ω1 )) and hence U˜ − V˜ 2 = Qφκ (t, θ−t ω1 , Y˜−t (ω1 )) − Pφκ (t, θ−t ω1 , Y˜−t (ω1 ))2 ≤ e−2γκ t 2R˜ 20 (θ−t ω1 ),
˜ κ (ω1 ): where R˜ 0 (ω1 ) is the radius of a tempered ball in X with center 0 containing A R˜ 0 (ω1 )2 :=
sup Y ∈B0 (ω1 )
Y 2 .
˜ κ (ω1 ). Hence, the right-hand side tends to zero such that U˜ = V˜ for all Y˜ ∈ A To see the convergence result, it is enough to consider states x ∈ B0 (ω1 ) because the tempered random set B0 (ω1 ) ⊂ X 1/2 is forward invariant and forward absorbing. In addition, we define Xα
C(ω1 ) := φκ (1, θ−1 ω1 , B0 (θ−1 ω1 ))
.
By the construction of C(ω1 ) in the proof of Theorem 3.2.2, we know that for an 1/2 < α < 1 sup x∈B0 (θ−1 ω1 )
φκ (1, θ−1 ω1 , x)α
is tempered (see (3.1.26)). Let R1 (ω1 ) be the tempered radius of the ball BX α (0, R1 (ω1 )) containing φκ (1, θ−1 ω1 , B0 (θ−1 ω1 )). Now we can apply an interpolation argument 1/(2α )
˜ − V˜ (t)1/2 ≤cU(t) ˜ − V˜ (t)1−1/(2α ) U(t) ˜ − V˜ (t) U(t) α
˜ − V˜ (t)1−1/(2α ) R1 (θt ω1 )1/(2α ) . ≤c ¯ U(t) 1/2
The exponential (forward) convergence of W (t) in X1 for any initial state Y˜0 follows when t tends toward infinity. This proves the convergence statement for the conjugated system. Applying the transform (3.1.27), we obtain the statements of the theorem for the original equation (3.1.22).
3.2 Synchronization in Coupled Parabolic Models: Abstract Scheme
221
To see the last part of the assertions we note that for κ ≥ κ∗ we have that ˜ d (ω1 )) = ˜ θ−t ω1 )), A lim d 1/2 (φκ (t, θ−t ω1 , D( t→∞ X ˜ ω1 )} ˜ d (ω1 ) := {(U,U), U ∈ A( A
0
for D˜ ∈ D and ˜ ω1 ) = {(u, u) : u ∈ D(ω1 )}, D( 1/2
˜ is the random pullback where D is a random closed tempered set in X1 and A attractor for the conjugated system to (3.2.12), which is tempered. Indeed, by the uniqueness of the solution we can conclude that
φκ (t, ω1 , (U0 ,U0 )) = (φ0 (t, ω1 ,U0 ), φ0 (t, ω1 ,U0 )), where the two terms on the right-hand side solve the conjugated equation to (3.2.12). We obtain that ˜ d (ω1 )) = φκ (t, ω1 , A
˜ d (θt ω1 ) ⊂ A ˜ κ (θt ω1 ). {(φ0 (t, ω1 , u), φ0 (t, ω1 , u))} = A
u∈Ad (ω1 )
˜ κ is the random pullback attractor of the conjugated system to (3.2.1) Recall that A ˜ d is a tempered closed random invariunder the assumptions of Theorem 3.2.3 and A ˜ κ (ω1 ). ant set. For all closed tempered random invariant sets E, we have E(ω1 ) ⊂ A ˜ On the other hand, by the diagonal structure of Aκ for large κ ˜ κ (ω1 )) = (φ0 (t, ω1 , ·), φ0 (t, ω1 , ·))(A ˜ κ (ω1 )) = A ˜ κ (θt ω1 ) φκ (t, ω1 , A ˜ d (ω1 ) ⊃ A ˜ κ (ω1 ). such that A We now consider more general conditions for the synchronization of (3.2.1). In particular, we do not assume that F1 = F2 . We assume that the linear growth con˜ V˜ ) be the solution of the stants li of F˜i are small with respect to ν1 . Let then Y˜ = (U, conjugated system (3.2.7). Theorem 3.2.4. Let φκ be the RDS generated by (3.2.7) in X 1/2 . Suppose that Assumption 3.2.1 holds. In addition, suppose that the linear growth parameters l1 , l2 ˜ κ . We of F˜1 , F˜2 are sufficiently small. Then, φκ has a random pullback attractor A have lim d 1/2 (Aκ (ω1 ), A κ→∞ X ˜
˜ d (ω1 )) = 0,
˜ d (ω1 ) = {(y, y) : y ∈ A( ˜ ω1 )} A
˜ is the random pullback attractor of the RDS generated by where A ˜ = 1 (F˜1 (y + Z1 (θt ω1 )) + F˜2 (y + Z1 (θt ω1 ))) yt + ν1 Ay 2
(3.2.13)
222
3 Stochastic Synchronization of Random Pullback Attractors
where Z1 (ω1 ) = −ν1 A˜
0 −∞
S1 (−r)ω1 (r)dr
˜ In addition, we have that for every ε > 0, and S1 is the semigroup generated by ν1 A. there exists a κ∗ (ε ) > 0 such that for every κ > κ∗ (ε ), there is a t(κ, ε ) such that for t > t(κ, ε ), we have that P(Qφκ (t, ω1 ,Y0 ) − Pφκ (t, ω1 ,Y0 )1/2 > ε ) < ε . Proof. Similar to (3.2.9), we obtain that 1 1/2 2 ˜ + A˜ 1/2V˜ (t)2 ) + κK0 (U(t) −V (t))2 + Y˜ (t)2 ) Vκ (Y˜ ) = (ν1 (A˜ 1/2U(t) 2 satisfies the differential inequality d Vκ (Y˜ (t)) + c1Vκ (Y˜ (t)) ≤ c2 + c3 A1/2 Z1 (θt ω1 )2 dt where the positive constants ci can be chosen independently of κ > 0. Applying ˜ κ for φκ Theorem 3.2.2, we obtain the existence of a random pullback attractor A in X 1/2 . In particular, there exists a tempered random variable R independent of κ such that ˜ 2 + A˜ 1/2V˜ 2 ) + κK 1/2 (U˜ − V˜ )2 + Y˜ 2 ≤ R(ω1 )2 ν1 (A˜ 1/2U 0
(3.2.14)
˜ κ (ω1 ). We can assume that R has similar properties to R0 defined ˜ V˜ ) ∈ A for Y˜ = (U, in the proof of Theorem 3.2.3 so that we can assume that φκ has an absorbing set in X 1/2 that does not depend on κ and that is forward invariant. Now we consider the equation (3.2.13). The solution of this equation generates a continuous RDS φ0 1/2 ˜ of this RDS exists under the smallness on X1 . The random pullback attractor A 1/2 assumptions of the linear growth constants li , is in X1 . We now show that for κ → d ˜ . For every ω1 ∈ Ω , let Y˜κ be a solution ˜ κ converge to A ∞, the pullback attractors A ˜ κ (ω1 ) ⊂ C(ω1 ) and to (3.2.7) on some interval [0, T ] with initial condition xκT ∈ A that Y˜κ (0) satisfies the energy inequality (3.2.14). C serves as a compact absorbing set for the RDS φ0 too. To construct this set, we use (3.2.11). We can assume that C(ω1 ) is bounded in X α , where α ∈ (1/2, 1). Finally, we can use the multiplier AY˜κ in a formal way, i.e., on the level of Galerkin approximations. We have 1 dA1/2Y˜κ (t)2 1/2 + ν1 (A˜U˜ κ (t)2 + A˜V˜κ (t)2 ) + 2κK0 A˜ 1/2 (U˜ κ (t) − V˜κ (t))2 2 dt ˜ κ (t)) + (F˜2 (V˜κ (t) + Z1 (θt ω1 )), A˜V˜κ (t)) =(F˜1 (U˜ κ (t) + Z1 (θt ω1 )), AU 1 ˜ ˜ 1 ˜ ˜ F1 (Uκ (t) + Z1 (θt ω1 ))2 + F2 (Vκ (t) + Z1 (θt ω1 ))2 ≤ 2ν1 2ν1 1 + (ν1 A˜U˜ κ (t)2 + ν1 A˜V˜κ (t)2 ). 2
3.2 Synchronization in Coupled Parabolic Models: Abstract Scheme
223
After integration of this inequality, we obtain A1/2Y˜κ (T )2 + v1 ≤
T 0
T 0
AY˜κ (r)2 dr + η0 κ
T 0
A˜ 1/2 (U˜ κ (r) − V˜κ (r))2 dr
C(l1 , l2 , ν , c1/2 )(R(θt ω1 ) + Z1 (θt ω1 )1/2 + 1)2 dt + Y˜κ (0)21/2 ,
where the right-hand side can be bounded independently of κ assuming that Yκ (0) ˜ κ (ω1 ) (see (3.2.14)). This inequality then also makes sense for the solution is in A of the original equation. Assume that {Y˜κ (0)}κ>0 is bounded in X 1/2 . Let Y˜ be a limit point of the set {Y˜κ }κ>0 . We can conclude from the above inequality that T
lim
κ→∞ 0
V˜κ (r) − U˜ κ (r)21/2 dr = 0.
(3.2.15)
˜ κ (ω1 ). We can derive from (3.2.10) that Assume that Y˜κ (0) ∈ A
η0 κ QY˜κ (0) − PY˜κ (0)2 ≤ R20 (ω1 ), 2
(3.2.16)
and similar for
η0 κ QY˜κ (t) − PY˜κ (t)2 . 2 Then we can conclude from (3.2.8) that dY˜κ /dt is uniformly bounded in L2 (0, T, X) for any T > 0. We obtain the uniform boundedness Y˜κ ∈ L2 (0, T, X 1 ) ∩ L∞ (0, T, X 1/2 ),
d ˜ Yκ ∈ L2 (0, T, X). dt
For the last property we integrate (3.2.8). Now we can apply the compactness argument in Chepyshov/Vishik [28, Theorems II.1.4 and II.1.5] to see that the set {Y˜κ }κ>0 is relatively compact in C([0, T ], X γ ) and L2 (0, T, X 1/2+γ ) for γ ∈ [0, 1/2). In addition, this set of solutions is relative weakly compact in L2 (0, T, X 1 ). Let Y˜0 (·) = (U˜ 0 (·), V˜0 (·)) a limit point with respect to this compactness result. We can assume that for κ > 0, the elements Y˜κ (0) are included in the compact set C(ω ) ⊂ X 1/2 . Then there exists a sequence κn → ∞ so that we have in X 1/2 lim Y˜κn (0) = Y0 = (y0 , y0 )
(3.2.17)
n→∞
where the last equality follows by (3.2.16). We have that U˜ κ + V˜κ U˜ κ − V˜κ = U˜ κ − , 2 2
(3.2.18) 1/2
where the latter difference tends toward zero for κ → ∞ in L2 (0, T, X1 ) by (3.2.15). We obtain a similar relation when we exchange U˜ κ and V˜κ .
224
3 Stochastic Synchronization of Random Pullback Attractors
We obtain that U˜ 0 = V˜0 = y. Then, the limit points of Y˜κ (·) in X are a solution of a pair of the equations (3.2.13) with initial condition (y0 , y0 ). However, we would like to prove lim Y˜κn (t) − (y(t), y(t))1/2 = 0
κn →∞
(3.2.19)
for all t ∈ [0, T ] for some sequence κn → ∞. To see (3.2.19), we consider dU˜ κ (t) − y(t)21/2
+
dV˜κ (t) − y(t)21/2
dt dt 1 0 1 0 d(V˜κ (t) − y(t)) ˜ ˜ d(U˜ κ (t) − y(t)) ˜ ˜ , A(Uκ (t) − y(t)) + 2 , A(Vκ (t) − y(t)) =2 dt dt The derivatives in the above formula exist in a weak sense in L2 (0, T, X1 ) by Temam [162, Chapter III.1] . A straightforward calculation of these expressions yields 1/2
˜ U˜ κ (t) − y(t))2 − A( ˜ V˜κ (t) − y(t))2 ) − 2κK (U˜ κ (t) − V˜κ (t))2 −2ν1 (A( 1/2 0 + 2(F˜1 (U˜ κ (t) + Z1 (θt ω1 )) ˜ U˜ κ (t) − y(t))) − F˜1 (y(t) + Z1 (θt ω1 )) − F˜2 (y(t) + Z1 (θt ω1 )), A( + 2(F˜2 (V˜κ (t) + Z1 (θt ω1 )) ˜ V˜κ (t) − y(t))), − F˜1 (y(t) + Z1 (θt ω1 )) − F˜2 (y(t) + Z1 (θt ω1 )), A( where (·, ·) is the inner product in X1 . We then have the estimates (2(F˜1 (U˜ κ (t) + Z1 (θt ω1 )) ˜ U˜ κ (t) − y(t))) − F˜1 (y(t) + Z1 (θt ω1 )) − F˜2 (y(t) + Z1 (θt ω1 )), A( =(2(F˜1 (U˜ κ (t) + Z1 (θt ω1 )) − 2F˜1 (y(t) + Z1 (θt ω1 )) + F˜1 (y(t) + Z1 (θt ω1 )) ˜ U˜ κ (t) − y(t))) − F˜2 (y(t) + Z1 (θt ω1 )), A( ≤2L1 U˜ κ (t) − y(t)1/2 A(U˜ κ (t) − y(t)) ˜ U˜ κ (t) − y(t))) + (F˜1 (y(t) + Z1 (θt ω1 )) − F˜2 (y(t) + Z1 (θt ω1 )), A( and similar for the remaining term. Now, we integrate the above derivatives of the square norms from zero to t ≤ T . According to the Cauchy–Schwarz inequality and the boundedness of U˜ κn , V˜κn in L2 (0, T, X11 ) and taking the above convergences of U˜ κn , V˜κn , Y˜κn for κn → ∞ into account gives the desired convergence. In particular, we have to apply the weak convergence in L2 (0, T, X11 ) of U˜ κn and V˜κn to y. 1/2 Let now φ0 be the RDS defined on the space {(y, y) : y ∈ X1 } generated by a pair of equations of (3.2.13) with the initial condition (y0 , y0 ) ∈ X 1/2 . This RDS ˜ Simgenerated by the single equation (3.2.13) has the random pullback attractor A. ilar arguments to the proof of Theorem 3.1.25 allow us to show the convergence of ˜ d (ω1 ) for κ → ∞. Indeed, A ˜ κ (ω1 ), A ˜ d (ω1 ) is included in a compact set ˜ κ (ω1 ) to A A
3.3 A Case Study: Stochastic Reaction–Diffusion in a Thin Two-Layer Domain
225
C(ω1 ), which is independent of κ. Thus, we have for any sequence (Y˜κn (0))n∈N in ˜ κ (ω1 ) ⊂ C(ω1 ) a convergent subsequence (Y˜κ (0))n∈N in X 1/2 with a limit point A n n (y0 , y0 ) (see (3.2.17)). Now we can apply (3.2.19): lim φκn (t, ω1 , Y˜κn (0)) − φ0 (t, ω1 , (y0 , y0 )1/2 = 0
n →∞
for all ω1 ∈ Ω . ˜ ω1 ) To see the second statement of the theorem, we note that for all y ∈ A( Qφκ (t, θ−t ω1 ,Y0 ) − Pφκ (t, θ−t ω1 ,Y0 )1/2 =Qφκ (t, θ−t ω1 ,Y0 ) − y − (Pφκ (t, θ−t ω1 ,Y0 ) − y)1/2 ≤2
inf
˜ κ ( ω1 ) y∈ ˆ A
˜ κ (ω1 )) φκ (t, θ−t ω1 ,Y0 ) − (y, ˆ y) ˆ 1/2 = 2dX 1/2 (φκ (t, θ−t ω1 ,Y0 ), A
and on the other hand ˜ d (ω1 )) dX 1/2 (φκ (t, θ−t ω1 ,Y0 ), A ˜ κ (ω1 )) + d ≤d 1/2 (φκ (t, θ−t ω1 ,Y0 ), A
X 1/2 (Aκ (ω1 ), A
X
˜
˜ d (ω1 )).
(3.2.20)
For any ε > 0 given we have for κ > κ∗ (ε ) that the last distance is less than or equal to ε /2 with a probability larger than 1 − ε /2. In addition, for all t ∈ R ˜ κ (ω1 ), A ˜ d (ω1 )) > ε ) = P(d 1/2 (A ˜ κ (θt ω1 ), A ˜ d (θt ω1 )) > ε ). P(dX 1/2 (A X 2 2 For the first term of the right-hand side of (3.2.20) we have for κ given ˜ κ (θt ω1 )) > ε ) = 0. lim P(dX 1/2 (φκ (t, ω1 ,Y0 ), A 2
t→∞
Hence, we can conclude that the right-hand side of (3.2.20) replacing ω1 by θt ω1 converges to zero in probability. It is not hard to see that we obtain the same synchronization results for the original system (3.2.1) under Assumptions 3.2.1. Indeed, we have Uκ (t) −Vκ (t) = U˜ κ (t) − V˜κ (t).
3.3 A Case Study: Stochastic Reaction–Diffusion in a Thin Two-Layer Domain In this section, we deal with stochastic perturbation of the reaction–diffusion model with the interface coupling considered in Sect. 1.5. Such a stochastic partial differential equation is a special version of a stochastic evolution equation introduced above. Random influences are included through an additive Brownian motion and depend only on the base spatial variable x ∈ Γ but not on the spatial variable in the
226
3 Stochastic Synchronization of Random Pullback Attractors
thin direction. Moreover, the noise is the same in both layers. Limiting properties of the global random attractor are established as the thinness parameter of the domain ε → 0, i.e., as the initial domain becomes thinner, when the intensity function possesses the property limε →0 ε −1 k(x , ε ) = +∞. In particular, the limiting dynamics is described by a single stochastic parabolic equation with the averaged diffusion coefficient, and nonlinearity term, which, as in the deterministic case, essentially indicates synchronization of the dynamics on both sides of the common base Γ . Moreover, in the case of nondegenerate noise, we obtain stronger synchronization phenomena in comparison with analogous results in the deterministic case presented in Sect. 1.5. This effect is due to the uniqueness statement in Theorem 3.1.38. Let O1,ε and O2,ε be thin bounded domains in Rd+1 , where d ≥ 1, of the form O1,ε = Γ × (0, ε ),
O2,ε = Γ × (−ε , 0),
where 0 < ε ≤ 1 and Γ is a bounded and sufficiently smooth domain in Rd . We write x ∈ Oε := O1,ε ∪ O2,ε as x = (x , xd+1 ), where x ∈ Γ and xd+1 ∈ (0, ε ) or xd+1 ∈ (−ε , 0) and does not distinguish between the sets Γ × {0} ⊂ Rd+1 and Γ ⊂ Rd . We consider the following system of semilinear parabolic equations dU i + (−νi Δ U i + aU i )dt = ( fi (U i ) + hi (x))dt + d ω (t, x ),
t > 0, x ∈ Oi,ε , i = 1, 2, (3.3.1)
with the initial data U i (0, x) = U0i (x),
x ∈ Oi,ε , i = 1, 2,
where ω (t, x ) is a Brownian motion depending on the spatial variable x ∈ Γ (but not on the xd+1 spatial variable). We assume that U 1 and U 2 satisfy Neumann boundary conditions i x ∈ ∂ Oi,ε \ Γ , i = 1, 2, ∇U , ni = 0, on the external part of the boundary of the compound domain Oε , where n is the outer normal to ∂ Oε , and a matching condition on Γ of the form ∂U1 1 2 + k(x , ε )(U −U ) = 0, − ν1 ∂ xd+1 Γ (3.3.2) ∂U2 ν2 + k(x , ε )(U 2 −U 1 ) = 0. ∂ xd+1 Γ Here, above the constants νi and a are positive numbers.
3.3 A Case Study: Stochastic Reaction–Diffusion in a Thin Two-Layer Domain
227
Assumption 3.3.1. We impose the following hypotheses. 1. For i = 1, 2 the function fi ∈ C1 (R) possesses the property fi (v) ≤ c for all v ∈ R and also satisfies the relations − v fi (v) ≥ a0 |v| p+1 − c,
| fi (v)| ≤ a1 |v| p−1 + c,
v ∈ R,
(3.3.3)
where a0 , a1 and c are positive constants and 1 ≤ p < 3. 2. hi ∈ H 1 (Oi,ε ), i = 1, 2. 3. The interface reaction intensity k(x , ε ) satisfies k(·, ε ) ∈ L∞ (Γ ), and
k(x , ε ) > 0 for x ∈ Γ , ε ∈ (0, 1],
1 lim k(x , ε ) = +∞, x ∈ Γ , in Lebesgue measure.3 ε
ε →0
(3.3.4)
4. ω is a two-sided Brownian motion in H s (Γ ), where s > d/2 and s ≥ 1 with appropriate covariance operator Q. Our main example of the interface reaction intensity is the following function k(x , ε ) = ε α k0 (x ) ∈ L∞ (Γ ),
k0 (x ) > 0 for x ∈ Γ , ε ∈ (0, 1],
for some α ∈ [0, 1). As already mentioned in Sect. 1.5 the problem in (3.3.1) is a model for a reaction– diffusion system consisting of two components filling thin contacting layers O1,ε and O2,ε separated by a penetrable membrane Γ . The stochastic version considered here allows for irregularities and random effects on the separating membrane. We investigate the pathwise asymptotic behavior of the above stochastic evolution system by converting it into a system of pathwise random PDE to which deterministic methods can be applied in a pathwise manner that is similar to (3.1.34). We deal with the properties of random pullback attractors for the RDS generated by (3.3.1) in L2 (Oε ). In particular, we prove that these pullback attractors are closely related to the corresponding object for the problem
dU + (−νΔ U + aU)dt = ( f (U) + h(x ))dt + d ω (t, x ),
t > 0, x ∈ Γ , (3.3.5)
on the spatial domain Γ with the Neumann boundary conditions on ∂Γ an Δ being the Laplacian on Γ . Here, we denote
ν=
ν1 + ν2 , 2
f (U) =
f1 (U) + f2 (U) , 2
h(x ) =
h1 (x , 0) + h2 (x , 0) . 2
(3.3.6)
This is essentially a statement about the synchronization of the dynamics of the system in the two thin layers at the level of pullback attractors. Since, in principle, 3
See (1.5.6) for the definition of this convergence.
228
3 Stochastic Synchronization of Random Pullback Attractors
a global attractor can be a rather complicated set, the synchronization at this level does not imply that any pair of trajectories becomes asymptotically synchronized. However, under particular assumptions on the covariance of the driving Brownian motion, we can prove, in contrast to the deterministic counterpart, that the global pullback attractor for (3.3.5) is a singleton. This means that we also have asymptotic synchronization in our system at the level of trajectories. Thus, we observe a stronger synchronizing effect of a nondegenerate stochastic noise in the system under consideration. Below, we mainly follow the presentation given in Caraballo/Chueshov/Kloeden [20].
3.3.1 Random Dynamics in the Two-Layer Model We now introduce the Ornstein–Uhlenbeck process as a stationary solution of the linear stochastic evolution equation dZ = (νΔ Z − aZ)dt + d ω (t, x ),
t > 0, x ∈ Γ ,
(3.3.7)
on the spatial domain Γ with homogeneous Neumann boundary conditions on ∂Γ . Here, as above, we denote ν = (ν1 + ν2 )/2. According to Theorem 3.1.30, we introduce the random variable Z such that t → Z(θt ω ) is a stationary point for (3.3.7) where the generator A0 of the strongly continuous semigroup is given by the symmetric and positive operator −νΔ + a, a > 0 and Δ is the Laplace operator in L2 (Γ ). To obtain a Z ∈ C(Γ¯ ) ∩ D(A0 ), we need a sufficiently regular Brownian motion determined by the covariance Q. Thus, in particular, Z satisfies the homogeneous Neumann boundary conditions. We also note that by the Sobolev lemma, since H s (Γ ) ⊂ C(Γ ) for s > d/2 and s ≥ 2 for Z ∈ D(A0 ), we have that t → Z(θt ω ) is a pathwise continuous process with values in D(A0 ) ∩ C(Γ ) if Q is sufficiently regular. The random variable Z is tempered with respect to D(A0 ) ∩ C(Γ¯ ). Let μ μ −β μ > d/2 and μ ≥ 1. Then we have that A0 Z(ω ) is well defined if A0 ω is a Brownian motion of finite trace, where β < 1/2. This is the case when ω is a finitedimensional solution; namely, if l
ω (t) = ∑ ωi (t)ei
(3.3.8)
i=1
where ωi are independent one-dimensional Brownian motions and ei are elements of the orthonormal base generated by A0 . In particular, Z(θt ω ) ∈ C R,C(Γ ) ∩ ψ ∈ H 2 (Γ ) : ψ satisfies Neumann b.c. on ∂Γ (3.3.9) for every ω ∈ Ω . In particular, a perfect version of t → Z(θt ω ) is a stationary point to (3.3.7). We will use this observation later.
3.3 A Case Study: Stochastic Reaction–Diffusion in a Thin Two-Layer Domain
229
There are several papers dealing with equations driven by finitely many noise modes. In particular, conditions for the uniqueness for the invariant measure for the Markov semigroup are formulated for the type of equations considered above. We refer the reader to Butkovsky/Scheutzow [18] or Hairer/Mattingly [84] for semilinear equations and Glatt-Holtz/Mattingle/Richards [83] for fluid dynamical problems. Now we can state the following assertion on the well-posedness of problems (3.3.1) and (3.3.5). Theorem 3.3.2 (Generation of RDS). Problem (3.3.1) generates an RDS ψ¯ ε in the space Xε = L2 (O1,ε ) ⊕ L2 (O2,ε ) ∼ L2 (Oε ) and the RDS ψ¯ ε defined by the formula ψ¯ ε (t, ω ,U0 ) = U(t, ω ), where U(t, ω ) = (U 1 (t, ω ),U 2 (t, ω )) is a solution to the problem in (3.3.1). Similarly, the problem in (3.3.5) generates an RDS ψ0 in the space L2 (Γ ). Proof. We split our argument into several steps. 1. EQUIVALENT RANDOM PDE. We first rewrite our stochastic system as a random system. For this, we introduce the new dependent variables V i (which are also random processes): V i (t, x, ω ) := U i (t, x , xd+1 , ω ) − Z(θt ω , x ),
t > 0, x = (x , xd+1 ) ∈ Oi,ε , i = 1, 2,
where Z(ω , x ) is given by (3.3.7). Let l1 (x, ω ) = + 12 (ν1 − ν2 )Δ Z(ω ) + h1 (x), l2 (x, ω ) = − 12 (ν1 − ν2 )Δ Z(ω ) + h2 (x) and Δ is the Laplacian on Γ . Then, equations (3.3.1) can be transformed into the pathwise random semilinear parabolic PDE Vti − νi Δ V i + aV i = fi V i + Z(θt ω ) + li (x, θt ω ), t > 0, x ∈ Oi,ε , (3.3.10) for i = 1, 2, with the random initial data V i (0, x, ω ) = U0i (x) − Z(ω ),
x ∈ Oi,ε , i = 1, 2.
Since Z(ω , x ) does not depend on xd+1 , due to (3.3.9), we obtain the Neumann boundary conditions i ∇V (x), ni (x) = 0, x ∈ ∂ Oi,ε \ Γ , i = 1, 2, on the external part of the boundary of the compound domain Oε , where n is the outer normal to ∂ Oε . Condition (3.3.2) turns into a matching condition on Γ of the
230
3 Stochastic Synchronization of Random Pullback Attractors
∂V 1 + k(x , ε )(V 1 −V 2 ) = 0, − ν1 ∂ xd+1 Γ
form
∂V 2 ν2 + k(x , ε )(V 2 −V 1 ) = 0, ∂ xd+1 Γ Indeed, in the difference, we have U 1 −U 2 = V 1 −V 2 . In addition, the derivative of Z in xd+1 direction is zero. 2. SCALING AND FUNCTIONAL SPACES. It is convenient to deal with a fixed domain where every equation is defined for ε > 0. Let us introduce the new coordinates (x, y) ∈ Rd+1 , as follows. x = x ,
x ∈Γ,
y = ε −1 xd+1 ,
y ∈ (−1, 1).
In doing so, we transform the domain Oε into D = O1 ∪ O2 , where O1 = Γ × (0, 1), O2 = Γ × (−1, 0), the operator ∇ = (∇x , ∂xd+1 ) into ∇ε = (∇x , ε −1 ∂y ), Δ = Δ + ∂x2d+1 into Δε = Δ + ε −2 ∂yy . The problem in (3.3.10) takes the form vti − νi Δε vi + avi = fi (vi + Z(θt ω )) + liε (x, y, θt ω ),
t > 0, (x, y) ∈ Oi , (3.3.11)
for i = 1, 2, and liε (x, y, ω ) = li (x, ε y, ω ). with the initial data vi (0, x, y) = V i (0, x, ε y),
(x, y) ∈ Oi ,
i = 1, 2,
and the boundary conditions ∂ vi = 0, ∂ ni ∂ Oi \Γ
i = 1, 2,
∂ vi − ε k(x, ε )(v1 − v2 ) νi = 0, ∂y y=0
(3.3.12)
i = 1, 2.
(3.3.13)
A solution V (t, x , xd+1 ) to the problem in (3.3.10) is expressed in terms of a solution v(t, x, y) to problem (3.3.11) by the formula V (t, x , xd+1 ) = v(t, x , ε −1 xd+1 ). Let us introduce the space X := L2 (O1 ) ⊕ L2 (O2 ) = L2 (D) endowed with the norm u2X := u2 := u1 2L2 (O1 ) + u2 2L2 (O2 ) , where u = (u1 , u2 ), ui := u|Oi , and let us define a family of Sobolev spaces 1/2
Xε
= H 1 (O1 ) ⊕ H 1 (O2 )
3.3 A Case Study: Stochastic Reaction–Diffusion in a Thin Two-Layer Domain
231
endowed with the (square) norm 2 u21,ε := ∑ ui 2H 1 (Oi ) + ε −2 ∂y ui 2L2 (Oi ) ,
ε ∈ (0, 1].
i=1
1/2
Every element v ∈ H 1 (Γ )⊕ H 1 (Γ ) can be extended naturally to an element u ∈ Xε by the formula ui (x, y) = vi (x), (x, y) ∈ Oi , i = 1, 2; in what follows, this will be done without further comment. 3. ABSTRACT REPRESENTATION. Now we represent problem (3.3.11) in the abstract form. To do this, we first consider the bilinear form
2 1 aε (u, v) = ∑ νi (∇x ui , ∇x vi )L2 (Oi ) + 2 (∂y ui , ∂y vi )L2 (Oi ) + a · (u, v)X ε i=1 +
1 ε
Γ
k(x, ε )(u1 (x, 0) − u2 (x, 0))(v1 (x, 0) − v2 (x, 0)) dx, 1/2
defined on the elements u = (u1 , u2 ), v = (v1 , v2 ) of the space Xε . One can show that aε (u, v) is a closed symmetric form in X possessing the property c0
∑
i=1,2
u2H 1 (Oi ) ≤ c1 u21,ε ≤ aε (u, u),
1/2
u ∈ Xε . 1/2
(3.3.14)
1/2
In addition, aε is a continuous bilinear form on Xε × Xε , see Sell/You [149, p. 85f.]. Here and in the sequel we drop the subscript ε in constants, which can be chosen independently of ε ∈ (0, 1]. Therefore, there exists a unique positive self1/2 adjoint operator Aε such that D(Aε ) ⊂ Xε and aε (u, v) = (Aε u, v)X ,
1/2
u ∈ D(Aε ), v ∈ Xε .
It can be shown that D(Aε ) = u ∈ H 2 (O1 ) ⊕ H 2 (O2 ) : u satisfies (3.3.12) and (3.3.13) , and also that Aε u = (−ν1 Δε u1 + au1 , −ν2 Δε u2 + au2 ), 1/2
1/2
u = (u1 , u2 ) ∈ D(Aε ).
1/2
Moreover, D(Aε ) = Xε , aε (u, u) = Aε u2 . For more details concerning the operator Aε , we refer the reader to Chueshov/Rekalo [137]. Now we can rewrite the pathwise random PDE in the problem in (3.3.11) in the abstract form vt + Aε v = F(v, θt ω ),
v|t=0 = v0 ,
(3.3.15)
232
3 Stochastic Synchronization of Random Pullback Attractors
in the space X, where F(v, ω ) =
⎧ ⎨ f1 (v1 + Z(ω )) + l1ε (x, y, ω ), y > 0; ⎩ f (v2 + Z(ω )) + l ε (x, y, ω ), y < 0. 2 2
We suppress here the typing that F depends on ε . In particular, by the regularity of Δ Z appearing in li this expression is well defined. For the following, we refer the reader to Example 3.1.33. 4. GENERATION OF AN RDS. The method finding a unique global weak solution of the above system is standard (see Showalter [153, Chap. 3]). In Chepyzhov/Vishik [28, Chap. 15] for the general nonautonomous case, it is proved that for each ω ∈ Ω and v0 ∈ X on any time interval [0, T ], there exists a unique weak solution v(t, ω ) to (3.3.15) in the class 1/2
L p+1 (0, T, L p+1 (D)) ∩ L2 (0, T, Xε ) ∩C([0, T ], X). A weak solution gives a mild solution (see DaPrato/Zabczyk [65, Theorem 6.5]) and vice versa. This solution then generates an RDS φε on X. Since this solution can be constructed as a limit of the corresponding Galerkin approximations, the mapping ω → φε (t, ω , x) is measurable. In addition, the continuity of t → φε (t, ω , x) and x → φε (t, ω , x) gives the B(R+ ) ⊗ F ⊗ B(X), B(X) measurability. Now, using inverse transformation, we define the cocycle ψε . First, we define for the scaled version of the problem in (3.3.1) an RDS by the formula
ψε (t, ω ) = Tε (θt ω ) ◦ φε (t, ω ) ◦ Tε−1 (ω ), where Tε (ω ) : X → X (see (3.1.9)). Similarly, we can define RDS φ¯ε and ψ¯ ε for the unscaled versions of the equations generating φε and ψε . In particular, ψ¯ ε is the RDS generated by (3.3.1) 5. GENERATION STATEMENT FOR THE LIMIT SYSTEM. The same change of unknown variable U = V + Z transforms the equation in (3.3.5) into the following random PDE on Γ ⎧ ⎪ ⎨ Vt − νΔ V + aV = f (V + Z(θt ω )) + h(x ), t > 0, x ∈ Γ , (3.3.16) ⎪ ⎩ ∂ V = 0, V (0) = V0 , ∂n ∂Γ
where ν , f (v) and h are given by (3.3.6). The same argument as above allows us to prove that the problem in (3.3.16) generates an RDS φ0 in the space L2 (Γ ).
3.3 A Case Study: Stochastic Reaction–Diffusion in a Thin Two-Layer Domain
233
3.3.2 The Statement on Synchronization Our main result says that the limiting dynamics of the system (3.3.1) is given by that of the averaged system (3.3.5) on Γ , which one can interpret as the synchronization of dynamics of the original system on the two sides of the membrane Γ . In addition, if the system is the same on both sides of the membrane, then the limiting behavior is independent of the thinness parameter ε when it is sufficiently small. Theorem 3.3.3. Let ψ¯ ε be the RDS generated by (3.3.1) and ψ0 be generated by (3.3.5). Then, under Assumption 3.3.1, the following assertions hold. 1. The cocycles ψ¯ ε converge to ψ0 in the sense that 1 ε →0 t∈[0,T ] ε lim sup
Oε
|ψ¯ ε (t, ω , v)[x] − ψ0 (t, ω , v)[x]|2 dx = 0,
∀ω ∈ Ω ,
for any v(x) ∈ Xε independent of the variable xd+1 , and any T > 0. 2. In their corresponding phase spaces the RDS ψ¯ ε and ψ0 have random pullback attractors Aψ¯ ε and Aψ0 . Moreover, in addition to the assumptions on Q above, if we assume that the covariance operator Q of the Brownian ω generates a Markov family for ψ0 with a unique invariant measure, then the attractor Aψ0 is a singleton, i.e., Aψ0 (ω ) = {a0 (ω )}, where a0 (ω ) is a tempered random variable with values in L2 (Γ ), which is a stationary point. 3. The attractors Aψ¯ ε are upper semicontinuous as ε → 0 in the sense that " ! 1 2 lim sup |v(x , xd+1 ) − v0 (x )| dx = 0, ∀ω ∈ Ω . inf ε →0 v∈A (ω ) v0 ∈Aψ0 (ω ) ε Oε ψ¯ ε (3.3.17) 4. In addition, if
ν1 = ν2 := ν , f1 (U) = f2 (U) := f (U), h1 (x , xd+1 ) = h(x ), h2 (x , xd+1 ) = h(x );
(3.3.18)
f (U) is globally Lipschitz, i.e., there exists a constant L > 0 such that | f (U) − f (V )| ≤ L|U −V |,
U,V ∈ R,
(3.3.19)
and also that k(x , ε ) > kε for x ∈ Γ , ε ∈ (0, 1] such that lim ε −1 kε = +∞, ε →0
(3.3.20)
then there exists ε0 > 0 such that for all ε ∈ (0, ε0 ] the random pullback attractor Aψ¯ ε for ψ¯ ε has the form Aψ¯ ε (ω ) := v(x , xd+1 ) = v0 (x ) : v0 ∈ Aψ0 (ω ) .
234
3 Stochastic Synchronization of Random Pullback Attractors
Remark 3.3.4. In the case when Aψ0 (ω ) = {a0 (ω )} is a singleton, the relation in (3.3.17) turns into the equality % 1 2 lim sup |v(x , xd+1 , ω ) − a0 (x , ω )| dx = 0. ε →0 v∈A (ω ) ε Oε ψ¯ ε In particular, this implies that for any U0 ,U0∗ ∈ Xε , we have that % 1 ∗ 2 ψ¯ ε (t, θ−t ω ,U0 ) − ψ¯ ε (t, θ−t ω ,U0 )L2 (Oε ) = 0, lim lim sup ε →0 t→+∞ ε
∀ω ∈ Ω , (3.3.21)
where we can omit the limε →0 under conditions (3.3.18)–(3.3.20). Thus, we obtain the synchronization effect not only at the level of attractors (see (3.3.17)) but also at the level of trajectories in relation (3.3.21). We emphasize that this double synchronization phenomenon is not true for the deterministic (Q ≡ 0) counterpart of the problem. In the latter case, the global attractor for (3.3.5) (without noise) is not a single point when the reaction term au − f (u) has several roots, and thus (3.3.21) cannot be true for all initial data. In this case, we have synchronization at the level of the global attractors only. Remark 3.3.5. The statements of Theorem 3.3.3 deal with the case when the intensity interaction k(x , ε ) between layers is asymptotically strong enough (see the condition in (3.3.4)). However, as in Sect. 1.5, we can consider the case when the limit in (3.3.4) is finite by assuming that lim ε −1 k(x , ε ) = k(x )
ε →0
strongly in L2 (Γ )
(3.3.22)
for some bounded nonnegative function k(x ) ∈ L2 (Γ ). In this case, the limiting problem for (3.3.1) is a system of two parabolic stochastic PDEs on Γ of the form dU i − (νi Δ U i + aU i − fi (U i ) + k(x )(−1)i+1 (U 1 −U 2 ) − hi (x , 0))dt = d ω (t, x ), (3.3.23) where i = 1, 2 and (t, x ) ∈ R+ × Γ , with the Neumann boundary condition on ∂Γ . Using the same method as for the case in (3.3.4) in combination with deterministic arguments given in Sect. 1.5, we can prove upper semicontinuity of Aψ¯ ε in the limit ε → 0 in the case (3.3.22). However, we will not present the case because (i) our main point of interest is the phenomenon of synchronization, and (ii) under the condition in (3.3.22), synchronization is possible only in some very special cases.
3.3 A Case Study: Stochastic Reaction–Diffusion in a Thin Two-Layer Domain
235
3.3.3 Existence of Random Pullback Attractors Now we prove the existence of a random pullback attractor for the problem in (3.3.11) for every fixed ε ∈ (0, 1] and also for the limiting problem (3.3.5). The Case ε > 0: We first want to emphasize that we do not use any information concerning the behavior of the intensity k(x , ε ) as ε → 0 and hence our results in this subsection cover both of the cases in (3.3.4) and (3.3.22). We restrict ourselves to showing the existence of pullback attractor Aφε for the RDS φε generated by (3.3.11). Then, Example 3.1.32 allows us to conclude the existence of a pullback attractor Aψ¯ ε for the RDS ψ¯ ε and by a simple scaling argument the existence of a pullback attractor Aψε for the RDS ψε given by the scaled stochastic equation. Proposition 3.3.6 (Random Pullback Attractor). In the space X the RDS φε generated by the problem in (3.3.11) possesses a random pullback attractor Aφε , which 1/2
belongs to the space Xε . Moreover, there exists a tempered random variable R, which does not depend on ε , such that # $ 1/2 Aφε (ω ) ⊂ v ∈ Xε : aε (v, v) + vLp+1 (D) ≤ R2 (ω ) , ω ∈ Ω . (3.3.24) p+1
We split the proof into several lemmata, which are also important for the limit transition on finite time intervals. Lemma 3.3.7 (Pullback Dissipativity). The RDS φε is pullback absorbing. The pullback-absorbing set B is given by the centered ball in X with square radius 0 ec0 τ 1 + Z(θτ ω )Lp+1 (Γ ) + Z(θτ ω )2H 1 (Γ ) d τ , R2 (ω ) = c1 −∞
p+1
with appropriate c0 > 0 and c1 > 0 independent of ε ∈ (0, 1]. This ball is also forward invariant and in D. Proof. The calculations below are formal, but can be justified by considering Galerkin approximations. Multiplying (3.3.11) by vi in L2 (Oi ) for i = 1, 2, we obtain that
1d 2 i i ε i vX + aε (v, v) = ∑ fi (v + Z)v dx + (li , v )L2 (Oi ) . 2 dt Oi i=1,2 From (3.3.3) we have that
( fi (vi + Z), vi ) =
Oi
fi (vi )vi dx +
Oi
0
1
fi (vi + λ Z)d λ Zvi dx
1 + |vi | p−1 + |Z| p−1 |Z||vi |dx + c2 O i a0 i p+1 ≤ − v L (O ) + b0 1 + ZLp+1 (Γ ) (3.3.25) p+1 i p+1 2 ≤
−a0 vi Lp+1 (O ) + c1 p+1 i
236
3 Stochastic Synchronization of Random Pullback Attractors
and from (3.3.1) and (3.3.14) we also have that
∑ (liε , vi )L2 (Oi ) ≤ C
i=1,2
ZH 1 (Γ ) +
∑
i=1,2
hi H 1 (Oi ) [aε (v, v)]1/2 .
(3.3.26)
Now, from (3.3.25)–(3.3.26), we obtain that
where
d v2X + aε (v, v) + a0 vLp+1 (D) ≤ R20 (θt ω ), p+1 dt
(3.3.27)
R20 (ω ) = c 1 + Z(ω )Lp+1 (Γ ) + Z(ω )2H 1 (Γ ) .
(3.3.28)
p+1
Since aε (v, v) ≥ c0 v2X + 12 aε (v, v), by differentiating eν∗ t v2X , taking into account (3.3.27) and integrating, we have that v(t)2X
t
+ 0
e−ν∗ (t−τ )Vε0 (v(τ ))d τ
≤
v0 2X e−ν∗ t
t
+ 0
e−ν∗ (t−τ ) R20 (θτ ω )d τ , (3.3.29)
for any ν∗ ≥ 0 sufficiently small. where R0 (ω ) is given by (3.3.28) and 1 Vε0 (v) = aε (v, v) + a0 vLp+1 (D) . p+1 2
(3.3.30)
This allows us to complete the proof of Lemma 3.3.7. Lemma 3.3.8 (Compact Pullback Absorbing Set). For each ε ∈ (0, 1] there exists a compact, tempered and absorbing set. Proof. Multiplying (3.3.11) by vti in L2 (Oi ), we find that
Ψε t (v(t)) + vt (t)2X =
∑
i=1,2 Oi
fi (v + Z) − fi (v ) i
i
vti dxdy +
Oi
(3.3.31) liε vti dxdy,
where 2 1 Ψε (u) = aε (u, u) + ∑ 2 i=1
Oi
Πi (ui ) dxdy,
1/2
u = (u1 , u2 ) ∈ Xε .
(3.3.32)
3.3 A Case Study: Stochastic Reaction–Diffusion in a Thin Two-Layer Domain
237
Here, Πi (u) = − 0u fi (ξ )d ξ . It is clear from the assumptions concerning fi that fi (vi + Z) − fi (vi ) vti dxdy ∑ i=1,2 Oi
fi (vi + Z) − fi (vi )2 dxdy + 1 vt 2X 4 i=1,2 Oi ≤ c1 + c2 |v| p+1 dxdy + c3 |Z|Lp+1 (Γ ) + |Z|Lp∗p
≤c
∑
p+1
D
∗ (Γ )
1 + vt 2X , 4
where p∗ = 2(p + 1)/(3 − p). We also have that liε vti dxdy ≤ c1 + c2 Z2 2 + 1 vt 2X . D H (Γ ) 4 In particular, we have by the mean value theorem | fi (vi + Z) − fi (vi )|2 ≤| fi (ξ )|2 |Z|2 ≤2a1 (|vi | + |Z|)2p−2 |Z|2 + 2c|Z|2 for ξ between vi and vi + Z. Then, we apply the Young inequality |vi |2p−2 |Z|2 ≤
2(p − 1) i p+1 3 − p 2(p+1) |v | |Z| 3−p , + p+1 p+1
which gives the estimate. Therefore, from (3.3.31) we have that 1 Ψε t (v) + vt 2X 2
≤ c1 + c2 vLp+1 (D) + c3 Z2H 2 (Γ ) + |Z|Lp+1 (Γ ) + |Z|Lp∗p p+1
p+1
∗ (Γ )
,
Consequently, choosing positive constants b0 and b1 in an appropriate way we can see that Vε (u) := b0 u2X + Ψε (u) + b1 with Ψε given by (3.3.32) satisfies the relations
c0Vε0 (v) ≤ Vε (v) ≤ c1 1 +Vε0 (v)
(3.3.33)
(3.3.34)
where Vε0 (v) given by (3.3.30). Moreover, owing to (3.3.27), we can choose b0 and b1 such that d 1 Vε (v) + γ Vε (v) + vε t 2X ≤ R21 (θt ω ), dt 2
(3.3.35)
238
3 Stochastic Synchronization of Random Pullback Attractors
with positive γ , where R21 (ω ) = c 1 + Z(ω )Lp+1 (Γ ) + |Z(ω )|Lp∗p
∗
p+1
2 + Z( ω ) 2 H (Γ ) , (Γ )
We note that R1 (ω ) is a tempered random variable. From (3.3.35), we have that Vε (v(t)) ≤ e−γ (t−s)Vε (v(s)) +
t s
e−γ (t−τ ) R21 (θτ ω )d τ ,
t ≥ s.
(3.3.36)
By (3.3.34), we also have Vε0 (v(t)) ≤ c1 e−γ (t−s) (1 +Vε0 (v(s))) + c2
t s
e−γ (t−τ ) R21 (θτ ω )d τ ,
t ≥ s.
Therefore, using (3.3.29) after integration with respect to s over the interval [0,t], we obtain t c c 1 e−ν∗ (t−τ ) R22 (θτ ω )d τ + 1 , t > 0, Vε0 (v(t)) ≤ 1 v0 2X e−γ∗ t + c2 1 + t t γt 0 (3.3.37) for some 0 < ν∗ ≤ γ and an appropriate tempered random variable R2 . Denote R2∗ (ω ) := 2c2
1 0
e−ν∗ (1−τ ) R22 (θτ ω )d τ +
c1 γ
Then, by the definition of Vε0 the set {ν : Vε0 (ν ) ≤ c1 sup v0 2X + R2∗ (ω )}
(3.3.38)
v0 ∈B(ω )
is relatively compact. Hence, X
C(ω ) := φε (1, θ−1 ω , B(θ−1 ω )) ⊂ B(ω ), where B is given in Lemma 3.3.7 is tempered, compact, and absorbing. Moreover, the tempered random variable R2∗ (ω ) does not depend on ε . Now we can apply Corollary 3.1.18. The inclusion follows for the calculations inside the last proof. Remark 3.3.9. It also follows from (3.3.35) and (3.3.29) that t 0
τ e−γ∗ (t−τ ) vε t (τ )2X d τ ≤ c1 v0 2X e−γ∗ t + c2
t 0
(1 + τ )e−γ∗ (t−τ ) R21 (θτ ω )d τ , (3.3.39)
for all t ≥ 0, where γ∗ > 0.
3.3 A Case Study: Stochastic Reaction–Diffusion in a Thin Two-Layer Domain
239
Below, we will also need the next lemma. Lemma 3.3.10. For any initial data v, v∗ ∈ X we have the estimate φε (t, ω , v) − φε (t, ω , v∗ )X ≤ c1 ec2 t v − v∗ X ,
ω ∈ Ω,
(3.3.40)
where c1 and c2 do not depend on ω ∈ Ω and ε ∈ (0, 1]. Proof. We use the same method as in Lemma 3.3.7 by considering the difference of two solutions and relying on the property fi (vi + Z) − fi (vi∗ + Z) (vi − vi∗ ) ≤ c0 |vi − vi∗ |2 , where c0 does not depend on ω and ε . The Limiting Case: The following assertion states the existence of a random pullback attractor for this RDS ψ0 . Proposition 3.3.11. In the space L2 (Γ ), the problem in (3.3.16) generates an RDS ψ0 possessing a random pullback attractor Aψ0 , which belongs to the space H 1 (Γ ). Moreover, under the assumptions on Q that the Markov family for ψ0 has a unique weakly attracting invariant measure, then the attractor Aψ0 is a singleton, i.e., Aψ0 (ω ) = {a0 (ω )}, where a0 (ω ) is a tempered random variable with values in L2 (Γ ), which is a stationary point. Proof. To prove the existence of the attractor we argue exactly as in Proposition 3.3.6 and we do not repeat it again. As for the second part, we first note that the RDS ψ0 is monotone, i.e., the property v(x) ≤ v∗ (x) for almost all x ∈ Γ implies that
ψ0 (t, ω , v)[x] ≤ ψ0 (t, ω , v∗ )[x],
for almost all
x ∈Γ,
for all t > 0 and ω ∈ Ω . This monotonicity property can be established by the standard (pathwise) argument (see, for example, Smith [93]). Our next step is to apply Theorem 3.1.38, which states that, under some conditions, the global pullback attractor of a monotone RDS consists of a single stationary point. The main hypothesis in this theorem is the weak convergence of distributions of the process t → ψ0 (t, ω , v) to some limiting probability measure. We refer the reader to Example 3.1.39 for details. Proposition 3.3.6 and Proposition 3.3.11 imply Theorem 3.3.3(2). Remark 3.3.12. Although it is possible to prove that the RDS ψ¯ ε generated by the problem in (3.3.11)–(3.3.13) is also monotone, we cannot apply the result from [52] to prove that Aψ¯ ε is a stationary point. The point is that the Brownian ω is sufficiently regular in L2 (Γ ) (the phase space of the system ψ0 ) for a unique invariant measure of the Markov semigroup, but it is degenerate in X = L2 (O) (the phase space for ψ¯ ε ) and hence we cannot guarantee the weak convergence of distributions of the process t → ψ¯ ε (t, ω ,U0 ). Thus, the random pullback attractor may contain more than one stationary point.
240
3 Stochastic Synchronization of Random Pullback Attractors
Remark 3.3.13. 1. It is clear from the argument in the proof of Lemma 3.3.10 that ψ0 (t, ω , v) − ψ0 (t, ω , v∗ )L2 (Γ ) ≤ c1 ec2 t v − v∗ L2 (Γ ) ,
ω ∈ Ω,
(3.3.41)
for some constants c1 and c2 independent of ω , where v, v∗ ∈ X. 2. Since L2 (Γ ) can be embedded naturally into L2 (D) ∼ X as the subspace of functions independent of y, we can consider the cocycle φ0 as a mapping from L2 (Γ ) into X. Therefore, we can compare it with φε . Below, we also consider the image ˜ φ (ω ) of Aφ (ω ) under this embedding. A 0 0
3.3.4 Limit Transition on Finite Time Intervals The following theorem, implies the first statement in Theorem 3.3.3. In particular, we derive a similar property for the scaled system φε . Then, considering the conjugate system ψ¯ ε , we obtain the same convergence because the transforms T, T −1 are independent of ε . Theorem 3.3.14. For any time interval we have that lim sup φε (t, ω , v) − φ0 (t, ω , v∗ )X = 0,
ε →0 t∈[δ ,T ]
where v∗ = v :=
1 1 2 −1 v(x, y)dy.
∀δ ∈ (0, T ),
(3.3.42)
If v does not depend on y, i.e., v = v∗ , then
lim sup φε (t, ω , v) − φ0 (t, ω , v∗ )X = 0.
(3.3.43)
ε →0 t∈[0,T ]
Proof. Let wiε (t) := φε (t, ω , vi ). It follows from (3.3.29),(3.3.37) and (3.3.39) that sup
∑
t∈[0,T ] i=1,2
wiε (t)2L2 (Oi ) +
T 0
wiε (t)2H 1 (Oi ) dt ≤ CT (ω ),
(3.3.44)
and, for every δ > 0, sup
∑
t∈[δ ,T ] i=1,2
1 ε2
wiε (t)2H 1 (Oi ) +
∑
T
i=1,2 δ
wiε t (t)2L2 (Oi ) dt ≤ CT,δ (ω ),
. sup
∑
t∈[δ ,T ] i=1,2
∂y wiε (t)2L2 (Oi ) +
∑
T
i=1,2 0
(3.3.45)
/ ∂y wiε (t)2L2 (Oi ) dt
≤ CT,δ (ω ). (3.3.46)
3.3 A Case Study: Stochastic Reaction–Diffusion in a Thin Two-Layer Domain
241
Moreover, we have
sup
t∈[δ ,T ] Γ
k(x , ε ) 1 |wε (t, x , 0) − w2ε (t, x , 0)|2 dx ε +
T k(x , ε )
dt
0
Γ
ε
|w1ε (t, x , 0) − w2ε (t, x , 0)|2 dx ≤ CT,δ (ω ), (3.3.47)
for all intervals [0, T ] and ε ∈ (0, 1]. Indeed the estimates of the bilinear form aε are bounded independently of ε . Therefore, using the relations in (3.3.44)–(3.3.46) and the Aubin–Dubinski–Lions compactness theorem (see also Chepyshov/Vishik [28, p. 32]), we can conclude that there exists a pair of functions ui ∈ C([δ , T ], L2 (Γ )) ∩ L∞ (δ , T, H 1 (Γ )), i = 1, 2,
∀δ > 0
and a sequence {εn } such that lim
n→∞
∑
sup wiεn (t) − ui (t)L2 (Oi ) = 0.
(3.3.48)
i=1,2 t∈[δ ,T ]
Moreover, we also have weak convergence in L2 (0, T, H 1 (D)). We can also see from (3.3.47) and (3.3.4) that u1 (t) = u2 (t) = u(t) on the set Γ . Considering a variational form of the equations in (3.3.11), we can show that u(t) solves the problem in (3.3.16). The corresponding argument is exactly the same as in Rekalo/Chueshov[137] for the deterministic case and therefore we do not give details here. Thus, (3.3.42) follows from (3.3.48) and from the uniqueness theorem for (3.3.16). To prove (3.3.43), we first consider v = v∗ from the space H 1 (Γ ) ∩ L p+1 (Γ ). In this case, relying on (3.3.36) with s = 0 and using the fact that Vε (v) does not depend on ε for this choice of v, we can easily prove estimates (3.3.45) and (3.3.46) with δ = 0. Thus, the same argument as above gives (3.3.43) for v = v∗ from H 1 (Γ ) ∩ L p+1 (Γ ). To obtain (3.3.43) for v∗ ∈ L2 (Γ ) we use an appropriate approximation procedure and relations (3.3.40) and (3.3.41). Remark 3.3.15. By a standard argument, we can prove that (3.3.42) and (3.3.43) holds uniformly with respect to v in every compact set. Remark 3.3.16. Since the arguments given in Lemma 3.3.7 and Lemma 3.3.8 do not depend on the behavior of k(x, ε ) as ε → 0, the estimates in (3.3.44)–(3.3.47) hold for both cases (3.3.4) and (3.3.22). Thus, in the latter case, we can also conclude from (3.3.44)–(3.3.46) that w1ε and w2ε converge to some functions u1 and u2 defined on Γ . However, in that case we cannot prove that u1 and u2 are the same because under the condition in (3.3.22) the estimate (3.3.47) does not lead to the conclusion. In the case in (3.3.22), the same arguments as in Chueshov/Rekalo [136, 137] give us the convergence of ψ¯ ε (t, ω ) generated by (3.3.1) to the cocycle generated by (3.3.23).
242
3 Stochastic Synchronization of Random Pullback Attractors
3.3.5 Upper Semicontinuity of Attractors Now, we prove the following assertion, which, in fact, is a result of synchronization. Theorem 3.3.17. Let Aφε be the global random pullback attractor for the RDS φε generated by (3.3.11). Then, φ (ω ) = 0, ∀ω ∈ Ω , (3.3.49) lim dX Aφε (ω ), A 0 ε →0
φ (ω ) = J(v) : v ∈ Aφ (ω ) ⊂ X. Here, Aφ (ω ) is the random pullback where A 0 0 0 attractor for the RDS φ0 and J : L2 (Γ ) → L2 (D) = X is the natural embedding operator. Proof. Assume that (3.3.49) does not hold for some ω ∈ Ω . Then, there exist a sequence {εn } with εn → 0 and a sequence un ∈ Aφεn (ω ) such that φ (ω )) ≥ δ > 0 for all dX (un , A 0
n = 1, 2, . . .
(3.3.50)
By the invariance property of the attractor Aφεn (ω ), for every t > 0, there exists vtn ∈ Aφεn (θ−t ω ) such that un = φεn (t, θ−t ω , vtn ). Since Aφεn (ω ) is compact and the 1/2
estimate in (3.3.24) holds, we can assume that there exists u∗ and vt∗ in Xε that lim un − u∗ X = 0,
n→∞
lim vtn − vt∗ X = 0.
n→∞
such
(3.3.51)
As in the proof of Theorem 3.3.14, we can see that u∗ = u˜ + u, ˜
vt∗ = v˜t + v˜t ,
where u, ˜ v˜t ∈ H 1 (Γ ). Therefore, if we show that u˜ ∈ Aφ0 (ω ), then we obtain a contradiction to (3.3.50). It follows from Lemma 3.3.10 and Theorem 3.3.14 that u˜ = φ0 (t, θ−t ω , v˜t ). However, it follows from the fact that the radii R(ω ) are independent of ε in (3.3.38) and (3.3.51) that v˜t ∈ B0 (θ−t ω ), where # $ ˜ ω) , B0 (ω ) = v ∈ H 1 (Γ ) : vH 1 (Γ ) ≤ R( ˜ ω ) is a tempered random variable. Thus, we have that where R( u˜ ∈ φ0 (t, θ−t ω , B0 (θ−t ω )) for every t > 0.
3.3 A Case Study: Stochastic Reaction–Diffusion in a Thin Two-Layer Domain
243
Since φ0 (t, θ−t ω , B0 (θ−t ω )) → Aφ0 (ω ) as t → ∞, this implies that u˜ ∈ Aφ0 (ω ). See also Theorem 3.1.25. Now, Theorem 3.3.3(3) follows from Theorem 3.3.17.
3.3.6 Synchronization for Fixed ε > 0 Now, we consider the case when the equations are the same in both domains, i.e., we assume that relations (3.3.18), (3.3.19) and (3.3.20) hold. Under the conditions in (3.3.18), the cocycle φε has a forward invariant subspace L in X consisting of functions that are independent of the variable y, i.e., L = {u(x, y) ∈ L2 (D) : u(x, y) = u(x, 0) = v ∈ L2 (Γ )} This subspace is independent of ω . It is clear that φε (t, ω , L ) ⊂ L and φε (t, ω ) = φ0 (t, ω ) on L . Theorem 3.3.18. Under the conditions in (3.3.18), (3.3.19) and (3.3.20), there exists ε0 > 0 such that for all ε ∈ (0, ε0 ] the random pullback attractor Aφε for φε has the form φ (ω ) = J(v) : v ∈ Aφ (ω ) ⊂ X, Aφε (ω ) := A (3.3.52) 0 0 where J : L2 (Γ ) → L2 (D) = X is the natural embedding operator and Aφ0 is the random pullback attractor for the RDS φ0 . Proof. Let π1 be orthoprojector in X onto L . This operator has the form 1 (π1 u)(x, y) = 2
1 −1
u(x, ξ )d ξ ,
u ∈ X ∼ L2 (O).
Let π2 = 1 − π1 . Both of the operators π1 and π2 map the domain D(Aε ) of the operator Aε into itself and commute with Aε . Let F(v, ω ) = f (v + Z(ω )). Therefore, it follows for a solution of (3.3.15) that π2 vε satisfies the equation d π2 vε + Aε π2 vε = π2 F(vε , θt ω ), dt
π2 v|t=0 = π2 v0 .
(3.3.53)
Multiplying this equation by π2 vε , we obtain 1d π2 vε 2X + aε (π2 vε , π2 vε )X = (π2 F(vε , θt ω ), π2 vε )X . 2 dt
(3.3.54)
244
3 Stochastic Synchronization of Random Pullback Attractors
From (3.3.19), we have that (π2 F(vε , θt ω ), π2 vε )X
1 1 = f (vε (x, ξ ) + Z)d ξ π2 vε (x, y)dxdy f (vε (x, y) + Z) − 2 −1 D 1 L |vε (x, y) − vε (x, ξ )| |π2 vε (x, y)|d ξ dxdy ≤ 2 D −1
1/2 1 1 L ≤√ dx dy d ξ |vε (x, y) − vε (x, ξ )|2 π2 vε X −1 −1 2 Γ 1 1 L 1 1 ≤√ dx dy d ξ |vε (x, y) − vε (x, η )d η 2 −1 −1 −1 2 Γ
1/2 1 1 2 − vε (x, ξ ) + vε (x, η )d η | π2 vε X 2 −1
1/2 1 1 L 2 ≤√ dx dy d ξ |π2 vε (x, y)| −1 −1 2 Γ
1/2 1 1 L +√ dx dy d ξ |π2 vε (x, ξ )|2 π2 vε X . −1 −1 2 Γ We easily arrive at the relation (π2 F(vε , θt ω ), π2 vε )X ≤ 2Lπ2 vε 2X . Thus, from (3.3.54), we obtain that 1 d π2 vε 2X + aε (π2 vε , π2 vε )X ≤ 2Lπ2 vε 2X . 2 dt Lemma 1.5.11 implies that there exists ε0 > 0 such that d π2 vε 2X + γ0 π2 vε 2X ≤ 0 dt for all 0 < ε ≤ ε0 and for some γ0 > 0. Therefore, π2 vε (t)2X ≤ π2 vε (0)2X e−γ0 t ,
t ≥0
This implies that the subspace L attracts all tempered sets (in the both forward and pullback sense) with exponential (deterministic) speed. Since φε (t, ω ) = φ0 (t, ω ) on L , this implies (3.3.52). Theorem 3.3.18 implies Theorem 3.3.3(4).
3.4 Synchronization in Coupled Stochastic Sine-Gordon Wave Model

This section is a stochastic analog of Sect. 1.6, which was devoted to the synchronization of second-order in time models. However, to present the main ideas we concentrate on an important special case of N interacting subsystems. More precisely, our main object of interest is the following model of N coupled sine-Gordon equations:

du_t^1 + (−νΔu^1 + γ u_t^1 + κ(u^1 − u^2)) dt = (−l_1 sin(u^1 + α_1) + h_1(x)) dt + dω_1,
du_t^j + (−νΔu^j + γ u_t^j − κ(u^{j+1} − 2u^j + u^{j−1})) dt = (−l_j sin(u^j + α_j) + h_j(x)) dt + dω_j,   j = 2, ..., N−1,
du_t^N + (−νΔu^N + γ u_t^N + κ(u^N − u^{N−1})) dt = (−l_N sin(u^N + α_N) + h_N(x)) dt + dω_N,    (3.4.1)
on a smooth bounded domain O ⊂ R^d with the Neumann boundary conditions

∂u^j/∂n |_{∂O} = 0,   j = 1, ..., N,

and initial data u^j(0) = u_0^j, u_t^j(0) = u_1^j, j = 1, ..., N. Here, U = (u^1, ..., u^N) is the unknown function, ν > 0, γ, κ ≥ 0, l_j > 0, and α_j are real parameters, ω_j are trace-class Brownian motions, and h_j are given functions. As a particular case, we can consider the model of two coupled equations:

du_t + (γ u_t − Δu + κ(u − v)) dt = (−l_1 sin u + h_1(x)) dt + dω_1,
dv_t + (γ v_t − Δv + κ(v − u)) dt = (−l_2 sin v + h_2(x)) dt + dω_2,

in a smooth bounded domain O ⊂ R^d with the Neumann boundary conditions

∂u/∂n |_{∂O} = 0,   ∂v/∂n |_{∂O} = 0.

Our goal is to study the long-time dynamics of the RDS generated by the system (3.4.1). Under certain hypotheses, we first prove the existence of a random pullback attractor and study its dependence on the interaction parameter κ. Then, we apply these results to analyze various synchronization phenomena, which we understand at the level of attractors, i.e., in the synchronized regime the attractor of the coupled system becomes "diagonal" in some sense. We also discuss briefly the possibility of synchronization in infinite-dimensional systems by means of finite-dimensional interaction operators.
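Before turning to the rigorous theory, the following exploratory sketch integrates a spatially homogeneous caricature of the two-coupled-equation model (the Laplacian is dropped and all parameter values are arbitrary) by the Euler–Maruyama scheme. With independent noises one can only expect the mismatch |u − v| to become small for large κ; exact asymptotic synchronization requires identical noises, as discussed in Sect. 3.4.7.

```python
import numpy as np

# Spatially homogeneous caricature of the two-component model: drop -nu*Delta and keep
#   du_t + (gamma*u_t + kappa*(u - v)) dt = -l1*sin(u) dt + sigma dW1
#   dv_t + (gamma*v_t + kappa*(v - u)) dt = -l2*sin(v) dt + sigma dW2
gamma, l1, l2, sigma = 0.5, 1.0, 1.0, 0.3
dt, steps = 2e-3, 50_000

def averaged_mismatch(kappa, seed=1):
    rng = np.random.default_rng(seed)
    u, ut, v, vt = 0.0, 0.0, 2.0, 0.0
    acc = 0.0
    for k in range(steps):
        dW1, dW2 = rng.normal(0.0, np.sqrt(dt), size=2)
        ut += (-gamma * ut - kappa * (u - v) - l1 * np.sin(u)) * dt + sigma * dW1
        vt += (-gamma * vt - kappa * (v - u) - l2 * np.sin(v)) * dt + sigma * dW2
        u += ut * dt
        v += vt * dt
        if k >= steps // 2:                 # average |u - v| over the second half of the run
            acc += abs(u - v)
    return acc / (steps - steps // 2)

for kappa in (0.0, 1.0, 10.0, 100.0):
    print(f"kappa = {kappa:6.1f}   time-averaged |u - v| ~ {averaged_mismatch(kappa):.4f}")
```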
This section is based on the paper Chueshov/Kloeden/Yang [59]. As in the deterministic case, we make extensive use of the quasi-stability method. In the stochastic case, this approach was applied earlier in a study by Chueshov/Schmalfuss [55] of a stochastic fluid-structure interaction model. We also note that different aspects of pullback random dynamics in a single sineGordon equation dut + (γ ut − Δ u)dt = (−l sin u + h(x))dt + d ω , with different types of boundary conditions have been studied before. We mention the papers Fan [72, 73] and Fan/Wang [74], which deal with a simpler case of Dirichlet boundary conditions, whereas Neumann boundary conditions are considered in Shen/Zhou/Shen [151]. However, in all these cases the stochastic noise is finitedimensional and smooth in its spatial variables. Synchronized random dynamics in the finite-dimensional version of (3.4.1) were considered in Shen/Zhou/Shen [152]. An additional difficulty that impacts on the description of random dynamics in the model (3.4.1) is the degeneracy of the linear part of the problem owing to the Neumann boundary conditions. We also refer to the papers by Barbu/Da Prato [9], Carmona/Nualart [22], Dalang/ Frangos [66], Da Prato/Zabczyk [65], Millet/Morien [114], Millet/SanzSol´e [115] Quer-Sardanyons/Sanz-Sol´e [134], which discuss different aspects of stochastic wave models.
3.4.1 Abstract Model and Main Hypotheses

The model in (3.4.1) is a particular case of the following system of equations in a Hilbert space H:

du_t^1 + (ν B̃ u^1 + D u_t^1 + κ K(u^1 − u^2)) dt = F̃_1(u^1) dt + dω_1,
du_t^j + (ν B̃ u^j + D u_t^j − κ K(u^{j+1} − 2u^j + u^{j−1})) dt = F̃_j(u^j) dt + dω_j,   j = 2, ..., N−1,
du_t^N + (ν B̃ u^N + D u_t^N + κ K(u^N − u^{N−1})) dt = F̃_N(u^N) dt + dω_N,    (3.4.2)

equipped with the initial data u^j(0) = u_0^j, u_t^j(0) = u_1^j, j = 1, ..., N. To obtain (3.4.1) from (3.4.2), we need to set H = L^2(O) and B̃ = −Δ on the smooth bounded domain O with

D(B̃) = { u ∈ H^2(O) : ∂u/∂n |_{∂O} = 0 },

and also to take D = γ·id, γ > 0, K = id, and F̃_i(u) = −l_i sin(u + α_i).
For a general model, we impose the following hypotheses. Assumption 3.4.1. ˜ in a 1. B˜ is a self-adjoint nonnegative operator densely defined on a domain D(B) separable Hilbert space H. We assume that dim Ker B˜ = 1 and the resolvent of B˜ is compact in H, which implies that there is an orthonormal basis {ek }∞ k=0 in H consisting of the eigenvectors of the operator B˜ : ˜ k = λk ek , Be
0 = λ0 < λ1 ≤ λ2 ≤ · · · ,
lim λk = ∞.
k→∞
We denote by · and (·, ·) the norm and the inner product in H. We also denote by H s (with s > 0) the domain D(B˜ s ) equipped with the graph norm · s = ˜ s · . H −s denotes the completion of H with respect to the norm · −s (id + B) ˜ −s ·. The symbol (·, ·) denotes not only the scalar product but also the = (id+ B) duality between H s and H −s . Below, we also use the notation X s = H s × . . . × H s . The norm in X s is denoted by the same symbol as in H s . We write X 0 = X. 2. The damping operator D is a linear positive self-adjoint operator and D : H 1/2 → H −1/2 is a bounded mapping. In particular, there exist c1 , c2 > 0 such that c1 u2 ≤ (Du, u) ≤ c2 B˜ 1/2 u2 , u ∈ H 1/2 . 3. The interaction operator K is a linear positive self-adjoint operator and K : H 1/2 → H. 4. The nonlinear operators F˜i : H 1/2 → H are bounded and (globally) Lipschitz, i.e., there exist constants MF˜ and LF˜ such that F˜i (u) ≤ MF˜ , F˜i (u) − F˜i (v) ≤ LF˜ u − v1/2 , i = 1, . . . , N for all u, v ∈ H 1/2 . In addition, we assume that F˜i are d-periodic in the direction4 ˜ 0 = 0 and e0 = 1): e0 (recall that Be ∀ u ∈ H 1/2 : F˜i (u + dn · e0 ) = F˜i (u), n = 0, ±1, ±2, . . . 5. ωi is a two-sided continuous trace-class Brownian motion on H with covariance operator Qi , i.e., with trH Qi < ∞, i = 1, . . ., N. The system in (3.4.2) can be written as a single equation ˜ dUt +(ν BU +DUt +κK U)dt = F(U)dt +d ω , U(0) = U0 , Ut (0) = U1 , (3.4.3)
Footnote 4: In the case of the coupled sine-Gordon system (3.4.1) we have e_0 = (Vol(O))^{−1/2} and d = 2π (Vol(O))^{1/2}.
with B = diag(1, ..., 1) · B̃ and D = diag(1, ..., 1) · D, and with the interaction operator

K = ⎛  1  −1   0  ···  0 ⎞
    ⎜ −1   2  −1  ···  0 ⎟
    ⎜  0  −1   2  ···  0 ⎟ · K,
    ⎜  ⋮    ⋮   ⋮   ⋱   ⋮ ⎟
    ⎝  0   0   0  ···  1 ⎠

where the K on the right is the operator from Assumption 3.4.1, as well as F(U) = (F̃_1(u^1), ..., F̃_N(u^N)) and U = (u^1, ..., u^N).
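The scalar pattern multiplying K above is the Laplacian matrix of a path graph. The following sketch (with K = id, an assumption made only for this illustration) computes its smallest nonzero eigenvalue, i.e., the coercivity constant of the coupling on zero-sum vectors; the value agrees with the constant c_0 = 4 sin^2(π/(2N)) that appears later in Lemma 3.4.5.

```python
import numpy as np

def coupling_pattern(N):
    # The N x N matrix multiplying K: a path-graph Laplacian (1's in the corners, 2's inside).
    M = 2.0 * np.eye(N) - np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)
    M[0, 0] = M[-1, -1] = 1.0
    return M

for N in (3, 5, 10, 25):
    evals = np.sort(np.linalg.eigvalsh(coupling_pattern(N)))
    # evals[0] = 0 with eigenvector (1, ..., 1); the next eigenvalue is the
    # coercivity constant of the coupling on vectors with zero component sum.
    print(f"N = {N:3d}   smallest nonzero eigenvalue = {evals[1]:.6f}"
          f"   4*sin^2(pi/(2N)) = {4 * np.sin(np.pi / (2 * N)) ** 2:.6f}")
```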
3.4.2 Ornstein–Uhlenbeck Processes Generated by Second-Order Equations Consider the following linear stochastic PDE ˜ + z + Dzt )dt = d ωi , dzt + (ν Bz where ωi is a Brownian motion on H with covariance operator Qi of trace class: trH Oi < ∞, i = 1, . . ., N, as in Assumption 3.4.1 (5). For this equation we can write z1 z 0 −id 0 d 1 + dt = d , dz1 = z2 dt. z2 ν B˜ + id D z2 ωi Similar to the proof of Chueshov/Lasiecka [46, Theorem 1.5] by a dissipativity argument, this operator generates a strongly continuous semigroup S in H 1/2 × H S(t)(u0 , u1 ) = (u(t), ut (t)), where u(t) solves the Cauchy problem ˜ + u + Dut = 0, u|t=0 = u0 , ut |t=0 = u1 . utt + ν Bu This semigroup is exponentially stable. Following Da Prato/Zabczyk [65, Chapter 5], we obtain an Ornstein–Uhlenbeck process generated by S and ωi . Then we can apply Theorem 3.1.28 to obtain a stationary Ornstein–Uhlenbeck process. We consider the noise ω = (ω1 , · · · , ωN ) (see Assumption 3.4.1 (5)). We also use the notation that Z = (Z 1 , . . . , Z N ) is the stationary Ornstein–Uhlenbeck process generated by the semigroup S and the noise ωi . This variable Z solves the equation dZt + (ν BZ + Z + DZt )dt = d ω , where (Z(ω ), Zt (ω )) ∈ X := X 1/2 × X.
(3.4.4)
3.4.3 Random Evolution Equation Introducing new variables vi = ui − Z i (θt ω ) in (3.4.3), we obtain the random evolution equation ˜ + Z(θt ω )) + ξ (θt ω ) Vtt + ν BV + DVt + κK V = F(V ξ (ω ) :=Z(ω ) − κK Z(ω )
(3.4.5)
for V = (v1 , . . . , vN ), with the initial data V (0) = V0 := U0 − Z(ω ), Vt (0) = V1 := U1 − Zt (ω ). Theorem 3.4.2. Let T > 0 be arbitrary. Under the Assumption 3.4.1 for every (V0 ,V1 ) ∈ X there exists a unique mild solution of (3.4.5) such that (V,Vt ) ∈ C([0, T ], X ), This solution possesses the property D 1/2Vt ∈ L2 (0, T, X) and satisfies the energy relation E (V (t),Vt (t)) +
t s
(DVt (τ ),Vt (τ ))d τ =
t s
˜ (τ ) + Z(θτ ω )) − ξ (τ ),Vt (τ ))d τ (F(V
+ E (V (s),Vt (s)).
(3.4.6)
where the energy E is defined by N
E (V0 ,V1 ) = ∑ E(vi0 , vi1 ) + Eint (V0 ), i=1
with V0 = (v10 , . . . , vN0 ), V1 = (v11 , . . . , vN1 ). Here, E(u0 , u1 ) = and Eint (V0 ) =
1 u1 2 + ν B˜ 1/2 u0 2 2
κ κ N−1 1/2 j+1 (K V0 ,V0 ) = ∑ K (v0 − v0j )2 . 2 2 j=1
In addition, the following inequality holds: E (V (t),Vt (t))+
1 2
t s
(DVt (τ ),Vt (τ ))d τ ≤ E (V (s),Vt (s)) + b
t s
1 + ξ (τ )2 d τ , (3.4.7)
where b is a deterministic constant independent of κ. Moreover, if V 1 and V 2 are solutions with different initial data and Δ V = V 1 −V 2 , then Δ Vt (t)2 + ν Δ V (t)21/2 + κK 1/2 Δ V (t)2 ≤ Δ Vt (s)2 + ν Δ V (s)21/2 + κK
1/2
Δ V (s)2 ea(t−s)
(3.4.8)
for all t > s ≥ 0, where a is a deterministic constant independent of κ. Proof. Since the nonlinearity is bounded and globally Lipschitz, we can apply the standard deterministic arguments, see, for example, Pazy [127, Chapter 5]. The energy relation in (3.4.6) follows from (3.4.5) by multiplication by Vt . To prove (3.4.7) we note that ˜ + Z) + ξ ,Vt )| ≤ (MF˜ + ξ )Vt ≤ |(F(V
C (1 + ξ 2 ) + ε Vt 2 ε
for every ε > 0. Thus, choosing ε small enough and using Assumption 3.4.1(2), we obtain (3.4.7). To prove (3.4.8) we note that Δ V solves the problem
Δ Vtt + ν (B + id)Δ V + D Δ Vt + κK Δ V = Δ F,
(3.4.9)
˜ 2 + Z) − F(V ˜ 1 + Z) + νΔ V . We obviously have for an appropriate where Δ F = F(V c > 0 by (3.4.1) that 1 |(Δ F, Δ Vt )| ≤ (ν + LF˜ )Δ V 1/2 Δ Vt ≤ (D Δ Vt , Δ Vt ) + cΔ V 21/2 . 2 Therefore, from the energy relation for (3.4.9), we have that the function
Ψ (t) = Δ Vt (t)2 + ν Δ V (t)21/2 + κK
1/2
Δ V (t)2
satisfies the inequality
Ψ (t) ≤ Ψ (s) +
t s
Ψ (τ )d τ , t > s.
This implies (3.4.8). Remark 3.4.3. When the nonlinearity is a potential operator, there is another form of the energy relation. Let F˜i (u) be the Frech´et derivative of a functional Πi (u) on H 1/2 , i.e., F˜i (u) = −Πi (u) (see (1.3.18)). We define N
E(V,Vt ) := E (V,Vt ) + ∑ Π j (v j + Z j ). j=1
We can show t
E(V (t),Vt (t)) + −
t s
s
(DVt (τ ),Vt (τ ))d τ
˜ (τ ) + Z(τ )), Zt (τ )) + (ξ (τ ),Vt (τ )) d τ = E(V (s),Vt (s)). (F(V
This energy relation involves both components of the Ornstein–Uhlenbeck process Z given in (3.4.4). By Theorem 3.4.2, the problem in (3.4.5) generates an RDS φ in the space X := X 1/2 × X = [H 1/2 ]N × H N defined by the relation
φ (t, ω , (V0 ,V1 )) = (V (t),Vt (t)), where V (t) is a solution to the problem in (3.4.5). By the structure of K , the cocycle φ possesses the following symmetry property √ √ φ (t, ω , (V0 + d Nmψ0 ,V1 )) = φ (t, ω , (V0 ,V1 )) + (dm N ψ0 , 0), m = ±1, ±2, . . . , √ where ψ0 = (e0 , . . . , e0 )/ N, and e0 is the normalized eigenvector of B˜ with the zero eigenvalue. In the same way as in Temam [161, Section IV.2] for a deterministic single sineGordon model, we can define the dynamics in the corresponding factor space. We first note that every element V from X 1/2 can written in the form V = v + ψ := [V − (V, ψ0 )ψ0 ] + (V, ψ0 )ψ0 =: Qψ0 V + Pψ0 V, where v ∈ X1/2 = {v ∈ X 1/2 : (v, ψ0 ) = 0} and ψ ∈ Lψ0 := {cψ0 : c ∈ R}. Therefore, using the symmetry above, we can correctly define evolution φ in the space √ 2 := X1/2 × Lψ /d NZ · ψ0 × X ∼ X = X1/2 × T1 × X 0 √ √ where T1 = R/d NZ ∼ = [0, d N) is the one-dimensional torus. To do this, we take (v0 , c0 , v1 ) from X1/2 × T1 × X. Then, we solve (3.4.5) with the initial data V0 = v0 + a0 ψ0 and V1 = v1 , where a0 is a representative for the factor element c0 ∈ T1 . Finally, using solution V (t), we define the RDS φ
φ(t, ω , (v0 , c0 , v1 )) =(V (t) − (V (t), ψ0 )ψ0 , {(V (t), ψ0 )},Vt (t)) =(Qψ0 V (t), {(V (t), ψ0 )},Vt (t)),
(3.4.10)
where {(V(t), ψ_0)} := (V(t), ψ_0) mod d√N is the element in T^1 generated by (V(t), ψ_0), i.e.,

{(V(t), ψ_0)} = { (V(t), ψ_0) + m d√N : m = 0, ±1, ±2, ... }.

We note that X̂ is not a vector space, but it is a Polish space. The metric in X̂ can be defined by the relation

d_X̂(Y, Y^*) = ||v_0 − v_0^*||_{1/2} + ||v_1 − v_1^*|| + d_{T^1}(c_0, c_0^*),    (3.4.11)

where Y = (v_0, c_0, v_1) and Y^* = (v_0^*, c_0^*, v_1^*) are elements from X̂. The distance on the torus T^1 can be defined as follows:

d_{T^1}(c_0, c_0^*) = min { |a_0 − a_0^*| : a_0 ∈ c_0, a_0^* ∈ c_0^* }.
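The torus distance can be evaluated on arbitrary real representatives, as the short sketch below shows; the period is written for the sine-Gordon case, where d is determined by the nonlinearity (Vol(O) is set to 1 here purely for the illustration).

```python
import math

def torus_distance(a, b, period):
    """Distance on the one-dimensional torus R / (period Z),
    evaluated on arbitrary real representatives a and b (cf. d_{T^1} above)."""
    d = abs(a - b) % period
    return min(d, period - d)

# For the coupled sine-Gordon model the period is d*sqrt(N) with d = 2*pi*Vol(O)^{1/2}.
N, d = 4, 2 * math.pi
period = d * math.sqrt(N)
print(torus_distance(0.3, period - 0.1, period))   # 0.4 rather than period - 0.4
```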
3.4.4 Global Random Attractors: Dissipativity In this section, we prove the existence of a global random attractor for the RDS φ and study its properties. We also note that for the synchronization phenomena it is important to have effective bounds for the attractor. Below, as in the deterministic case (see Sect. 1.6), we use an approach based on Lyapunov-type functions. We first introduce two orthogonal projectors P and Q: ⎛ ⎞ ⎛ 1⎞ 1 u ⎜.⎟ ⎜ . ⎟ N ⎜ ⎟ ⎜ ⎟ 1 i ⎟ ⎜ ⎟ PU := P ⎜ ⎜ . ⎟ = MU · I and Q = id − P with MU = N ∑ u , I = ⎜ . ⎟ ⎝.⎠ ⎝ . ⎠ i=1 N 1 u (3.4.12) in the space X. We note that the (partially) average operator M maps X into H. Theorem 3.4.4. Let Assumption 3.4.1 hold. Then, for every κ > 0 the RDS φ is 2. The absorbing set Bκ is given by pullback absorbing in X # $ Bκ (ω ) = Y = (v0 , c0 , v1 ) ∈ X1/2 × T1 × X : v0 21/2 + v1 2 ≤ R2κ (ω ) . In particular, R2κ (ω ) is a tempered nonnegative random variable. Moreover, there exists a forward invariant pullback absorbing random set B0κ (ω ) inside Bκ (ω ). If QZ = 0 (i.e., if the ωi are identical), then the radius Rκ (ω ) of the absorbing ball is independent of κ ≥ κ∗ > 0. More precisely, there exists a tempered random
variable R(ω ) independent of κ such that the set ! " v 2 + κK 1/2 v 2 0 0 1/2 B∗κ (ω ) = Y = (v0 , c0 , v1 ) ∈ X1/2 × T1 × X +v1 2 ≤ R2 (ω ) is pullback absorbing. Proof. We use a modification of the standard (deterministic) method (see, for example, Chueshov [34] or Temam [161]) based on Lyapunov-type functions. This modification is necessary because the operators B and K are degenerate. This leads to a double splitting procedure. We first use the Q and P components of a solution and then split the P-component using complete averaging type operator P0 (which is orthoprojector on e0 in H). Thus, using the projectors Q and P we can split the equation in (3.4.5) as follows ˜ + uI + Z) + Qξ in X3 := QX, vtt + ν Bv + Dvt + κK v = QF(v ˜ + Dut = M F(v ˜ + uI + Z) + M ξ in H. utt + ν Bu
(3.4.13)
We use here the facts that PK = K P = 0 and PD = DP. The pair (v, u) solves (3.4.13), if and only if V = v + uI solves (3.4.5). We also note that ! " N 1 N i X3 = QX = U := (u , . . . , u ) ∈ H × . . . × H : ∑ u = 0 . i=1
3 More precisely, there exist Lemma 3.4.5. The operator K is strictly positive on X. c0 , k0 > 0 such that N−1
N
i=1
i=1
(K v, v) =
∑ K 1/2 (vi+1 − vi )2 ≥ c0 ∑ K 1/2 vi 2 ≥ c0 k0 v2
(3.4.14)
3 for every v = (v1 , . . . , vN ) ∈ X. Proof. We have N−1
(K v, v) =
∑
K 1/2 (vi+1 − vi )2 =
i=1
∞ N−1
∑ ∑ |(K 1/2 vi+1 , em ) − (K 1/2 vi , em )|2 .
m=0 i=1
For every v ∈ X3 we obviously have ∑Ni=1 (K 1/2 vi , em ) = 0 for every m = 0, 1, . . .. Therefore, an application of a standard scalar result for finite Jacobi matrices gives π . The constant k0 is determined by the the estimate in (3.4.14) with c0 = 4 sin2 2N lower bound of the spectrum of K. Lemma 3.4.6. Let v satisfy the first equation of (3.4.13) and define E(t) = E(v, vt ) :=
1 vt (t)2 + ν B˜ 1/2 v(t)2 . 2
Then E(t) + κK
1/2
v(t)2 ≤C E(s) + κK t
+C s
1/2
v(s)2 e−γ (t−s)
(1 + Qξ (θτ ω )2 )e−γ (t−τ ) d τ ,
(3.4.15)
where the positive constants C and γ do not depend on κ ≥ κ∗ . Proof. We consider the functional Ψ1 (t) = E(t) + Φ1 (t), where
Φ1 (t) = ρ (v, vt ) +
κ (K v, v). 2
Using Lemma 3.4.5 we can see that there exist 0 < ρ0 < 1 and βi > 0 independent of κ ≥ κ∗ such that
(3.4.16) β0 E(t) + κK 1/2 v(t)2 ≤ Ψ1 (t) ≤ β2 E(t) + κK 1/2 v(t)2 for all ρ ∈ (0, ρ0 ]. Let us consider the first inequality κ (K 1/2 v, K 1/2 v) 2 ρ ρ κ ≥E(t) − v2 − vt 2 + (K 1/2 v, K 1/2 v) 2 2 2 ρ ρ κ 2 ≥E(t) − vt − (K 1/2 v, K 1/2 v) + (K 2 2k0 c0 2
E(t) + ρ (v, vt ) +
1/2
v, K
1/2
v)
by Lemma 3.4.5. Now, choosing at first ρ0 sufficiently small and then β0 > 0 sufficiently small we obtain the first inequality for every ρ < ρ0 . The second inequality follows similarly. Using the energy relation E(t) +
κ K 2
1/2
v(t)2
= E(s) +
κ K 2
1/2
v(s)2 −
t s
˜ + uI + Z) − ξ , vt )d τ (Dvt − F(v
for the first equation in (3.4.13), we calculate the derivative dΨ1 ˜ + uI + Z) + Qξ , vt ) = − (Dvt , vt ) + (F(v dt
˜ + uI + Z) + Qξ , v) . + ρ vt 2 − (Dvt , v) − (Bv, v) − κ(K v, v) + (F(v Since D = diag (1, . . . , 1)D is bounded from X 1/2 into X −1/2 , we obtain by the Assumption 3.4.1 (2) |(Dvt , v)| ≤ ε (Dv, v) +
1 1 (Dvt , vt ) ≤ c2 ε B1/2 v2 + v2 + (Dvt , vt ) 4ε 4ε
for all ε > 0. Then we obtain, applying Lemma 3.4.5, dΨ1 1 ≤ − (Dvt , vt ) + ε vt 2 + (MF˜ + Qξ )2 dt 4ε 1 + ρ vt 2 + ρε c2 B1/2 v + ρε c2 v − ρν B1/2 v2 − ρ κK 2 1 + ρ (MF˜ + Qξ )2 + ρε v2 + ρ cε −1 (Dvt , vt ). 4ε
1/2
v2
By Assumption 3.4.1 (2) we have c1 ρ κk0 dΨ1 2 v2 + ρ (−ν + ε c2 )B˜ 1/2 v2 ≤ − c1 − ε − vt + ρ ε (1 + c2 ) − dt 4ε 4 1+ρ ρκ − (MF˜ + Qξ )2 , (K v, v) + 2 ε Now, choosing for κ > κ∗ an ε > 0 such that κk0 ν 1 max ε (1 + c2 ) − , − + ε c2 ≤ − , 4 2 4
c1 − ε ≥
c1 . 2
After an appropriate choice of ρ we find that there exist b1 > 0 independent of κ ≥ κ∗ such that dΨ1 + b1Ψ1 ≤ c(1 + Qξ )2 . (3.4.17) dt by (3.4.16) and (3.4.17) for an appropriate c > 0. Moreover, it follows from that
Ψ1 (t) ≤ Ψ1 (0)e−γ t + c
t 0
(1 + Qξ (θτ ω )2 )e−γ (t−τ ) d τ , t ≥ s,
(3.4.18)
for some γ , c > 0. This implies the dissipativity property in (3.4.15), for instance, by applying Example 3.1.22 and a comparison argument. Now we deal with the second equation of (3.4.13). Lemma 3.4.7. Let u(t) satisfy the second equation of (3.4.13) and P0 be the orthoprojector in H on the eigenfunction e0 with Q0 = id − P0 . Let = ut (t)2 + ν B˜ 1/2 Q0 u(t)2 . E(t) Then, the dissipativity estimate −γ (t−s) ≤ c1 E(s)e E(t) + c2
t s
(1 + Q0 M ξ (τ )2 )e−γ (t−τ ) d τ
holds, where the positive constants ci and γ do not depend on κ ≥ 0.
(3.4.19)
Proof. The argument is almost the same as in the previous lemma. We note that the operator B˜ is not degenerate in the subspace Q0 H. This implies that B˜ 1/2 y2 ≤ y21/2 ≤ c0 B˜ 1/2 y2 , y ∈ Q0 H. + ρ (Q0 u, ut ), where, as above, ρ is a posiWe consider the functional Ψ2 (t) = E(t) tive constant, which will be chosen later. We can see that there exist 0 < ρ0 < 1 and βi > 0 independent of κ such that ≤ Ψ2 (t) ≤ β2 E(t) β0 E(t)
(3.4.20)
for all ρ ∈ (0, ρ0 ]. Using the corresponding energy relation, we calculate the derivative dΨ2 ˜ + uI + Z) + Q0 M ξ , ut ) = − (Dut , ut ) + (M F(v dt
˜ 0 u, Q0 u) + (M F(v ˜ + uI + Z) + Q0 M ξ , Q0 u) . + ρ Q0 ut 2 − (Dut , Q0 u) − ν (BQ
As above, |(Dut , Q0 u)| ≤ ε (DQ0 u, Q0 u) + cε −1 (Dut , ut ) ≤ c2 ε B˜ 1/2 Q0 u2 + cε −1 (Dut , ut ) for all ε > 0. Thus, after an appropriate choice of ε and ρ , we find that there exist bi > 0 independent of κ such that dΨ2 ≤ −b1 (Dut , ut ) − b2 B˜ 1/2 Q0 u2 + b3 (1 + Q0 M ξ (θτ ω )2 ). dt Using (3.4.20), we obtain that −γ (t−s)
Ψ2 (t) ≤ Ψ2 (s)e
t
+c s
(1 + Q0 M ξ (θτ ω )2 )e−γ (t−τ ) d τ , t ≥ s,
(3.4.21)
for some γ , c > 0. This implies the dissipativity property in (3.4.19). Now we can complete the proof of Theorem 3.4.4. Let N : X˜ → X1/2 × X˜ be defined by (v0 , v1 ). Let |(v0 , v1 )|2κ := v1 2 + B˜ 1/2 Qv0 2 + B˜ 1/2 Q0 Mv0 2 + κK
1/2
v0 2 .
Then, Lemmas 3.4.6 and 3.4.7 yield |N(φ(t, ω ,Y (ω ))|2κ ≤ |N(Y (ω ))|2κ e−γ t +
t 0
q(θτ ω )e−γ (t−τ ) d τ ,
where 2 and q(ω ) = c(1 + Qξ (ω )2 + Q0 M ξ (ω )2 ). Y ∈X
If Y belongs to a tempered bounded set, then, after change of the integration variable, we obtain |N(φ(t, θ−t ω ,Y (θ−t ω ))|2κ ≤ C(θ−t ω )e−γ t + c2
0 −t
q(θτ ω )eγτ d τ ,
where C(ω ) is a tempered random variable. This implies the first statement of Theorem 3.4.4 with R2 (ω ) =
0 −∞
c 1 + Qξ (θτ ω )2 + Q0 M ξ (θτ ω )2 eγτ d τ .
(3.4.22)
In the case when QZ = 0, we have Qξ = 0 and M ξ = MZ. Thus, R(ω ) does not depend on κ in this case. The existence of a forward invariant pullback absorbing set B0κ (ω ) for φ in Bκ (ω ) follows from (3.4.18) and (3.4.21). This set has the form
B^0_κ(ω) = { (v_0, c_0, v_1) : |(v_0, v_1)|^2_κ + ρ [ (Qv_0, v_1) + (Q_0 M v_0, M v_1) ] ≤ c R^2(ω) },

where ρ > 0 is some small parameter, c > 0 is a deterministic constant, and R(ω) is given by (3.4.22).
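The radius in (3.4.22) is an exponentially discounted functional of the noise over the past (−∞, 0]. The sketch below evaluates a scalar discretized analog of this integral, with a stand-in stationary process in place of q(θ_τ ω) and an arbitrary discount rate γ, and shows how truncating the past quickly stabilizes the value.

```python
import numpy as np

def pullback_radius_sq(q_path, tau_grid, gamma, c=1.0):
    """Discretized analog of R^2(omega) = c * int_{-inf}^0 (1 + q(theta_tau omega)) e^{gamma tau} d tau,
    truncated to the grid tau_grid (negative times, increasing toward 0)."""
    dtau = tau_grid[1] - tau_grid[0]
    weights = np.exp(gamma * tau_grid)
    return c * np.sum((1.0 + q_path) * weights) * dtau

rng = np.random.default_rng(2)
gamma, T, dtau = 0.5, 60.0, 0.01
tau = np.arange(-T, 0.0, dtau)

# Stand-in for q(theta_tau omega): the squared modulus of a discretized scalar
# Ornstein-Uhlenbeck path (any tempered stationary process would do here).
q = np.zeros_like(tau)
z = 0.0
for i in range(len(tau)):
    z += -z * dtau + rng.normal(0.0, np.sqrt(dtau))
    q[i] = z ** 2

for T_trunc in (10.0, 30.0, 60.0):
    mask = tau >= -T_trunc
    print(f"past truncated at {T_trunc:5.1f}:  R^2 ~ {pullback_radius_sq(q[mask], tau[mask], gamma):.4f}")
```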
3.4.5 Global Random Attractors: Quasi-Stability An application of the quasi-stability method is based on the following assertion. Proposition 3.4.8. Let Assumption 3.4.1 hold and, in addition, assume that F˜i (u) are subcritical,5 i.e., ˜ σ0 (u1 − u2 ), ∃ σ0 < 1/2 : F˜i (u1 ) − F˜i (u2 ) ≤ LF˜ (id + B)
∀u1 , u2 ∈ H 1/2 . (3.4.23)
Let V 1 (t) and V 2 (t) be two solutions of (3.4.5) with (different) initial data (V01 ,V11 ) and (V02 ,V12 ). Let Δ V = V 1 −V 2 . Then there exist C, γ > 0 such that EΔ V (t) ≤ CEΔ V (0)e−γ t +C
t 0
e−γ (t−τ ) Δ V (τ )2 d τ ,
∀t > 0,
(3.4.24)
where

E_{ΔV}(t) = (1/2) ( ||ΔV_t(t)||^2 + ν ||B^{1/2} ΔV(t)||^2 + ν ||ΔV(t)||^2 + κ ||K^{1/2} ΔV(t)||^2 ).

The constants C and γ are deterministic and independent of κ.

Footnote 5: In the case of the sine-Gordon model (3.4.1) we have σ_0 = 0.
Proof. As in the deterministic case (see Proposition 1.6.11) we use the same line of argument as in Chueshov/Lasiecka [46] and Chueshov [40]), which involves Lemma 3.23 from Chueshov/Lasiecka [46]. This argument uses the fact that B is not degenerate. Therefore, it is convenient to redefine the main part in ˜ + Z) the equation as ν B → ν (B + id) and introduce the modified nonlinearity F(V ˜ → F(V + Z) − ν V . Then, we obtain the following relation (which follows from Lemma 3.23 in Chueshov/Lasiecka [46]): T T EΔ V (t)dt ≤c ((D Δ Vt , Δ Vt )dt T EΔ V (T ) + 0 0 % T 1 2 |((D Δ Vt , Δ V )| dt + ΨT (V ,V ) + 0
for every T ≥ T0 ≥ 1, where c > 0 does not depend on α or κ, T , and T T ΨT (V 1 ,V 2 ) = (G(τ ), Δ Vt (τ ))d τ + (G(t), Δ V (t))dt 0 0 T T + dt (G(τ ), Δ Vt (τ ))d τ 0 t with ˜ 1 (t) + Z(θt ω )) − F(V ˜ 2 (t) + Z(θt ω )) − νΔ V (t). G(t) = F(V As above, |(D Δ Vt , Δ V )| ≤Cε (D Δ Vt , Δ Vt ) + ε EΔ V (t) for every ε > 0. Thus, choosing ε in an appropriate way, we obtain T
T EΔ V (T ) +
0
EΔ V (t)dt ≤ c0
T 0
(D Δ Vt , Δ Vt )dt + c0ΨT (V 1 ,V 2 ).
Using the subcritical hypothesis in (3.4.23) we can show that
ΨT (V 1 ,V 2 ) ≤ ε
T 0
EΔ V (τ )d τ + b(ε , T )
T 0
Δ V (τ )2 d τ
for every ε > 0. The difference Δ V (t) solves the problem
Δ Vtt + ν (B + id)Δ V + D Δ Vt + κK Δ V + G(t) = 0, Therefore, applying the energy relation for this equation we obtain T 0
(D Δ Vt , Δ Vt )dt ≤ EΔ V (0) − EΔ V (T ) + ΨT (V 1 ,V 2 ).
Thus, after an appropriate choice of ε and T , we arrive at the relation EΔ V (T ) ≤ qEΔ V (0) + b(T )
T 0
Δ V (τ )2 d τ ,
where q < 1. This inequality allows us to apply the same procedure as in Chueshov/Lasiecka [46, p. 100] to obtain (3.4.24). By the definition (3.4.10) of the cocycle φ from (3.4.11) we have that dX2 (φ(t, ω ,Y ), φ(t, ω ,Y ∗ )) = C V (t) −V ∗ (t)1/2 + Vt (t) −Vt∗ (t) , where V (t) and V ∗ (t) are solutions to (3.4.5) which correspond to the cocycle φ applied to the initial data Y = (v0 , c0 , v1 ) and Y ∗ = (v∗0 , c∗0 , v∗1 ). On the other hand, V (t) −V ∗ (t) ≤ Qψ0 V (t) − Qψ0 V ∗ (t) + |(V (t), ψ0 ) − (V ∗ (t), ψ0 )| and |(V (t), ψ0 ) − (V ∗ (t), ψ0 )| ≤ |a0 − a∗0 | +
t 0
|(Vt (τ ) −Vt∗ (τ ), ψ0 )|d τ
where a0 = (V (t), ψ0 ) and a∗0 = (V ∗ (t), ψ0 ). Thus, if we choose the representatives a0 and a∗0 of the factor-elements c0 and c∗0 such that |a0 − a∗0 | = dT1 (c0 , c∗0 ), then it follows from Proposition 3.4.8 that dX2 (φ(t, ω ,Y ), φ(t, ω ,Y ∗ )) ≤ dX2 (Y,Y ∗ )e−β t +Cρ t (Y,Y ∗ )
(3.4.25)
2, where C, β > 0 are deterfor every Y = (v0 , c0 , v1 ) and Y ∗ = (v∗0 , c∗0 , v∗1 ) from X ministic constants and
ρ t (Y,Y ∗ ) = dT1 (c0 , c∗0 ) t 1/2 + e−β (t−τ ) Qψ0 [V (τ ) −V ∗ (τ )]2 + |(Vt (τ ) − (Vt∗ (τ ), ψ0 )|2 d τ . 0
(3.4.26) Theorem 3.4.9. Let Assumption 3.4.1 and the property in (3.4.23) hold. Then, for 2 has a random attractor Aκ . any κ > 0 the RDS φ on the space X Proof. We apply the quasi-stability method in the form presented in the paper Chueshov/Schmalfuss [55] for random systems. However, it is worth mentioning that in contrast with Chueshov/Schmalfuss [55], the state space is metric (not a Banach space) in the case considered here. This gives us additional difficulties in the realization of the quasi-stability method. First of all, we have to apply the idea developed by Ceron and Lopes for metric spaces (see, for example, Proposition 2.2.21
in Chueshov [43]) to prove the existence of a random pullback attractor. Therefore, to prove the existence of a random pullback attractor we need to prove that this RDS is asymptotically compact. For this we show that C(ω ) =
φ(t, θ−t ω , B0κ (θ−t ω ))
t≥0
is a nonempty random pullback attracting compact set. First we note that the pseudo-metric ρ t given by (3.4.26) is compact on B0κ (ω ) (as in Chueshov/Schmalfuss [55]). Indeed, we obtain the estimate sup
sup
t∈[0,T ] φ0 ∈B0κ (ω )
φ˜ (t, ω , φ0 ) < ∞,
where by Theorem 3.4.4 the system φ is dissipative with a forward invariant and pullback absorbing set B0κ . In particular, the time derivative component of φ is bounded in the above sense. Now we can apply Chepyshov/Vishik [28, Theorem II.1.5] that states that the set of functions V (t) with φ0 ∈ B0κ (ω ) is relatively compact. In addition, the set {(Vt (0), ψ ) : Vt (0) ∈ B0κ (ω )} is relatively compact. Therefore, for every ε , there exist a covering of B0κ (ω ) by open ε -balls K1 , · · · , Km in the ρ T -pseudo-metric. Let C j be a covering of B0κ (ω ) with the closed sets of diameter diamX2(C j ) less than α (B0κ (ω ))+ ε , where α (B) denotes the Kuratowski α -measure of noncompactness of B (see, for example, Chueshov [40, p. 54]). It follows from the quasi-stability estimate in (3.4.25) that diamX2 (φ(t, ω , Ki ∩C j )) ≤ Ce−β t diamX2 B0κ (ω ) + 2Cε . Therefore,
α (φ(t, ω , B0κ (ω ))) ≤ Ce−β t diamX2 B0κ (ω ). Thus, substituting ω by θ−t ω and using the temperedness of diamX2 B0κ (ω ) we conclude that
α (φ(t, θ−t ω , B0κ (θ−t ω ))) → 0 as t → +∞, ∀ ω ∈ Ω . Therefore, C(ω ) is a nonempty compact set. By the contradiction argument it is easy to see that this set pullback attracts every tempered set. Thus, we can apply Theorem 3.1.17. Remark 3.4.10 (On the Dimension of the Random Pullback Attractor). We do not claim that the attractor given by Theorem 3.4.9 has a finite dimension. The main reason is that the phase space for the case considered is not Banach and thus we
3.4 Synchronization in Coupled Stochastic Sine-Gordon Wave Model
261
cannot apply the same stochastic version of the standard quasi-stability argument as in Chueshov/Schmalfuss [55]. Although there is a version of the quasi-stability approach for complete metric spaces (see Chueshov/Lasiecka [46, Theorem 2.14]), its application requires appropriate estimates for local (ε , ρ )-capacities. These estimates can be obtained, but with random constants whose temperedness we cannot control. On the other hand, we can easily prove the existence of finite-dimensional random pullback attractors for the case when instead of the operator B˜ we consider B˜ + cid with a positive (even small) constant c. With this modification the main linear part of the model is not degenerate and we can apply the ideas developed in Chueshov/Schmalfuss [55] to obtain the result in the space X 1/2 × X.
3.4.6 Upper Semicontinuity and Synchronization Now we consider various kinds of synchronization of the system of equations (3.4.5) as the parameter κ changes. For the original system of equations (3.4.5) we can prove that the attractors Aκ are upper semicontinuous at every point κ∗ > 0, i.e., lim d 2(A n→∞ X
κn
(ω ), Aκ∗ (ω )) = 0
(3.4.27)
for every sequence {κ n } such that κ n → κ∗ > 0 as n → ∞. For this, we can apply the methods developed in Hale/Lin/Raugel [89] and Kapitansky/Kostin [98] (see also Theorem 1.2.14) and extended to the random case in Robinson [139] (see also Theorem 3.1.25). According to Robinson [139], the proof of (3.4.27) requires two ingredients: (a) the existence of an attracting compact set locally independent of κ, and (b) the convergence of cocycles in some uniform way. The latter property is simple in our situation. Indeed, let Y κ = (V0κ ,V1κ ) and Y κ∗ = (V0κ∗ ,V1κ∗ ). In addition, let V κ and V κ∗ be solutions to (3.4.5) with these initial data and parameters κ and κ∗ respectively. We can see that Δ V (t) = V κ (t) −V κ∗ (t) satisfies the equation
Δ V tt + ν BΔ V + D Δ V t + κ∗ K Δ V = Δ F, where ˜ κ + Z) − F(V ˜ κ∗ + Z). Δ F = −(κ − κ∗ )K V κ + F(V We obviously have Δ F ≤ c1 |κn − κ∗ |(id + B)1/2V κ + c2 (id + B)1/2 Δ V . Using (3.4.7), we can conclude
t (id + B)1/2V κ 2 ≤ Cα ,β Y κ + 1 + ξ (θτ ω )2 d τ , 0
for every κ ∈ [α , β ], where 0 < α < β < ∞ are arbitrary. Then standard energy-type calculations give the estimate V κ (t) −V κ∗ (t)X ≤ CT |κn − κ∗ | Y κ +
0
T
κ
1 + ξ (θτ τω ) d τ + Y −Y X 2
for all t ∈ [0, T ]. Thus, we have uniform continuity of the cocycle at every point κ∗ > 0. For the existence of a (locally uniform) attracting compact set, we need to impose stronger hypotheses on the system. For instance, in the basic model we need additional smoothness of the noises. This is necessary to justify an application of multipliers Bδ Vt and Bδ V with a positive δ in (3.4.5). We can also consider the case when κ → +∞, which can be important in asymptotic synchronization phenomena. To guarantee a uniform (in κ) bound of the absorbing ball we need to assume that all noises are identical (i.e., QZ = 0 and thus Z j = Z 0 for all j). Then, for every element Y (ω ) from the attractor to (3.4.5) we have R(θt ω ) → 0, κ → ∞. Qφ˜ (t, ω ,Y (ω ) ≤ κ By interpolation (1 + B)1/2−ε Qφ˜ (t, ω ,Y (ω )) → 0, κ → ∞. These observations show that the following limiting problem arises: ˜ + Dpt + F¯ m (p + Z 0 ) − Z 0 = 0, w(0) = w0 , wt (0) = w1 , ptt + ν Bp
(3.4.28)
¯ m = N −1 ∑Ni=1 F˜i (v). Using the same argument as above, we can show where F(v) that the problem in (3.4.28) generates an RDS in the space Q0 H 1/2 × T10 × H, which possesses a random pullback attractor A. Here, T10 = R/dZ is the corresponding factor space. Under additional hypotheses on the system, we can also show that the attractors Aκ given in Theorem 3.4.9 converge in some sense to the set = (y, . . . , y) : y ∈ A , A where A is the random pullback attractor for the RDS generated by (3.4.28). This means that the attractor Aκ becomes diagonal in the limit of the large intensity parameter κ. Thus, the components of the system become synchronized in this limit at the level of random pullback attractors. We do not provide full details here because our primary goal is synchronization for fixed κ.
3.4.7 Synchronization for Finite Values of the Interaction Parameter Now we consider the case of identical interacting subsystems, i.e., we assume that we have in all equations the same noise, such that Z j (ω j ) := Z 0 (ω0 ),
and F˜ j (w) := F˜1 (w), j = 2, . . .
(3.4.29)
for w ∈ H 1/2 . In this case, we observe asymptotic synchronization for finite values of κ. Theorem 3.4.11. Let Assumption 3.4.1, the property in (3.4.23), and the relations (3.4.29) hold. Assume that κ is positive, say, κ ≥ κ∗ > 0. Let ⎧ ⎫ w = (w1 , . . . , wN ) ∈ X 1/2 , ⎬ ⎨ sκ = inf ν (Aw, w) + κ(K w, w) . ⎩ ⎭ ∑N w j = 0, w = 1 j=1 Then there exist s∗ > 0 and γ > 0 such that under the condition6 sκ ≥ s∗ QVt (t)2 + (id + B)1/2 QV (t)2 + κK 1/2 QV (t)2 (3.4.30) ≤C(1 + κ)e−γ t QV1 2 + (id + B)1/2 QV0 2 , ∀ ω ∈ Ω , for every solution V (t) to (3.4.5). The projector Q is defined in (3.4.12). In this case, for all κ such that sκ ≥ s∗ . Aκ := A We note that (3.4.30) implies that # $ lim eγ˜t QVt (t)2 + (id + B)1/2 QV (t)2 = 0, ∀ ω ∈ Ω , ∀ γ˜ < γ , (3.4.31) t→∞
for every solution V (t) = (v1 (t), . . . , vN (t)) of (3.4.5). Since N
∑
i, j=1
˜ 1/2 (v j − vi ) ≤ (id + B)
N
∑
˜ 1/2 [QV ] j + (id + B) ˜ 1/2 [QV ]i (id + B)
i, j=1
≤CN (id + B)1/2 QV (t),
Footnote 6: We can see from Lemma 3.4.5 that s_κ ≥ c_0 κ · inf spec(K). Thus, if K is not degenerate, then s_κ → +∞ as κ → +∞.
the limit (3.4.31) can be rewritten as ! " N j i 2 1/2 j i 2 γ˜t ˜ lim e ∑ (vt (t) − vt (t)) + (id + B) (v (t) − v (t)) =0 t→∞
i, j=1
for all ω ∈ Ω . Thus, we observe exponential asymptotic synchronization. Proof. In the case considered, Qξ = 0 and thus V Q = QV satisfies the equation ˜ + Z), V Q (0) = V Q , VtQ (0) = V Q , VttQ + ν BV Q + DVtQ + κK V Q = QF(V 0 1 where V0Q = QU0 and V1Q = QU1 . As a preparation we need the following lemma: 2 ≤ 4L2 QU2 . ˜ Lemma 3.4.12. We have for σ0 < 1/2: QF(U) σ0 F˜
Proof. We can see that ˜ [QF(U)] j=
1 N ˜ ∑ [F1 (u j ) − F˜1 (ui )]. N i=1
Thus, owing to the structure of the projector P, ˜ [QF(U)] j ≤ ≤
LF˜ N LF˜ N
N
∑ u j − MU − (ui − MU)σ0 =
i=1 N
∑
[QU] j σ0 + [QU]i σ0
LF˜ N
N
∑ u j − ui σ0
i=1
i=1
because U = IMU + QU. This implies the conclusion. We consider a Lyapunov-type function of the form Ψ (t) = E(t) + Φ (t), where E(t) =
1 Q Vt (t)2 + ν B1/2V Q (t)2 , 2
Φ (t) = ρ (V Q ,VtQ ) +
κ (K V Q ,V Q ), 2
for a positive constant ρ , which will be chosen later. It follows from Lemma 3.4.5 that 1 |(V Q ,VtQ )| ≤ VtQ 2 + c0 B1/2V Q 2 + κ∗ (K V Q ,V Q ) . 2 Therefore, there exist 0 < ρ0 < 1 and βi > 0, independent of κ ≥ κ∗ such that
β0 E∗ (t) + (κ − κ∗ /2)K 1/2V Q (t)2 ≤ Ψ (t) ≤ β2 E∗ (t) + κK 1/2V Q (t)2 for all κ ≥ κ∗ and ρ ∈ (0, ρ0 ], where E∗ (t) = VtQ (t)2 + B1/2V Q (t)2 + V Q (t)2 .
We can calculate the derivative dΨ ˜ + Z),VtQ ) = − (DVtQ ,VtQ ) − (QF(V dt
˜ + Z),V Q ) + ρ VtQ 2 − (DVtQ ,V Q ) − ν (BV Q ,V Q ) − κ(K V Q ,V Q ) − (QF(V
Since D is bounded from H 1/2 into H, we obtain |(DVtQ ,V Q )| ≤ ε B1/2V Q 2 + V Q 2 +Cε −1 VtQ ||2 , ∀ε > 0. Thus, there exist bi > 0, which are independent of κ, such that
dΨ ≤ − (DVtQ ,VtQ ) − b1 ρ VtQ 2 +CVtQ V Q σ0 dt
− b2 ρ E∗ (t) + (κ − κ∗ /2)K 1/2V Q 2 + ρ cV Q (t)2 , Choosing ρ > 0 small enough and using the inequality VtQ V Q σ0 ≤ ε VtQ 2 + B1/2V Q 2 +Cε V Q ||2 , ∀ε > 0. gives dΨ + γΨ ≤ −c1 ν B1/2V Q 2 + (κ − κ∗ /2)K dt
V
1/2 Q 2
+ c2 V Q ||2
for some γ , c1 , c2 > 0. Then, taking sκ large enough, we obtain dΨ + γΨ (t) ≤ 0 dt for some γ , C > 0. This implies (3.4.30). Now we see that the Q part v in the nonlinearity of (3.4.13) tends toward zero such that the components of V tend toward the solution of (3.4.28), which has the random pullback attractor A. The equality Aκ follows easily. := A
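A finite-dimensional caricature of Theorem 3.4.11 can be simulated directly: N identical noisy oscillators, coupled through the path-graph pattern of Sect. 3.4.1 (with K = id) and driven by one and the same Brownian path. The sketch below uses arbitrary parameter values; as in the condition s_κ ≥ s_*, the coupling strength needed for decay of the Q-component (the deviation from the mean) depends on the Lipschitz constant of the nonlinearity.

```python
import numpy as np

def q_component_decay(N, kappa, T=40.0, dt=1e-3, seed=3):
    """Toy analog of Theorem 3.4.11: returns ||QV(T)|| / ||QV(0)||, where Q removes
    the mean over the N components and all components see the SAME noise."""
    rng = np.random.default_rng(seed)
    gamma, l, sigma = 0.5, 1.0, 0.3
    # path-graph coupling pattern (the matrix multiplying K in Sect. 3.4.1)
    C = 2.0 * np.eye(N) - np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)
    C[0, 0] = C[-1, -1] = 1.0
    u = rng.standard_normal(N)
    ut = np.zeros(N)
    Q = lambda x: x - x.mean()
    q0 = np.linalg.norm(Q(u))
    for _ in range(int(T / dt)):
        dW = sigma * rng.normal(0.0, np.sqrt(dt))    # identical noise in every component
        ut += (-gamma * ut - kappa * (C @ u) - l * np.sin(u)) * dt + dW
        u += ut * dt
    return np.linalg.norm(Q(u)) / q0

for kappa in (0.1, 1.0, 10.0):
    print(f"kappa = {kappa:5.1f}   ||QV(T)||/||QV(0)|| = {q_component_decay(5, kappa):.2e}")
```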
3.4.8 Synchronization by Means of Finite-Dimensional Coupling We can see from Lemma 3.4.5 that the parameter sκ can be estimated from below as follows: ⎧ ⎫ w j ∈ H 1/2 , ∑Nj=1 w j = 0, ⎬ ⎨N sκ ≥ inf ∑ ν B˜ 1/2 w j 2 + κb0 K 1/2 w j 2 ⎩ j=1 ⎭ ∑N w j 2 = 1 j=1
with some b0 > 0. This means that it is not necessary to assume nondegeneracy of the operator K to guarantee large sκ . For instance, if K = PN is the orthoprojector ˜ then onto Span{ek : k = 0, 1, 2, . . . , N} given by the eigenelements of B,
ν ||B̃^{1/2} w||^2 + κ b_0 ||K^{1/2} w||^2 ≥ ∑_{k=0}^{N} (ν λ_k + b_0 κ) |(w, e_k)|^2 + ν ∑_{k=N+1}^{∞} λ_k |(w, e_k)|^2
  ≥ b_0 κ ∑_{k=0}^{N} |(w, e_k)|^2 + ν λ_{N+1} ∑_{k=N+1}^{∞} |(w, e_k)|^2 ≥ min{ b_0 κ, ν λ_{N+1} } ||w||^2.
Thus, if b_0 κ ≥ ν λ_{N+1}, then we can guarantee a large value of s_κ by an appropriate choice of N. This allows a generalization based on the assumption that K is a "good" approximation (in some sense) of a strictly positive operator. In particular, as in Sect. 1.6.5, we can use the theory of determining functionals (see Chueshov [34], Foias and Titi [78], Cockburn et al. [61], and the references therein) to obtain localized (in some sense) finite-dimensional forms of the interaction operator K. For instance, we can use interpolation operators related to a finite family L of linear continuous functionals {l_j : j = 1, ..., N} on H^{1/2} given by

K v = ∑_{j=1}^{N} l_j(v) ψ_j,    ∀ v ∈ H^{1/2},
where {ψ j } is an appropriate finite set of elements from H 1/2 . For details in the deterministic case, see Sect. 1.6.
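For B̃ = −Δ with Neumann conditions on an interval (so that λ_k = k^2, an assumption made only for this illustration) and K = P_N the spectral projector onto the first N + 1 modes, the modal computation behind the bound min{b_0 κ, ν λ_{N+1}} can be checked directly; the values of ν, b_0, κ below are arbitrary.

```python
import numpy as np

# Neumann Laplacian eigenvalues on an interval of length pi: lambda_k = k^2, k = 0, 1, 2, ...
def coercivity_lower_bound(nu, kappa, b0, N_modes, K_max=2000):
    lam = np.arange(K_max) ** 2
    # modal weight of  nu*||B^{1/2}w||^2 + kappa*b0*||K^{1/2}w||^2  with K = P_N
    weight = np.where(np.arange(K_max) <= N_modes, nu * lam + b0 * kappa, nu * lam)
    # infimum over unit vectors w = minimum of the modal weights
    return weight.min(), min(b0 * kappa, nu * lam[N_modes + 1])

nu, b0, kappa = 1.0, 1.0, 50.0
for N_modes in (2, 5, 10):
    exact, bound = coercivity_lower_bound(nu, kappa, b0, N_modes)
    print(f"N = {N_modes:3d}   modal minimum = {exact:8.2f}"
          f"   min(b0*kappa, nu*lambda_(N+1)) = {bound:8.2f}")
```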
3.4.9 Applications As an application of the results presented above we can consider models of plates with coupling via elastic (Hooke-type) links. Namely, we consider the following stochastic PDEs dut + (γ ut + Δ 2 u + κK(u − v))dt = ( f˜1 (u) + h1 (x))dt + d ω1 in O ⊂ R2 , dvt + (γ vt + Δ 2 v + κK(v − u))dt = ( f˜2 (u) + h2 (x))dt + d ω2 in O ⊂ R2 , with free-type boundary conditions (see, for example, Chueshov/Lasiecka [47] for a discussion of these conditions). As a particular case of the model above, we can consider several versions of damped sine-Gordon equations. These are used to model the dynamics of Josephson junctions driven by a source of current (see, for example, Temam[161] for comments
and references). For instance, we can consider the system dut + (γ ut − Δ u + κ(u − v))dt = (−l sin u + h(x))dt + d ω , dvt + (γ vt − Δ v + κ(v − u))dt = (−l sin v + h(x))dt + d ω ,
(3.4.32)
in a smooth bounded domain O ⊂ Rd and equipped with the homogeneous Neumann boundary conditions
∂ u ∂ v = 0, = 0. ∂ n ∂O ∂ n ∂O It is convenient to introduce new variables w=
u+v u−v and z = , 2 2
(3.4.33)
in which the problem in (3.4.32) can be rewritten in the form dwtt + γ wt − Δ w + 2κw = −l sin w cos z, dzt + (γ zt − Δ z)dt = (−l cos w sin z + h(x))dt + d ω , ∂ w ∂ z = 0, = 0. ∂ n ∂O ∂ n ∂O
(3.4.34)
The main linear part in the first equation of (3.4.34) is not degenerate when κ > 0. Therefore, the same calculations as in the proof of Theorem 3.4.11 show that there exists κ∗ such that ∃ η > 0 : w(t)2H 1 (O) + wt (t)2L2 (O) ≤ CB e−η t , t > 0, when κ ≥ κ∗ for all initial data from a bounded set B in H 1 (O) × L2 (O). This means that every trajectory is asymptotically synchronized. Moreover, it follows from the reduction principle (see Chueshov [43, Section 2.3.3]) that the limiting (synchronized) dynamics is determined by the single equation dzt + (γ zt − Δ z)dt = (−l sin z + h(x))dt + d ω ,
∂ z = 0. ∂ n ∂O
The long-time dynamics of this equation is described in Fan [72] and Shen/Shen/Zhou [152] for some special types of noise. Another coupled sine-Gordon system of interest is dut + (γ ut − Δ u)dt = (−l sin(u − v) + h1 (x))dt + d ω , dvt + (γ vt − Δ v)dt = (−l sin(v − u) + h2 (x))dt + d ω ,
∂ u ∂ v = 0, = 0. ∂ n ∂O ∂ n ∂O
268
3 Stochastic Synchronization of Random Pullback Attractors
Formally, this model is outside the scope of the theory developed above. However, using the ideas presented there, we can answer some questions about its synchronized regimes.7 In particular, with the variables w and z given by (3.4.33) we obtain the equations wtt + γ wt − Δ w = −l sin 2w + g(x), dzt + (γ zt − Δ z)dt = h(x)dt + d ω ,
∂ w ∂ z = 0, = 0, ∂ n ∂O ∂ n ∂O
(3.4.35)
where g(x) = (1/2)(h_1(x) − h_2(x)) and h(x) = (1/2)(h_1(x) + h_2(x)).
Thus, the difference between two solutions converges to a deterministic attractor, whereas their average converges (in a pullback sense) to a tempered random variable generating a stationary Ornstein–Uhlenbeck process. Indeed, the above equation for z has a similar structure to that discussed in Sect. 3.4.2.
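In a spatially homogeneous caricature of the last coupled system above, with the antisymmetric nonlinearity sin(u − v) and one common noise (all parameter values arbitrary), this splitting can be observed directly: the difference w evolves deterministically and settles near an equilibrium of a damped pendulum-type equation, while the average z keeps fluctuating.

```python
import numpy as np

rng = np.random.default_rng(4)
gamma, l, sigma = 0.5, 1.0, 0.4
h1, h2 = 0.2, -0.1
g = 0.5 * (h1 - h2)                    # forcing of the difference equation
dt, steps = 1e-3, 80_000

u, ut, v, vt = 1.0, 0.0, -1.0, 0.0
for k in range(steps):
    dW = sigma * rng.normal(0.0, np.sqrt(dt))     # the SAME noise enters both equations
    ut += (-gamma * ut - l * np.sin(u - v) + h1) * dt + dW
    vt += (-gamma * vt - l * np.sin(v - u) + h2) * dt + dW
    u += ut * dt
    v += vt * dt
    if (k + 1) % 20_000 == 0:
        w, z = 0.5 * (u - v), 0.5 * (u + v)
        print(f"t = {(k + 1) * dt:6.1f}   w = {w: .4f}   z = {z: .4f}")

# w solves w_tt + gamma*w_t = -l*sin(2w) + g (no noise), so it is attracted to the well at
print(f"deterministic equilibrium of w:  0.5*arcsin(g/l) = {0.5 * np.arcsin(g / l): .4f}")
```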
Footnote 7: In the deterministic case, the synchronization phenomena in (3.4.35) were studied in Sect. 1.6.
Chapter 4
Master–Slave Synchronization in Random Systems
Our main goal in this chapter is to extend the previous deterministic results on master–slave synchronization to the case of systems with randomness. We deal with an abstract system of two coupled nonlinear stochastic (infinite-dimensional) equations subjected to additive white noise. This kind of system may describe various interaction phenomena in a continuum random medium. Under suitable conditions, we prove the existence of an exponentially attracting random invariant manifold for the coupled system. Thus, we observe synchronization phenomena in the coupled system. As applications we consider stochastic perturbations of the models discussed in Chap. 2. We also show that the random manifold constructed converges to its deterministic counterpart when the intensity of noise tends toward zero. This indicates some kind of persistence of synchronization under stochastic perturbations. The theory of invariant manifolds for various classes of infinite-dimensional random1 dynamical systems has been developed by many authors (see Bensoussan/Flandoli [13], Chueshov [33], Chueshov/Girya [42, 82], Chueshov/Scheutzow [51], Duan/Lu/Schmalfuss [68, 69], and the references therein). In the first part of this chapter, we rely on the idea of the Lyapunov–Perron method in the Miklavˇciˇc form (see Chap. 2 for the (original) deterministic version). Its realization requires several important facts concerning Ornstein–Uhlenbeck-type processes. In the second section of this chapter, we deal with the random graph transform allowing us to find master–slave synchronization.
4.1 General Idea of the Random Invariant Manifold Method

We start with (additive) stochastic perturbation of the abstract model considered in Sect. 2.2 and follow mainly the presentation given in Chueshov/Schmalfuss [54].

Footnote 1: For the deterministic case, we refer the reader to the discussion in Chap. 2.
Let X_1 and X_2 be (infinite-dimensional) separable Hilbert spaces. The main object is the following system of stochastic differential equations:

dU + Ã_1 U dt = F_1(U, V) dt + dω_1,   t > 0, in X_1,    (4.1.1)

and

dV + Ã_2 V dt = F_2(U, V) dt + dω_2,   t > 0, in X_2,    (4.1.2)
where A˜ 1 and A˜ 2 are generators of C0 semigroups, F1 and F2 are continuous (nonlinear) mappings, F1 : X1 × X2 → X1 ,
F2 : X1 × X2 → X2 .
Here and below, ω1 and ω2 are canonical Brownian motions in X1 and X2 , which will be specified later. We set ω = (ω1 , ω2 ) ∈ X = X1 × X2 . Our main goal is to engage the theory of random invariant manifolds in the study of synchronization phenomena in the stochastic problem (4.1.1), (4.1.2). As in the deterministic case, this leads to the question on the existence of a random invariant manifold of the following form M (ω ) = {(U,V ) ∈ X1 × X2 : U = m(ω ,V ), V ∈ X2 } , where m : Ω × X2 → X1 is a Carath´eodory mapping so that m(ω , ·) : X2 → X1 is a (random) Lipschitz mapping, and thus we arrive at the following random version of Definition 2.1.1. Definition 4.1.1. Equation (4.1.1) is said to be master–slave (asymptotically) synchronized with (4.1.2), if there exists a random Lipschitz mapping m : Ω × X2 → X1 such that lim U(t) − m(θt ω ,V (t))X1 = 0 almost surely
t→+∞
(4.1.3)
for any solution (U(t),V (t)) to the problem in (4.1.1), (4.1.2). In this case, (4.1.2) is called master equation/system and (4.1.1) is the slave equation/system.
4.1.1 Hypotheses and Auxiliary Facts We consider a system of stochastic differential equations (4.1.1), (4.1.2) under the following hypotheses. Assumption 4.1.2. We assume 1. For i = 1, 2 the operator A˜ i is the generator of a strongly continuous linear semigroup Si in Xi . Moreover, we assume that S2 can be extended to a group of linear
bounded operators and the following dichotomy-type inequalities hold S1 (t) ≤ M1 exp(−γ1t), t ≥ 0, (4.1.4) S2 (t) ≤ M2 exp(−γ2t), t ≤ 0, for some constants Mi ≥ 1, γ1 > 0, γ2 ≥ 0. 2. F1 and F2 are nonlinear mappings, F1 : X1 × X2 → X1 ,
F2 : X1 × X2 → X2 ,
and there exist constants L_1 and L_2 such that

||F_1(U_1, V_1) − F_1(U_2, V_2)||_{X_1} ≤ L_1 ( ||U_1 − U_2||^2_{X_1} + ||V_1 − V_2||^2_{X_2} )^{1/2}    (4.1.5)

and

||F_2(U_1, V_1) − F_2(U_2, V_2)||_{X_2} ≤ L_2 ( ||U_1 − U_2||^2_{X_1} + ||V_1 − V_2||^2_{X_2} )^{1/2}.    (4.1.6)
3. Let (Ω1 , F1 , P1 ) = (C0 (R, X1 ), B(C0 (R, X1 )), P1 ), (Ω2 , F2 , P2 ) = (C0 (R, X2 ), B(C0 (R, X2 )), P2 ) be two independent canonical Brownian motions with trace class covariance Qi . We then consider the product space (Ω1 × Ω2 , F1 ⊗ F2 , P1 × P2 ) = (Ω , F , P). Let ω = (ω1 , ω2 ) ∈ Ω1 × Ω2 . On this probability space we introduce the measurable flow θ defined by θt ω (·) = ω (· + t) − ω (t), which gives for the above probability space an MDS. The conditions above concerning the deterministic part of the problem in (4.1.1), (4.1.2) are the same as in Assumption 2.2.1. However, in contrast to the deterministic case (see Remark 2.2.2), we cannot avoid the global Lipschitz requirement for F1 and F2 because in general the truncation procedure in the random case leads to random Lipschitz constants. Our main objective in this section is to prove a reduction principle and establish the possibility of synchronization for the RDS generated by the problem in (4.1.1), (4.1.2), which will allow us to rewrite our coupled system as an equivalent problem for a single stochastic equation in X2 with a conveniently modified nonlinear term. First of all, we rewrite the system in (4.1.1), (4.1.2) as a single first-order stochastic equation dY + AY dt = F(Y ) dt + d ω , t > 0,
Y (0) = Y0 ,
(4.1.7)
where Y = Y (t) = (U(t),V (t)), ω = (ω1 , ω2 ) and A˜ 1 0 F1 (U,V ) A= . , F(Y ) = F2 (U,V ) 0 A˜ 2 We can see that the operator A generates a strongly continuous semigroup S in X and 1 S (t) 0 S(t) = . 0 S2 (t) We consider now the problem in (4.1.7) in the space X = X1 × X2 , which is equipped with the norm 1/2 Y X = U2X1 + V 2X2 , Y = (U,V ). Below, we denote by Q and P the orthoprojections in X onto the first and second components, i.e., Q(U,V ) = (U, 0) U
P(U,V ) = (0,V ) V
and
for (U,V ) ∈ X. (4.1.8)
Let us introduce the random variables Z1 (ω ) = Z(ω1 ) =
0 −∞
S1 (−r)d ω1 (r).
(4.1.9)
We assume according to Sect. 3.1.3 that these random variables exist and have a version given by the variation of constants formula such that t → Z(θt ω ) solves (3.1.12) with initial condition Z(ω ). In particular, we consider the version of this random variable Z1 (ω ) = Z1 (ω1 ) = −A˜ 1
0 −∞
S1 (−r)ω1 (r)dr
(see Theorem 3.1.27). We can assume that the process t → Z1 (θt ω ) is continuous in X1 . We can also define Z1 (t, ω ) := Z1 (θt ω ) − S1 (t − s)Z1 (θs ω )
for s ≤ t,
which is a version of the random process t →
t s
S1 (t − r)d ω1 (r),
t ≥ s.
Since S2 is C0 -group in addition to the estimate (4.1.4), we also have that S2 (t) ≤ M˜ 2 exp(γ˜2t), t ≥ 0,
for some constant M˜ 2 ≥ 1 and γ˜2 ≥ 0 (see Pazy [127]). In particular, this implies that for a = γ˜2 + 1, the operator A˜ 2 + a is the generator of a strongly continuous semigroup S˜2 of exponentially stable type, i.e., S˜2 (t) ≤ M˜ 2 exp(−t), t ≥ 0. We can now define similar to the above random variable Z1 a new random variable Z˜ 2 , where S1 is replaced by S˜2 and ω1 by ω2 . Consider Z2 (t, ω ) := Z2 (t, ω2 ) =
t 0
S2 (t − r)d ω2 (r).
The following lemma gives a version Z2 of this random field Lemma 4.1.3. We have Z2 (t, ω ) = Z˜ 2 (t, ω ) + a
t 0
S2 (t − r)Z˜ 2 (r, ω )dr,
(4.1.10)
where Z˜ 2 (t, ω ) is the nonstationary Ornstein–Uhlenbeck process generated by S˜2 . Proof. Note that Z2 (t, ω ) satisfies dZ2 + A˜ 2 Z2 dt = d ω2 ,
Z2 (0) = 0.
We have for Z˜ 2 d Z˜ 2 + (A˜ 2 + a)Z˜ 2 dt = d ω2 ,
Z˜ 2 (0) = 0.
Let Δ Z = Z2 − Z˜ 2 satisfying d Δ Z + A˜ 2 Δ Z = aZ˜ 2 dt. Solving this equation we obtain the conclusion of the lemma. According to Sect. 3.1.3 we can define Z1 (ω ), Z1 (t, ω ) on a θ invariant set in F1 of full P1 measure and similar for Z˜ 2 (ω ) and Z2 (t, ω ) on a θ invariant set in F2 . Outside these invariant sets we can set these random variables to zero. Proposition 4.1.4. We can choose for the stochastic integral t s
S1 (t − r)d ω1 (r)
the version Z1 (θt ω1 ) − S1 (t − s)Z1 (θs ω1 ) s ≤ t. In a similar manner, we obtain that the convolution integral for S2 can be expressed by Z˜ 2 .
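A discrete-time scalar sketch of the stationary Ornstein–Uhlenbeck variable Z_1 can make this construction concrete: the semigroup is taken to be S^1(t) = e^{−γ_1 t} (a stand-in for a general exponentially stable semigroup), the past integral is truncated, and the variation-of-constants identity behind Proposition 4.1.4 is verified on the simulated path.

```python
import numpy as np

rng = np.random.default_rng(5)
gamma1, dt, n_past, n_fwd = 1.0, 1e-3, 200_000, 5_000

# One Brownian increment per grid point on (-T, t_max].
dW = rng.normal(0.0, np.sqrt(dt), size=n_past + n_fwd)

def Z_at(index):
    """Discretized Z_1(theta_t omega) = int_{-inf}^t e^{-gamma1 (t - r)} dW(r),
    truncated to the available past (index counts grid points from the far past)."""
    incr = dW[:index]
    ages = (index - 1 - np.arange(index)) * dt      # t - r for each increment
    return np.sum(np.exp(-gamma1 * ages) * incr)

i_s, i_t = n_past, n_past + n_fwd                   # s = 0 and t = n_fwd * dt
Zs, Zt = Z_at(i_s), Z_at(i_t)
stoch_conv = np.sum(np.exp(-gamma1 * (i_t - 1 - np.arange(i_s, i_t)) * dt) * dW[i_s:i_t])
lhs = Zt
rhs = np.exp(-gamma1 * (i_t - i_s) * dt) * Zs + stoch_conv
print(f"Z(theta_t omega)            = {lhs: .6f}")
print(f"S(t-s) Z(theta_s omega) + I = {rhs: .6f}")   # the two values coincide
```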
In the following we will interpret t s
Si (t − r)d ωi (r)
by the expressions introduced in Proposition 4.1.4. Then, we have the additivity t−s 0
Si (t − s − r)d θs ωi (r) + Si (t − s) t s
Si (t − r)d ωi (r) + Si (t − s)
s 0
s 0
Si (s − r)d ωi (r) = Si (s − r)d ωi (r) =
t 0
Si (t − r)d ωi (r) (4.1.11)
for 0 ≤ s ≤ t and for ω in a θ invariant set of full measure.
4.1.2 Mild Solutions and Generation of an RDS As above, we denote by C([0, T ], X) the space of strongly continuous functions on the interval [0, T ] with values in X. Definition 4.1.5. Let T > 0 and Y0 ∈ X. A process Y (t) = Y (t, ω ,Y0 ), which, for each ω ∈ Ω , belongs to the space C([0, T ], X), is said to be a mild solution to the problem in (4.1.7) on the interval [0, T ] if Y (0) = Y0 and t
Y (t) = S(t)Y0 +
0
S(t − τ )F(Y (τ ))d τ +
t 0
S(t − τ )d ω (τ )
(4.1.12)
for all t ∈ [0, T ] and ω ∈ Ω . We have the following result on the existence and uniqueness of mild solutions to (4.1.7). Theorem 4.1.6. For every Y0 ∈ X and T > 0 the problem in (4.1.7) has a unique mild solution Y (t) on the interval [0, T ]. Furthermore, the process t → Y (t, ω ) is adapted to the filtration generated by ω . Define the map φ : R+ × Ω × X → X by the formula φ (t, ω ,Y0 ) := Y (t, ω ,Y0 ). Then, φ is a continuous RDS. Proof. By the standard fix point argument one can prove the existence of a unique mild solution on any interval [0, T ] where the solution depends continuously on the initial condition. We obtain the cocycle property for φ quite similar to the proof of Theorem 3.1.35. For the last term in (4.1.12), we apply (4.1.11), which completes the proof.
4.1 General Idea of the Random Invariant Manifold Method
275
4.1.3 Existence of an Invariant Manifold Now we state the main result concerning system (4.1.1), (4.1.2). Theorem 4.1.7. Suppose that Assumption 4.1.2 holds and
γ 1 − γ2 >
2 √ √ M2 L2 + M1 L1 .
(4.1.13)
Then there exists a Carath´eodory mapping m : Ω × X2 → X1 such that m(ω , x2 ) − m(ω , x2 )X1 ≤ Cx2 − x2 X2 , for all x2 , x2 ∈ X2 , where C is a (deterministic) constant. Moreover, the random manifold M (ω ) = {(m(ω , x2 ), x2 ) : x2 ∈ X2 } ⊂ X,
(4.1.14)
is forward invariant with respect to the RDS φ , i.e., φ (t, ω , M (ω )) ⊂ M (θt ω ). M is exponentially attracting in the following sense: for any mild solution Y (t, ω ) to (4.1.7) there exists Y ∗ ∈ M (ω ) such that
∞
0
e2μ t Y (t, ω ) − φ (t, ω ,Y ∗ )2X dt
1/2 < R1 (ω ) +CY (0)X ,
(4.1.15)
and, in addition, Y (t, ω ) − φ (t, ω ,Y ∗ )X ≤ e−μ t (R2 (ω ) +CY (0)X ) ,
t > 0,
(4.1.16)
where √ √ M2 L2 γ1 + M1 L1 γ2 √ μ= √ , M2 L2 + M1 L1
(4.1.17)
R1 and R2 are scalar tempered random variables and C is a deterministic constant. Remark 4.1.8. It follows from (4.1.16) and from the invariance property of M (ω ) with respect to the RDS φ that sup {dX (φ (t, ω ,Y0 ), M (θt ω )) : Y0 ∈ B} ≤ CB (ω )e−μ t ,
ω ∈ Ω,
for any bounded set B from X. Since R2 is tempered, the relation in (4.1.16) also implies that lim sup eμ˜ t dX (φ (t, θ−t ω ,Y0 ), M (ω )) : Y0 ∈ B = 0, ω ∈ Ω , t→∞
for any μ˜ < μ . Thus, the manifold M (ω ) is uniformly exponentially attracting in both the forward and the pullback sense.
The rest of this section is devoted to the proof of Theorem 4.1.7. We proceed in several steps. Construction of the Manifold M As in Chap. 2 we use the standard Lyapunov–Perron procedure in the form suggested in Miklavˇciˇc [113]) (see Sect. 2.2). We consider the integral equation Y (t) = TPY0 [Y, ω ](t),
t ≤ 0,
(4.1.18)
where PY0 ∈ PX, TPY0 [Y, ω ] := IPY0 [F(Y ), ω ] and IPY0 [Y, ω ] is given by IPY0 [Y, ω ](t) =S2 (t)PY0 −
0 t
S2 (t − τ )PY (τ )d τ +
t −∞
S1 (t − τ )QY (τ )d τ
− S2 (t)Z2 (−t, θt ω ) + Z1 (θt ω ) 2 :=Idet PY0 [Y ](t) − S (t)Z2 (−t, θt ω ) + Z1 (θt ω ).
(4.1.19) Here, Q and P are defined by (4.1.8) and Z1 (ω ) is defined by the exponential stable semigroup S1 in the sense of Sect. 3.1.3. We consider(4.1.18) and the operators TPY0 and IPY0 in the spaces X = {Y (·) : eμ ·Y (·) ∈ L2 (−∞, 0, X)} , where μ ∈ (γ2 , γ1 ) will be chosen later, with the norm |Y |X =
0
−∞
2μ t
e
1/2 Y (t)2X dt
.
We first point out some properties of the stochastic term in (4.1.18), which is useful in our considerations. By the temperedness of Z1 X1 , by the fact that ω (t)X is subexponentially growing, and that γ2 < μ we have that the random process Z(t, ω ) := −S2 (t)Z2 (−t, θt ω ) + Z1 (θt ω )
(4.1.20)
is in the space X . Similar to the deterministic case (see Proposition 2.2.7) we can prove the following assertion. Proposition 4.1.9. Let γ2 < μ < γ1 . Then, for every PY0 ∈ X2 and ω ∈ Ω the operator TPY0 [·, ω ] is continuous from X into itself and |TPY0 1 [Y1 , ω ] − TPY0 2 [Y2 , ω ]|X ≤ PY0 1 − PY0 2 X + κ (μ ) · |Y1 −Y2 |X ,
ω ∈ Ω,
for every PY0 1 , PY0 2 ∈ X2 and Y1 ,Y2 ∈ X , where
κ(μ) = M_2 L_2 / (μ − γ_2) + M_1 L_1 / (γ_1 − μ).
(4.1.21)
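The role of the constant κ(μ) can be checked numerically: for (arbitrary) values of M_i, L_i, γ_i the sketch below minimizes κ(μ) over μ ∈ (γ_2, γ_1) and compares the minimizer with the closed-form exponent in (4.1.17); the minimal value equals (√(M_2 L_2) + √(M_1 L_1))^2 / (γ_1 − γ_2), which is smaller than 1 exactly under the gap condition (4.1.13).

```python
import numpy as np

def kappa_mu(mu, M1, L1, M2, L2, gamma1, gamma2):
    # Contraction constant of the Lyapunov-Perron operator, cf. (4.1.21).
    return M2 * L2 / (mu - gamma2) + M1 * L1 / (gamma1 - mu)

M1 = M2 = 1.0
L1, L2 = 0.2, 0.3
gamma1, gamma2 = 4.0, 0.5

mus = np.linspace(gamma2 + 1e-3, gamma1 - 1e-3, 100_000)
vals = kappa_mu(mus, M1, L1, M2, L2, gamma1, gamma2)
mu_num = mus[np.argmin(vals)]

mu_formula = (np.sqrt(M2 * L2) * gamma1 + np.sqrt(M1 * L1) * gamma2) / (np.sqrt(M2 * L2) + np.sqrt(M1 * L1))
gap_needed = (np.sqrt(M2 * L2) + np.sqrt(M1 * L1)) ** 2   # right-hand side of (4.1.13)

print(f"numerical minimizer mu = {mu_num:.4f},  formula (4.1.17) gives {mu_formula:.4f}")
print(f"min kappa(mu) = {vals.min():.4f},  closed form = {gap_needed / (gamma1 - gamma2):.4f}")
print(f"gap condition (4.1.13): gamma1 - gamma2 = {gamma1 - gamma2:.2f} > {gap_needed:.4f}")
```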
Proof. The argument relies on Lemma 2.2.8 and on the following analog of Proposition 2.2.9. Proposition 4.1.10. For PY0 ∈ X2 , the operator 2 Y → Idet PY0 [Y ](t) :=S (t)PY0
−
0 t
S2 (t − τ )PY (τ )d τ +
t −∞
S1 (t − τ )QY (τ )d τ
is a continuous mapping from X into itself and for any Y1 ,Y2 ∈ X we have that det |Idet PY0 [PY1 ] − IPY0 [PY2 ]|X ≤
M2 · |PY1 − PY2 |X μ − γ2
and det |Idet PY0 [QY1 ] − IPY0 [QY2 ]|X ≤
M1 · |QY1 − QY2 |X . γ1 − μ
We obtain the conclusion by Proposition 2.2.9. Then, we have for the transform T TPY0 1 [Y1 , ω ] − TPY0 2 [Y2 , ω ] = Idet [F(Y1 ), ω ] − Idet [F(Y2 ), ω ] PY 1 PY 2 0
0
for any Y1 ,Y2 ∈ X . Now we are in a position to construct an invariant manifold. Let μ be given by (4.1.17). In this case, √ 2 √ M2 L2 + M1 L1 κ (μ ) = γ 1 − γ2 and we have that κ (μ ) < 1 under the condition (4.1.13). Thus, TD [·, ω ], D ∈ X2 is a contraction in X and hence (4.1.18) has a unique solution Y (·) = Y (·, ω , D) in the space X for each ω ∈ Ω . Using the same (standard) argument as in the deterministic case (see Sect. 2.2), we can show that this solution Y (·) possesses the properties Y (·) ∈ C((−∞, 0], X), and
sup eμ t Y (t, ω , D1 ) −Y (t, ω , D2 )X ≤ CD1 − D2 X2 t≤0
for any D1 , D2 ∈ PX = X2 and ω ∈ Ω , where C is a positive constant. Now we define m : Ω × X2 → X1 as m(ω , D) :=
0
S1 (−τ )F1 (Y (τ , ω , D))d τ + Z1 (ω ),
−∞
D ∈ X2 ,
(4.1.22)
where Y (t) = Y (t, ω , D), D = PY0 solves the integral equation (4.1.18). According to (4.1.14) the graph of m generates the random manifold M . To prove the forward invariance of M , we show that Y˜ = Y (·, θT ω , Pφ (T, ω ,Y0 )),
Y0 = m(ω , D) + D ∈ M (ω )
solves the equation Y˜ = TPφ (T,ω ,Y0 ) [Y˜ , θT ω ] in X for every T ≥ 0, ω ∈ Ω . For this, we check that φ (σ + T, ω ,Y0 ) : σ ∈ [−T, 0] Y˜ (T, σ , ω ) := σ < T, Y (σ + T, ω , PY0 ) : where Y is the fixed point of TPY0 (·, ω ). We have Qφ (t, ω ,Y (0, ω , PY0 )) =S1 (t)
0
t
+ 0
−∞
S1 (−r)F1 (Y (r, ω , PY0 ))dr + S1 (t)Z1 (ω )
S1 (t − r)F1 (φ (r, ω ,Y0 ))dr
+ Z1 (θt ω ) − S1 (t)Z1 (ω ). For σ ∈ [−T, 0], σ = t − T we then obtain QY˜ (T, σ , ω ) =Qφ (T + σ , ω ,Y (0, ω , PY0 )) =S (σ + T ) 1
+
=
0 −T
−∞
+ =
σ +T
−T
−∞
−∞
S1 (−r)F1 (Y (r, ω , PY0 ))dr
S1 (σ + T − r)F1 (φ (r, ω ,Y0 ))dr + Z1 (θσ +T ω )
S1 (σ − r)F1 (Y (r + T, ω , PY0 ))dr
σ
σ
0
S1 (σ − r)F1 (φ (r + T, ω ,Y0 ))dr + Z1 (θσ θT ω )
S1 (σ − r)F1 (Y˜ (T, r, ω ))dr + Z1 (θσ +T ω ).
Now, we consider the Q part of Y˜ for σ < −T .
279
S1 (σ − r)F1 (Y˜ (T, r, ω ))dr + Z1 (θσ θT ω )
=
σ +T −∞
S1 (σ + T − r)F1 (Y (r, ω , PY0 ))dr + Z1 (θσ θT ω )
=QY (σ + T, ω , PY0 ) = QY˜ (T, σ , PY0 ). We then have together QY˜ = QTPφ (T,ω ,Y0 ) [Y˜ , θT ω ]. Now, we deal with the P part. Note that S2 is a group on X2 . For −T ≤ σ = t − T ≤ 0 we have PY˜ (T, σ , ω ) = Pφ (σ + T, ω ,Y0 ) =Pφ (t, ω ,Y (0, ω , PY0 )) t
=S2 (t)PY0 + S2 (t − r)F2 (φ (r, ω ,Y0 ))dr + Z2 (t, ω ) 0 T 2 2 2 =S (t − T ) S (T )PY0 + S (T − r)F2 (φ (r, ω ,Y0 ))dr + Z2 (T, ω ) t
+ T
0
S (t − r)F2 (φ (r, ω ,Y0 ))dr − S2 (σ )Z2 (−σ , θσ θT ω ) 2
=S (σ )Pφ (T, ω ,Y0 ) + 2
σ
S2 (σ − r)F2 (φ (r + T, ω ,Y0 )dr
0
− S2 (σ )Z2 (−σ , θσ θT ω ) =S2 (σ )Pφ (T, ω ,Y0 ) −
0 σ
S2 (σ − r)F2 (Y˜ (T, r, ω ))dr
− S2 (σ )Z2 (−σ , θσ θT ω ) by (4.1.11). The case σ < −T can be considered in a similar manner. By the definition of Y˜ (T, 0, ω ) we have
φ (T, ω ,Y0 ) = Y˜ (T, 0, ω ) ∈ M (θT ω ) because Y˜ (T, ·, ω ) is fixed point of TPφ (T,ω ,Y0 ) [·, θT ω ]. Tracking Properties We modify the method as in Sect. 2.2. We consider the following space % ∞ e2μ t Y (t)2X dt < ∞ . Z = Y : R → X, measurable : |Y |2Z := −∞
For Y0 = (U0 ,V0 ) ∈ X let us define the random function
280
4 Master–Slave Synchronization in Random Systems
W0 (t, ω ) =
⎧ ⎨ −Y0 + TPY0 [Y, ω ](t),
for t ≤ 0;
S(t) −Y0 + TPY0 [Y, ω ](0) , for t > 0,
⎩
where T is the same as in (4.1.18). Below we need the following properties of the random function W0 (t, ω ). Lemma 4.1.11. For every ω ∈ Ω the random function W0 (t, ω ) belongs to Z . Moreover, there exist a deterministic constant C and tempered random variables R1 and R2 such that |W0 |Z ≤ R1 (ω ) +CY0 X and sup eμ t W0 (t)X ≤ R2 (ω ) +CY0 X . t∈R
(4.1.23) Proof. We split W0 (t, ω ) into deterministic and stochastic parts, W0 (t, ω ) = W0det (t) +W0st (t, ω ), where W0det (t) = and W0st (t, ω ) =
⎧ det ⎨ −Y0 + IPY0 [F(Y )](t), ⎩
for t ≤ 0;
S(t)(−Y0 + Idet PY0 [F(Y )](0)), for t > 0,
⎧ ⎨ Z1 (θt ω ), −S2 (t)Z2 (−t, θt ω ) , for t ≤ 0; ⎩
S1 (t)Z1 (ω ), 0 ,
for t > 0,
Note that R∗1 (ω ) := |W0st (ω )|2Z 0
M2 ≤ e2μ t S2 (t)Z2 (−t, θt ω )2X2 + Z1 (θt ω )2X1 dt + 1 Z1 (ω )2X1 2γ1 −∞ is a tempered random variable. The random variable R∗2 (ω ) := sup eμ t W0st (t, ω )X ≤ c0 sup eμ t S2 (t)Z2 (t, ω )X2 + Z1 (θt ω )X1 , t∈R
t∈R−
is tempered too for some positive constant c0 . We consider here only the temperedness of R∗1 . The temperedness of R∗2 follows in a similar manner.
4.1 General Idea of the Random Invariant Manifold Method
281
The last term in the above formula is a straightforward tempered random variable (see Theorem 3.1.27). We consider for T → −∞ 0 −∞
e2μ t Z1 (θt+T ω )2X1 dt ≤
0 −∞
e2μ t Cε (ω )2 e2ε (−t−T ) dt
≤e−2T ε
0 −∞
e2μ t Cε (ω )2 e−2ε t dt
which has straightforwardly finite exponential growth for T → −∞ (see Remark 3.1.6 when ε > 0 is small). We now present Z2 by (4.1.10). Z˜ 2 is generated by an exponential stable semigroup S˜2 . We have for the first term of this formula Z˜ 2 (−t, θt ω ) = Z˜ 2 (ω ) − S˜2 (−t)Z˜ 2 (θt ω ). Then, for the first term of the integral estimating R∗1 we have 0
0
e2μ t S2 (t)2 Z˜ 2 (θT ω )2X2 dt + e2μ t S˜2 (−t)2 S2 (t)2 Z˜ 2 (θT +t ω )2X2 dt −∞ −∞ 0 2 2(μ −γ2 )t ˜ 2 ˜ ≤ C Z2 (θT ω )X2 + e Z2 (θt+T ω )X2 dt < ∞, −∞
when μ > γ2 . Note that S˜2 is an exponentially stable semigroup. The finite exponential growth follows in a similar manner to the above considerations. R∗1
Finally, we consider for the second part of the first expression of the estimate for 0
a
−∞
e2μ t S2 (t)
−t 0
S2 (−t − r)Z˜ 2 (r, θt+T ω )dr2X2 dt,
Z˜ 2 (r, θt+T ω ) = Z˜ 2 (θt+T +r ω ) − S˜2 (r)Z˜ 2 (θt+T ω ). We can split Z˜ 2 (r, θt+T ω ) into two expressions (see above). For the first expression, we obtain 0
a
−t
e2μ t S2 (t) S2 (−t − r)Z˜ 2 (θT +t+r ω )dr2X2 dt −∞ 0 2 −t 0 2μ t −γ2 (−r) ˜ 2 e e Z2 (θt+T +r ω )X2 dr dt ≤a −∞
≤a ≤a
0
−∞ 0 −∞
0
2(μ −γ2 )t
−t
e
0
e2(μ −γ2 )t
−t 0
Z˜ 2 (θt+T +r ω )2X2 drdt Cε (ω )e2ε (−t−T +r) drdt,
which has a finite exponential growth for T → −∞ by μ > γ2 and ε > 0 being sufficiently small. We now consider the above integral for the second term on the
282
4 Master–Slave Synchronization in Random Systems
right-hand side of the last formula: 0
a
−∞
e2μ t
−t
0
≤aM˜ 22 ≤C
X2
0
0 −∞
2 S2 (−r)S˜2 (r)Z˜ 2 (θt+T ω )dr dt
−∞
|t|2 e2(μ −γ2 )t Z˜ 2 (θt+T ω )2X2 dt
e(μ −γ2 )t Z˜ 2 (θt+T ω )2X2 dt
which is subexponentially growing for T → −∞ when we assume γ2 < μ . Now, we can apply 3.1.6. The temperedness of R∗2 follows in a similar manner. Therefore, estimating the deterministic part W0det (t) using the standard method, we arrive at the estimates (4.1.23) with Ri (ω ) = C1 +C2 R∗i (ω ), where C1 and C2 are deterministic constants. Now, we define an integral operator R : Z → Z by the formula t
R[W ](t) = W0 (t) + −
∞ t
−∞
S(t − τ )Q [F(W (τ ) +Y (τ )) − F(Y (τ ))] d τ
S(t − τ )P [F(W (τ ) +Y (τ )) − F(Y (τ ))] d τ .
(4.1.24)
Here, Y is a solution of (4.1.12). For t < 0, we set Y (t) = Y0 . Let us prove that R is a contraction in Z . By (4.1.5) and (4.1.4), we have that eμ t P (R[W1 ](t) − R[W2 ](t)) X ≤ M2 L2
∞ t
e(μ −γ2 )(t−τ ) eμτ W1 (τ ) −W2 (τ )X2 d τ
(4.1.25)
By Lemma 2.2.8 with δ = μ − γ2 and f (t) = M2 L2 eμ t W1 (t) −W2 (t)X , we obtain that |P (R[W1 ] − R[W2 ]) |Z ≤
M2 L2 · |W1 −W2 |Z . μ − γ2
Similarly, eμ t Q (R[W1 ](t) − R[W2 ](t)) X ≤ M1 L1
t
e(μ −γ1 )(t−τ ) eμτ W1 (τ ) −W2 (τ )X d τ
−∞
and thus applying Lemma 2.2.8 again we have that |Q (R[W1 ] − R[W2 ]) |Z ≤
M1 L1 · |W1 −W2 |Z . γ1 − μ
4.1 General Idea of the Random Invariant Manifold Method
283
If μ is given by (4.1.17), we have that |R[W1 ] − R[W2 ]|Z ≤ κ (μ ) · |W1 −W2 |Z
for every W1 ,W2 ∈ Z
(4.1.26)
where κ (μ ) < 1. Thus, by the contraction principle, there exists a unique solution W ∈ Z to the equation W = R[W ]. Now, using the same calculation as in Sect. 2.2, we can conclude that the function Y (t) = W (t) + Y (t), where W ∈ Z solves the equation W = R[W ], satisfies the relation ⎧ ⎨ TPY(0) [Y , ω ](t), if t ≤ 0; Y (t) = ⎩ φ (t, ω , Y (0)), if t > 0. In particular, Y (0) = TPY(0) [Y , ω ](0) and, therefore, by the definition of the operator TPY(0) , we obtain that Y (0) = PY (0) +
0 −∞
S1 (−τ )QF(Y (τ ))d τ + Z1 (ω ).
By (4.1.22) this implies that Y (0) = m(ω , PY (0)), PY (0) . Therefore, Y (t) = φ (t, ω , Y (0)) ∈ M (θt ω ) for t ≥ 0. Thus, to complete the proof, we only need to establish (4.1.15) and (4.1.16). Since Y (t) = W (t) +Y (t) and W (t) = R[W ](t) = W0 (t) + R[W ](t) − R[0](t),
(4.1.27)
from (4.1.23) and (4.1.26), we obtain the relation |W |Z ≤
1 1 |W0 |Z ≤ R1 (ω ) +CY0 X , 1 − κ (μ ) 1 − κ (μ )
(4.1.28)
which implies (4.1.15). Now we prove (4.1.16). From (4.1.25) we have that ∞
eμ t P (R[W ](t) − R[0](t)) X ≤ M2 L2 e(μ −γ2 )(t−τ ) · eμτ W (τ )X d τ t ∞
1/2 M2 L2 · |W |Z . ≤ M2 L2 e2(μ −γ2 )(t−τ ) d τ · |W |Z = t 2(μ − γ2 ) Thus, M2 L2 · |W |Z . sup eμ t P (R[W ](t) − R[0](t)) X ≤ 2(μ − γ2 ) t∈R
(4.1.29)
284
4 Master–Slave Synchronization in Random Systems
Similarly, we have that Q (R[W ](t) − R[0](t)) X ≤ M1 L1
t
−∞
e−γ1 (t−τ ) W (τ )X d τ ≤
(4.1.30) M1 L1 2(γ1 − μ )
· |W |Z .
Consequently, using the relations in (4.1.27), (4.1.29) and (4.1.23), we obtain that sup eμ t W (t)X ≤ cR2 (ω ) + c |W |Z . t∈R
Thus, by (4.1.28), we have sup eμ t W (t)X ≤ c(R1 (ω ) + R2 (ω )) + cY0 X , t∈R
with appropriate (deterministic) constant c. This implies (4.1.16) and completes the proof of Theorem 4.1.7.
4.1.4 The Reduced System Assume that the hypotheses of Theorem 4.1.7 hold and let m be given by (4.1.22). Consider the problem dV + A˜ 2V dt = F2 (m(θt ω ,V ),V )dt + d ω2 , t > 0, in X2 , (4.1.31) V (0) = V0 , and define its mild solution on any interval [0, T ] as a random function V (·, ω ) ∈ C([0, T ], X2 ) such that 2
t
V (t) = S (t)V0 +
0
S2 (t − τ )F2 (m(θt ω ,V (τ )),V (τ ))d τ + Z2 (t, ω )
(4.1.32)
for almost all t ∈ [0, T ] and ω ∈ Ω , where Z2 (t) is given in (4.1.9). Proposition 4.1.12. Let V0 ∈ X2 . Then, under the conditions of Theorem 4.1.7, the problem in (4.1.31) has a mild solution on the interval [0, T ]. This solution is unique and any mild solution Vˆ (t) to the problem in (4.1.31) generates a mild solution to the problem in (4.1.1) (4.1.2) by the formula, Y (t) = (U(t),V (t)) = (m(θt ω , Vˆ (t)), Vˆ (t)).
(4.1.33)
Moreover, in this case, the manifold M is (strictly) invariant with respect to the RDS φ generated by (4.1.1) and (4.1.2).
4.1 General Idea of the Random Invariant Manifold Method
285
Proof. Let Y (t) = (U(t),V (t)) be a mild solution to the problem in (4.1.7) with the initial data Y0 = (m(ω ,V0 ),V0 ). Since M given by (4.1.14) is invariant, we have that PY (t) = V (t) and
QY (t) = m(θt ω ,V (t)).
Consequently, V (t) is a mild solution to (4.1.31). By Theorem 4.1.7 we have that m(ω ,V01 ) − m(ω ,V02 ))X1 ≤ CV01 −V02 X2 ,
V0i ∈ X2 .
This implies that the function V → Fm (ω ,V ) := F2 (m(ω ,V ),V ),
V ∈ X2 ,
is globally Lipschitz, i.e., Fm (ω ,V 1 ) − Fm (ω ,V 2 ))X2 ≤ CV 1 −V 2 X2 ,
V i ∈ X2 .
(4.1.34)
Therefore, a Gronwall-type argument gives us the uniqueness of solutions to (4.1.31). The relation in (4.1.33) easily follows from the uniqueness theorem for (4.1.31). The property in (4.1.34) also makes it possible to solve (4.1.32) backward in time and, hence, we can prove that M is strictly invariant with respect to the cocycle φ . Observe now that Theorem 4.1.7 implies that for any mild solution Y (t) = (U(t),V (t)) to the problem in (4.1.1) and (4.1.2) with initial data V0 ∈ X, there exists a mild solution Vˆ (t) to the reduced problem (4.1.31) such that V (t) − Vˆ (t)2X2 + U(t) − m(θt ω , Vˆ (t))2X1 ≤ Ce−μ t for any t ≥ s with positive constants C and μ . Thus, under the conditions of Theorem 4.1.7, the long-time behavior of solutions to (4.1.1) and (4.1.2) can be described completely by solutions to problem (4.1.31). Moreover, owing to the relation in (4.1.33), every limiting regime of the reduced system in (4.1.31) is realized in the coupled system in (4.1.1) and (4.1.2). We have master–slave synchronization.
4.1.5 Distance Between Random and Deterministic Manifolds Theorem 4.1.7 can also be applied to the deterministic version of problem (4.1.1), (4.1.2): Ut + A˜ 1U = F1 (U,V ) Vt + A˜ 2V = F2 (U,V )
in X1 , in X2 .
286
4 Master–Slave Synchronization in Random Systems
In this case, Theorem 4.1.7 gives us the existence of (deterministic) invariant exponentially attracting manifold M det in the space X of the form $ # M det = (mdet (V ),V ) : V ∈ X2 , where mdet : X2 → X1 is a globally Lipschitz mapping. Our goal in this section is to estimate the mean value distance between the deterministic M det and random M (ω ) manifolds. Theorem 4.1.13. There exist a positive constant C such that ! " E
sup m(·,V ) − mdet (V )2X1
V ∈X2
≤ C (tr Q1 + tr Q2 ) ,
(4.1.35)
where Q1 , Q2 are the covariance operators of ω1 , ω2 . Thus, the random manifold M (ω ) is close in the average to its deterministic counterpart when tr Q1 + tr Q2 becomes small. Proof. It follows from the definition (see (4.1.22)) of the functions m and mdet that m(ω ,V ) − mdet (V ) =
0 −∞
S1 (τ ) F1 (Y st (τ )) − F1 (Y det (τ )) d τ + Z1 (ω ),
where Y st (t) and Y det (t) are defined on the semi-axis (−∞, 0] and solve the equations det Y st (t) = IPV [F(Y st ), ω ](t) and Y det (t) = Idet )](t), PV [F(Y
(4.1.36)
where IPV and Idet PV are defined as in (4.1.19). Using the same method as in the proof of the relation in (4.1.30), we can conclude that m(ω ,V ) − mdet (V )X1 ≤ Z1 (ω )X1 + a1 |Y st −Y det |X ,
(4.1.37)
where a1 is a deterministic constant. By (4.1.36) we have that |Y st −Y det |X ≤ |TPY0 [Y st , ω ](·) − TPY0 [Y det , ω ](·)|X + |Z(·)|X , where TPY0 [V, ω ](t) is the same as in (4.1.18) and Z is given by (4.1.20). Thus, by Proposition 4.1.9, we have that |Y st −Y det |X ≤ q|Z(·)|X ,
q :=
1 . 1 − κ (μ )
Therefore, using (4.1.37), we obtain the estimate m(·,V ) − mdet (V )2X1 ≤ 2Z1 (ω )2X1 + b1 |Z(·)|2X
(4.1.38)
4.1 General Idea of the Random Invariant Manifold Method
287
for an appropriate constant b1 . It easily follows from the definition of Z (see (4.1.20)) that EZ1 (ω )2X1 ≤ C1 · tr Q1
and
E|Z(·)|2X ≤ C2 (tr Q1 + tr Q2 ) .
(4.1.39)
Therefore, (4.1.35) follows from (4.1.38) and (4.1.39).
4.1.6 Applications As applications of Theorem 4.1.7 we consider the models present in Sect. 2.2 perturbed by additive white noises. Example 4.1.14 (Coupled Parabolic–Hyperbolic System). Let O be a bounded domain in Rd , Γ the C2 smooth boundary. Let {ai j }di, j=1 , {bi j }di, j=1 be a symmetric matrix of L∞ functions such that c0 |ξ |2 ≤
d
∑
ai j (x)ξ j ξi ≤ c1 |ξ |2 ,
i, j=1
c0 |ξ |2 ≤
d
∑
bi j (x)ξ j ξi ≤ c1 |ξ |2 ,
ξ = (ξ1 , . . . , ξd ) ∈ Rd ,
i, j=1
for some positive constants c0 , c1 and almost all x ∈ O. Let Γ0 be a measurable subset of Γ (Γ0 = 0/ or Γ0 = Γ are allowed) and Γ1 = Γ \ Γ0 . Let a0 be a positive parameter. We consider the following coupled system consisting of the parabolic problem du −
d
∑
∂i [ai j (x)∂ j u] + a0 u dt = f1 (u, w, wt )dt + d ω1 ,
i, j=1
u = 0 on Γ0 ,
(4.1.40)
d
∑
ni ai j (x)∂ j u + aΓ0 (x)u = 0
on Γ1 ,
i, j=1
where n = (n1 , . . . , nd ) is the outer normal vector of Γ and aΓ0 is a positive function in L∞ (Γ1 ). We then have the hyperbolic one ˜ dt = f2 (u, w, wt )dt + d ω2 , dwt + Bw
(4.1.41)
where w ∈ Rm is a vector function (m ≥ 1) and B˜ is a uniformly elliptic operator of the form B˜ = −
d
∑
i, j=1
∂i [bi j (x)∂ j ·] + b0
(4.1.42)
288
4 Master–Slave Synchronization in Random Systems
subjected to the Dirichlet boundary conditions and b0 is a positive parameter. We ˜ > 0. The functions assume that λB := inf spec (B) f1 : R2m+1 → R and
f2 : R2m+1 → Rm
are zero at zero, globally Lipschitz functions and possess the properties: 1/2 | f1 (u, ζ ) − f1 (u∗ , ζ ∗ )| ≤ l1 |u − u∗ |2 + |ζ − ζ ∗ |2R2m for all u, u∗ ∈ R and ζ , ζ ∗ ∈ R2m ; and 1/2 | f2 (u, ζ ) − f2 (u∗ , ζ ∗ )|Rm ≤ l2 |u − u∗ |2 + |ζ − ζ ∗ |2R2m
(4.1.43)
for all u, u∗ ∈ R and ζ , ζ ∗ ∈ R2m . As was already mentioned, coupled models such as (4.1.40) and (4.1.41) arise in the study of wave phenomena, which are heat generating or temperature related. The model in (4.1.40) and (4.1.41) deals with random media. We note that a description of wave propagation phenomena in these media usually based on the study of stochastically (or randomly) perturbed hyperbolic PDE (see, for example, Sobczyk [157] and the references therein). If these wave phenomena are temperature dependent or heat generating, then the hyperbolic equations are coupled with a stochastic parabolic (heat) equation (see, for example, Chow [29] or Hori [95]). In this respect, the question of how a thermal environment may influence the random long-time dynamics of the system arises. As far as we know, there are not many publications on the dynamics of coupled parabolic–hyperbolic stochastic partial differential equations, although stochastic parabolic and wave equations have been widely studied by many authors (see, for example, the monographs Cerrai [27], Da Prato/Zabczyk [64] and the references therein for the parabolic case, and the publications mentioned in Sect. 3.4 for the wave case). We denote V = (w, wt ), U = u and rewrite the (master) equation in (4.1.41) as a first-order equation of the form 0 0 −id 0 V dt = dV + ˜ dt + (4.1.44) B 0 f2 (U,V ) d ω2
m in the space X2 = H01 (O) × [L2 (O)]m , where the linear part of this equation defines the operator A˜ 2 , whereas (0, f2 (u, w, wt ) defines F˜2 (U,V ). The slave system (4.1.40) can be considered in X1 = L2 (O) and has the form dU + (A˜ 1U + F1 (U,V ))dt = d ω1 ,
(4.1.45)
where A˜ 1 is the self-adjoint operator defined by the formula (A˜ 1 u, u∗ ) =
d
∑
i, j=1 O
∗
ai j ∂ j u∂i u dx + a0
O
∗
uu dx +
Γ1
aΓ0 uu∗ dΓ ,
a0 ≥ 0, aΓ0 > 0.
4.1 General Idea of the Random Invariant Manifold Method
289
The nonlinear mapping F1 is the Nemytskii operator corresponding to the function f1 (u, w, wt ). The coupled system (4.1.44) and (4.1.45) can be written as an equation in X1 × X2 , where the space X2 is endowed with the norm |B˜ 1/2 w0 |2 + |w1 |2 dx, V = (w0 , w1 ). (4.1.46) V 2X2 = O
Let S1 be the semigroup on X1 generated by A˜ 1 . In this case S1 (t) ≤ e−λ1 t , t > 0, where λ1 = inf spec A˜ 1 . As for operator A˜ 2 given by 0 −id , B˜ 0 which is the generator of the unitary semigroup S2 . Thus, we have M1 = M2 = 1, γ1 = λ1 = inf spec (A˜ 1 ) ≥ β0 , γ2 = 0 and also
% 1 L1 = l1 max 1, √ , λB
% 1 L2 = l2 max 1, √ . λB
Thus, under the gap condition (4.1.13) inf spec (A˜ 1 ) = λ1 >
l1 +
% 2 1 l2 max 1, √ λB
system (4.1.44), (4.1.45) has a random Lipschitz invariant manifold M . Hence, the equation in (4.1.40) is master–slave synchronized by (4.1.41). As in the deterministic case (see Remark 2.2.11), it is not important that O is a bounded domain and Dirichlet boundary conditions for w hold. We can consider unbounded domains and equip the differential operator in (4.1.42) with other (selfadjoint) boundary conditions. As in Sect. 2.2.5, we can also consider stochastic perturbations in degenerate cases of parabolic–hyperbolic systems. For instance, we can apply the results above to coupled parabolic PDE and ODE systems of the form du + A˜ 1 u dt = f1 (u, w)dt + d ω1 , dwt = f2 (u, w, wt )dt + d ω2 , where A˜ 1 is self-adjoint positive operator in L2 (O)
290
4 Master–Slave Synchronization in Random Systems
Example 4.1.15 (Two Coupled Hyperbolic Systems). In a smooth bounded domain O in Rd , we consider two coupled wave equations for scalar functions u and w: dut + (γ ut + B˜ 1 u)dt = f1 (u, w)dt + d ω1 , dwt + B˜ 2 w dt = f2 (u, w, wt )dt + d ω2 ,
(4.1.47)
where B˜ 1 and B˜ 2 are self-adjoint positive operators in L2 (O) generated by some uniformly elliptic differential operations with appropriate boundary conditions on ∂ O. We assume that
λB˜1 = inf spec (B˜ 1 ) > 0,
and
λB˜2 = inf spec (B˜ 2 ) > 0.
(4.1.48)
The simplest example of the operators B˜ 1 and B˜ 2 in the case of bounded sufficiently domain O is B˜ 1 = B˜ 2 = −Δ with the Dirichlet boundary conditions. We assume that γ > 0 and the functions f1 : R2 → R
and
f2 : R3 → R
are zero at zero and globally Lipschitz functions, i.e., f2 satisfies (4.1.43) with m = 1 and 1/2 | f1 (u, w) − f1 (u∗ , w∗ )| ≤ l1 |u − u∗ |2 + |w − w∗ |2 (4.1.49) for all u, u∗ , w, w∗ ∈ R. To apply Theorem 4.1.7 to system (4.1.47), we rewrite (4.1.47) as (4.1.1), (4.1.2) with U = (u, ut ) and V = (w, wt ). The corresponding phase spaces are 1/2
X1 = D(B˜ 1 ) × L2 (O) and
1/2
X2 = D(B˜ 2 ) × L2 (O).
In particular, B˜ 1 , B˜ 2 allow us to construct generators A˜ 1 , A˜ 2 of strongly continuous semigroups S1 , S2 in the usual way. We intend to use the same methods as in Sect. 2.2 and therefore we introduce a norm in X1 by the formula 1/2 U2X1 = B˜ 1 u2 + ut + ε u2 ,
(4.1.50)
where · denotes the norm in L2 (O) and the parameter ε will be chosen later. We endow the space X2 with the norm (4.1.46). By Lemma 2.2.12 we have that every solution U(t) = (u(t), ut (t)) to equation utt + γ ut + B˜ 1 u = 0 satisfies the following stability estimate U(t)2X1 ≤ e−ε t U(0)2X1
(4.1.51)
4.1 General Idea of the Random Invariant Manifold Method
for every t > 0, where ε = min
γ 3λB˜ 1 4 , 8γ
291
% . Therefore, we can apply Theorem 4.1.7
with M1 = M2 = 1, γ1 = ε /2, γ2 = 0 and also
⎧ ⎨ L1 = l1 max
⎩
⎧ ⎨
⎫ ⎬
1 1 ,, λB˜1 λB˜2 ⎭
L2 = l2 max
⎩
⎫ ⎬
1 1 ,. λB˜1 λB˜2 ⎭
Thus, under the condition ⎧ ⎫ % ⎨ 1 2 γ 3λB˜1 1 1 ⎬ min , ,l1 + l2 max > ⎩ λB 1 2 4 8γ λ˜ ⎭ B2
there exists an exponentially attracting random invariant manifold allowing us to interpret this as a master–slave synchronization. In particular, the second equation of (4.1.47) synchronizes the first equation of (4.1.47) with an exponential rate. Example 4.1.16 (Coupled Klein–Gordon–Schr¨odinger System). We consider the following stochastic version of the model in (2.2.53): dut + (γ ut − Δ u + m2 u)dt = f1 (u, w)dt + d ω1 idw + Δ w dt = f2 (u, w)dt + d ω2
in Rd ,
in Rd ,
(4.1.52)
where γ , m > 0, and κ ≥ 0. The functions f1 : R × C → R,
and
f2 : R × C → C
are zero at zero, and globally Lipschitz, i.e., 1/2 | f1 (u, w) − f1 (u∗ , w∗ )| ≤ l1 |u − u∗ |2 + |w − w∗ |2 ,
u, u∗ ∈ R, w, w∗ ∈ C; (4.1.53)
and 1/2 | f2 (u, w) − f2 (u∗ , w∗ )| ≤ l2 |u − u∗ |2 + |w − w∗ |2 ,
u, u∗ ∈ R, w, w∗ ∈ C. (4.1.54)
To apply Theorem 4.1.7, we rewrite (4.1.52) as (4.1.1), (4.1.2) with U = (u, ut ) and V = w. The corresponding phase spaces X1 = H 1 (Rd ) × L2 (Rd ),
X2 = L2C (R2 ),
292
4 Master–Slave Synchronization in Random Systems
where L2C (O) is the space of square integrable complex functions. We equip X1 2 ˜ with the # norm given $ in (4.1.50) with B1 = −Δ + m (with the natural domain) and γ 3m2 ε = min 4 , 8γ . In this case, M1 = M2 = 1, γ1 = ε /2, γ2 = 0 and also
⎧ ⎨
⎫ 1 ⎬ L1 = l1 max 1, , ⎩ λB˜1 ⎭
⎧ ⎨
⎫ 1 ⎬ L2 = l2 max 1, . ⎩ λB˜1 ⎭
Thus, under the condition ⎧ ⎫ % ⎨ 2 γ 3m2 1 1 1 ⎬ , ε = min l1 + l2 max 1, > ⎩ 2 2 4 8γ λ˜ ⎭ B1
the second equation of (4.1.52) synchronizes the first equation of (4.1.52). In conclusion, we note that we are not able to apply our main result (Theorem 4.1.7) in the case of two coupled parabolic equations for the same reason as in the deterministic case (see Remark 2.2.2).
4.2 Master–Slave Synchronization for Equations with a Random Linear Part In this section, we deal with random evolution equations. We would like to derive conditions of master–slave synchronization for such a system of evolution equations. We assume that the nonlinear part of this equation depends on the random parameter ω . More especially, we will also assume that the linear parts of these equations depend on ω . This system generates a linear RDS (see Caraballo et al. [21]). To obtain the main result we have to introduce two tools from the theory of RDS. One tool is the random fixed point theorem to find a stationary point and the other one is the Oseledets theorem, or the multiplicative ergodic theorem.
4.2.1 Preparations In the following, we consider a method to find stationary points for an RDS φ . This method can be considered as a random and dynamical version of the Banach fixed point theorem. Let C be the (in general nonseparable) Banach space of continuous bounded mappings from X2 to X1 equipped with the supremum norm · C .
4.2 Master–Slave Synchronization for Equations with a Random Linear Part
293
Theorem 4.2.1. Consider the separable Banach spaces X1 , X2 . In addition, let G = 0/ be a set of Carath´eodory mappings: G g : Ω × X2 → X1 for ω ∈ Ω of full measure such that
ω → g(ω )C is tempered. Moreover, for ω ∈ Ω , there exists a set G(ω ) ⊂ C so that for g ∈ G the partial functions g(ω , ·) ∈ G(ω ). Assumption 4.2.2. We assume 1. We have for all t ∈ R+ that φ (t, θ−t ω , g(θ−t ω )) ∈ G for g ∈ G . 2. G is complete in the following sense. Let (gn (ω ))n∈N be a Cauchy sequence in G(ω ) ⊂ C for all ω ∈ Ω . Then, this sequence has a limit g ∈ G (with g(ω ) ∈ G(ω )). 3. Assume that there exists a multiplicative RDS2 R × Ω (t, ω ) → K(t, ω ) ∈ R+ \ {0} such that lim
t→±∞
1 log K(t, ω ) = λ < 0 t
on Ω . 4. Let φ be an RDS in the following weak sense: φ (t, ·, g(·)) ∈ G and φ (t, ω , g(ω )) ∈ G(θt ω ) for all g ∈ G and that
ω → sup φ (s, θ−s ω , g(θ−s ω ))C s∈[0,1]
is tempered. φ has the cocycle property. 5. Finally, we assume φ (t, ω , g1 ) − φ (t, ω , g2 )C ≤ K(t, ω ), g1 − g2 C g1 =g2 ∈G(ω ) sup
ω ∈ Ω.
Then, φ has a unique stationary point (random fixed point) gs ∈ G :
φ (t, ω , gs (ω )) = gs (θt ω ),
t ≥ 0, ω ∈ Ω
which is exponentially attracting and pullback exponentially attracting. 2 A multiplicative RDS on R \ {0} is given by an RDS with composition given by the multipli+ cation and the identity is the number 1.
294
4 Master–Slave Synchronization in Random Systems
Proof. Owing to K(−t, ω )K(t, θ−t ω ) = 1 for t ≥ 0, we can conclude lim
t→±∞
1 log K(t, θ−t ω ) = λ . t
We have for g1 , g2 ∈ G the inequalities: φ (t,θ−t ω , g1 (θ−t ω )) − φ (t, θ−t ω , g2 (θ−t ω ))C ≤elog K(t,θ−t ω ) g1 (θ−t ω ) − g2 (θ−t ω )C ≤e
1 λt 2
(4.2.1)
g1 (θ−t ω ) − g2 (θ−t ω )C < ε .
for all t > t0 (ε , ω ). Hence, we have for sufficiently large i ∈ N and g ∈ G φ (i,θ−i ω , g(θ−i ω )) − φ (i + 1, θ−i−1 ω , g(θ−i−1 ω ))C ≤ e 2 λ i g(θ−i ω ) − φ (1, θ−i−1 ω , g(θ−i−1 ω ))C . 1
(4.2.2)
Consider now for 0 ≤ t ≤ t1 φ (t,θ−t ω , g(θ−t ω )) − φ (t1 , θ−t1 ω , g(θ−t1 ω ))C ≤elog K(t
,θ−t ω )
φ (t − t , θ−t+t θ−t ω , g(θ−t+t θ−t ω ))
− φ (t1 − t , θ−t1 +t θ−t ω , g(θ−t1 +t θ−t ω ))C log K(t ,θ−t ω ) ≤e φ (t − t , θ−t+t θ−t ω , g(θ−t+t θ−t ω )) − g(θ−t ω )C t1 −t −1
+
∑
φ (i, θ−t
−i ω , g(θ−t −i ω ))
i=0
− φ (i + 1, θ−t
−i−1 ω , g(θ−t −i−1 ω ))C
+ φ (t1 − t , θ−t1 ω , g(θ−t1 ω ))
− φ (t1 − t , θ−t1 +t θ−t ω , g(θ−t1 +t θ−t ω ))C , (4.2.3) where · is the truncation operator. The last expression can be estimated by φ (t1 − t , θ−t1 ω , g(θ−t1 ω )) − φ (t1 − t , θ−t1 ω , φ (t1 − t1 , θ−t1 +t1 θ−t1 ω , g(θ−t1 +t1 θ−t1 ω )))C ≤elog K(t1
−t ,θ−t ω ) 1
×
× g(θ−t1 ω ) − φ (t1 − t1 , θ−t1 +t1 θ−t1 ω , g(θ−t1 +t1 θ−t1 ω )))C .
The product of this term and the factor exp(log K(t , θ−t ω )) converges to zero for t1 → ∞ by Assumption 4.2.2 (4), (5). Similarly, the first term on the right-hand
4.2 Master–Slave Synchronization for Equations with a Random Linear Part
295
side of (4.2.3) converges to zero for t → ∞. For the sum, we have t1 −t −1
∑
φ (i, θ−i ω , g(θ−i ω )) − φ (i + 1, θ−i−1 ω , g(θ−i−1 ω ))C
i=0 ∞
≤ ∑ elog K(i,θ−i ω ) φ (1, θ−i−1 ω , g(θ−i−1 ω )) − g(θ−i ω )C . i=0
This sum converges by (4.2.2). In particular, we obtain that the last expression is bounded by ∞
H(ω ) := ∑ elog K(i,θ−i ω ) l(θ−i ω ),
l(ω ) := sup φ (s, θ−s ω , g(θ−s ω )) − g(ω )C . s∈[0,1
i=1
l is tempered by the conditions of the theorem. We note that H is a tempered random variable. This follows similar to the proof of Theorem 3.1.23 and the remark that log K(i, θ−i−t ω ) = log K(i + t, θ−i−t ω ) − λ (i + t) − (log K(t, θ−t ω ) − λ t) + λ i ≤λ i + ε |i + t| for large i > i0 (ε , ω ). Hence, by Assumption 4.2.2 (3) lim elog K(t
,θ−t ω )
t→∞
H(θ−t ω ) = 0.
We have that for any positive sequence (tn )n∈N tending to infinity that (φ (tn , θ−tn ω , g(θ−tn ω )))n∈N is a Cauchy sequence for any g ∈ G that has a limit gs ∈ G by our Assumption 4.2.2 (2). Taking (4.2.1) into account, we have that this limit is independent of g. gs is invariant:
φ (t, ω , gs (ω )) = lim φ (t, ω , φ (tn , θ−tn ω , g(θ−tn ω )) tn →∞
= lim φ (t + tn , θ−tn −t θt ω , g(θ−tn −t θt ω )) = gs (θt ω ). tn →∞
We then obtain the pullback convergence lim φ (t, θ−t ω , g(θ−t ω )) = gs (ω ).
t→∞
We also have the forward convergence lim φ (t, ω , g(ω )) − gs (θt ω )C = lim φ (t, ω , g(ω )) − φ (t, ω , gs (ω ))C = 0
t→∞
t→∞
because (4.2.1) holds too when we replace θ−t ω by ω by Assumption 4.2.2(5). These convergences are exponentially fast.
296
4 Master–Slave Synchronization in Random Systems
We have the following special version of this theorem. Let D be the set of closed random sets that are tempered (see Definition 3.1.15). Corollary 4.2.3. Let (Ω , F , P, θ ) be an ergodic MDS. Suppose that G ∈ D, the set of closed tempered random sets, is forward invariant. In addition, suppose that log
sup g1 =g2 ∈G(ω )
φ (t, ω , g1 ) − φ (t, ω , g2 ) ≤ x − y
t 0
k(θr ω )dr
and Ek < 0. Then, φ has a unique stationary point gs such that gs (ω ) ∈ G(ω ), which is exponentially attracting and pullback exponentially attracting for g ∈ G on a θ invariant set of full measure. Considering X2 = {0}, then the set G consists of all random variables g such that g(ω ) ∈ G(ω ). Note that X1 is separable such that we can apply Theorem 3.1.14. We can also set log K(t, ω ) =
t 0
k(θr ω )dr.
Then, by the Birkhoff ergodic theorem, there exists a set of full measure so that lim
t→∞
1 1 log K(t, ω ) = lim log K(t, θ−t ω ) = Ek < 0. t→∞ t t
We can then restrict our metric dynamical system to this θ invariant set such that we have the convergence for all ω with restrict to this new set. Another tool we need in the following is the multiplicative ergodic theorem (see Arnold [4, Chapter 3]). We consider the linear differential equation du = B(θt ω )u dt
(4.2.4)
where B is a random d × d matrix. This equation generates a linear RDS denoted by ψ with time set R (see Arnold [4] Example 3.4.15). Theorem 4.2.4. Let B ∈ Rd ⊗ Rd be a random matrix contained in L1 (Ω ) (i.e., EB < ∞), where θ is supposed to be ergodic. Then there exist a θ –invariant set Ω of full measure, nonrandom numbers p ∈ N, p ≤ d, −∞ < λ p < λ p−1 < · · · < λ1 < ∞ and d1 , · · · , d p ∈ N and random linear spaces E1 (ω ), · · · , E p (ω ) of nonrandom dimension d1 , · · · , d p such that Rd = E1 (ω ) ⊕ · · · ⊕ E p (ω ) for ω ∈ Ω . The spaces Ei are invariant with respect to the RDS ψ :
ψ (t, ω )Ei (ω ) = Ei (θt ω ) for i = 1, · · · , p,
ω ∈ Ω ,
t ∈R
4.2 Master–Slave Synchronization for Equations with a Random Linear Part
297
and lim
t→±∞
log ψ (t, ω )x = λi t
if and only if x ∈ Ei (ω ) \ {0}
for ω ∈ Ω . The spaces Ei depend measurably on ω . In particular, there exist measurable projections onto these spaces. The condition on B is sufficient for the so-called integrability condition (see Arnold [4, Example 3.4.15]). Restricting the above RDS to Ω , we have for all ω the above convergence. The random spaces Ei are called Oseledets spaces. Let us consider as an example the following two-dimensional linear random differential equation generated by a (ω ) + L(ω ) L(ω ) B(ω ) = 1 . (4.2.5) a2 (ω ) − L(ω ) −L(ω ) Lemma 4.2.5. Suppose that B ∈ L1 (Ω ). In addition, we suppose that E(a1 + a2 ) < 0
(4.2.6)
and that the gap condition a2 (ω ) − a1 (ω ) − 4L(ω ) > 0,
L(ω ) ≥ 0
(4.2.7)
holds on a θ invariant set of full measure. Then, the RDS ψ generated by (4.2.4) has two different Oseledets spaces. The Oseledets space E1 (related to the biggest Lyapunov exponent) has an angle between π /4 and π /2. The second Lyapunov exponent λ2 is negative. The angle of the Oseledets space is in the interval [0, π /4]. We consider the dynamics of the linear system projected onto the unit sphere. The angle α = arctan(W /V ) where ψ = (V,W ) satisfies the following differential equation
α = (B(θt ω )(cos α , sin α )T , (− sin α , cos α )T ) 1 = (a2 (θt ω ) − a1 (θt ω ) − 2L(θt ω )) sin(2α ) − L(θt ω ) 2
(4.2.8)
see Arnold [4, p. 278]. Consider this system for t ≥ 0. Suppose that α = π /2. Then, α ≤ 0. By the condition in (4.2.7) for α = π /4 we have α ≥ 0. Hence, the sector [π /4, π /2] is invariant such that there is a stationary solution αˆ (θt ω ) to (4.2.8) representing an Oseledets space. In a similar manner for t ≤ 0, we see that the interval [0, π /4] is invariant, containing a solution αˇ (θt ω ) to (4.2.8), representing another Oseledets space. αˇ = αˆ = π /4 is excluded by (4.2.7). Since there are at most two Oseledets spaces, each of these intervals contain exactly one Oseledets space. By
298
4 Master–Slave Synchronization in Random Systems
Birkhoff’s and Liouville’s theorem (Arnold [4, Theorem 2.2.2]), we have that 1 1 λ1 + λ2 = lim log |detψ (t, ω )| = lim t→±∞ t t→±∞ t
t 0
tr B(θτ ω )d τ = E(a1 + a2 ) < 0
such that λ2 is negative. It follows that αt tends to αˇ (θt ω ) for t → −∞ and hence αt tends to αˆ (θt ω ) for t → ∞. Let now ψ be the linear RDS ψ generated by the random matrix B in (4.2.5). By the invariance of the Oseledets spaces we can define linear one-dimensional RDS u1 , u2 describing the dynamics along E1 , E2 respectively. We then have u1 (0, ω ) = 1, u2 (0, ω ) = 1. In particular, we have lim
t→±∞
1 log ui (t, ω ) = λi . t
(4.2.9)
Let (ei1 (ω ), ei2 (ω )) be a vector of length one describing the one-dimensional Oseledets space Ei (ω ). Then, the RDS ψ can be presented as follows:
ψ (t, ω ) =
−1 u1 (t, ω )e11 (θt ω ) u2 (t, ω )e21 (θt ω ) e11 (ω ) e21 (ω ) u1 (t, ω )e12 (θt ω ) u2 (t, ω )e22 (θt ω ) e12 (ω ) e22 (ω )
(4.2.10)
Indeed, the matrix is formed by the unit vectors is non singular on a θ invariant set of full measure, which follows by Theorem 4.2.4.
4.2.2 The Random Evolution Equation Let (Ω , F , P, θ ) be an ergodic MDS. We consider the random linear evolution equation du + A(θt ω )u = 0. dt
(4.2.11)
We would like to formulate conditions on A(ω ) such that these operators generate an RDS U of linear continuous mappings in some separable Hilbert space X: U(t + τ , ω ) = U(t, θτ ω )U(τ , ω ),
t, τ ≥ 0,
U(0, ω ) = idX .
The operators A(ω ) could be unbounded. We assume that for every ω the operator A(ω ) is the generator of a strongly continuous analytic semigroup with the same domain DA ⊂ X. Conditions allowing us to conclude the existence of an RDS can be found in Amann [3]. To apply the statement of these results we denote A(θt ω ) by Aω (t). For the following, we set JΔ = {(t, s) ∈ R × R, s ≤ t}
4.2 Master–Slave Synchronization for Equations with a Random Linear Part
299
and let H (DA , X) ⊂ L(DA , X) be the set of generators of strongly continuous analytic semigroups on X with domain DA , where DA is densely and continuously embedded in X. These Hilbert spaces are assumed to be separable. Let J be a nonsinglepoint interval. Cρ (J,Y ), ρ ∈ (0, 1) for a subset Y of a Banach space Y denotes the set of all continuous functions from J to Y such that for every t ∈ J there exists a neighborhood such that the restriction of a function space to this neighborhood has a finite ρ –H¨older norm. In particular, functions from this set are H¨older continuous on compact intervals. We call an operator A(ω ) ∈ L(DA , X) strongly measurable if for every x ∈ DA the mapping ω → A(ω )x ∈ X is measurable. Theorem 4.2.6. Assume that for every ω ∈ Ω , t ≥ 0 we have that A(θt ω ) =: Aω (t) ∈ L(DA , X) is strongly measurable. In addition, suppose that Aω is local ρ H¨older continuous, namely Aω ∈ Cρ (R, H (DA , X)), and that for every ω ∈ Ω , the resolvent set ρ (−A(ω )) contains R+ . Then there exists a family of operators Uω ∈ C(JΔ , Ls (X)) . In addition, t → Uω (t, s)u0 solves du + Aω (t)u = 0, dt
u(s) = u0 ∈ X
on (s, ∞). Moreover, Uω has the semiflow property Uω (s, s) = idX ,
Uω (t, s) = Uω (t, τ )Uω (τ , s),
s ≤ τ ≤ t.
Finally, the operators t → Uω (t, s) have an exponential growth for t → ∞ and any s ∈ R. Here, Ls (X) denotes the space of linear continuous operators equipped with the strong operator convergence. For a presentation of these results, we refer the reader to Amann [3] (Theorem 4.4.1 and in particular Corollary 4.4.2). We define U(t, ω ) := Uω (t, 0). Then these operators have the cocycle property, since Aω (t) = Aθs ω (t − s). We have to deal with the measurability of U to obtain an RDS. For this purpose, let us consider the Yoshida approximations of A(ω ) given by Aε (ω ) = A(ω )(id + ε A(ω ))−1 ,
ε > 0,
which is an operator in L(X). For the measurability of this kind of operator we refer the reader to Skorokhod [155]. In particular, we have that for y from the Hilbert space X
ω → (id + ε A(ω ))−1 y
300
4 Master–Slave Synchronization in Random Systems
is measurable and since B(DA ) = B(X) ∩ DA and since ω → A(ω ) is strongly measurable in L(DA , X), we obtain the strong measurability of the resolvent. Studying the equation ε
t
u (t) + 0
Aε (θr ω )uε (r)dr = x ∈ X
gives us the solution U ε (t, ω )x. Then, we have the measurability of ω → U ε (t, ω )x ∈ B(X) by Picard iteration (see Pazy [127, p. 127]). Indeed, by Amann [3, Section II.6.2], the mapping
τ → Aε (θτ ω ) is H¨older continuous and hence bounded on compact intervals. Now we know from Amann [3, p. 75] that for all t, ω , x we have that lim U ε (t, ω )x −U(t, ω )x = 0.
ε →0
For fixed x ∈ X and fixed ω we have that the mapping R+ t → U(t, ω )x ∈ X is continuous by 4.2.6. Then, we obtain that (t, ω ) → U(t, ω )x is measurable where we apply Lemma 3.1.2. On the other hand, X x → U(t, ω )x ∈ X is continuous, and applying Lemma 3.1.2 again, we have the desired measurability; thus, the U is a linear RDS on X. Remark 4.2.7. We can assume in addition more general conditions for the resolvent set of A. In particular, we can assume that we have a c > 0 such that for every ω the resolvent set of A(ω ) denoted by ρ (−A(ω )) is contained in the interval (c, ∞). Then, following Amann [3, Section II.6], we obtain the existence of an RDS considered above. We can also derive an exponential growth estimate for U(t, ω ) (see Amann [3, p. 68]). We consider the following example. Let X = L2 (O) where O is a bounded domain with sufficiently smooth boundary in Rd . We consider the partial differential operator A(ω , x, D) = − ∑ Dα (aαβ (ω , x)Dβ )3 |α |=|β |=1
3
α , β are multi-indices.
4.2 Master–Slave Synchronization for Equations with a Random Linear Part
301
satisfying homogeneous boundary conditions u = 0 on ∂ O. The domain of this operator is H 2 (O) ∩ H01 (O). In addition, we assume that A is uniformly strongly elliptic. There exists a nonrandom constant K1 > 0 such that for all ω ∈ Ω
∑
|α |=|β |=1
aαβ (ω , x)ξ α ξ β ≥ K1 |ξ |2
¯ for x ∈ O,
ξ ∈ Rd .
In addition, we assume that the coefficients of A(ω , x, D) are random such that ¯ and there exists a K2 ≥ 0 such that aαβ (θt ω , ·) ∈ C1 (O) |Dα aαβ (θt ω , x) − Dα aαβ (θs ω , x)| ≤ K2 |t − s|ρ
¯ |α | = 0, 1, |t − s| ≤ 1. for all x ∈ O,
Then, A(ω , x, D) generates a continuous RDS U on X We now study the nonlinear evolution equation du + A(θt ω )u = F(θt ω , u), dt
u(0) = u0 ∈ X.
(4.2.12)
The interpretation of this equation is in a mild sense. Let U be the linear RDS on X generated by A. The random process u ∈ C([0, T ], X) is called a mild solution to (4.2.12) if u(t) = U(t, ω )u0 +
t 0
U(t − r, θr ω )F(θr ω ,U(r))dr
(4.2.13)
for t ∈ [0, T ]. Equation (4.2.13) generates under particular conditions on A, F an RDS. In particular, we have the following theorem. Theorem 4.2.8. Suppose that A(ω ) ∈ H (E1 , E0 ) and satisfies the H¨older condition from Theorem 4.2.6. In addition, assume that F is a Carath´eodory mapping on X such that F(ω , u1 ) − F(ω , u2 )X ≤ L(ω )u1 − u2 X for ω ∈ Ω , u1 , u2 ∈ X and τ → L(θτ ω ) is integrable on any compact interval. Then, (4.2.13) has a unique solution generating an RDS on X. The proof is straightforward. According to the Banach fixed point theorem we can derive a unique local solution. By the Gronwall lemma and the exponential growth condition on U, we can derive the existence of a global solution on any interval [0, T ]. We obtain the cocycle property from the fact that U is a linear RDS and the concatenation property. The measurability follows by the fact that the solution can be constructed by Picard iteration. Details of the existence of such an RDS generated by the above equation can be found in Caraballo et al. [21] and Chueshov/Schmalfuss [53]. In the following, we will deal with a system of coupled equations
302
4 Master–Slave Synchronization in Random Systems
du1 ˜ +A1 (θt ω ) = F1 (θt ω , u1 , u2 ) dt du2 ˜ +A2 (θt ω ) = F2 (θt ω , u1 , u2 ) dt
(4.2.14)
in X = X1 × X2 . Introducing the operators A˜ 1 (ω ) 0 F1 (ω , u1 , u2 ) A(ω ) = ω , u , u ) = , F( 1 2 F2 (ω , u1 , u2 ) 0 A˜ 2 (ω ) with u = (u1 , u2 ) ∈ X1 × X2 , we can consider this system in the sense of (4.2.12), where the coefficients A, F satisfy the conditions formulated in Theorem 4.2.8. In particular, A generates the linear RDS U. We consider the projections Q, P with respect to X1 , X2 of norm one. For the following we can assume the exponential dichotomy condition. There exist continuous projections Q, P related to the splitting of the phase space X = X1 × X2 commuting with U: QU(t, ω ) = U(t, ω )Q = U1 (t, ω ),
PU(t, ω ) = U(t, ω )P = U2 (t, ω ).
In particular U1 and U2 define linear RDS on X1 and X2 respectively. We consider a U2 (t, ω ) = U(t, ω )P, which is defined for t ∈ R. More precisely, U2 is a linear RDS with time set R. In particular, one could assume that A2 is a finite-dimensional operator defined on the finite-dimensional space X2 . We assume the existence of random variables a1 , a2 ∈ L1 such that t
U1 (t, ω ) ≤ e
0 a1 (θs ω )ds
U2 (t, ω ) ≤ e
0 a2 (θs ω )ds
t
for t ≥ 0 for t ≤ 0.
(4.2.15)
It follows from assumptions of Lemma 4.2.5 that a2 (ω ) > a1 (ω ). We now consider the complete non-linear equation (4.2.14). Instead of the above assumption on F, let us assume similarly F1 (ω , u1 ) − F1 (ω , u2 ) ≤ L(ω )u1 − u2 , F2 (ω , u1 ) − F2 (ω , u2 ) ≤ L(ω )u1 − u2
(4.2.16)
for any ω ∈ Ω and u1 , u2 ∈ X where the Lipschitz constant L(ω ) depends on ω such that b L(θs ω )ds < ∞ for − ∞ < a < b < ∞. a
4.2 Master–Slave Synchronization for Equations with a Random Linear Part
303
For our consideration, we also need that sup F1 (ω , x) ≤ f1 (ω ),
sup F2 (ω , x) ≤ f2 (ω )
x∈X
(4.2.17)
x∈X
where f1 , f2 are tempered random variables.
4.2.3 The Random Graph Transform Let X1 , X2 be two separable Hilbert spaces where X2 is supposed to be finite dimensional. We denote by C, the Banach space of continuous mappings from X2 to X1 equipped with the supremum norm. For X = X1 × X2 let Cb0,1 (X2 , X1 ) be the space of uniformly Lipschitz continuous mappings for X2 into X1 as a subset of C : with seminorm γ Lip =
γ (x1+ ) − γ (x2+ ) < ∞. x1+ − x2+ x+ =x+ ∈X2 sup
1
2
The random graph transform is generated by the following system of equations v(t) =U1 (t, ω )γ (w(0)) +
t
w(t) =U2 (t − T, θT ω )y+ −
0
U1 (t − s, θs ω )F1 (θs ω , v(s), w(s))ds
T t
(4.2.18) U2 (t − s, θs ω )F2 (θs ω , v(s), w(s))ds
for t ∈ [0, T ], γ ∈ Cb0,1 (X2 , X1 ), y+ ∈ X2 . In this subsection, the coefficients satisfy the assumptions of the last subsection. An interpretation of the general graph transform is given in Sect. 2.3.2. Assuming for a while that (4.2.18) has a unique solution (v(·), w(·)), we denote by
Φ (T, ω , γ )(y+ ) := v(T ) and Ξ (T, θT ω , γ )(y+ ) := w(0). In particular, we will call Φ the random graph transform. First, we investigate if there exists a unique solution to (4.2.18). Lemma 4.2.9. For every ω ∈ Ω , y+ ∈ X2 and γ ∈ Cb0,1 (X2 , X1 ) there exists a T (ω , γ Lip ) > 0 such that (4.2.18) has a unique solution (v, w) ∈ C([0, T ], X1 × X2 ) for 0 ≤ T ≤ T (ω , γ Lip ). In particular, the mapping y+ → v(T, ω , γ , y+ ) is bounded and continuous. For fixed C > 0, the mapping ω → T (ω ,C) is a positive random variable. The mapping T → v(T, ω , γ , y+ ) is continuous.
304
4 Master–Slave Synchronization in Random Systems
Proof. Let us consider the operator C([0, T ], X1 × X2 ) (v, ¯ w) ¯ → Tγ ,T,ω ,y+ (v, ¯ w) ¯ =: (v, ˆ w) ˆ defined by U1 (t, ω )γ (w(0)) ˆ + 0t U1 (t − s, θs ω )F1 (θs ω , v(s), ¯ w(s))ds ¯ , ¯ w(s))ds ¯ U2 (t − T, θT ω )y+ − tT U2 (t − s, θs ω )F2 (θs ω , v(s),
(4.2.19)
t ∈ [0, T ]. (To evaluate these expressions we have to calculate at first the second term and then we have to plug this expression into the first one.) According to the Lipschitz continuity of F, we obtain for a(ω ) := |a1 (ω )| + |a2 (ω )| sup (wˆ 1 (t) − wˆ 2 (t) + vˆ1 (t) − vˆ2 (t)) t∈[0,T ]
T ≤ e 0 a(θs ω )ds
T
T
L(θs ω )ds(1 + γ Lip e 0 a(θs ω )ds ) 0 T T L(θs ω )ds sup (w¯ 1 (t) − w¯ 2 (t) + v¯1 (t) − v¯2 (t)) + e 0 a(θs ω )ds 0
t∈[0,T ]
=:k(T, ω , γ Lip ) sup (w¯ 1 (t) − w¯ 2 (t) + v¯1 (t) − v¯2 (t)). t∈[0,T ]
(4.2.20) If T is given sufficiently small, then we see that Tγ ,T,ω ,y+ is a contraction where the contraction constant is independent of y+ . This contraction constant k can also be chosen independently of γ if γ Lip is uniformly bounded such that we can write for this constant k(T, ω , γ Lip ). The mapping Tγ ,T,ω ,y+ maps the complete space C([0, T ], X2 × X1 ) into itself. Hence, there exists a unique fixed point (v, w). By the independence of the contraction constant of y+ we can see by the parameter version of the Banach fixed point theorem (see Zeidler [169, Chapter 1]), that y+ → v(T, ω , γ )(y+ ) is continuous. Similarly, we can find the continuity of v with respect to T . We define for positive C > 0 the expression T (ω ,C) by 1 k(T (ω ,C), ω ,C) = . 2
(4.2.21)
Since T → k(T, ω ,C) is strongly increasing (assuming that L(ω ) is chosen positive), it follows 1 {ω ∈ Ω : T (ω ,C) ≤ t} = {ω ∈ Ω : k(t, ω ,C) ≥ } ∈ F 2 such that ω → T (ω ,C) is a random variable. The following theorem is crucial to find random invariant manifolds.
(4.2.22)
4.2 Master–Slave Synchronization for Equations with a Random Linear Part
305
Theorem 4.2.10. Let γ ∗ (ω , ·) ∈ Cb0,1 (X2 , X1 ) where ω → γ ∗ (ω , y+ ) is measurable and satisfies the property
Φ (T, ω , γ ∗ (ω ))(y+ ) = γ ∗ (θT ω , y+ ) for ω ∈ Ω , y+ ∈ X2 provided that Φ and Ξ are well defined by (4.2.18). Then, the graph of γ ∗ is a random invariant Lipschitz manifold for the RDS φ generated by (4.2.14). Proof. Let M be the invariant manifold given by the graph of γ ∗ : M(ω ) = {x+ + γ ∗ (ω , x+ ) : x+ ∈ X2 }. By the definition of Ξ , we can conclude
Ξ (T, θT ω , γ ∗ (ω ))(y+ ) = x+
if and only if Pφ (T, ω , x+ + γ ∗ (ω , x+ )) = y+ .
Then, the construction of the graph transform allows us to conclude
φ (T, ω , x+ + γ ∗ (ω , x+ )) = Pφ (T, ω , x+ + γ ∗ (ω , x+ )) + Qφ (T, ω , x+ + γ ∗ (ω , x+ )) =y+ + Qφ (T, ω , Ξ (T, θT ω , γ ∗ (ω ))(y+ ) + γ ∗ (ω , Ξ (T, θT ω , γ ∗ (ω ))(y+ ))) =y+ + Φ (T, ω , γ ∗ (ω ))(y+ ) = y+ + γ ∗ (θT ω , y+ ) ∈ M(θT ω ).
Now we show the local cocycle property of the graph transform. Lemma 4.2.11. For γ ∈ Cb0,1 (X2 , X1 ), we assume that Φ (T1 , ω , γ ) =: μ ∈ Cb0,1 (X2 , X1 ) exists. In addition, we suppose that Φ (T2 , θT1 ω , μ ) ∈ Cb0,1 (X2 , X1 ) exists. Then, the following property holds
Φ (T1 + T2 , ω , γ ) = Φ (T2 , θT1 ω , Φ (T1 , ω , γ )). Proof. Let (v1 , w1 ) = (v1 (t, ω , γ , y+ ), w1 (t, ω , γ , y+ )), 0 ≤ t ≤ T1 such that
μ (·) := v1 (T1 , ω , γ , ·) = Φ (T1 , ω , γ )(·) and (v2 , w2 ) = (v2 (t, θT1 ω , μ , z+ ), w2 (t, θT1 ω , μ , z+ )) for t ∈ [0, T2 ]. We define
v1 (t, ω , γ , w2 (0, θT1 ω , μ , z+ )) : t ∈ [0, T1 ] v(t, ω , γ , z ) = 2 + : t ∈ (T1 , T1 + T2 ] v (t − T1 , θT1 ω , μ , z ) 1 2 + w (t, ω , γ , w (0, θT1 ω , μ , z )) : t ∈ [0, T1 ] w(t, ω , γ , z+ ) = . : t ∈ (T1 , T1 + T2 ] w2 (t − T1 , θT1 ω , μ , z+ ) +
(4.2.23)
306
4 Master–Slave Synchronization in Random Systems
We are going to show that these functions (v, w) satisfy (4.2.18) on [0, T1 + T2 ]. For t ∈ [0, T1 ], we have by the cocycle property of the linear RDS U2 U2 (t−T1 , θT1 ω )U2 (−T2 , θT1 +T2 ω )z+ +U2 (t − T1 , θT1 ω ) t
+ T1
0 T2
U2 (−s, θT1 +s ω )F2 (θT1 +s ω , v2 , w2 )ds
U2 (t − s, θs ω )F2 (θs ω , v1 , w1 )ds
=U2 (t − T1 − T2 , θT1 +T2 ω )z + +
t
+ T1
T1 T1 +T2
U2 (t − s, θs ω )F2 (θs ω , v, w)ds
U2 (t − s, θs ω )F2 (θs ω , v, w)ds
=U2 (t − T1 − T2 , θT1 +T2 ω )z+ +
t T1 +T2
U2 (t − s, θs ω )F2 (θs ω , v, w)ds.
Similarly, we get the second equation of (4.2.18). However, we need that the definition of v in (4.2.23) is continuous at T1 . By the definition of w we have that
γ (w(0, ω , γ , z+ )) = γ (w1 (0, ω , γ , w2 (0, θT1 ω , μ , z+ ))) and v1 (T1 , ω , γ , w2 (0, θT1 ω , μ , z+ )) = μ (w2 (0, θT1 ω , μ , z+ )) = v2 (0, θT1 ω , μ , z+ ). Hence, v(T1 + T2 , ω , γ , z+ ) =U1 (T1 + T2 , ω )γ (w(0, ω , γ , z+ )) +
T1 +T2 0
U1 (T1 + T2 − s, θs ω )F1 (θs ω , v, w)ds
such that we have the local cocycle property for Φ . We now consider the two-dimensional random linear differential equation (4.2.4) generated by the matrix (4.2.5). The dynamics of this equation can be described by the multiplicative ergodic theorem 4.2.4. In particular, we assume that the assumption of Lemma 4.2.5 holds. The Oseledets spaces have the structure mentioned in this lemma. Hence, we can represent the Oseledets spaces E1 (ω ), E2 (ω ) by unit vectors e (ω ) e (ω ) e1 (ω ) = 11 , e2 (ω ) = 21 e12 (ω ) e22 (ω ) with positive elements because the angles for these spaces are considered in (0, π /2) for √ a θ invariant set of full measure. In particular, we can suppose that e12 (ω ) ∈ [1/ 2, 1] for a θ invariant set of full measure. On E1 , E2 , we have a linear onedimensional RDS u1 , u2 such that lim
t→±∞
1 log ui (t, ω ) = λi t
for i = 1, 2.
(4.2.24)
4.2 Master–Slave Synchronization for Equations with a Random Linear Part
307
By Theorem 4.2.4, we can suppose that ui (t, ω ) > 0. ui (t) never crosses zero. The system (4.2.4) can be interpreted as a version of the system in (4.2.12) with phase space X = R2 = R × R if we set a1 (ω ) 0 L(ω )W + L(ω )V A(ω ) = , F(ω ,W,V ) = . 0 a2 (ω ) −L(ω )W − L(ω )V The operator F(ω , ·) is Lipschitz continuous. The Lipschitz constants are the same as in (4.2.16). We now formulate the graph transform for the problem (4.2.4). Since this is a linear problem, we transform instead of Lipschitz continuous graphs the graphs of linear mappings from R into R characterized by elements Γ from R. We consider the problem (4.2.4) with boundary conditions V (0) = Γ W (0) +C ∈ R,
W (T ) = Y ∈ R.
(4.2.25)
We obtain the graph transform generated by (4.2.4) setting C = 0 denoted by Ψ . Our knowledge of the Oseledets spaces for (4.2.4) allows us to determine a random invariant linear manifold for (4.2.4), (4.2.25) without the random graph transform.
Γ ∗ (ω ) := e11 (ω )/e12 (ω ) which is well defined because e12 (ω ) > 0. Lemma 4.2.12. The graph of R X + → Γ ∗ (ω )X + ∈ R defines a random invariant manifold for (4.2.4), (4.2.25) in the phase space R2 . Proof. According to the invariance of the space E1 , we have that ∗ ∗ u1 (T, ω ) Γ (ω ) Γ ( θT ω ) e12 (θT ω ) ψ (T, ω ) = . 1 1 e12 (ω ) We multiply the last equation by e12 (ω ) Y + = W (0, ω , Γ ∗ (ω ))Y + =: W (0) = X + . u1 (T, ω )e12 (θT ω ) Lemma 4.2.13. (i) The system (4.2.4), (4.2.25) has a unique solution on [0, T (ω , Γ )] where T (ω , Γ ) is defined in Lemma 4.2.9. (ii) Suppose that 0 ≤ Γ1 < Γ ∗ (ω ) and W (t, ω , Γ ), V (t, ω , Γ ) are solutions of (4.2.4), (4.2.25) for Γ = Γ1 , Γ = Γ ∗ (ω ) on [0, T (ω , Γ1 )]. Then, W (t, ω , Γ1 ) ≤ W (t, ω , Γ ∗ (ω )),
V (t, ω , Γ1 ) ≤ V (t, ω , Γ ∗ (ω )).
308
4 Master–Slave Synchronization in Random Systems
(iii) Suppose that C = 0 in (4.2.25). Then, we have Φ (T, ω , γ )Lip ≤ Ψ (T, ω , γ Lip )(1) for 0 ≤ T ≤ T (ω , γ Lip ). Proof. (i) Applying exactly the same technique as in Lemma 4.2.9, we find a fixed point for the operator defined in (4.2.19). We obtain the same contraction constant k(T, ω , Γ ) as in the proof of Lemma 4.2.9 if we replace γ Lip by Γ . The constant C does not have any influence on this contraction constant. Hence, we can define T (ω , ·) by (4.2.22). (ii) Consider the difference Δ W = W2 −W1 , Δ V = V2 −V1 . To obtain an expression for Δ V, Δ W we can apply the ansatz in (4.2.27) below such that
Δ W (T, ω ) = 0, Y = 1, Δ V (0, ω ) = W (0, ω , Γ ∗ (ω ))Γ ∗ (ω ) −W (0, ω , Γ1 )Γ1 = Δ W (0)Γ ∗ (ω ) +W (0, ω , Γ1 )(Γ ∗ (ω ) − Γ1 ). Solving the linear system of equations for c1 , c2 , we obtain the desired estimate. For this, we have to show that W (0, ω , Γ1 ) ≥ 0 when Y = 1, C = 0. Indeed, if there is a t ∈ (0, T ) so that W (t) = 0, then (V (s),W (s)) = (0, 0) for s ∈ [t, T ] by the uniqueness of the solution of the linear system and then by the local cocycle property of the RDS given by Γ → Ψ (T, ω , Γ ) for s ∈ [0, T ], so that W (0) < 0 is not possible. (iii) We set Γ1 = γ Lip , C = 0. Then, we construct the solution of (4.2.18) and (4.2.4), (4.2.25) by successive iteration on [0, T (ω , γ Lip )] starting with V 0 (t) := γ Lip ,
v0 (t) := γ ,
W 0 (t) := 1,
w0 (t) := y+
and thus on (V i ,W i ) and (vi , wi ). To get (vi , wi ), we have to iterate the operator Tγ ,T,ω ,y+ defined in the proof of Lemma 4.2.9 and similarly for (V i ,W i ). We have for i = 0 + i vi (T, ω , γ )(y+ 1 ) − v (T, ω , γ )(y2 ) ≤ V i (T, ω , γ Lip )(1). + y+ − y 1 2
The structure of the matrix B ensures that this inequality remains true for every i ∈ N. Hence, we obtain (iii) for i → ∞. Applying the variation of the constants method to (4.2.4) with the boundary condition in (4.2.25), we obtain for the second equation t
W (t) = e
T a2 (θq ω )dq
T
Y+
t
e
r a2 (θq ω )dq
(LW (r) + LV (r))dr.
t
Now we are in a position to show the global cocycle property for the graph transform Φ restricted to particular graphs.
4.2 Master–Slave Synchronization for Equations with a Random Linear Part
309
Lemma 4.2.14. Let G be the set of Carath´eodory mappings such that X2 x2 → g(ω , x2 ) is in C for ω ∈ Ω . Let G(ω ) := {γ ∈ C : γ (ω )Lip ≤ Γ ∗ (ω )}, where Γ ∗ is given in Lemma 4.2.12. Then, for any ω ∈ Ω , the set G is complete in the sense of Theorem 4.2.1. Proof. Let (γn (ω )) be a Cauchy sequence in G(ω ) ⊂ C. This sequence has a limit + γ (ω ). Then, we have for every y+ 1 = y2 ∈ X2 : + γn (ω , y+ 1 ) − γn (ω , y2 ) ≤ Γ ∗ (ω ). + n→∞ y+ − y 1 2
lim
This inequality remains true for the limit; hence, γ (ω )Lip ≤ Γ (ω ). Straightforwardly, (ω , x2 ) → γ (ω , x2 ) is a Carath´eodory mapping. G is not empty because it contains the mappings X2 → constant ∈ X1 . Now, we prove the forward invariance of G: Theorem 4.2.15. We have that Φ (T, ω , γ (ω )) is well defined for T ≥ 0 and
Φ (T, ω , γ (ω )) ∈ G(θT ω ). In addition, the graph transform is measurable in the following sense: (T, ω ) → Φ (T, ω , γ (ω ))(y+ ) ∈ X1 is (B(R+ ) ⊗ F , B(X1 )) measurable for fixed y+ , γ . Proof. We consider the iteration of random times introduced in (4.2.21): Ti (ω ) = T (θTi−1 (ω ) ω , Γ ∗ (θTi−1 (ω ) ω )) + T (θTi−2 (ω ) ω , Γ ∗ (θTi−2 (ω ) ω )) + · · · + T (ω , Γ ∗ (ω )). By Lemma 4.2.11, we have the cocycle property on [0, T (θTi−1 (ω ) ω , Γ ∗ (θTi−1 (ω ) ω ))]. A comparison argument according to Lemma 4.2.13 and 4.2.12, we can conclude that
Φ (T, ω , γ ) ∈ G(θT ω ), γ ∈ G(ω ) for T ∈ [0, Ti−1 (ω , Γ ∗ (ω ))] and similar on [0, T (θTi−1 (ω ) ω , Γ ∗ (θTi−1 (ω ) ω ))]. We can extend the invariance property to any of the intervals [0, Ti (ω )]. Then, we can apply Lemma 4.2.11 to that interval. We are done when we can show that limi→∞ Ti (ω ) = ∞ for all ω contained in a θ invariant set of full measure. Suppose that this limit is equal to c < ∞. According to the definition of k in (4.2.20) and T in (4.2.21) and the fact that L ∈ L1 (Ω ) and t → e11 (θt ω )/e12 (θt ω ) is continuous, it follows that
310
4 Master–Slave Synchronization in Random Systems
c 0
(|a1 (θs ω )| + |a2 (θs ω )|)ds = ∞.
(4.2.26)
On the other hand, we know from Birkhoff’s ergodic theorem that 1 t→±∞ t
t
lim
0
(|a1 (θs ω )| + |a2 (θs ω )|)ds = E(|a1 | + |a2 |) < ∞.
for almost all ω ; hence, (4.2.26) is a contradiction. Now we derive the existence of a random fixed point for the RDS Φ , which then allows us to conclude that the RDS generated by (4.2.14) has a random inertial manifold. Lemma 4.2.16. Suppose that the assumptions (4.2.6), (4.2.7) are satisfied. Then there exists a process (t, ω ) → K(t, ω ) such that Φ (T, ω , γ1 (ω )) − Φ (T, ω , γ2 (ω ))C ≤ K(T, ω )γ1 (ω ) − γ2 (ω )C for γ1 , γ2 ∈ G and K satisfies the Assumption 4.2.2 (3) of Theorem 4.2.1, where λ = λ2 is the smaller (negative) Lyapunov exponent for the system in (4.2.4). Proof. Let γ1 , γ2 ∈ G . By Theorem 4.2.15 for γ = γi , i = 1, 2 the system (4.2.18) has unique solutions (vi , wi ), i = 1, 2 on any interval [0, T ]. For the difference (Δ w, Δ v) := (v1 − v2 , w1 − w2 ), we have
Δ v(t) = U1 (t, ω )(γ1 (ω , w1 (0)) − γ2 (ω , w2 (0))) t
+ 0
U1 (t − s, θs ω )(F1 (θs ω , w1 (s) + v1 (s)) − F1 (θs ω , w2 (s) + v2 (s)))ds
Δ w(t) = −
T t
U2 (t − s, θs ω )(F2 (θs ω , w1 (s) + v1 (s)) − F2 (θs ω , w2 (s) + v2 (s)))ds.
Taking the · C -norm of these expressions, we have for the first part of the righthand side of the first equation γ1 (w1 (0)) − γ2 (w2 (0))C ≤ γ1 (w1 (0)) − γ2 (w1 (0))C + γ2 (w1 (0)) − γ2 (w2 (0))C ≤ γ1 − γ2 C + γ2 Lip Δ w(0)C . Because the Lipschitz constant of F1 , F2 is given by L, we can estimate Δ vC , Δ wC by the solution of (4.2.4) with initial conditions (4.2.25) for C = γ1 − γ2 C and γ2 (ω )Lip ≤ Γ ∗ (ω ) = e11 (ω )/e12 (ω ). This estimate is based on an argument similar to the proof of Lemma 4.2.13 (iii). Because u1 , u2 are linear RDS by the ansatz
4.2 Master–Slave Synchronization for Equations with a Random Linear Part
e (θ ω ) e (θ ω ) V (t) + c2 u2 (t, ω ) 21 t . = c1 u1 (t, ω ) 11 t W (t) e12 (θt ω ) e22 (θt ω )
311
(4.2.27)
We determine the constants c1 , c2 in this linear system by taking the boundary conditions W (T ) = 0,
V (0) =
e11 (ω ) + γ1 − γ2 C . e12 (ω )
into account. Then, V (T, ω ) is given by e12 (ω ) D(θT ω ) γ1 (ω ) − γ2 (ω )C e12 (θT ω ) D(ω ) =K(T, ω )γ1 (ω ) − γ2 (ω )C
V (T, ω ) =u2 (T, ω )
where D(ω ) = det
(4.2.28)
e11 (ω ) e21 (ω ) . e12 (ω ) e22 (ω )
Since u2 is a multiplicative RDS, so is K. We have by Theorem 4.2.4 that D(ω ) = 0, by the definition of our Oseledets spaces D(ω ) > 0. We now study the exponential growth of V (T, θ−T ω ) for T → ∞. We just know from (4.2.9) log u2 (−T, ω ) = λ2 . T →∞ −T lim
On the other hand, the inversion formula of cocycles gives us u2 (−T, ω )u2 (T, θ−T ω ) = 1 such that lim
T →∞
log u2 (T, θ−T ω ) = λ2 . T
√ Since e12 (ω ) ∈ [1/ 2, 1] the term e12 (ω )/e12 (θ−T ω ) is well defined. Now, we obtain from (4.2.10) log |detψ (T, θ−T ω )| = log u1 (T, θ−T ω ) + log u2 (T, θ−T ω ) + log D(ω ) − log D(θ−T ω ) and by Liouville’s formula (see Arnold [4, Chapter 2]) log |detψ (T, θ−T ω )| = whereby Birkhoff’s theorem
T 0
tr B(θτ −T ω )d τ
(4.2.29)
312
4 Master–Slave Synchronization in Random Systems
1 T →∞ T
T
lim
0
1 0 tr B(θτ ω )d τ T →∞ T −T −T 1 = lim tr B(θτ ω )d τ = λ1 + λ2 . T →∞ −T 0
tr B(θτ −T ω )d τ = lim
However, we can modify the MDS so that this convergence holds for all ω . Together with (4.2.29), we have that D(ω )/D(θ−T ω ) has subexponential growth for T → ∞ and similarly for D(θT ω )/D(ω ). Hence (4.2.24) gives the conclusion. Lemma 4.2.17. For γ ∈ G , the mapping
ω → sup Φ (s, θ−s ω , γ (θ−s ω ))C s∈[0,1]
is tempered. Proof. We consider the presentation of Φ by v (see (4.2.18)). Note that for γ ∈ G , the mapping
\[
\omega \mapsto \sup_{s\in[0,1]}\|\gamma(\theta_{-s}\omega)\|_C
\]
is tempered. In addition, for the temperedness of $\omega\mapsto \sup_{s\in[0,1]} U_1(s,\theta_{-s}\omega)$, we have to show, according to (3.1.2), that
\[
\sup_{q\in[0,1]}\sup_{s\in[0,1]}\log^+ U_1(s,\theta_{-s+q}\omega) \le H(\omega), \qquad \mathbb{E}H < \infty .
\]
We obtain from (4.2.18) that the left-hand side of the last formula is bounded by
\[
\sup_{q\in[0,1]}\sup_{s\in[0,1]}\int_0^s a_1(\theta_{\tau-s+q}\omega)\,d\tau
\le \int_{-1}^{1}|a_1(\theta_{-\tau}\omega)|\,d\tau =: H(\omega)\in L^1(\Omega).
\]
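A short check of this bound: for $s,q\in[0,1]$, the substitution $r=\tau-s+q$ gives
\[
\int_0^s a_1(\theta_{\tau-s+q}\omega)\,d\tau = \int_{q-s}^{q} a_1(\theta_{r}\omega)\,dr \le \int_{-1}^{1}|a_1(\theta_{r}\omega)|\,dr ,
\]
since $[q-s,q]\subset[-1,1]$; the last integral equals $\int_{-1}^{1}|a_1(\theta_{-\tau}\omega)|\,d\tau$ after the change of variable $\tau=-r$.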
Then, we find for the second term
\[
\sup_{q\in[0,1]}\sup_{s\in[0,1]}\Bigl\|\int_0^s U_1(s-\tau,\theta_{\tau-s+q}\omega)\,F_1(\theta_{\tau-s+q}\omega,v,w)\,d\tau\Bigr\|
\le 2\,e^{\int_{-1}^{1}|a_1(\theta_\tau\omega)|\,d\tau}\int_{-1}^{1} f_1(\theta_q\omega)\,dq .
\]
The last integral is tempered by Remark 3.1.8 and (4.2.17).

We now show that $\Phi$ has a random fixed point.

Lemma 4.2.18. The random graph transform has a random exponentially attracting fixed point $\gamma^*\in G$.

Proof. We are going to apply Theorem 4.2.1. The contraction condition of this theorem holds by Lemma 4.2.16, with $K$ defined in (4.2.28) satisfying the assumptions of that theorem. Theorem 4.2.15 ensures
\[
\Phi(t,\omega,G(\omega)) \subset G(\theta_t\omega).
\]
According to this lemma we can conclude that the RDS generated by (4.2.12) has an invariant manifold with graph $\gamma^*$, which follows from Theorem 4.2.10.

Now we are in a position to formulate the existence result for a random inertial manifold for the RDS $\phi$. Applying Lemma 4.2.17, we have the following main result:

Theorem 4.2.19. Suppose that the coefficients of (4.2.14) satisfy (4.2.15), (4.2.16), (4.2.17), where $a_1, a_2, L$ satisfy (4.2.6) and (4.2.7). Then the RDS generated by (4.2.14) has a random inertial manifold $M$ given by the graph of $m:=\gamma^*:\Omega\times X_2\to X_1$. In particular, we have master–slave synchronization (4.1.3).

Proof. We have stated that the graph transform defines the graph of an invariant manifold for the RDS $\phi$. We have to show that this manifold is exponentially attracting. Let $\omega\mapsto X(\omega)$ be a (tempered) random variable in $X$. Let us define the graph $\gamma_X(\omega,x^+):=QX(\omega)$, which is contained in $G$. We set for every $x^+\in X_2$
\[
y^+ = y^+(T,\omega,x^+) := P\phi(T,\omega,x^+ + QX(\omega)), \qquad
Y(T,\omega) := y^+(T,\omega,PX(\omega))
\]
such that $Q\phi(T,\omega,QX(\omega)+PX(\omega)) = \Phi(T,\omega,\gamma_X)(Y(T,\omega))$. Notice that the mappings $\Phi$ and $\Xi$ exist for any $\gamma\in G$. On the other hand, we have
\[
\|Q\phi(T,\omega,QX(\omega)+PX(\omega)) - m(\theta_T\omega,Y(T,\omega))\| \le \|\Phi(T,\omega,\gamma_X) - m(\theta_T\omega)\|_C ,
\]
where the right-hand side tends to zero exponentially fast by Lemma 4.2.18. This formula is also true for $X(\omega):=x_0\in X$ (see Schmalfuss [148]). In particular, we consider the graph $\{Qx_0 + x^+,\; x^+\in X_2\}$ of the Lipschitz continuous mapping
\[
\gamma_{x_0}(x^+) = Qx_0 .
\]
For every $T>0$, we consider $y^+_T := y^+(T,\omega,Px_0) := P\phi(T,\omega,x_0)$. Then, we set
\[
x^+_T := \Xi(T,\theta_T\omega,m(\omega))(y^+_T)
\]
such that $P\phi(T,\omega,m(\omega,x^+_T)+x^+_T) = y^+_T$. Now, we consider
\begin{align*}
\|Q\phi(T,\omega,x_0) - m(\theta_T\omega,P\phi(T,\omega,x_0))\|
&\le \|Q\phi(T,\omega,x_0) - Q\phi(T,\omega,m(\omega,x^+_T)+x^+_T)\| \\
&\quad + \|m(\theta_T\omega,P\phi(T,\omega,m(\omega,x^+_T)+x^+_T)) - m(\theta_T\omega,P\phi(T,\omega,x_0))\|
\end{align*}
by the invariance of the manifold $M$ given by the graph of $m$. The last term on the right-hand side of the last inequality can be written as
\[
m(\theta_T\omega,y^+_T) - m(\theta_T\omega,y^+_T) = 0 .
\]
For the other term on the right-hand side, we have
\[
\Phi(T,\omega,\gamma_{x_0})(y^+_T) - \Phi(T,\omega,m(\omega,\cdot))(y^+_T) = \Phi(T,\omega,\gamma_{x_0})(y^+_T) - m(\theta_T\omega,y^+_T),
\]
which converges to zero exponentially fast by Lemma 4.2.18.
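Combining the two estimates, the concluding attraction bound can be sketched as
\[
\|Q\phi(T,\omega,x_0) - m(\theta_T\omega,P\phi(T,\omega,x_0))\|
\le \|\Phi(T,\omega,\gamma_{x_0}) - m(\theta_T\omega)\|_C \;\longrightarrow\; 0
\]
exponentially fast as $T\to\infty$, which is the asserted exponential attraction of the manifold $M$.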
References
1. R.A. Adams, Sobolev Spaces. Pure and Applied Mathematics, vol. 65 (Academic [A subsidiary of Harcourt Brace Jovanovich, Publishers], New York/London, 1975). MR 0450957 2. V.S. Afraimovich, S.-N. Chow, J.K. Hale, Synchronization in lattices of coupled oscillators. Physica D 103(1–4), 442–451 (1997). MR 1464257. Lattice Dynamics (Paris, 1995) 3. H. Amann, Linear and Quasilinear Parabolic Problems. Vol. I: Abstract Linear Theory. Monographs in Mathematics, vol. 89 (Birkh¨auser Boston, Inc., Boston, 1995). MR 1345385 4. L. Arnold, Random Dynamical Systems. Springer Monographs in Mathematics (Springer, Berlin, 1998). MR 1723992 5. L. Arnold, I. Chueshov, Order-preserving random dynamical systems: equilibria, attractors, applications. Dynam. Stabil. Syst. 13(3), 265–280 (1998). MR 1645467 6. J.-P. Aubin, Approximation of Elliptic Boundary-Value Problems. Pure and Applied Mathematics, vol. XXVI (Wiley-Interscience [A division of John Wiley & Sons, Inc.], New York/London/Sydney, 1972). MR 0478662 7. A.V. Babin, M.I. Vishik, Attractors of Evolution Equations. Studies in Mathematics and Its Applications, vol. 25 (North-Holland Publishing Co., Amsterdam, 1992). Translated and revised from the 1989 Russian original by Babin. MR 1156492 8. A. Balanov, N. Janson, D. Postnov, O. Sosnovtseva, Synchronization: From Simple to Complex. Springer Series in Synergetics (Springer, Berlin, 2009). MR 2467834 9. V. Barbu, G. Da Prato, The stochastic nonlinear damped wave equation. Appl. Math. Optim. 46(2–3), 125–141 (2002). Special issue dedicated to the memory of Jacques-Louis Lions. MR 1944756
10. P.W. Bates, C.K.R.T. Jones, Invariant manifolds for semilinear partial differential equations. Dynamics reported, vol. 2, in Dynam. Report. Ser. Dynam. Systems Appl., vol. 2 (Wiley, Chichester, 1989), pp. 1–38. MR 1000974 11. H. Bauer, Probability Theory. De Gruyter Studies in Mathematics, vol. 23 (Walter de Gruyter & Co., Berlin, 1996). Translated from the fourth (1991) German edition by Robert B. Burckel and revised by the author. MR 1385460 12. M. Bennett, M.F. Schatz, H. Rockwood, K. Wiesenfeld, Huygens’s clocks. R. Soc. Lond. Proc. Ser. A Math. Phys. Eng. Sci. 458(2019), 563–579 (2002). MR 1898084 13. A. Bensoussan, F. Flandoli, Stochastic inertial manifold. Stochastics Stochastics Rep. 53(1–2), 13–39 (1995). MR 1380488 14. H. Bessaih, M.-J. Garrido-Atienza, V. K¨opp, B. Schmalfuss, M. Yang. Synchronization, stochastic lattice equations, random dynamical systems, random inertial manifolds. Nonlinear Differ. Equ. Appl. 27(36) (2020). https://doi.org/ 10.1007/s00030-020-00640-0 15. P. Biler, Attractors for the system of Schr¨odinger and Klein-Gordon equations with Yukawa coupling. SIAM J. Math. Anal. 21(5), 1190–1212 (1990). MR 1062399 16. P. Boxler, Stochastische zentrumsmannigfaltigkeiten, Ph.D. thesis, Institut f¨ur Dynamische Systeme, Universit¨at Bremen, 1988 (in German) 17. O. Butkovsky, A. Kulik, M. Scheutzow, Generalized couplings and ergodic rates for SPDEs and other Markov models. Ann. Appl. Probab. 30(1), 1–39 (2020). https://doi.org/10.1214/19-AAP1485 18. O. Butkovsky, M. Scheutzow, Couplings via comparison principle and exponential ergodicity of SPDEs in the hypoelliptic setting (2019) 19. T. Caraballo, P.E. Kloeden, The persistence of synchronization under environmental noise. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 461(2059), 2257–2267 (2005). MR 2154449 20. T. Caraballo, I.D. Chueshov, P.E. Kloeden, Synchronization of a stochastic reaction-diffusion system on a thin two-layer domain. SIAM J. Math. Anal. 38(5), 1489–1507 (2006/07). MR 2286016 21. T. Caraballo, J. Duan, K. Lu, B. Schmalfuß, Invariant manifolds for random and stochastic partial differential equations. Adv. Nonlinear Stud. 10(1), 23–52 (2010). MR 2574373 22. R. Carmona, D. Nualart, Random nonlinear wave equations: propagation of singularities. Ann. Probab. 16(2), 730–751 (1988). MR 929075 23. H. Cartan, Formes diff´erentielles. Applications e´ l´ementaires au calcul des variations et a` la th´eorie des courbes et des surfaces (Hermann, Paris, 1967). MR 0231303 24. A.N. Carvalho, M.R.T. Primo, Boundary synchronization in parabolic problems with nonlinear boundary conditions. Dynam. Contin. Discrete Impuls. Syst. 7(4), 541–560 (2000). MR 1795817 25. A.N. Carvalho, H.M. Rodrigues, T. Dłotko, Upper semicontinuity of attractors and synchronization. J. Math. Anal. Appl. 220(1), 13–41 (1998). MR 1612059
26. C. Castaing, M. Valadier, Convex Analysis and Measurable Multifunctions. Lecture Notes in Mathematics, vol. 580 (Springer, Berlin/New York, 1977). MR 0467310 27. S. Cerrai, Second Order PDE’s in Finite and Infinite Dimension: A Probabilistic Approach. Lecture Notes in Mathematics, vol. 1762 (Springer, Berlin, 2001). MR 1840644 28. V.V. Chepyzhov, M.I. Vishik, Attractors for Equations of Mathematical Physics. American Mathematical Society Colloquium Publications, vol. 49 (American Mathematical Society, Providence, 2002). MR 1868930 29. P.L. Cho, Thermoelastic wave propagation in a random medium and some related problems. Int. J. Eng. Sci. 11, 953–971 (1973) 30. S.-N. Chow, W. Liu, Synchronization, stability and normal hyperbolicity. Resenhas 3(1), 139–158 (1997). Workshop on Differential Equations and Non´ linear Analysis (Aguas de Lind´oia) (1996). MR 1474307 31. S.-N. Chow, K. Lu, Invariant manifolds for flows in Banach spaces. J. Differ. Equ. 74(2), 285–317 (1988). MR 952900 32. I.D. Chueshov, An inertial manifold in a problem on nonlinear oscillations of an infinite panel. Ukrain. Mat. Zh. 42(9), 1291–1293 (1990). MR 1093647 33. I.D. Chueshov, Approximate inertial manifolds of exponential order for semilinear parabolic equations subjected to additive white noise. J. Dynam. Differ. Equ. 7(4), 549–566 (1995). MR 1362670 34. I.D. Chueshov, Introduction to the Theory of Infinite-Dimensional Dissipative Systems. University Lectures in Contemporary Mathematics (ACTA, Kharkiv, 1999). MR 1788405 35. I.D. Chueshov, Gevrey regularity of random attractors for stochastic reactiondiffusion equations. Random Oper. Stoch. Equ. 8(2), 143–162 (2000). MR 1765874 36. I. Chueshov, Order-preserving skew-product flows and nonautonomous parabolic systems. Acta Appl. Math. 65(1–3), 185–205 (2001). Special issue dedicated to Antonio Avantaggiati on the occasion of his 70th birthday. MR 1843792 37. I. Chueshov, Monotone Random Systems Theory and Applications. Lecture Notes in Mathematics, vol. 1779 (Springer, Berlin, 2002). MR 1902500 38. I. Chueshov, A reduction principle for coupled nonlinear parabolic-hyperbolic PDE. J. Evol. Equ. 4(4), 591–612 (2004). MR 2105278 39. I. Chueshov, Invariant manifolds and nonlinear master-slave synchronization in coupled systems. Appl. Anal. 86(3), 269–286 (2007). MR 2313574 40. I. Chueshov, Dynamics of Quasi-Stable Dissipative Systems. Universitext (Springer, Cham, 2015). MR 3408002 41. I. Chueshov, Synchronization in coupled second order in time infinitedimensional models. Dyn. Partial Differ. Equ. 13(1), 1–29 (2016). MR 3484078 42. I.D. Chueshov, T.V. Girya, Inertial manifolds and forms for semilinear parabolic equations subjected to additive white noise. Lett. Math. Phys. 34(1), 69–76 (1995). MR 1334036
43. I. Chueshov, S. Kolbasin, Plate models with state-dependent damping coefficient and their quasi-static limits. Nonlinear Anal. 73(6), 1626–1644 (2010). MR 2661346 44. I. Chueshov, I. Lasiecka, Inertial manifolds for von K´arm´an plate equations. Appl. Math. Optim. 46(2–3), 179–206 (2002). Special issue dedicated to the memory of Jacques-Louis Lions. MR 1944759 45. I. Chueshov, I. Lasiecka, Existence, uniqueness of weak solutions and global attractors for a class of nonlinear 2D Kirchhoff-Boussinesq models. Discrete Contin. Dyn. Syst. 15(3), 777–809 (2006). MR 2220748 46. I. Chueshov, I. Lasiecka, Long-time behavior of second order evolution equations with nonlinear damping. Mem. Am. Math. Soc. 195(912), viii+183 (2008). MR 2438025 47. I. Chueshov, I. Lasiecka, Von Karman Evolution Equations: Well-Posedness and long-Time Dynamics. Springer Monographs in Mathematics (Springer, New York, 2010). MR 2643040 48. I. Chueshov, I. Lasiecka, On global attractor for 2D Kirchhoff-Boussinesq model with supercritical nonlinearity. Commun. Partial Differ. Equ. 36(1), 67– 99 (2011). MR 2763348 49. I. Chueshov, I. Lasiecka, Well-posedness and long time behavior in nonlinear dissipative hyperbolic-like evolutions with critical exponents, in HCDTE. Lecture Notes. Part I. Nonlinear Hyperbolic PDEs, Dispersive and Transport Equations. AIMS Series on Applied Mathematics, vol. 6 (American Institute of Mathematical Sciences, Springfield, 2013), p. 96. MR 3340991 50. I.D. Chueshov, A.M. Rekalo, Long-time dynamics of reaction-diffusion equations on thin two-layer domains, in EQUADIFF, 2003 (World Scientific Publishing Co., Inc., Hackensack, 2005), pp. 645–650. MR 2185105 51. I.D. Chueshov, M. Scheutzow, Inertial manifolds and forms for stochastically perturbed retarded semilinear parabolic equations. J. Dynam. Differ. Equ. 13(2), 355–380 (2001). MR 1829602 52. I. Chueshov, M. Scheutzow, On the structure of attractors and invariant measures for a class of monotone random systems. Dyn. Syst. 19(2), 127–144 (2004). MR 2060422 53. I. Chueshov, B. Schmalfuss, Parabolic stochastic partial differential equations with dynamical boundary conditions. Differ. Integr. Equ. 17(7–8), 751–780 (2004). MR 2074685 54. I. Chueshov, B. Schmalfuß, Master-slave synchronization and invariant manifolds for coupled stochastic systems. J. Math. Phys. 51(10), 102702, 23 (2010). MR 2761307 55. I. Chueshov, B. Schmalfuß, Stochastic dynamics in a fluid-plate interaction model with the only longitudinal deformations of the plate. Discrete Contin. Dyn. Syst. Ser. B 20(3), 833–852 (2015). MR 3331680 56. I. Chueshov, J. Duan, B. Schmalfuss, Determining functionals for random partial differential equations. Nonlinear Differ. Equ. Appl. 10(4), 431–454 (2003). MR 2016933
57. I. Chueshov, M. Scheutzow, B. Schmalfuß, Continuity Properties of Inertial Manifolds for Stochastic Retarded Semilinear Parabolic Equations. Interacting Stochastic Systems (Springer, Berlin, 2005), pp. 353–375. MR 2118582 58. I.D. Chueshov, G. Raugel, A.M. Rekalo, Interface boundary value problem for the Navier-Stokes equations in thin two-layer domains. J. Differ. Equ. 208(2), 449–493 (2005). MR 2109563 59. I. Chueshov, P.E. Kloeden, M. Yang, Synchronization in coupled stochastic sine-Gordon wave model. Discrete Contin. Dyn. Syst. Ser. B 21(9), 2969– 2990 (2016). MR 3567796 60. I.S. Ciuperca, Reaction-diffusion equations on thin domains with varying order of thinness. J. Differ. Equ. 126(2), 244–291 (1996). MR 1383978 61. B. Cockburn, D.A. Jones, E.S. Titi, Estimating the number of asymptotic degrees of freedom for nonlinear dissipative systems. Math. Comp. 66(219), 1073–1087 (1997). MR 1415799 62. P. Constantin, C. Foias, B. Nicolaenko, R. Temam, Integral manifolds and inertial manifolds for dissipative partial differential equations, in Applied Mathematical Sciences, vol. 70 (Springer, New York, 1989). MR 966192 63. H. Crauel, F. Flandoli, Attractors for random dynamical systems. Probab. Theory Related Fields 100(3), 365–393 (1994). MR 1305587 64. G. Da Prato, J. Zabczyk, Ergodicity for Infinite-Dimensional Systems. London Mathematical Society Lecture Note Series, vol. 229 (Cambridge University Press, Cambridge, 1996). MR 1417491 65. G. Da Prato, J. Zabczyk, Stochastic Equations in Infinite Dimensions. Encyclopedia of Mathematics and its Applications, vol. 152, 2nd edn. (Cambridge University Press, Cambridge, 2014). MR 3236753 66. R.C. Dalang, N.E. Frangos, The stochastic wave equation in two spatial dimensions. Ann. Probab. 26(1), 187–212 (1998). MR 1617046 67. J.L. Dalecki˘ı, M.G. Kre˘ın, Stability of Solutions of Differential Equations in Banach Space. Translations of Mathematical Monographs, vol. 43 (American Mathematical Society, Providence, 1974). Translated from the Russian by S. Smith. MR 0352639 68. J. Duan, K. Lu, B. Schmalfuss, Invariant manifolds for stochastic partial differential equations. Ann. Probab. 31(4), 2109–2135 (2003). MR 2016614 69. J. Duan, K. Lu, B. Schmalfuss, Smooth stable and unstable manifolds for stochastic evolutionary equations. J. Dynam. Differ. Equ. 16(4), 949–972 (2004). MR 2110052 70. Yu.V. Egorov, M.A. Shubin, Foundations of the Classical Theory of Partial Differential Equations (Springer, Berlin, 1998). Translated from the 1988 Russian original by R. Cooke. Reprint of the original English edition from the series Encyclopaedia of Mathematical Sciences [Partial Differential Equations. I. Encylopaedia of Mathematical Sciences, vol. 30 (Springer, Berlin, 1992). MR1141630 (93a:35004b)]. MR 1657445 71. K. Falconer, Fractal Geometry. Mathematical Foundations and Applications (Wiley, Chichester, 1990). MR 1102677
72. X. Fan, Random attractor for a damped sine-Gordon equation with white noise. Pac. J. Math. 216(1), 63–76 (2004). MR 2094581 73. X. Fan, Random attractors for damped stochastic wave equations with multiplicative noise. Int. J. Math. 19(4), 421–437 (2008). MR 2416723 74. X. Fan, Y. Wang, Attractors for a second order nonautonomous lattice dynamical system with nonlinear damping. Phys. Lett. A 365(1–2), 17–27 (2007). MR 2308195 75. F. Flandoli, B. Schmalfuss, Random attractors for the 3D stochastic NavierStokes equation with multiplicative white noise. Stochastics Stochastics Rep. 59(1–2), 21–45 (1996). MR 1427258 76. F. Flandoli, B. Gess, M. Scheutzow, Synchronization by noise. Probab. Theory Related Fields 168(3–4), 511–556 (2017). MR 3663624 77. F. Flandoli, B. Gess, M. Scheutzow, Synchronization by noise for orderpreserving random dynamical systems. Ann. Probab. 45(2), 1325–1350 (2017). MR 3630300 78. C. Foias, E.S. Titi, Determining nodes, finite difference schemes and inertial manifolds. Nonlinearity 4(1), 135–153 (1991). MR 1092888 79. C. Foias, G.R. Sell, R. Temam, Inertial manifolds for nonlinear evolutionary equations. J. Differ. Equ. 73(2), 309–353 (1988). MR 943945 80. L. Gawarecki, V. Mandrekar, Stochastic Differential Equations in Infinite Dimensions with Applications to Stochastic Partial Differential Equations. Probability and Its Applications (New York) (Springer, Heidelberg, 2011), pp. xvi + 291. https://doi.org/10.1007/978-3-642-16194-0 81. B. Gess, Random attractors for degenerate stochastic partial differential equations. J. Dynam. Differ. Equ. 25(1), 121–157 (2013). MR 3027636 82. T.V. Girya, I.D. Chueshov, Inertial manifolds and stationary measures for stochastically perturbed dissipative dynamical systems. Mat. Sb. 186(1), 29– 46 (1995). MR 1641664 83. N. Glatt-Holtz, J.C. Mattingly, G. Richards, On unique ergodicity in nonlinear stochastic partial differential equations. J. Stat. Phys. 166(3–4), 618–649 (2017). MR 3607584 84. M. Hairer, J.C. Mattingly, A theory of hypoellipticity and unique ergodicity for semilinear stochastic PDEs. Electron. J. Probab. 16(23), 658–738 (2011). MR 2786645 85. J.K. Hale, Asymptotic Behavior of Dissipative Systems. Mathematical Surveys and Monographs, vol. 25 (American Mathematical Society, Providence, 1988). MR 941371 86. J.K. Hale, Diffusive coupling, dissipation, and synchronization. J. Dynam. Differ. Equ. 9(1), 1–52 (1997). MR 1451743 87. J.K. Hale, G. Raugel, Reaction-diffusion equation on thin domains. J. Math. Pures Appl. (9) 71(1), 33–95 (1992). MR 1151557 88. J.K. Hale, G. Raugel, A reaction-diffusion equation on a thin l-shaped domain. Proc. R. Soc. Edinburgh Sect. A 125(2), 283–327 (1995). MR 1331562
89. J.K. Hale, X.-B. Lin, G. Raugel, Upper semicontinuity of attractors for approximations of semigroups and partial differential equations. Math. Comp. 50(181), 89–123 (1988). MR 917820 90. J.-P. Hansen, I.R. McDonalds, Theory of Simple Liquids (Academic, New York, 1986) 91. B.D. Hassard, N.D. Kazarinoff, Y.H. Wan, Theory and Applications of Hopf Bifurcation. London Mathematical Society Lecture Note Series, vol. 41 (Cambridge University Press, Cambridge/New York, 1981). MR 603442 92. D. Henry, Geometric Theory of Semilinear Parabolic Equations. Lecture Notes in Mathematics, vol. 840 (Springer, Berlin/New York, 1981). MR 610244 93. M.W. Hirsch, H. Smith, Monotone dynamical systems, in Handbook of Differential Equations: Ordinary Differential Equations, vol. II (Elsevier B. V., Amsterdam, 2005), pp. 239–357. MR 2182759 94. A.L. Hodgkin, A.F. Huxley, A quantitive description of membrane current and its application to conduction and exitation in nerves. J. Physiol. 117, 500–544 (1952) 95. M. Hori, Thermoelastic wave propagation in a random medium and some related problems. J. Math. Phys. 11, 953–971 (1977) 96. S. Jiang, R. Racke, Evolution Equations in Thermoelasticity. Chapman & Hall/CRC Monographs and Surveys in Pure and Applied Mathematics, vol. 112 (Chapman & Hall/CRC, Boca Raton, 2000). MR 1774100 97. K. Josi´c, Synchronization of chaotic systems and invariant manifolds. Nonlinearity 13(4), 1321–1336 (2000). MR 1767961 98. L.V. Kapitanski˘ı, I.N. Kostin, Attractors of nonlinear evolution equations and their approximations. Algebra i Analiz 2(1), 114–140 (1990). MR 1049907 99. H. Keller, B. Schmalfuss, Attractors for stochastic hyperbolic equations via transformation into random equations. Tech. report, Report Institut f¨ur Dynamische Systeme Universit¨at Bremen, 1999 100. M.A. Krasnoselrskij, Je.A. Lifshits, A.V. Sobolev, Positive Linear Systems: The Method of Positive Operators. Sigma Series in Applied Mathematics, vol. 5 (Heldermann Verlag, Berlin, 1989). Translated from the Russian by J¨urgen Appell. MR 1038527 101. H. Kunita, Stochastic Flows and Stochastic Differential Equations. Cambridge Studies in Advanced Mathematics, vol. 24 (Cambridge University Press, Cambridge, 1990). MR 1070361 102. I.S. Labouriau, H.M. Rodrigues, Synchronization of coupled equations of Hodgkin-Huxley type. Dyn. Contin. Discrete Impuls. Syst. Ser. A Math. Anal. 10(1–3), 463–476 (2003). Second International Conference on Dynamics of Continuous, Discrete and Impulsive Systems (London, ON, 2001). MR 1974264 103. O. Ladyzhenskaya, Attractors for Semigroups and Evolution Equations. Lezioni Lincee. [Lincei Lectures] (Cambridge University Press, Cambridge, 1991). MR 1133627
104. I. Lasiecka, R. Triggiani, Control Theory for Partial Differential Equations: Continuous and Approximation Theories. I. Encyclopedia of Mathematics and Its Applications: Abstract Parabolic Systems, vol. 74 (Cambridge University Press, Cambridge, 2000). MR 1745475 105. Y. Latushkin, B. Layton, The optimal gap condition for invariant manifolds. Discrete Contin. Dynam. Syst. 5(2), 233–268 (1999). MR 1665791 106. C. Lederer, Konjugation stochastischer und zuf¨alliger station¨arer Differentialgleichungen und eine Version des lokalen Satzes von Hartman-Grobman f¨ur stochastische Differentialgleichungen, Ph.D. thesis, Humboldt Universit¨at, Berlin, 2001, in German 107. G.A. Leonov, N.V. Kuznetsov, Nonlinear Analysis of Phase-Locked Loop (PLL): Global Stability Analysis, Hidden Oscillations and Simulation Problems. Mechanics and Model-Based Control of Advanced Engineering Systems (Springer, Heidelberg, 2014), pp. 199–207. MR 3379792 108. G.A. Leonov, V.B. Smirnova, Problems of the Stability of Systems of Phase Synchronization. The Direct Method in the Theory of Stability and Its Applications (Irkutsk, 1979) (“Nauka” Sibirsk. Otdel., Novosibirsk, 1981), pp. 238– 247, 279. MR 650093 109. G.A. Leonov, V. Reitmann, V.B. Smirnova, Nonlocal Methods for PendulumLike Feedback Systems. Teubner-Texte zur Mathematik [Teubner Texts in Mathematics], vol. 132 (B. G. Teubner Verlagsgesellschaft mbH, Stuttgart, 1992). MR 1216519 110. A.W. Leung, Asymptotically stable invariant manifold for coupled nonlinear parabolic-hyperbolic partial differential equations. J. Differ. Equ. 187(1), 184– 200 (2003). MR 1946549 111. J.-L. Lions, E. Magenes, Non-Homogeneous Boundary Value Problems and Applications. Vol. I (Springer, New York/Heidelberg, 1972). Translated from the French by P. Kenneth, Die Grundlehren der mathematischen Wissenschaften, Band 181. MR 0350177 112. J. Mallet-Paret, G.R. Sell, Inertial manifolds for reaction diffusion equations in higher space dimensions. J. Am. Math. Soc. 1(4), 805–866 (1988). MR 943276 113. M. Miklavˇciˇc, A sharp condition for existence of an inertial manifold. J. Dynam. Differ. Equ. 3(3), 437–456 (1991). MR 1118343 114. A. Millet, P.-L. Morien, On a nonlinear stochastic wave equation in the plane: existence and uniqueness of the solution. Ann. Appl. Probab. 11(3), 922–951 (2001). MR 1865028 115. A. Millet, M. Sanz-Sol´e, Approximation and support theorem for a wave equation in two space dimensions. Bernoulli 6(5), 887–915 (2000). MR 1791907 116. Yu.A. Mitropolsky, O.B. Lykova, Integral Manifolds in Nonlinear Mechanics. Nonlinear Analysis and Its Applications Series (Izdat. “Nauka”, Moscow, 1973). MR 0364771 117. X. Mora, Finite-dimensional attracting invariant manifolds for damped semilinear wave equations, in Contributions to Nonlinear Partial Differential Equations, Vol. II (Paris, 1985). Pitman Research Notes in Mathematics Series, vol. 155 (Longman Scientific & Technical, Harlow, 1987), pp. 172–183. MR 907731
118. E. Mosekilde, Y. Maistrenko, D. Postnov, Chaotic Synchronization: Applications to Living Systems. World Scientific Series on Nonlinear Science. Series A: Monographs and Treatises, vol. 42 (World Scientific Publishing Co., Inc., River Edge, 2002). MR 1939912 119. J.E.M. Rivera, R.K. Barreto, Existence and exponential decay in nonlinear thermoelasticity. Nonlinear Anal. 31(1–2), 149–162 (1998). MR 1487536 120. J.E.M. Rivera, R. Racke, Smoothing properties, decay, and global existence of solutions to nonlinear coupled systems of thermoelastic type. SIAM J. Math. Anal. 26(6), 1547–1563 (1995). MR 1356459 121. O. Naboka, Synchronization of nonlinear oscillations of two coupling Berger plates. Nonlinear Anal. 67(4), 1015–1026 (2007). MR 2325358 122. O. Naboka, Synchronization phenomena in the system consisting of m coupled Berger plates. J. Math. Anal. Appl. 341(2), 1107–1124 (2008). MR 2398273 123. O. Naboka, On synchronization of oscillations of two coupled Berger plates with nonlinear interior damping. Commun. Pure Appl. Anal. 8(6), 1933–1956 (2009). MR 2552158 124. G. Ochs, Random attractors: robustness, numerics and chaotic dynamics, in Ergodic Theory, Analysis, and Efficient Simulation of Dynamical Systems (Springer, Berlin, 2001), pp. 1–30. MR 1850299 125. B. Øksendal, Stochastic Differential Equations: An Introduction with Applications. Universitext, 6th edn. (Springer, Berlin, 2003). MR 2001996 126. G.V. Osipov, J. Kurths, C. Zhou, Synchronization in Oscillatory Networks. Springer Series in Synergetics (Springer, Berlin, 2007). MR 2350638 127. A. Pazy, Semigroups of Linear Operators and Applications to Partial Differential Equations. Applied Mathematical Sciences, vol. 44 (Springer, New York, 1983). MR 710486 128. L.M. Pecora, T.L. Carroll, Synchronization in chaotic systems. Phys. Rev. Lett. 64(8), 821–824 (1990). MR 1038263 129. L.M. Pecora, T.L. Carroll, G.A. Johnson, D.J. Mar, J.F. Heagy, Fundamentals of synchronization in chaotic systems, concepts, and applications. Chaos 7(4), 520–543 (1997). MR 1604666 130. A. Pikovsky, M. Rosenblum, J. Kurths, Synchronization: A Universal Concept in Nonlinear Sciences. Cambridge Nonlinear Science Series, vol. 12 (Cambridge University Press, Cambridge, 2001). MR 1869044 131. C. Pr´evˆot, M. R¨ockner, A Concise Course on Stochastic Partial Differential Equations. Lecture Notes in Mathematics, vol. 1905 (Springer, Berlin, 2007). MR 2329435 132. M. Prizzi, K.P. Rybakowski, Some recent results on thin domain problems. Topol. Methods Nonlinear Anal. 14(2), 239–255 (1999). MR 1766188 133. M. Qian, S. Zhu, W.-X. Qin, Dynamics in a chain of overdamped pendula driven by constant torques. SIAM J. Appl. Math. 57(1), 294–305 (1997). MR 1429386 134. L. Quer-Sardanyons, M. Sanz-Sol´e, A stochastic wave equation in dimension 3: smoothness of the law. Bernoulli 10(1), 165–186 (2004). MR 2044597
135. G. Raugel, Global attractors in partial differential equations, in Handbook of Dynamical Systems, vol. 2 (North-Holland, Amsterdam, 2002), pp. 885–982. MR 1901068 136. A.M. Rekalo, Asymptotic behavior of solutions of nonlinear parabolic equations on two-layer thin domains. Nonlinear Anal. 52(5), 1393–1410 (2003). MR 1942568 137. A.M. Rekalo, I.D. Chueshov, The global attractor of a contact parabolic problem with a thin two-layer domain. Mat. Sb. 195(1), 103–128 (2004). MR 2058379 138. J.C. Robinson, Infinite-Dimensional Dynamical Systems: An Introduction to Dissipative Parabolic PDEs and the Theory of Global Attractors. Cambridge Texts in Applied Mathematics (Cambridge University Press, Cambridge, 2001). MR 1881888 139. J.C. Robinson, Stability of random attractors under perturbation and approximation. J. Differ. Equ. 186(2), 652–669 (2002). MR 1942226 140. H.M. Rodrigues, Abstract methods for synchronization and applications. Appl. Anal. 62(3–4), 263–296 (1996). MR 1623511 141. H.M. Rodrigues, L.F.C. Alberto, N.G. Bretas, Uniform invariance principle and synchronization. Robustness with respect to parameter variation. J. Differ. Equ. 169(1), 228–254 (2001). Special issue in celebration of Jack K. Hale’s 70th birthday, Part 3 (Atlanta, GA/Lisbon, 1998). MR 1808466 142. A.V. Romanov, Sharp estimates for the dimension of inertial manifolds for nonlinear parabolic equations. Izv. Ross. Akad. Nauk Ser. Mat. 57(4), 36–54 (1993). MR 1243350 143. T.C. Rosati, Synchronization for KPZ (2019) 144. B. Schmalfuß, Backward cocycles and attractors of stochastic differential equations, in International Seminar on Applied Mathematics–Nonlinear Dynamics: Attractor Approximation and Global Behaviour, ed. by V. Reitmann, T. Riedrich, N. Koksch (1992), pp. 185–192 145. B. Schmalfuß, The random attractor of the stochastic Lorenz system. Z. Angew. Math. Phys. 48(6), 951–975 (1997). MR 1488689 146. B. Schmalfuß, Measure attractors and random attractors for stochastic partial differential equations. Stoch. Anal. Appl. 17(6), 1075–1101 (1999). MR 1721934 147. B. Schmalfuss, Attractors for the non-autonomous dynamical systems, in International Conference on Differential Equations, vols. 1, 2 (Berlin, 1999) (World Scientific Publishing Co., Inc., River Edge, 2000), pp. 684–689. MR 1870217 148. B. Schmalfuss, Inertial manifolds for random differential equations, in Probability and Partial Differential Equations in Modern Applied Mathematics. IMA Volumes in Mathematics and its Applications, vol. 140 (Springer, New York, 2005), pp. 213–236. MR 2202042 149. G.R. Sell, Y. You, Dynamics of Evolutionary Equations. Applied Mathematical Sciences, vol. 143 (Springer, New York, 2002). MR 1873467
150. Z. Shen, S. Zhou, X. Han, Synchronization of coupled stochastic systems with multiplicative noise. Stoch. Dyn. 10(3), 407–428 (2010). MR 2671384 151. Z. Shen, S. Zhou, W. Shen, One-dimensional random attractor and rotation number of the stochastic damped sine-Gordon equation. J. Differ. Equ. 248(6), 1432–1457 (2010). MR 2593049 152. W. Shen, Z. Shen, S. Zhou, Asymptotic dynamics of a class of coupled oscillators driven by white noises. Stoch. Dyn. 13(4), 1350002, 23 (2013). MR 3116923 153. R.E. Showalter, Monotone Operators in Banach Space and Nonlinear Partial Differential Equations. Mathematical Surveys and Monographs, vol. 49 (American Mathematical Society, Providence, 1997). MR 1422252 154. J. Simon, Compact sets in the space L p (0, T ; B). Ann. Mat. Pura Appl. (4) 146, 65–96 (1987). MR 916688 155. A.V. Skorohod, Random linear operators, in Mathematics and Its Applications (Soviet Series) (D. Reidel Publishing Co., Dordrecht, 1984). Translated from the Russian. MR 733994 156. J. Smoller, Shock Waves and Reaction-Diffusion Equations, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Science], vol. 258 (Springer, New York/Berlin, 1983). MR 688146 157. K. Sobczyk, Stochastic Wave Propagation (PWN—Polish Scientific Publishers, Warsaw; Elsevier Science Publishers, B.V., Amsterdam, 1984). Translated from the Polish by the author, I. Bychowska, Z. Adamowicz. MR 957506 158. H.B. Stewart, Generation of analytic semigroups by strongly elliptic operators under general boundary conditions. Trans. Am. Math. Soc. 259(1), 299–310 (1980). MR 561838 159. S. Strogatz, Sync: How Order Emerges from Chaos in the Universe, Nature, and Daily Life (Hyperion Books, New York, 2003). MR 2394754 160. J. Sun, E.M. Bollt, T. Nishikawa, Constructing generalized synchronization manifolds by manifold equation. SIAM J. Appl. Dyn. Syst. 8(1), 202–221 (2009). MR 2481282 161. R. Temam, Infinite-Dimensional Dynamical Systems in Mechanics and Physics, 2nd edn. Applied Mathematical Sciences, vol. 68 (Springer, New York, 1997). MR 1441312 162. R. Temam, Navier-Stokes Equations. Theory and Numerical Analysis (AMS Chelsea Publishing, Providence, 2001). Reprint of the 1984 edition. MR 1846644 163. C. Tresser, P.A. Worfolk, H. Bass, Master-slave synchronization from the point of view of global dynamics. Chaos 5(4), 693–699 (1995). MR 1366728 164. H.J.V. Tyrrell, Diffusion in Liquids: A Theoretical and Experimental Study (Butterworths, London, 1984) 165. V.K. Vanag, I.R. Epstein, Localized patterns in reaction-diffusion systems. Chaos 17(3), 037110 (2007) MR 2356983 166. M.J. Vishik, A.V. Fursikov, Mathematical Problems of Statistical Hydromechanics. Mathematics and Its Applications (Soviet Series), vol. 9 (Kluwer Aca-
demic Publishers Group, Dordrecht, 1988). Translated from the 1980 Russian original [ MR0591678] by D.A. Leites. MR 3444271 167. J. Wloka, Partielle Differentialgleichungen (B.G. Teubner, Stuttgart, 1982). Sobolevr¨aume und Randwertaufgaben. [Sobolev spaces and boundary value problems], Mathematische Leitf¨aden. [Mathematical Textbooks]. MR 652934 168. C.W. Wu, Synchronization in Coupled Chaotic Circuits and Systems. World Scientific Series on Nonlinear Science. Series A: Monographs and Treatises, vol. 41 (World Scientific Publishing Co., Inc., River Edge, 2002). MR 1891843 169. E. Zeidler, Nonlinear Functional Analysis and Its Applications. I: Fixed-Point Theorems (Springer, New York, 1986). Translated from the German by Peter R. Wadsack. MR 816732
Index
A asymptotic synchronization, 20 attractor global, 7 pullback, 189 upper limit, 11 upper semicontinuity, 11 C Cauchy problem abstract parabolic conditional compactness, 20 energy balance, 19 local well-posedness, 18 mild solution, 18 weak solution, 19 abstract parabolic delay, 40 abstract second order in time, 85 generalized solution, 87 strong solution, 87 well-posedness, 87 compact seminorm, 12 completeness defect, 53 complete replacement synchronization parabolic case, 30 drive system, 30 response system, 30 synchronizing coordinate, 30 second order in time models, 105, 109 cone, 212 normal, 212 solid, 212 cone invariance property, 145
coupled hyperbolic system, 137 master–slave synchronization, 139 stochastic perturbation, 290 master–slave synchronization, 291 coupled parabolic–hyperbolic system, 133, 150 drive-response synchronization, 136 master–slave synchronization, 135, 154 stochastic version, 287 master–slave synchronization, 289 thermoelasticity, 154 with singular terms, 154 master–slave synchronization, 165 reduction principle, 174 coupled parabolic problem, 16 asymptotic synchronization, 20 complete replacement synchronization, 30 drive-response synchronization, 30 exponential synchronization, 21 finite-dimensional coupling, 51 global attractor, 27 regular semicontinuity, 29 stochastic perturbation, 215 synchronization of global attractors, 32 well-posedness, 25 coupled PDE/ODE systems, 136 drive-response synchronization, 137 master–slave synchronization, 137 coupled second-order problem, 85 asymptotic synchronization, 98 finite-dimensional coupling, 107 global attractor, 88 quasi-stability, 93
coupled second order problem synchronization of global attractors, 102 coupled sine-Gordon equations, 111 anti-phase synchronization, 114 quasi-stationary, 57 stochastic, 245 synchronization, 261 synchronization, 112 coupling matrix, 24 D dichotomy linear, 119 nonlinear, 141 drive-response synchronization parabolic case, 30 thermoelasticity, 164 dynamical system, 5 asymptotically compact, 7 asymptotically quasi-stable, 14 asymptotically smooth, 7 compact, 6 conditionally compact, 6 dissipative, 6 gradient, 9 phase space, 5 point dissipative, 6 quasi-stable, 12 dimension of global attractor, 14 global attractor, 13 random, 184 E evolution operator asymptotically compact, 7 asymptotically smooth, 7 compact, 6 conditionally compact, 6 dissipative, 6 point dissipative, 6 exponential synchronization, 21 F filtration, 187 fractal dimension, 13 Fréchet derivative, 24 full trajectory, 5 G global attractor, 7 gradient system, 9 H Hale condition, 7 Hodgkin–Huxley model, 63
I inertial form, 132, 143, 173 inertial manifold, 116, 176 invariant manifold Hadamard method, 141 cone invariance, 145 Lyapunov–Perron method, 118 main integral equation, 125 tracking property, 128 stochastic Lyapunov–Perron method, 275 K Klein–Gordon–Schrödinger system, 140 drive-response synchronization, 141 master–slave synchronization, 141 stochastic perturbation, 291 master–slave synchronization, 292 L Ladyzhenskaya condition, 7 Lagrange interpolation operator, 53 linear coupling, 23 synchronization of global attractors, 32 Lyapunov–Perron method, 118 stochastic version, 275 Lyapunov function, 9 strict, 9 M master–slave synchronization, 115 partial, 116 quasilinear case, 141 random systems, 269 semilinear case, 118 master system, 115 multiplicative ergodic theorem, 297 O operator evolution, 5 with discrete spectrum, 23 Oseledets space, 297 Oseledets theorem, 297 P perfection, 201 plate models, 109 Berger, 110 Kirchhoff, 109 stochastic, 266 synchronization, 109 von Karman, 110 pullback attractor, 190
Q quasi-stability, 12 asymptotic, 14 dimension of global attractor, 14 global attractor, 13 stochastic, 257 R radius of dissipativity, 6 random attractor, 190 random dynamical system (RDS), 184 invariant measure, 195 monotone, 212 order-preserving, 212 pullback attractor, 189 random fixed point, 195 random pullback attractor, 190 random set closed random set, 189 forward invariant set, 191 invariant set, 191 pullback absorbing set, 191 pullback attracting set, 191 tempered random set, 190 reaction–diffusion systems coupling inside domain, 55 synchronization, 57 coupling on boundary, 68 synchronization, 71 cross-diffusion, 60 thin domain, 71 stochastic version, 225 two-layer problem, 71 stochastic version, 225 synchronization, 81 S semigroup, 5 seminorm compact, 12 semiorbit, see semitrajectory semitrajectory, 5 set ω -limit, 5 absorbing, 5 backward invariant, 5 forward invariant, 5 invariant, 5 negatively invariant, 5
positively invariant, 5 random admissible, 212 set of stationary points, 8 slave system, 115 stationary point, 10, 195 stochastic convolution, 198, 204 synchronization anti-phase, 114 asymptotic, 20 complete replacement, 30, 117 drive-response, 30, 117, 121 coupled PDE/ODE systems, 137 Klein–Gordon–Schrödinger, 141 parabolic–hyperbolic system, 136 thermoelasticity, 164 elastic/wave structures, 84 exponential, 21 finite-dimensional coupling, 108 stochastic, 265 higher modes, 175 master–slave, 115 coupled hyperbolic system, 139 coupled PDE/ODE systems, 137 Klein–Gordon–Schrödinger, 141 parabolic–hyperbolic system, 135, 154, 165 quasilinear case, 141 random systems, 269 semilinear case, 118 of global attractors, 32 plate models, 109 reaction–diffusion system, 57 sine-Gordon equations, 112 stochastic, 261 two-layer problem, 81 stochastic, 233 wave models, 111 T tail of trajectory, 5 tempered random variable, 186 U unstable manifold, 8 W wave equations, 111 stochastic, 245