Quasi-Periodic Solutions of Nonlinear Wave Equations on the d-Dimensional Torus
Many partial differential equations (PDEs) arising in physics, such as the nonlinear wave equation and the Schrödinger equation, can be viewed as infinite-dimensional Hamiltonian systems. In the last thirty years, several existence results of time quasiperiodic solutions have been proved adopting a “dynamical systems” point of view. Most of them deal with equations in one space dimension, whereas for multidimensional PDEs a satisfactory picture is still under construction. An updated introduction to the now rich subject of KAM theory for PDEs is provided in the first part of this research monograph. The focus then moves to the nonlinear wave equation, endowed with periodic boundary conditions. The main result of the monograph proves the bifurcation of small amplitude finite-dimensional invariant tori for this equation, in any space dimension. This is a difficult small divisor problem due to complex resonance phenomena between the normal mode frequencies of oscillations. The proof requires various mathematical methods, ranging from Nash–Moser and KAM theory to reduction techniques in Hamiltonian dynamics and multiscale analysis for quasi-periodic linear operators, which are presented in a systematic and self-contained way. Some of the techniques introduced in this monograph have deep connections with those used in Anderson localization theory. This book will be useful to researchers who are interested in small divisor problems, particularly in the setting of Hamiltonian PDEs, and who wish to get acquainted with recent developments in the field.
Massimiliano Berti and Philippe Bolle Quasi-Periodic Solutions of Nonlinear Wave Equations on the d-Dimensional Torus
EMS Monographs in Mathematics Edited by Hugo Duminil-Copin (Institut des Hautes Études Scientifiques (IHÉS), Bures-sur-Yvette, France) Gerard van der Geer (University of Amsterdam, The Netherlands) Thomas Kappeler (University of Zürich, Switzerland) Paul Seidel (Massachusetts Institute of Technology, Cambridge, USA)
EMS Monographs in Mathematics is a book series aimed at mathematicians and scientists. It publishes research monographs and graduate-level textbooks from all fields of mathematics. The individual volumes are intended to give a reasonably comprehensive and self-contained account of their particular subject. They present mathematical results that are new or have not been accessible previously in the literature.
Previously published in this series:
Richard Arratia, A. D. Barbour and Simon Tavaré, Logarithmic Combinatorial Structures: A Probabilistic Approach
Demetrios Christodoulou, The Formation of Shocks in 3-Dimensional Fluids
Sergei Buyalo and Viktor Schroeder, Elements of Asymptotic Geometry
Demetrios Christodoulou, The Formation of Black Holes in General Relativity
Joachim Krieger and Wilhelm Schlag, Concentration Compactness for Critical Wave Maps
Jean-Pierre Bourguignon, Oussama Hijazi, Jean-Louis Milhorat, Andrei Moroianu and Sergiu Moroianu, A Spinorial Approach to Riemannian and Conformal Geometry
Kazuhiro Fujiwara and Fumiharu Kato, Foundations of Rigid Geometry I
Demetrios Christodoulou, The Shock Development Problem
Diogo Arsénio and Laure Saint-Raymond, From the Vlasov–Maxwell–Boltzmann System to Incompressible Viscous Electro-magneto-hydrodynamics, Vol. 1
Paul Balmer and Ivo Dell'Ambrogio, Mackey 2-Functors and Mackey 2-Motives
Authors: Massimiliano Berti SISSA Via Bonomea 265 34136 Trieste Italy
Philippe Bolle Laboratoire de Mathématiques d’Avignon, EA2151 Avignon Université 84140 Avignon France
2020 Mathematics Subject Classification: Primary: 37K55, 37K50, 35L05; secondary: 35Q55 Key words: Infinite-dimensional Hamiltonian systems, nonlinear wave equation, KAM for PDEs, quasi-periodic solutions and invariant tori, small divisors, Nash–Moser theory, multiscale analysis
ISBN 978-3-03719-211-5 Bibliographic information published by the Deutsche Nationalbibliothek The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de. Published by EMS Press, an imprint of the European Mathematical Society – EMS – Publishing House GmbH Institut für Mathematik Technische Universität Berlin Straße des 17. Juni 136 10623 Berlin Germany https://ems.press
© 2020 European Mathematical Society - EMS - Publishing House GmbH Typeset using the authors’ TEX files: le-tex publishing services GmbH, Leipzig, Germany Printing and binding: Beltz Bad Langensalza GmbH, Bad Langensalza, Germany ∞ Printed on acid free paper 987654321
Preface

Many partial differential equations (PDEs) arising in physics can be seen as infinite-dimensional Hamiltonian systems
\[ \partial_t z = J\,(\nabla_z H)(z), \qquad z \in E, \tag{0.0.1} \]
where the Hamiltonian function H : E → R is defined on an infinite-dimensional Hilbert space E of functions z := z(x), and J is a nondegenerate antisymmetric operator. Some main examples are the nonlinear wave (NLW) equation
\[ u_{tt} - \Delta u + V(x)u + g(x,u) = 0, \tag{0.0.2} \]
the nonlinear Schrödinger (NLS) equation, the beam equation, the higher-dimensional membrane equation, and the water-waves equations, i.e., the Euler equations of hydrodynamics describing the evolution of an incompressible irrotational fluid under the action of gravity and surface tension, as well as their approximate models like the Korteweg–de Vries (KdV), Boussinesq, Benjamin–Ono, and Kadomtsev–Petviashvili (KP) equations, among many others. We refer to [102] for a general introduction to Hamiltonian PDEs. In this monograph we shall adopt a "dynamical systems" point of view, regarding the NLW equation (0.0.2) equipped with periodic boundary conditions x ∈ T^d := (R/2πZ)^d as an infinite-dimensional Hamiltonian system, and we shall prove the existence of Cantor families of finite-dimensional invariant tori filled by quasiperiodic solutions of (0.0.2). The first results in this direction were obtained by Bourgain [42]. The search for invariant sets of the flow is an essential change of paradigm in the study of hyperbolic equations with respect to the more traditional pursuit of the initial value problem. This perspective has led to the discovery of many new results for Hamiltonian PDEs, inspired by finite-dimensional Hamiltonian systems. When the space variable x belongs to a compact manifold, say x ∈ [0,π] with Dirichlet boundary conditions or x ∈ T^d (periodic boundary conditions), the dynamics of a Hamiltonian PDE (0.0.1), like (0.0.2), is expected to exhibit a "recurrent" behavior in time, with many periodic and quasiperiodic solutions, i.e., solutions (defined for all times) of the form u(t) = U(ωt) ∈ E, where
\[ \mathbb T^\nu \ni \varphi \mapsto U(\varphi) \in E \tag{0.0.3} \]
is continuous and 2π-periodic in the angular variables φ := (φ_1, ..., φ_ν), and the frequency vector ω ∈ R^ν is nonresonant, namely ω·ℓ ≠ 0 for all ℓ ∈ Z^ν \ {0}. When ν = 1, the solution u(t) is periodic in time, with period 2π/ω. If U(ωt) is a quasiperiodic solution then, since the orbit {ωt}_{t∈R} is dense on T^ν, the torus U(T^ν) ⊂ E is invariant under the flow of (0.0.1).
Note that the linear wave equation (0.0.2) with g = 0,
\[ u_{tt} - \Delta u + V(x)u = 0, \qquad x \in \mathbb T^d, \tag{0.0.4} \]
possesses many quasiperiodic solutions. Indeed the self-adjoint operator −Δ + V(x) has a complete L² orthonormal basis of eigenfunctions Ψ_j(x), j ∈ N, with eigenvalues λ_j → +∞,
\[ (-\Delta + V(x))\,\Psi_j(x) = \lambda_j\,\Psi_j(x), \qquad j \in \mathbb N. \tag{0.0.5} \]
Suppose for simplicity that −Δ + V(x) > 0, so that the eigenvalues λ_j = μ_j², μ_j > 0, are positive; then all the solutions of (0.0.4) are
\[ \sum_{j\in\mathbb N} \alpha_j \cos(\mu_j t + \theta_j)\,\Psi_j(x), \qquad \alpha_j, \theta_j \in \mathbb R, \tag{0.0.6} \]
which, depending on the resonance properties of the linear frequencies μ_j = μ_j(V), are periodic, quasiperiodic, or almost-periodic in time (i.e., quasiperiodic with infinitely many frequencies). What happens to these solutions under the effect of the nonlinearity g(x,u)? There exist special nonlinear equations for which all the solutions are still periodic, quasiperiodic, or almost-periodic in time, for example the KdV, Benjamin–Ono, and 1-dimensional defocusing cubic NLS equations. These are completely integrable PDEs. However, for generic nonlinearities, one expects, in analogy with the celebrated Poincaré nonexistence theorem of prime integrals for nearly integrable Hamiltonian systems, that this is not the case. On the other hand, for sufficiently small Hamiltonian perturbations of a nondegenerate integrable system in T^n × R^n, the classical Kolmogorov–Arnold–Moser (KAM) theorem proves the persistence of quasiperiodic solutions with Diophantine frequency vectors ω ∈ R^n, i.e., vectors satisfying, for some γ > 0 and τ ≥ n − 1, the nonresonance condition
\[ |\omega\cdot\ell| \ \geq\ \frac{\gamma}{|\ell|^{\tau}}, \qquad \forall \ell \in \mathbb Z^n \setminus \{0\}. \tag{0.0.7} \]
Such frequencies form a Cantor set of R^n of positive measure if τ > n − 1. These quasiperiodic solutions (which densely fill invariant Lagrangian tori) were constructed by Kolmogorov [99] and Arnold [2] for analytic systems using an iterative Newton scheme, and by Moser [105–107] for differentiable perturbations by introducing smoothing operators. This scheme then gave rise to abstract Nash–Moser implicit function theorems like the ones proved by Zehnder [128, 129] (see also [109] and Section 2.1). What happens for infinite-dimensional systems like PDEs? The central question of KAM theory for PDEs is: do "most" of the periodic, quasiperiodic, or almost-periodic solutions of an integrable PDE (linear or nonlinear) persist, just slightly deformed, under the effect of a nonlinear perturbation?
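As a concrete illustration of condition (0.0.7), the following short computation (an editorial aside, not part of the original argument) estimates, for the sample vector ω = (1, √2) and τ = 1.5, the largest constant γ compatible with (0.0.7) over a finite block of lattice vectors; the choices of ω, τ, and the truncation N are arbitrary.

```python
import itertools
import math

# Empirical check of the Diophantine condition (0.0.7) for a sample vector.
# omega, tau and the truncation N are arbitrary illustrative choices.
omega = (1.0, math.sqrt(2.0))
tau = 1.5            # any tau > n - 1 = 1 gives a positive-measure set of vectors
N = 200              # check all 0 < |l| <= N, with |l| the sup-norm

gamma = min(
    abs(omega[0] * l1 + omega[1] * l2) * max(abs(l1), abs(l2)) ** tau
    for l1, l2 in itertools.product(range(-N, N + 1), repeat=2)
    if (l1, l2) != (0, 0)
)
print(f"largest gamma compatible with (0.0.7) up to |l| <= {N}: {gamma:.4f}")
```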
KAM theory for PDEs started a bit more than thirty years ago with the pioneering works of Kuksin [100] and Wayne [126] on the existence of quasiperiodic solutions for semilinear perturbations of 1-dimensional linear wave and Schrödinger equations on the interval [0,π]. These results are based on an extension of the KAM perturbative approach developed for the search for lower-dimensional tori in finite-dimensional systems (see [56, 107, 111]) and rely on the verification of the so-called second-order Melnikov nonresonance conditions. Nowadays KAM theory for 1-dimensional PDEs has reached a satisfactory level of comprehension concerning quasiperiodic solutions, while questions concerning almost-periodic solutions remain quite open. The known results include bifurcation of small-amplitude solutions [17, 103, 113], perturbations of large finite-gap solutions [29, 34, 97, 101, 102], extensions to periodic boundary conditions [36, 48, 54, 71], the use of weak nondegeneracy conditions [12], nonlinearities with derivatives [19, 93, 130] up to quasilinear ones [6–8, 65], including water-waves equations [5, 32], applications to quantum harmonic oscillators [10, 11, 82], and a few examples of almost-periodic solutions [43, 114]. We describe these developments in more detail in Chapter 2. On the other hand, KAM theory for multidimensional PDEs still contains few results and a satisfactory picture is under construction. If the space dimension d is two or more, the major difficulties are:
1. The eigenvalues μ_j² of the Schrödinger operator −Δ + V(x) in (0.0.5) appear in huge clusters of increasing size. For example, if V(x) = 0 and x ∈ T^d, they are
\[ |k|^2 = k_1^2 + \cdots + k_d^2, \qquad k = (k_1, \ldots, k_d) \in \mathbb Z^d. \]
2. The eigenfunctions Ψ_j(x) may be "not localized" with respect to the exponentials. Roughly speaking, this means that there is no one-to-one correspondence h : N → Z^d such that the entries (Ψ_j, e^{ik·x})_{L²} of the change-of-basis matrix (between (Ψ_j)_{j∈N} and (e^{ik·x})_{k∈Z^d}) decay rapidly to zero as |k − h(j)| → ∞.
The first existence result of time-periodic solutions for the NLW equation
\[ u_{tt} - \Delta u + m u = u^3 + \text{h.o.t.}, \qquad x \in \mathbb T^d, \ d \geq 2, \]
was proved by Bourgain in [37], by extending the Craig–Wayne approach [54], originally developed for x ∈ T. Further existence results for periodic solutions have been proved by Berti and Bolle [22] for merely differentiable nonlinearities, by Berti, Bolle, and Procesi [26] for Zoll manifolds, by Gentile and Procesi [76] using Lindstedt series techniques, and by Delort [55] for NLS equations using paradifferential calculus. The first breakthrough result about the existence of quasiperiodic solutions for space multidimensional PDEs was proved by Bourgain [39] for analytic Hamiltonian NLS equations of the form
\[ \mathrm{i}\,u_t = \Delta u + M_\sigma u + \varepsilon\,\partial_{\bar u} H(u, \bar u) \tag{0.0.8} \]
with x ∈ T², where M_σ = Op(σ_k) is a Fourier multiplier supported on finitely many sites E ⊂ Z², i.e., σ_k = 0 for all k ∈ Z² \ E. The σ_k, k ∈ E, play the role of external parameters used to verify suitable nonresonance conditions. Note that the eigenfunctions of −Δ + M_σ are the exponentials e^{ik·x}, and so the above-mentioned problem 2 is not present. Later, using subharmonic analysis tools previously developed for quasiperiodic Anderson localization theory by Bourgain, Goldstein, and Schlag [44], Bourgain was able in [41, 42] to extend this result to any space dimension d, and also to NLW equations of the form
\[ u_{tt} - \Delta u + M_\sigma u + \varepsilon F'(u) = 0, \qquad x \in \mathbb T^d, \tag{0.0.9} \]
where F(u) is a polynomial in u. Here F'(u) denotes the derivative of F. We also mention the existence results of quasiperiodic solutions by Bourgain and Wang [45, 46] for NLS and NLW equations under a random perturbation. The stochastic case is a priori easier than the deterministic one, because it is simpler to verify the nonresonance conditions with a random variable. Quasiperiodic solutions u(t,x) = U(ωt,x) of (0.0.9) with a frequency vector ω ∈ R^ν, namely solutions U(φ,x), φ ∈ T^ν, of
\[ (\omega\cdot\partial_\varphi)^2 U - \Delta U + M_\sigma U + \varepsilon F'(U) = 0, \tag{0.0.10} \]
are constructed by a Newton scheme. The main analysis concerns the finite-dimensional restrictions of the quasiperiodic operators obtained by linearizing (0.0.10) at each step of the Newton iteration,
\[ \Pi_N \big[ (\omega\cdot\partial_\varphi)^2 - \Delta + M_\sigma + \varepsilon\, b(\varphi,x) \big]_{|H_N}, \tag{0.0.11} \]
where b(φ,x) = F''(U(φ,x)) and Π_N denotes the projection on the finite-dimensional subspace
\[ H_N := \Big\{ h = \sum_{|(\ell,k)| \leq N} h_{\ell,k}\, e^{\mathrm{i}(\ell\cdot\varphi + k\cdot x)},\ \ell \in \mathbb Z^\nu,\ k \in \mathbb Z^d \Big\}. \]
The matrix that represents (0.0.11) in the exponential basis is a perturbation of the diagonal matrix Diag(−(ω·ℓ)² + |k|² + σ_k) with off-diagonal entries ε b̂(ℓ−ℓ', k−k') that decay exponentially to zero as |(ℓ−ℓ', k−k')| → +∞, assuming that b is analytic (or subexponentially, if b is Gevrey). The goal is to prove that such a matrix is invertible for most values of the external parameters, and that its inverse has an exponential (or Gevrey) off-diagonal decay. It is not difficult to impose lower bounds on the eigenvalues of the self-adjoint operator (0.0.11) for most values of the parameters. These first-order Melnikov nonresonance conditions are essentially the minimal assumptions for proving the persistence of quasiperiodic solutions of (0.0.9), and they provide estimates of the inverse of the operator (0.0.11) in L²-norm. In order to prove fast off-diagonal decay estimates for the inverse matrix, Bourgain's technique is a "multiscale" inductive analysis based on the repeated use of the "resolvent identity." An essential ingredient is that the "singular" sites
\[ (\ell,k) \in \mathbb Z^\nu \times \mathbb Z^d \quad \text{such that} \quad \big| -(\omega\cdot\ell)^2 + |k|^2 + \sigma_k \big| \leq 1 \tag{0.0.12} \]
are separated into clusters that are sufficiently distant from one another. However, the information (0.0.12) about just the linear frequencies of (0.0.9) is not sufficient (unlike for time-periodic solutions [37]) to prove that the inverse matrix has an exponential (or Gevrey) off-diagonal decay. Finer nonresonance conditions at each scale also need to be verified along the induction. We describe the multiscale approach in Section 2.4 and we prove novel multiscale results in Chapter 5. These techniques have been extended in the recent work of Wang [125] to the nonlinear Klein–Gordon equation
\[ u_{tt} - \Delta u + u + u^{p+1} = 0, \qquad p \in \mathbb N, \ x \in \mathbb T^d, \]
which, unlike (0.0.9), is parameter independent. A key step is to verify that suitable nonresonance conditions are fulfilled for most "initial data." We refer to [124] for a corresponding result for the NLS equation. Another stream of important results for multidimensional PDEs was inaugurated in the breakthrough paper by Eliasson and Kuksin [61] for the NLS equation (0.0.8). In this paper the authors are able to block diagonalize, and reduce to constant coefficients, the quasiperiodic Hamiltonian operator obtained at each step of the iteration. This KAM reducibility approach extends the perturbative theory developed for 1-dimensional PDEs, by verifying the so-called second-order Melnikov nonresonance conditions. It also allows one to prove directly the linear stability of the quasiperiodic solutions. Other results in this direction have been proved for the 2-dimensional cubic NLS equation by Geng, Xu, and You [75], in any space dimension and for arbitrary polynomial nonlinearities by Procesi and Procesi [117, 118], and for beam equations by Geng and You [72] and Eliasson, Grébert, and Kuksin [58]. Unfortunately, the second-order Melnikov conditions are violated for NLW equations, for which an analogous reducibility result does not hold. We describe the KAM reducibility approach with PDE applications in Sections 2.2 and 2.3.

We now present in more detail the goal of this research monograph. The main result is the existence of small-amplitude time-quasiperiodic solutions for autonomous NLW equations of the form
\[ u_{tt} - \Delta u + V(x)u + g(x,u) = 0, \quad x \in \mathbb T^d, \qquad g(x,u) = a(x)u^3 + O(u^4), \tag{0.0.13} \]
in any space dimension d ≥ 1, where V(x) is a smooth multiplicative potential such that −Δ + V(x) > 0, and the nonlinearity is C^∞. Given a finite set S ⊂ N (the tangential sites), we construct quasiperiodic solutions u(ωt,x) with frequency vector ω = (ω_j)_{j∈S}, of the form
\[ u(\omega t, x) = \sum_{j\in S} \alpha_j \cos(\omega_j t)\,\Psi_j(x) + r(\omega t, x), \qquad \omega_j = \mu_j + O(|\alpha|), \tag{0.0.14} \]
where the remainder r(φ,x) is o(|α|)-small in some Sobolev space; here α := (α_j)_{j∈S}. The solutions (0.0.14) are thus a small deformation of the linear solutions (0.0.6), supported on the "tangential" space spanned by the eigenfunctions (Ψ_j(x))_{j∈S}, with a much smaller component in the normal subspace. These quasiperiodic solutions of (0.0.13) exist for generic potentials V(x), coefficients a(x), and "most" small values of the amplitudes (α_j)_{j∈S}. The precise statement is given in Theorems 1.2.1 and 1.2.3. The proof of this result requires various mathematical methods that this book aims to present in a systematic and self-contained way. A complete outline of the steps of the proof is presented in Section 2.5. Here we just mention that we shall use a Nash–Moser iterative scheme in scales of Sobolev spaces for the search for an invariant torus embedding supporting quasiperiodic solutions, with a frequency vector ω to be determined. One key step is to establish the existence of an approximate inverse of the operators obtained by linearizing the NLW equation at any approximate quasiperiodic solution u(ωt,x), and to prove that such an approximate inverse satisfies tame estimates in Sobolev spaces, with loss of derivatives due to the small divisors. These linearized operators have the form
\[ h \mapsto (\omega\cdot\partial_\varphi)^2 h - \Delta h + V(x)h + (\partial_u g)(x, u(\omega t, x))\,h, \]
with coefficients depending on x ∈ T^d and φ ∈ T^{|S|}. The construction of an approximate inverse requires several steps. After writing the wave equation as a Hamiltonian system in infinite dimensions, the first step is to use a symplectic change of variables to approximately decouple the tangential and normal components of the linearized operator. This is a rather general procedure for autonomous Hamiltonian PDEs, which reduces the problem to the search for an approximate inverse of a quasiperiodic Hamiltonian linear operator acting in the subspace normal to the torus (see Chapter 7 and Appendix C). In order to avoid the difficulty posed by the violation of the second-order Melnikov nonresonance conditions required by a KAM reducibility scheme, we develop a multiscale inductive approach à la Bourgain, which is particularly delicate since the eigenfunctions Ψ_j(x) of −Δ + V(x) defined in (0.0.5) are not localized near the exponentials. In particular, the matrix elements (Ψ_j, a(x)Ψ_{j'})_{L²} representing the multiplication operator with respect to the basis of the eigenfunctions Ψ_j(x) do not decay, in general, as |j − j'| → ∞. In Chapter 5 we provide the complete proof of the multiscale proposition (which, together with Appendix B, is fully self-contained), which we shall use in Chapters 10 and 11. These results extend the multiscale analysis developed for forced NLW and NLS equations in [23] and [24]. The presence of a multiplicative potential V(x) in (0.0.13) also makes it difficult to control the variations of the tangential and normal frequencies, due to the effect of the nonlinearity a(x)u³ + O(u⁴), with respect to parameters. In this monograph, after a careful bifurcation analysis of the quasiperiodic solutions, we are able to use just the length |ω| of the frequency vector as an internal parameter to verify all the nonresonance conditions along the iteration. The frequency vector is constrained to a fixed direction, see (1.2.24) and (1.2.25). The measure estimates rely on positivity arguments for the variation of parameter-dependent families of self-adjoint matrices, see Section 5.8. These properties (see (9.1.8)) are verified for the linearized operators obtained along the iteration. The genericity of the nonresonance and nondegeneracy conditions that we require on the potential V(x) and on the coefficient a(x) in the nonlinearity a(x)u³ + O(u⁴) is finally verified in Chapter 13. The techniques developed here for the NLW equation (0.0.13) could certainly be used to prove a corresponding result for NLS equations. However, we have decided to focus on the NLW equation because, as explained above, fewer results are available in this case. This context seems to better illustrate the advantages of the present approach in comparison to the reducibility approach. A feature of the monograph is to present the proofs, techniques, and ideas in a self-contained and expanded manner, with the hope of enhancing further developments. We also aim to describe the connections of this result with previous works in the literature. The techniques developed in this monograph have deep connections with those used in Anderson localization theory, and we hope that the detailed presentation of all technical aspects of the proofs will allow a deeper interchange between the Anderson localization and the KAM for PDEs communities.

Massimiliano Berti, Philippe Bolle
Acknowledgements

This research was partially supported by PRIN 2015 "Variational methods with applications to problems in mathematical physics and geometry" and by ANR-15-CE40-0001 "BEKAM (Beyond KAM theory)". The authors thank Thomas Kappeler and the referees for several suggestions that improved the presentation of the manuscript.
Contents

1 Introduction ..... 1
  1.1 Main result and historical context ..... 1
  1.2 Statement of the main results ..... 7
  1.3 Basic notation ..... 16
2 KAM for PDEs and strategy of proof ..... 19
  2.1 The Newton–Nash–Moser algorithm ..... 19
  2.2 The reducibility approach to KAM for PDEs ..... 22
    2.2.1 Transformation laws and reducibility ..... 22
    2.2.2 Perturbative reducibility ..... 23
  2.3 Reducibility results ..... 27
    2.3.1 KAM for 1-dimensional NLW and NLS equations with Dirichlet boundary conditions ..... 27
    2.3.2 KAM for 1-dimensional NLW and NLS equations with periodic boundary conditions ..... 30
    2.3.3 Space multidimensional PDEs ..... 32
    2.3.4 1-dimensional quasi- and fully nonlinear PDEs, water waves ..... 34
  2.4 The multiscale approach to KAM for PDEs ..... 41
    2.4.1 Time-periodic case ..... 44
    2.4.2 Quasiperiodic case ..... 47
    2.4.3 The multiscale analysis of Chapter 5 ..... 51
  2.5 Outline of the proof of Theorem 1.2.1 ..... 57
3 Hamiltonian formulation ..... 69
  3.1 Hamiltonian form of NLW equation ..... 69
  3.2 Action-angle and normal variables ..... 71
  3.3 Admissible Diophantine directions \(\bar\omega_\varepsilon\) ..... 73
4 Functional setting ..... 77
  4.1 Phase space and basis ..... 77
  4.2 Linear operators and matrix representation ..... 79
  4.3 Decay norms ..... 85
  4.4 Off-diagonal decay of \(\sqrt{-\Delta + V(x)}\) ..... 96
  4.5 Interpolation inequalities ..... 105
5 Multiscale Analysis ..... 113
  5.1 Multiscale proposition ..... 113
  5.2 Matrix representation ..... 117
  5.3 Multiscale step ..... 120
  5.4 Separation properties of bad sites ..... 123
  5.5 Definition of the sets \(\Lambda(\varepsilon;\eta,X_{r,\mu})\) ..... 134
  5.6 Right inverse of \([L_{r,\mu}]^{2N'}_{N'}\) for \(N' < N_0\) ..... 136
  5.7 Inverse of \(L_{r,\mu,N}\) for \(N \ge N_0\) ..... 146
    5.7.1 The set \(\Lambda(\varepsilon;1,X_{r,\mu})\) is good at any scale ..... 146
    5.7.2 Inverse of \(L_{r,\mu,N}\) for \(N \ge N_0^2\) ..... 151
  5.8 Measure estimates ..... 153
    5.8.1 Preliminaries ..... 153
    5.8.2 Measure estimate of \(\Lambda \setminus \widetilde{\mathcal G}\) ..... 155
    5.8.3 Measure estimate of \(\widetilde\Lambda \setminus \mathcal G^0_{N,1}\) for \(N \ge N_0^2\) ..... 156
    5.8.4 Measure estimate of \(\widetilde\Lambda \setminus \mathcal G^0_{N,2^{-k}\theta}\) for \(k \ge 1\) ..... 157
    5.8.5 Stability of the \(L^2\)-good parameters under variation of \(X_{r,\mu}\) ..... 161
    5.8.6 Conclusion: proof of (5.1.17) and (5.1.18) ..... 162
6 Nash–Moser theorem ..... 165
  6.1 Statement ..... 165
  6.2 Shifted tangential frequencies up to \(O(\varepsilon^4)\) ..... 171
  6.3 First approximate solution ..... 173
7 Linearized operator at an approximate solution ..... 175
  7.1 Symplectic approximate decoupling ..... 175
  7.2 Proof of Proposition 7.1.1 ..... 179
  7.3 Proof of Lemma 7.1.2 ..... 185
8 Splitting of low-high normal subspaces up to \(O(\varepsilon^4)\) ..... 189
  8.1 Choice of \(\mathtt M\) ..... 189
  8.2 Homological equations ..... 191
  8.3 Averaging step ..... 195
9 Approximate right inverse in normal directions ..... 199
  9.1 Split admissible operators ..... 199
  9.2 Approximate right inverse ..... 201
10 Splitting between low-high normal subspaces ..... 205
  10.1 Splitting step and corollary ..... 205
  10.2 The linearized homological equation ..... 218
  10.3 Solution of homological equations: proof of Lemma 10.2.2 ..... 220
  10.4 Splitting step: proof of Proposition 10.1.1 ..... 239
11 Construction of approximate right inverse ..... 247
  11.1 Splitting of low-high normal subspaces ..... 247
  11.2 Approximate right inverse of \(\mathcal L_D\) ..... 248
  11.3 Approximate right inverse of \(\mathcal L = \mathcal L_D + \varrho\) ..... 260
  11.4 Approximate right inverse of \(\bar\omega_\varepsilon\cdot\partial_\varphi - J(A_0 + \rho)\) ..... 263
12 Proof of the Nash–Moser Theorem ..... 271
  12.1 Approximate right inverse of \(\mathcal L_\omega\) ..... 271
  12.2 Nash–Moser iteration ..... 275
  12.3 \(C^\infty\) solutions ..... 291
13 Genericity of the assumptions ..... 295
  13.1 Genericity of nonresonance and nondegeneracy conditions ..... 295
A Hamiltonian and reversible PDEs ..... 311
  A.1 Hamiltonian and reversible vector fields ..... 311
  A.2 Nonlinear wave and Klein–Gordon equations ..... 312
  A.3 Nonlinear Schrödinger equation ..... 315
  A.4 Perturbed KdV equations ..... 316
B Multiscale Step ..... 319
  B.1 Matrices with off-diagonal decay ..... 319
  B.2 Multiscale step proposition ..... 326
C Normal form close to an isotropic torus ..... 337
  C.1 Symplectic coordinates near an invariant torus ..... 337
  C.2 Symplectic coordinates near an approximately invariant torus ..... 343
Bibliography ..... 349
Index ..... 357
Chapter 1
Introduction

In this introductory chapter we present in detail the main results of this monograph (Theorems 1.2.1 and 1.2.3) concerning the existence of quasiperiodic solutions of multidimensional nonlinear wave (NLW) equations with periodic boundary conditions, with a short description of the related literature. A comprehensive introduction to KAM theory for PDEs is provided in Chapter 2.
1.1 Main result and historical context

We consider autonomous NLW equations
\[ u_{tt} - \Delta u + V(x)u + g(x,u) = 0, \qquad x \in \mathbb T^d := \mathbb R^d/(2\pi\mathbb Z)^d, \tag{1.1.1} \]
in any space dimension d ≥ 1, where V(x) ∈ C^∞(T^d; R) is a real-valued multiplicative potential and the nonlinearity g ∈ C^∞(T^d × R; R) has the form
\[ g(x,u) = a(x)u^3 + O(u^4) \tag{1.1.2} \]
with a(x) ∈ C^∞(T^d; R). We require that the elliptic operator −Δ + V(x) be positive definite, namely that there exists β > 0 such that
\[ -\Delta + V(x) \ \geq\ \beta\,\mathrm{Id}. \tag{1.1.3} \]
Condition (1.1.3) is satisfied, in particular, if the potential V(x) ≥ 0 and V(x) ≢ 0. In this monograph we prove the existence of small-amplitude time-quasiperiodic solutions of (1.1.1). Recall that a solution u(t,x) of (1.1.1) is time quasiperiodic with frequency vector ω ∈ R^ν, ν ∈ N_+ = {1, 2, ...}, if it has the form u(t,x) = U(ωt,x), where U : T^ν × T^d → R is a C²-function and ω ∈ R^ν is a nonresonant vector, namely
\[ \omega\cdot\ell \neq 0, \qquad \forall \ell \in \mathbb Z^\nu \setminus \{0\}. \]
If ν = 1, a solution of this form is time periodic with period 2π/ω. Small-amplitude solutions of the NLW equation (1.1.1) are approximated, at a first degree of accuracy, by solutions of the linear wave equation
\[ u_{tt} - \Delta u + V(x)u = 0, \qquad x \in \mathbb T^d. \tag{1.1.4} \]
In the sequel, for any f, g ∈ L²(T^d; C), we denote by (f,g)_{L²} the standard L² inner product
\[ (f,g)_{L^2} = \int_{\mathbb T^d} f(x)\,\overline{g(x)}\,dx. \]
There is an L² orthonormal basis {Ψ_j}_{j∈N}, N = {0, 1, ...}, of L²(T^d) such that each Ψ_j is an eigenfunction of the Schrödinger operator −Δ + V(x). More precisely,
\[ (-\Delta + V(x))\,\Psi_j(x) = \mu_j^2\,\Psi_j(x), \tag{1.1.5} \]
where the eigenvalues of −Δ + V(x),
\[ 0 < \beta \leq \mu_0^2 \leq \mu_1^2 \leq \cdots \leq \mu_j^2 \leq \cdots, \qquad \mu_j > 0, \quad \mu_j \to +\infty, \tag{1.1.6} \]
are written in increasing order and with multiplicities. The solutions of the linear wave equation (1.1.4) are given by the linear superpositions of normal mode oscillations,
\[ \sum_{j\in\mathbb N} \alpha_j \cos(\mu_j t + \theta_j)\,\Psi_j(x), \qquad \alpha_j, \theta_j \in \mathbb R. \tag{1.1.7} \]
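To make the superposition (1.1.7) concrete, here is a small numerical sketch (an editorial illustration, not taken from the monograph): we diagonalize a finite-difference approximation of −∂_xx + V(x) on a periodic grid (a 1-dimensional caricature of −Δ + V(x) on T^d) and superpose three normal modes; the resulting function is an exact quasiperiodic solution of the space-discretized linear wave equation. The grid size, the potential, the chosen modes, and the amplitudes are all arbitrary sample values.

```python
import numpy as np

# Finite-difference caricature of -u_xx + V(x) u on the circle R/(2 pi Z).
M = 256                                  # grid points
x = 2 * np.pi * np.arange(M) / M
h = 2 * np.pi / M
V = 1.0 + 0.5 * np.cos(x)                # sample smooth potential with -Delta + V > 0

# Periodic second-difference Laplacian plus multiplication by V.
lap = (-2 * np.eye(M) + np.eye(M, k=1) + np.eye(M, k=-1)
       + np.eye(M, k=M - 1) + np.eye(M, k=-(M - 1))) / h**2
L = -lap + np.diag(V)

evals, evecs = np.linalg.eigh(L)         # mu_j^2 (ascending) and discrete Psi_j
mu = np.sqrt(evals)

# Superpose three normal modes as in (1.1.7): sum_j alpha_j cos(mu_j t) Psi_j(x).
modes, alphas = [1, 3, 5], [1.0, 0.5, 0.25]
def u(t):
    return sum(a * np.cos(mu[j] * t) * evecs[:, j] for j, a in zip(modes, alphas))

print("first frequencies mu_j:", np.round(mu[:6], 3))
print("u(0.7, x) at 4 sample points:", np.round(u(0.7)[::64], 3))
```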
All of the solutions (1.1.7) of (1.1.4) are periodic, quasiperiodic, or almost-periodic in time, with linear frequencies of oscillation μ_j, depending on the resonance properties of the μ_j (which depend on the potential V(x)) and on how many normal mode amplitudes α_j are not zero. In particular, if α_j = 0 for every index j outside a finite set S (the tangential sites), and the frequency vector μ̄ := (μ_j)_{j∈S} is nonresonant, then the linear solutions (1.1.7) are quasiperiodic in time. The main question we pose is the following:

Do small-amplitude quasiperiodic solutions of the NLW equation (1.1.1) exist?

The main results presented in this monograph, Theorems 1.2.1 and 1.2.3, state that:

Small-amplitude quasiperiodic solutions (1.1.7) of the linear wave equation (1.1.4), which are supported on finitely many indices j ∈ S, persist, slightly deformed, as quasiperiodic solutions of the NLW equation (1.1.1), with a frequency vector ω close to μ̄, for "generic" potentials V(x) and coefficients a(x) and for "most" amplitudes (α_j)_{j∈S}.

The potentials V(x) and the coefficients a(x) for which Theorem 1.2.1 holds are generic in a very strong sense; in particular, they are C^∞-dense, according to Definition 1.2.2, in the set
\[ \mathcal P \cap C^\infty(\mathbb T^d) \times C^\infty(\mathbb T^d), \qquad \text{where } \mathcal P := \{ V(x) \in H^s(\mathbb T^d) : -\Delta + V(x) > 0 \} \]
(see (1.2.36)). Theorem 1.2.1 is a Kolmogorov–Arnold–Moser (KAM) type perturbative result. We construct recursively an embedded invariant torus T^{|S|} ∋ φ ↦ i(φ) taking values in
the phase space (which we describe carefully below), supporting quasiperiodic solutions of (1.1.1) with frequency vector ω (to be determined). We employ a modified Nash–Moser iterative scheme for the search for zeros
\[ \mathcal F(\lambda; i) = 0 \]
of a nonlinear operator F acting on scales of Sobolev spaces of maps i, depending on a suitable parameter λ (see Chapter 6). As in a Newton scheme, the core of the problem consists in the analysis of the linearized operators d_i F(λ; i) at any approximate solution i at each step of the iteration (see Section 2.1). The main task is to prove that d_i F(λ; i) admits an approximate inverse, for most values of the parameter, that satisfies suitable quantitative tame estimates in high Sobolev norms. The approximate inverse will be unbounded, i.e., it loses derivatives, due to the presence of small divisors. As we shall describe in detail in Section 2.5, the construction of an approximate inverse for the linearized operators obtained from (1.1.1) is a subtle problem. Major difficulties come from complicated resonance phenomena between the frequency vector ω of the expected quasiperiodic solutions and the multiple normal mode frequencies of oscillation, shifted by the nonlinearity, and from the fact that the normal mode eigenfunctions Ψ_j(x) are not localized close to the exponentials. We now give a short historical introduction to KAM theory for PDEs, which we shall expand upon in Chapter 2. As we already mentioned in the preface, in small divisor problems for PDEs such as (1.1.1), the space dimension (d = 1 or d ≥ 2) makes a fundamental difference, due to the very different properties of the eigenvalues and eigenfunctions of the Schrödinger operator −Δ + V(x) on T^d for d = 1 and d ≥ 2. If the space dimension is d = 1, we also call −∂_xx + V(x) a Sturm–Liouville operator. The first KAM existence results for quasiperiodic solutions were proved by Kuksin [100] (see also [102]) and Wayne [126] for 1-dimensional wave and Schrödinger (NLS) equations on the interval x ∈ [0,π] with Dirichlet boundary conditions and analytic nonlinearities (see (2.3.1) and (2.3.2)). These pioneering theorems were limited to Dirichlet boundary conditions because the eigenvalues μ_j² of the Sturm–Liouville operator −∂_xx + V(x) had to be simple. Indeed the KAM scheme in [102] and [126] (see also [112]) reduces the linearized equations along the iteration to a diagonal form, with coefficients constant in time. This process requires second-order Melnikov nonresonance conditions, which amount to lower bounds on differences of the linear frequencies. In these papers the potential V(x) is used as a parameter to impose nonresonance conditions. Once the linearized PDEs obtained along the iteration are reduced to diagonal, constant-in-time form, it is easy to prove that the corresponding linear operators are invertible, for most values of the parameters, with good estimates of their inverses in high norms (with, of course, a loss of derivatives). We refer to Section 2.2 for a more detailed explanation of the KAM reducibility approach.
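The engine behind the Nash–Moser scheme just described is a Newton iteration. The following toy finite-dimensional example (an editorial illustration; it involves no small divisors, no loss of derivatives, and no smoothing operators) displays the quadratic convergence that the scheme exploits.

```python
import numpy as np

# Toy Newton iteration F(i) = 0 in R^2, illustrating the quadratic convergence
# underlying the Nash-Moser scheme (here without small divisors or smoothing).
def F(v):
    x, y = v
    return np.array([x**2 + y - 1.0, x + y**3 - 2.0])

def dF(v):
    x, y = v
    return np.array([[2 * x, 1.0], [1.0, 3 * y**2]])

v = np.array([1.5, 1.5])                       # initial guess
for step in range(6):
    v = v - np.linalg.solve(dF(v), F(v))       # Newton update
    print(step, np.linalg.norm(F(v)))          # the error roughly squares each step
```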
Subsequently, these results were extended by Pöschel [113] to parameter-independent nonlinear Klein–Gordon equations like (2.3.10) and by Kuksin and Pöschel [103] to NLS equations like (2.3.9). A major novelty of these papers was the use of Birkhoff normal form techniques to verify (weak) nonresonance conditions among the perturbed frequencies, tuning the amplitudes of the solutions as parameters. In the case x ∈ T, the eigenvalues of the Sturm–Liouville operator −∂_xx + V(x) are asymptotically double, and therefore the previous second-order Melnikov nonresonance conditions are violated. In this case, the first existence results were obtained by Craig and Wayne [54] for time-periodic solutions of analytic nonlinear Klein–Gordon equations (see also [52] and [20] for completely resonant wave equations) and then extended by Bourgain [36] to time-quasiperiodic solutions. The proofs are based on a Lyapunov–Schmidt bifurcation approach and a Nash–Moser implicit function iterative scheme. The key point of these papers is that there is no diagonalization of the linearized equations at each step of the Nash–Moser iteration. The advantage is that only minimal nonresonance conditions are required, which are easily verified for PDEs also in the presence of multiple frequencies (the second-order Melnikov nonresonance conditions are not used). On the other hand, a difficulty of this approach is that the linearized equations obtained along the iteration are variable-coefficient PDEs. As a consequence, it is hard to prove that the corresponding linear operators are invertible with estimates of their inverses in high norms sufficient to imply the convergence of the iterative scheme. Relying on a "resolvent"-type analysis inspired by the work of Fröhlich and Spencer [69] in the context of Anderson localization, Craig and Wayne [54] were able to solve this problem for time-periodic solutions in d = 1, and Bourgain [36] also for quasiperiodic solutions. Key properties of this approach are:
(i) Separation properties between singular sites, namely the Fourier indices (ℓ, j) of the small divisors −(ω·ℓ)² + μ_j² in the case of NLW;
(ii) Localization of the eigenfunctions of the Sturm–Liouville operator −∂_xx + V(x) with respect to the exponential basis (e^{ikx})_{k∈Z}, namely that the Fourier coefficients (Ψ̂_j)_k converge rapidly to zero when ||k| − j| → ∞. This property is always true if d = 1 (see, e.g., [54]).
Property (ii) implies that the matrix that represents, in the eigenfunction basis, the multiplication operator defined by an analytic (resp. Sobolev) function has exponentially (resp. polynomially) fast decay off the diagonal. Then the separation properties (i) imply a very weak interaction between the singular sites. If the singular sites were too many, the inverse operator would be too unbounded, preventing the convergence of the iterative scheme. This approach is particularly promising in the presence of multiple normal mode frequencies, and it constitutes the basis of the present monograph. We describe it in more detail in Section 2.4. Later, Chierchia and You [48] were able to extend the KAM reducibility approach to prove the existence and stability of small-amplitude quasiperiodic solutions of 1-dimensional NLW equations on T with an external potential. We also mention
the KAM reducibility results of Berti, Biasco, and Procesi [18, 19] for 1-dimensional derivative wave equations. When the space dimension d is greater than or equal to 2, the major difficulties are:
1. The eigenvalues μ_j² of −Δ + V(x) in (1.1.5) may be of high multiplicity, or not sufficiently separated from each other in the quantitative sense required by the perturbation theory developed for 1-dimensional PDEs;
2. The eigenfunctions Ψ_j(x) of −Δ + V(x) may be not localized with respect to the exponentials, see [64].
As discussed in the preface, if d ≥ 2, the first KAM existence result for NLW equations was proved for time-periodic solutions by Bourgain [37]; see also the extensions in [22, 26, 76]. Concerning quasiperiodic solutions in dimension d greater than or equal to 2, the first existence result was proved by Bourgain in Chapter 20 of [42]. It deals with wave type equations of the form
\[ u_{tt} - \Delta u + M_\sigma u + \varepsilon F'(u) = 0, \]
where M_σ = Op(σ_k) is a Fourier multiplier supported on finitely many sites E ⊂ Z^d, i.e., σ_k = 0 for all k ∈ Z^d \ E. The σ_k, k ∈ E, are used as external parameters, and F is a polynomial nonlinearity, with F' denoting the derivative of F. Note that the linear equation u_tt − Δu + M_σ u = 0 is diagonal in the exponential basis e^{ik·x}, k ∈ Z^d, unlike the linear wave equation (1.1.4). We also mention the paper by Wang [124] on the completely resonant NLS equation (2.3.18) and the Anderson localization result of Bourgain and Wang [45] for time-quasiperiodic random linear Schrödinger and wave equations. As already mentioned, a major difficulty of this approach is that the linearized equations obtained along the iteration are PDEs with variable coefficients. A key property that plays a fundamental role in [42] (as well as in previous papers, such as [39] for NLS) for proving estimates for the inverse of the linear operators
\[ \Pi_N \big[ (\omega\cdot\partial_\varphi)^2 - \Delta + M_\sigma + \varepsilon\, b(\varphi,x) \big]_{|H_N} \]
(see (0.0.11)) is that the matrix representing the multiplication operator by a smooth function b(x) in the exponential basis {e^{ik·x}}, k ∈ Z^d, has a sufficiently fast off-diagonal decay. Indeed, the multiplication operator is represented in Fourier space by the Toeplitz matrix (b̂_{k−k'})_{k,k'∈Z^d}, with entries given by the Fourier coefficients b̂_K of the function b(x), constant on the diagonals k − k' = K. The smoother the function b(x), the faster the decay of b̂_{k−k'} as |k − k'| → +∞. We refer to Section 2.4 for more explanations of this approach. Weaker forms of this property, as for example those required in the works of Berti, Corsi, and Procesi [27] and Berti and Procesi [33], may be sufficient for dealing with the eigenfunctions of −Δ on compact Lie groups. However, in general, the elements (Ψ_j, b(x)Ψ_{j'})_{L²} of the matrix that represents the multiplication operator with respect to the basis of the eigenfunctions Ψ_j(x) of −Δ + V(x) on T^d (see (1.1.5)) do not decay sufficiently fast if d ≥ 2. This was proved by Feldman, Knörrer, and Trubowitz in [64], and it is the difficulty mentioned above in item 2. We remark that weak localization properties have been proved by Wang [123] in d = 2 for potentials V(x) that are trigonometric polynomials. In the present monograph we shall not use localization properties of the eigenfunctions Ψ_j(x). A major reason why we are able to avoid the use of such properties is that our Nash–Moser iterative scheme requires only very weak tame estimates for the approximate inverse of the linearized operators, as stated in (2.4.6); see the end of Subsection 2.4.3. Such conditions are close to the optimal ones, as a famous counterexample of Lojasiewicz and Zehnder in [94] shows. The properties of the exponential basis e^{ik·x}, k ∈ Z^d, also play a key role in the development of the KAM perturbative diagonalization/reducibility techniques. Indeed, no reducibility results are available so far for multidimensional PDEs in the presence of a multiplicative potential that is not small. Concerning higher space dimensional PDEs, we refer to the results of Eliasson and Kuksin [61] for the NLS equation (2.3.16) with a convolution potential on T^d used as a parameter, Geng and You [72] and Eliasson, Grébert, and Kuksin [59] for beam equations with a constant mass potential, Procesi and Procesi [117] for the completely resonant NLS equation (2.3.18), Grébert and Paturel [80] for the Klein–Gordon equation (2.3.19) on S^d, and Grébert and Paturel [81] for multidimensional harmonic oscillators. On the other hand, no reducibility results for NLW on T^d are known so far. Actually, a serious difficulty appears: the infinitely many second-order Melnikov nonresonance conditions required by the KAM diagonalization approach are strongly violated by the linear unperturbed frequencies of oscillation of the Klein–Gordon equation
\[ u_{tt} - \Delta u + m u = 0; \]
see [58]. A key difference with respect to the Schrödinger equation is that the linear frequencies of the wave equation are approximately |k|, k ∈ Z^d, while for the NLS and the beam equations they are |k|² and |k|⁴, respectively, and |k|², |k|⁴ are integers. Also, for the multidimensional harmonic oscillator, the linear frequencies are, up to a translation, integer numbers. Although no reducibility results are known so far for the NLW equation, a result of "almost" reducibility for linear quasiperiodically forced Klein–Gordon equations has been presented in [57] and [58]. The existence of quasiperiodic solutions for wave equations on T^d with a time-quasiperiodic forcing nonlinearity of class C^∞, or C^q with q large enough,
\[ u_{tt} - \Delta u + V(x)u = \varepsilon f(\omega t, x, u), \qquad x \in \mathbb T^d, \tag{1.1.8} \]
has been proved in Berti and Bolle [23], extending the multiscale approach of Bourgain [42]. The forcing frequency vector ω, which in [23] is constrained to a fixed direction ω = λω̄ with λ ∈ [1/2, 3/2], plays the role of an external parameter.
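The off-diagonal decay discussed above can be observed numerically. In the sketch below (an editorial illustration with sample functions), the Fourier coefficients b̂_K, which fill the diagonals k − k' = K of the Toeplitz matrix representing multiplication by b, decay exponentially fast for an analytic b and only polynomially for a merely Lipschitz one.

```python
import numpy as np

# Fourier coefficients of a smooth versus a merely Lipschitz 2*pi-periodic b(x):
# they fill the diagonals k - k' = K of the Toeplitz matrix representing
# multiplication by b in the exponential basis, so their decay governs the
# off-diagonal decay of that matrix.
M = 1024
x = 2 * np.pi * np.arange(M) / M
b_smooth = np.exp(np.cos(x))             # analytic: exponentially small b_hat(K)
b_lip = np.abs(np.sin(x / 2))            # Lipschitz only: b_hat(K) ~ 1/K^2

for name, b in [("analytic", b_smooth), ("Lipschitz", b_lip)]:
    b_hat = np.fft.fft(b) / M            # approximate Fourier coefficients
    for K in (1, 4, 16, 64):
        print(f"{name:9s} |b_hat({K:3d})| = {abs(b_hat[K]):.2e}")
```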
In [27] a corresponding result has been proved for NLW equations on compact Lie groups, in [35] for Zoll manifolds, in [31] for general flat tori, and in [51] for forced Kirchhoff equations. The existence of quasiperiodic solutions for autonomous nonlinear Klein–Gordon equations
\[ u_{tt} - \Delta u + u + u^{p+1} + \text{h.o.t.} = 0, \qquad p \in \mathbb N, \quad x \in \mathbb T^d, \tag{1.1.9} \]
has recently been presented by Wang [125], relying on a bifurcation analysis to study the modulation of the frequencies induced by the nonlinearity u^{p+1}, and on the multiscale methods of [42] to implement a Nash–Moser iteration. The result proves the continuation of quasiperiodic solutions supported on "good" tangential sites. Among all the works discussed above, the papers [23] and [24] on forced NLW and NLS equations are the most closely related to the present monograph. The passage to KAM results for autonomous nonlinear wave equations with a multiplicative potential, as in (1.1.1), is a nontrivial task. It requires a bifurcation analysis that distinguishes the tangential directions, where the major part of the oscillation of the quasiperiodic solutions takes place, from the normal ones; see the form (1.2.31) of the quasiperiodic solutions proved in Theorem 1.2.1. When the multiplicative potential V(x) changes, both the tangential and the normal frequencies vary simultaneously in an intricate way (unlike in the case of a convolution potential). This makes it difficult to verify the nonresonance conditions required by the Nash–Moser iteration. In particular, the choice of the parameters adopted in order to fulfill all these conditions is delicate. In this monograph we choose any finite set S ⊂ N_+ of tangential sites, we fix the potential V(x) and the coefficient function a(x) appearing in the nonlinearity (1.1.2) (in such a way that generic nonresonance and nondegeneracy conditions hold, see Theorem 1.2.3), and then we prove, in Theorem 1.2.1, the existence of quasiperiodic solutions of (1.1.1) for most values of the one-dimensional internal parameter λ introduced in (1.2.24). The parameter λ amounts just to a time rescaling of the frequency vector ω. We also deduce a density result for the frequencies of the quasiperiodic solutions close to the unperturbed vector μ̄. We shall explain the choice of this parameter in more detail in Section 2.5.
1.2 Statement of the main results

In this section we state in detail the main results of this monograph, which are Theorems 1.2.1 and 1.2.3. Under the rescaling u ↦ εu, ε > 0, the equation (1.1.1) is transformed into the NLW equation
\[ u_{tt} - \Delta u + V(x)u + \varepsilon^2 g(\varepsilon, x, u) = 0 \tag{1.2.1} \]
with the C^∞ nonlinearity
\[ g(\varepsilon, x, u) := \varepsilon^{-3} g(x, \varepsilon u) = a(x)u^3 + O(\varepsilon u^4). \tag{1.2.2} \]
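For the reader's convenience, the elementary computation behind (1.2.1)–(1.2.2) is spelled out here (an editorial addition, in the notation just introduced).

```latex
% Substituting u -> eps*u in (1.1.1) and dividing the equation by eps:
\[
u_{tt}-\Delta u+V(x)u+\varepsilon^{-1}g(x,\varepsilon u)=0 ,
\qquad
\varepsilon^{-1}g(x,\varepsilon u)
  =\varepsilon^{2}\,\varepsilon^{-3}g(x,\varepsilon u)
  =\varepsilon^{2}\bigl(a(x)u^{3}+O(\varepsilon u^{4})\bigr)
  =\varepsilon^{2}g(\varepsilon,x,u),
\]
% which is exactly (1.2.1) with the nonlinearity g(eps,x,u) of (1.2.2).
```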
Recall that we list the eigenvalues (μ_j²)_{j∈N} of −Δ + V(x) in increasing order, see (1.1.6), and that we choose a corresponding L² orthonormal sequence of eigenfunctions {Ψ_j}_{j∈N}. We choose arbitrarily a finite set of indices S ⊂ N, called the tangential sites. We denote by |S| ∈ N the cardinality of S and we list the tangential sites in increasing order, j_1 < ⋯ < j_{|S|}. We look for quasiperiodic solutions of (1.2.1) that are perturbations of normal mode oscillations supported on j ∈ S. We denote by
\[ \bar\mu := (\mu_j)_{j\in S} = (\mu_{j_1}, \ldots, \mu_{j_{|S|}}) \in \mathbb R^{|S|} \tag{1.2.3} \]
the frequency vector of the quasiperiodic solutions
\[ \sum_{j\in S} \sqrt{2\xi_j}\,\mu_j^{-1/2} \cos(\mu_j t)\,\Psi_j(x), \qquad \xi_j > 0, \tag{1.2.4} \]
of the linear wave equation (1.1.4). The components of μ̄ are called the unperturbed tangential frequencies and the (ξ_j)_{j∈S} the unperturbed actions. We shall call the indices in the complementary set S^c := N \ S the "normal" sites, and the corresponding μ_j, j ∈ S^c, the unperturbed normal frequencies. Since (1.2.1) is an autonomous PDE, the frequency vector ω ∈ R^{|S|} of its expected quasiperiodic solutions u(ωt, x) is an unknown that we introduce as an explicit parameter in the equation, looking for solutions u(φ, x), φ = (φ_1, ..., φ_{|S|}) ∈ T^{|S|}, of
\[ (\omega\cdot\partial_\varphi)^2 u - \Delta u + V(x)u + \varepsilon^2 g(\varepsilon, x, u) = 0. \tag{1.2.5} \]
The frequency vector ω ∈ R^{|S|} of the expected quasiperiodic solutions of (1.2.1) will be O(ε²)-close to the unperturbed tangential frequency vector μ̄ in (1.2.3), see (1.2.24)–(1.2.25). Since the NLW equation (1.2.1) is time-reversible (see Appendix A), it makes sense to look for solutions of (1.2.1) that are even in t. Since (1.2.1) is autonomous, additional solutions are obtained from these even solutions by time translation. Thus we look for solutions u(φ, x) of (1.2.5) that are even in φ. This induces a small simplification in the proof (see Remark 6.1.1). In order to prove, for ε small enough, the existence of solutions of (1.2.1) close to the solutions (1.2.4) of the linear wave equation (1.1.4), we first require nonresonance conditions on the unperturbed linear frequencies μ_j, j ∈ N, which will be verified for generic potentials V(x), see Theorem 1.2.3.

Diophantine and first-order Melnikov nonresonance conditions. We assume that:

• The tangential frequency vector μ̄ in (1.2.3) is Diophantine, i.e., for some constants γ_0 ∈ (0,1) and τ_0 > |S| − 1,
\[ |\bar\mu\cdot\ell| \ \geq\ \frac{\gamma_0}{\langle\ell\rangle^{\tau_0}}, \quad \forall \ell \in \mathbb Z^{|S|}\setminus\{0\}, \qquad \langle\ell\rangle := \max\{1, |\ell|\}, \tag{1.2.6} \]
where |ℓ| := max{|ℓ_1|, ..., |ℓ_{|S|}|}. Note that (1.2.6) implies, in particular, that the μ_j², j ∈ S, are simple eigenvalues of −Δ + V(x).
• The unperturbed first-order Melnikov nonresonance conditions hold:
\[ |\bar\mu\cdot\ell + \mu_j| \ \geq\ \frac{\gamma_0}{\langle\ell\rangle^{\tau_0}}, \qquad \forall \ell \in \mathbb Z^{|S|},\ j \in S^c. \tag{1.2.7} \]
The nonresonance conditions (1.2.6) and (1.2.7) imply, in particular, that the linear equation (1.1.4) has no quasiperiodic solutions with frequency μ̄, even in t, other than the trivial ones (1.2.4). In order to prove the separation properties of the small divisors required by the multiscale analysis that we perform in Chapter 5, we also require, as in [23], that:

• The tangential frequency vector μ̄ in (1.2.3) satisfies the quadratic Diophantine condition
\[ \Big| n + \sum_{i,j\in S,\, i\leq j} p_{ij}\,\mu_i\mu_j \Big| \ \geq\ \frac{\gamma_0}{\langle p\rangle^{\tau_0}}, \qquad \forall (n,p) \in \mathbb Z\times\mathbb Z^{\frac{|S|(|S|+1)}{2}} \setminus\{(0,0)\}. \tag{1.2.8} \]
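As with (0.0.7) in the preface, the quadratic condition (1.2.8) can be explored numerically. The sketch below is an editorial illustration: the actual μ̄ depends on V(x) and is not available in closed form, so a rationally independent placeholder vector is used instead, together with arbitrary values of τ_0 and of the truncation.

```python
import itertools

# Empirical look at the quadratic Diophantine condition (1.2.8) for a sample
# vector; mu_bar below is a placeholder, not the spectrum of some -Delta + V(x).
mu_bar = [2.0 ** 0.25, 3.0 ** 0.25]          # |S| = 2
tau0, N = 3.0, 10                            # exponent and truncation

prods = [mu_bar[i] * mu_bar[j]               # the products mu_i mu_j, i <= j
         for i in range(2) for j in range(i, 2)]
gamma0 = min(
    abs(n + sum(pij * g for pij, g in zip(p, prods)))
    * max(1, max(abs(pij) for pij in p)) ** tau0
    for n in range(-N, N + 1)
    for p in itertools.product(range(-N, N + 1), repeat=len(prods))
    if (n, *p) != (0, 0, 0, 0)
)
print("smallest rescaled divisor in (1.2.8) up to the truncation:", round(gamma0, 6))
```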
The nonresonance conditions (1.2.6), (1.2.7), and (1.2.8) are assumptions on the potential V(x) that are "generic" in the sense of Kolmogorov measure. In [96], (1.2.6) and (1.2.7) are proved to hold for most potentials. Genericity results are stated in Theorem 1.2.3 and proved in Chapter 13. We emphasize that throughout the monograph the constant γ_0 ∈ (0,1) in (1.2.6), (1.2.7), and (1.2.8) is regarded as fixed, and we shall often omit to track its dependence in the estimates.

Birkhoff matrices. We are interested in quasiperiodic solutions of (1.2.1) that bifurcate for small ε > 0 from a solution of the form (1.2.4) of the linear wave equation. In order to prove their existence, it is important to know precisely how the tangential and the normal frequencies change with respect to the unperturbed actions (ξ_j)_{j∈S} under the effect of the nonlinearity ε²a(x)u³ + O(ε³u⁴). This is described in terms of the Birkhoff matrices
\[ \mathbb A := \big( \mu_j^{-1}\, G_j^k\, \mu_k^{-1} \big)_{j,k\in S}, \qquad \mathbb B := \big( \mu_j^{-1}\, G_j^k\, \mu_k^{-1} \big)_{j\in S^c,\, k\in S}, \tag{1.2.9} \]
where, for any j, k ∈ N,
\[ G_j^k := G_j^k(V,a) := \begin{cases} \tfrac{3}{2}\,\big(\Psi_j^2,\, a(x)\Psi_k^2\big)_{L^2}, & j \neq k, \\[1mm] \tfrac{3}{4}\,\big(\Psi_j^2,\, a(x)\Psi_j^2\big)_{L^2}, & j = k, \end{cases} \tag{1.2.10} \]
and the Ψ_j(x) are the eigenfunctions of −Δ + V(x) introduced in (1.1.5). Note that the matrix (G_j^k) depends on the coefficient a(x) and on the eigenfunctions Ψ_j, and thus on the potential V(x). The |S| × |S| symmetric matrix A is called the "twist" matrix. The matrices A, B describe the shift of the tangential and normal frequencies induced by the nonlinearity a(x)u³, as they appear in the fourth-order Birkhoff normal form of
(1.1.1) and (1.1.2). Actually, we prove in Section 6.2 that, up to terms O(ε⁴), the tangential frequency vector ω of a small-amplitude quasiperiodic solution of (1.1.1)–(1.1.2) close to (1.2.4) is given by the action-to-frequency map
\[ \xi \mapsto \bar\mu + \varepsilon^2 \mathbb A\,\xi, \qquad \xi \in \mathbb R^{|S|}_+, \quad \mathbb R_+ := (0, +\infty). \tag{1.2.11} \]
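The following numerical sketch (an editorial illustration; the grid, the potential V, the coefficient a, and the tangential sites are sample choices, and the model is 1-dimensional) assembles the matrix G_j^k of (1.2.10) and the twist matrix A of (1.2.9) for a finite-difference model, and evaluates the approximate action-to-frequency map (1.2.11).

```python
import numpy as np

# Illustrative computation of the Birkhoff matrices (1.2.9)-(1.2.10) and of the
# action-to-frequency map (1.2.11) for a 1-dimensional finite-difference model.
M = 128
x = 2 * np.pi * np.arange(M) / M
h = 2 * np.pi / M
V = 1.0 + 0.3 * np.cos(x)               # sample potential
a = 1.0 + 0.5 * np.sin(x)               # sample coefficient a(x)

lap = (-2 * np.eye(M) + np.eye(M, k=1) + np.eye(M, k=-1)
       + np.eye(M, k=M - 1) + np.eye(M, k=-(M - 1))) / h**2
evals, Psi = np.linalg.eigh(-lap + np.diag(V))
Psi /= np.sqrt(h)                       # L^2-normalize on the grid
mu = np.sqrt(evals)

def G(j, k):                            # (1.2.10), with (f, g)_{L^2} ~ h * sum(f*g)
    c = 1.5 if j != k else 0.75
    return c * h * np.sum(Psi[:, j] ** 2 * a * Psi[:, k] ** 2)

S = [1, 3]                              # sample tangential sites
A = np.array([[G(j, k) / (mu[j] * mu[k]) for k in S] for j in S])   # (1.2.9)
eps, xi = 0.1, np.array([1.0, 2.0])
omega = mu[S] + eps**2 * A @ xi         # action-to-frequency map (1.2.11)
print("twist matrix A:\n", np.round(A, 4), "\nomega =", np.round(omega, 4))
```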
On the other hand, the perturbed normal frequencies are shifted by the matrix B, as described in Lemma 8.3.2. We assume the

(Twist condition)
\[ \det \mathbb A \neq 0, \tag{1.2.12} \]
and therefore the action-to-frequency map in (1.2.11) is invertible. The nondegeneracy, or twist, condition (1.2.12) is satisfied for generic V(x) and a(x), as stated in Theorem 1.2.3 (see in particular Corollary 13.1.10 and Remark 13.1.11).

Second-order Melnikov nonresonance conditions. We also assume second-order Melnikov nonresonance conditions, which concern only finitely many unperturbed normal frequencies. We must first introduce an important decomposition of the normal indices j ∈ S^c. Note that, since μ_j → +∞, the indices j ∈ S^c such that μ_j − (BA^{-1}μ̄)_j < 0 are finitely many. Let 𝔤 ∈ R be defined by
\[ \mathfrak g := -\min\big\{ \mu_j - (\mathbb B\mathbb A^{-1}\bar\mu)_j \,:\, j \in S^c \big\}. \tag{1.2.13} \]
We decompose the set of normal indices as
\[ S^c = F \cup G, \qquad G := S^c \setminus F, \tag{1.2.14} \]
where
\[ F := \big\{ j \in S^c : |\mu_j - (\mathbb B\mathbb A^{-1}\bar\mu)_j| \leq \mathfrak g \big\}, \qquad G := \big\{ j \in S^c : \mu_j - (\mathbb B\mathbb A^{-1}\bar\mu)_j > \mathfrak g \big\}. \tag{1.2.15} \]
The set F is always finite, and it is empty if 𝔤 < 0. The relevance of the decomposition (1.2.14) of the normal sites concerns the variation of the normal frequencies with respect to the length of the tangential frequency vector, as we describe in (2.5.26) below; see also Lemma 8.1.1. If all the numbers μ_j − (BA^{-1}μ̄)_j, j ∈ S^c, were positive, then by (2.5.26) the growth rates of the eigenvalues of the linear operators that we need to invert in the Nash–Moser iteration would all have the same sign. This would allow us to obtain the measure estimates by simple arguments, as in [23] and [24], where the forced case is considered. In general 𝔤 > 0, and we shall be able to decouple, for most values of the parameter λ, the linearized operators obtained at each step of the nonlinear Nash–Moser iteration, acting in the normal subspace H_S^⊥, along H_F and its orthogonal H_F^⊥ = H_G, defined in (2.5.12). We discuss the relevance of this decomposition in Section 2.5.
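A short sketch of the decomposition (1.2.13)–(1.2.15) follows (an editorial illustration: the values of μ, A, and B below are placeholders, not computed from a potential). With these particular numbers the minimum in (1.2.13) is positive, so 𝔤 < 0 and F turns out to be empty, which is the simpler of the two cases discussed above.

```python
import numpy as np

# Sketch of the decomposition (1.2.13)-(1.2.15) of the normal sites with
# placeholder data chosen only to exercise the definitions.
S = [1, 3]                                    # tangential sites
Sc = [0, 2, 4, 5, 6, 7]                       # a finite sample of normal sites
mu = {j: np.sqrt(2.0 + j) for j in range(8)}  # placeholder frequencies mu_j
A = np.array([[0.8, 0.1], [0.1, 0.9]])        # placeholder twist matrix (1.2.9)
B = 0.05 * np.ones((len(Sc), len(S)))         # placeholder matrix B

mu_bar = np.array([mu[j] for j in S])
shift = B @ np.linalg.solve(A, mu_bar)        # (B A^{-1} mu_bar)_j, j in Sc
vals = {j: mu[j] - shift[i] for i, j in enumerate(Sc)}

g = -min(vals.values())                                  # (1.2.13)
F = [j for j in Sc if abs(vals[j]) <= g]                 # (1.2.15), finite set
G = [j for j in Sc if vals[j] > g]                       # (1.2.15)
print("g =", round(g, 3), " F =", F, " G =", G)
```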
We assume the following unperturbed "second-order Melnikov" nonresonance conditions (part 1):
\[ |\bar\mu\cdot\ell + \mu_j - \mu_k| \ \geq\ \frac{\gamma_0}{\langle\ell\rangle^{\tau_0}}, \qquad \forall (\ell,j,k) \in \mathbb Z^{|S|}\times F\times S^c, \ (\ell,j,k) \neq (0,j,j), \tag{1.2.16} \]
\[ |\bar\mu\cdot\ell + \mu_j + \mu_k| \ \geq\ \frac{\gamma_0}{\langle\ell\rangle^{\tau_0}}, \qquad \forall (\ell,j,k) \in \mathbb Z^{|S|}\times F\times S^c. \tag{1.2.17} \]
Note that (1.2.16) implies, in particular, that the finitely many eigenvalues μ_j² of −Δ + V(x), j ∈ F, are simple (clearly all the other eigenvalues μ_j², j ∈ G, could be highly degenerate). In order to verify a key positivity property for the variations of the restricted linearized operator with respect to λ (Lemma 10.3.8), we further assume the unperturbed second-order Melnikov nonresonance conditions (part 2):
\[ |\bar\mu\cdot\ell + \mu_j - \mu_k| \ \geq\ \frac{\gamma_0}{\langle\ell\rangle^{\tau_0}}, \qquad \forall (\ell,j,k) \in \mathbb Z^{|S|}\times(\mathtt M\setminus F)\times S^c, \ (\ell,j,k) \neq (0,j,j), \tag{1.2.18} \]
\[ |\bar\mu\cdot\ell + \mu_j + \mu_k| \ \geq\ \frac{\gamma_0}{\langle\ell\rangle^{\tau_0}}, \qquad \forall (\ell,j,k) \in \mathbb Z^{|S|}\times(\mathtt M\setminus F)\times S^c, \tag{1.2.19} \]
where
\[ \mathtt M := \{ j \in S^c : \mu_j \leq C_1 \} \tag{1.2.20} \]
and the constant C1 WD C1 .V; a/ > 0 is taken large enough so that F M and (8.1.8) holds. Clearly the conditions (1.2.16)–(1.2.19) could have been written together, requiring such conditions for j 2 M without distinguishing the cases j 2 F and j 2 M n F. However, for conceptual clarity, in view of their different role in the proof, we prefer to state them separately. The above conditions (1.2.16)–(1.2.19) on the unperturbed frequencies allow us to perform one step of averaging and so to diagonalize, up to O."4 /, the normal frequencies supported on M, see Proposition 8.3.1. This is the only step where conditions (1.2.18) and (1.2.19) play a role. Conditions (1.2.16) and (1.2.17) are also used in the splitting step of Chapter 10, see Lemma 10.3.3. Conditions (1.2.16)–(1.2.19) depend on the potential V .x/ and also on the coefficient a.x/, because the constant C1 in (1.2.20) (hence the set M) depends on kakL1 and kA1 k. Given .V0 ; a0 / such that the matrix A defined in (1.2.9) is invertible and s > d=2, the set M can be chosen constant in some open neighborhood U of .V0 ; a0 / in H s . In U , conditions (1.2.16)–(1.2.19) are generic for V .x/, as it is proved in Chapter 13 (see Theorem 1.2.3).
12
1 Introduction
Nondegeneracy conditions. We also require the following finitely many Nondegeneracy conditions: j BA 1 N j k BA 1 N k ¤ 0 ; 8j; k 2 F; j ¤ k ; 1 1 j BA N j C k BA N k ¤ 0; 8j; k 2 F ;
(1.2.21) (1.2.22)
where A and B are the Birkhoff matrices defined in (1.2.9). Such assumptions are similar to the nondegeneracy conditions required for the continuation of elliptic tori for finite-dimensional systems in [56, 111] and for PDEs in [17, 103, 113]. Note that the finitely many nondegeneracy conditions (1.2.21) depend on V .x/ and a.x/ and we prove in Theorem 1.2.3 that they are generic in .V; a/. Parameter. We now introduce the one-dimensional parameter that we shall use to perform the measure estimates. In view of (1.2.11) the frequency vector ! has to belong to the cone of the “admissible” frequencies N C "2 A .RjSj C /, more precisely we require that ! belongs to the image
1 2 A WD N C " A W j 4; 8j 2 S RjSj (1.2.23) 2 of the compact set of actions 2 Œ1=2; 4jSj under the approximate action-to-frequency map (1.2.11). Then, in view of the method that we shall use for the measure estimates for the linearized operator, we look for quasiperiodic solutions with frequency vector ! D 1 C "2 !N " ; 2 ƒ WD Œ0 ; 0 ; (1.2.24) constrained to a fixed admissible direction !N " WD N C "2 ;
2 A Œ1; 2jSj :
(1.2.25)
N because D 0 might not belong to Note that in general we can not choose !N " D , A .Œ1; 2jSj/. We fix below so that the Diophantine conditions (1.2.29) and (1.2.30) hold. In (1.2.24) there exists 0 > 0 small, independent of " > 0 and of 2 A .Œ1; 2jSj /, such that, 8 2 ƒ WD Œ0 ; 0 ; ! D 1 C "2 !N " 2 A (1.2.26) are still admissible (see (1.2.23)) and, using (1.2.25), 1 C "2 !N " D N C "2 A ./ ” WD ./ D 1 C "2 A 1 C A 1 N :
(1.2.27)
We shall use the one-dimensional “parameter” 2 ƒ WD Œ0 ; 0 in order to verify all the nonresonance conditions required for the frequency vector ! in the proof of Theorem 1.2.1.
1.2 Statement of the main results
13
For " small and fixed, we take the vector such that the direction !N " in (1.2.25) still verifies Diophantine conditions like (1.2.6) and (1.2.8) with the different exponents 1 WD 0 =2; 1 WD 3 0 C jSj.jSj C 1/ C 5 > 0 ; (1.2.28) namely 1 ; 8` 2 ZjSj n f0g ; j!N " `j h`i1 ˇ ˇ ˇ ˇ X 1 ˇ ˇ n C p . ! N / . ! N / ; ˇ ij " i " jˇ ˇ ˇ hpi1
(1.2.29) 8.n; p/ 2 Z Z
jSj.jSjC1/ 2
n f.0; 0/g :
1i j jSj
(1.2.30) This is possible by Lemma 3.3.1. Actually, the vector !N " D N C "2 satisfies (1.2.29) and (1.2.30) for all 2 A .Œ1; 2jSj/ except a small set of measure O."/. In (1.2.30), we denote, for i D 1; : : : ; jSj, the i -component .!N " /i D ji C"2 i , where j1 ; : : : ; jjSj are the tangential sites ordered according to (1.2.3). Main result. We may now state in detail the main result of this monograph, concerning the existence of quasiperiodic solutions of the NLW equation (1.1.1). Let Hs denote the standard Sobolev space Hs .TjSj Td I R/ of functions uW TjSj d T ! R, with norm k ks , see (1.3.1). Theorem 1.2.1 (Quasiperiodic solutions for the NLW equation (1.1.1)). Let a be a function in C 1 .Td ; R/. Take V in C 1 .Td ; R/ such that C V .x/ ˇId for some ˇ > 0 (see (1.1.3)) and let f‰j gj 2N be an L2 basis of eigenfunctions of C V .x/, with eigenvalues 2j written in increasing order and with multiplicity as in (1.1.6). Fix finitely many tangential sites S N. Assume the following conditions: (i) The unperturbed frequency vector N D .j /j 2S 2 RjSj in (1.2.3) satisfies the Diophantine conditions (1.2.6) and (1.2.8); (ii) The unperturbed first- and second-order Melnikov nonresonance conditions (1.2.7) and (1.2.16)–(1.2.19) hold; (iii) The Birkhoff matrix A defined in (1.2.9) satisfies the twist condition (1.2.12); (iv) The finitely many nondegeneracy conditions (1.2.21) and (1.2.22) hold. Finally, fix a vector !N " WD N C "2 ;
2 A Œ1; 2jSj RjSj ;
such that the Diophantine conditions (1.2.29) and (1.2.30) hold.
14
1 Introduction
Then the following holds: 1. There exists a Cantor-like set G"; ƒ (with ƒ defined as in (1.2.26)) with asymptotically full measure, i.e., ˇ ˇ ˇƒ n G"; ˇ ! 0 as " ! 0 I 2. For all 2 G"; , there exists a solution u"; 2 C 1 .TjSj Td I R/ of the equation (1.2.5), of the form X 1=2 q j 2j cos.'j /‰j .x/ C r" .'; x/ ; (1.2.31) u";.'; x/ D j 2S
which is even in ', where
! D 1 C "2 !N " ;
!N " D N C "2 ;
and WD ./ 2 Œ1=2; 4S is given in (1.2.27). For any s s0 > .jSj C d /=2, the remainder term r" satisfies kr" ks ! 0 as " ! 0. As a consequence "u"; .!t; x/ is a quasiperiodic solution of the NLW equation (1.1.1) with frequency vector ! D .1 C "2 /!N " . Theorem 1.2.1 is a direct consequence of Theorem 6.1.2. Note that the nonresonance conditions (i), (ii) in Theorem 1.2.1 depend on the tangential sites S and the potential V .x/, whereas the nondegeneracy conditions (iii), (iv) depend on S; V .x/, the choice of the basis f‰j gj 2N of eigenvectors of C V .x/, and the coefficient a.x/ in the nonlinear term g.x; u/ D a.x/u3 C O.u4 / in (1.1.2). In Section 2.5 we shall provide a detailed presentation of the strategy of proof of Theorem 1.2.1. Let us now make some comments on this result. 1. Measure estimate of G"; . The speed of convergence of jƒ n G"; j to 0 does not depend on . More precisely ( 0 ; 1 ; 0 ; 1 being fixed) there is a map " 7! b."/, satisfying lim b."/ D 0, such that, for all 2 A .Œ1; 2jSj / with "!0
!N " D N C "2 satisfying the Diophantine conditions (1.2.29) and (1.2.30), it follows that jƒ n G"; j b."/. 2. Density. Integrating in along all possible admissible directions !N " in (1.2.25), we deduce the existence of quasiperiodic solutions of (1.1.1) for a set of frequency vectors ! of positive measure. More precisely, defining the convex subsets of RjSj ,
C2 WD N C RC C1 ; n o C1 WD A Œ1; 2jSj C ƒN WD C I N 2 A Œ1; 2jSj ; 2 ƒ ;
(1.2.32)
the set of the frequency vectors ! of the quasiperiodic solutions of (1.1.1) provided by Theorem 1.2.1 has Lebesgue density 1 at N in C2 , i.e., lim
r!0C
j \ C2 \ B.; N r/j D 1; jC2 \ B.; N r/j
(1.2.33)
1.2 Statement of the main results
15
where B.; N r/ denotes the ball in RjSj of radius r centered at N (see the proof of (1.2.33) after Theorem 6.1.2). Moreover, we restrict ourself to 2 A .Œ1; 2jSj / just to fix ideas, and we could replace this condition by 2 A .Œr; RjSj /, for any 0 < r < R (at the cost of stronger smallness conditions for 0 and " if r is small and R is large). Therefore we could obtain a similar density result with C20 WD N C A .RjSj C / instead of C2 . 3. Lipschitz dependence. The solution u"; is a Lipschitz function of 2 G"; with values in Hs , for any s s0 . 4. Regularity. Theorem 1.2.1 also holds if the nonlinearity g.x; u/ and the potential V .x/ in (1.1.1) are of class C q for q large enough, proving the existence of a solution u"; in the Sobolev space HsN for some finite sN > s0 > .jSj C d /=2 (see Remark 12.2.9). Theorem 1.2.3 below proves that, for any choice of finitely many tangential sites S N, all the nonresonance and nondegeneracy assumptions (i)–(iv) required in Theorem 1.2.1 are verified for generic potentials V .x/ and coefficients a.x/ in the nonlinear term g.x; u/ D a.x/u3 C O.u4 / in (1.1.2). In order to state a precise result, we introduce the following definition. Definition 1.2.2 (C 1 -dense open). Given an open subset U of H s .Td / (resp. H s .Td / H s .Td /) a subset V of U is said to be C 1 -dense open in U if 1. V is open for the topology defined by the H s .Td /-norm, 2. V is C 1 -dense in U , in the sense that, for any w 2 U , there is a sequence .hn / 2 C 1 .Td / (resp. C 1 .Td / C 1 .Td /) such that w C hn 2 V , for all n 2 N, and hn ! 0 in H r for any r 0. Let s > d=2 and define the subset of potentials n o P WD V 2 H s Td W C V .x/ > 0 :
(1.2.34)
The set P is open in H s .Td / and convex, thus connected. Given a finite subset S N of tangential sites, consider the set Gz of potentials V .x/ and coefficients a.x/ such that the nonresonance and nondegeneracy conditions (i)–(iv) required in Theorem 1.2.1 hold, namely n o Gz WD .V; a/ 2 P H s Td W (i)–(iv) in Theorem 1.2.1 hold : (1.2.35) Note that the nondegeneracy properties (iii), (iv) may depend on the choice (1.1.5) of the basis f‰j gj 2N of eigenfunctions of C V .x/ (if some eigenvalues are not simple). In the above definition of Gz, it is understood that properties (i)–(iv) hold for some choice of the basis f‰j gj 2N . Given a subspace E of L2 .Td /, we denote by E ? WD E ?L2 its orthogonal complement with respect to the L2 scalar product.
16
1 Introduction
Theorem 1.2.3 (Genericity). Let s > d=2. The set Gz \ C 1 Td C 1 Td is C 1 -dense in P \ C 1 Td C 1 Td (1.2.36) where P H s .Td / is the open and connected set of potentials V .x/ defined in (1.2.34). More precisely, there is a C 1 -dense open subset G of P H s .Td / and a jSjdimensional linear subspace E of C 1 .Td / such that, for all v2 .x/ 2 E ? \ H s .Td /, a.x/ 2 H s .Td /, the Lebesgue measure (on the finite-dimensional space E ' RjSj ) ˇ˚ ˇˇ ˇ z v 2 EW .v C v ; a/ 2 G n G (1.2.37) ˇ 1 ˇ D 0: 1 2 Theorem 1.2.3 is proved in Chapter 13. For the convenience of the reader, we provide in the next chapter a nontechnical survey of the main methods and results in KAM theory for PDEs.
1.3 Basic notation We collect here some basic notation used throughout the monograph. For k D .k1 ; : : : ; kq / 2 Zq , we set jkj WD maxfjk1 j; : : : ; jkq jg ; and, for .`; j / 2 ZjSj Zd , h`; j i WD max.j`j; jj j; 1/ : For s 2 R we denote the Sobolev spaces
Hs WD Hs .TjSj Td I Cr / ( X WD u.'; x/ D
u`;j ei.`'Cj x/ W u`;j 2 Cr ;
.`;j /2ZjSj Zd
kuk2s WD
X
)
(1.3.1)
ju`;j j2 h`; j i2s < 1
.`;j /2ZjSjCd
and we also use the same notation Hs for the subspace of real-valued functions. Moreover we denote by H s WD Hxs the Sobolev space of functions u.x/ in H s .Td ; C/ and H's the Sobolev space of functions u.'/ in H s .TjSj ; C/. We denote by b WD jSj C d . For s s0 > .jSj C d /=2 ; (1.3.2)
1.3 Basic notation
17
we have the continuous embedding Hs TjSj Td ,! C 0 TjSj Td and each Hs is an algebra with respect to the product of functions. Let E be a Banach space. Given a continuous map uW TjSj ! E, ' 7! u.'/, we denote by b u.`/ 2 E, ` 2 ZjSj , its Fourier coefficients Z 1 b u.`/ WD u.'/ei`' d' ; .2/jSj TjSj and its average 1 hui WD b u.0/ WD .2/jSj
Z TjSj
u.'/ d' :
Given a nonresonant vector ! 2 RjSj , i.e., ! ` ¤ 0, 8` 2 ZjSj n f0g, and a function g.'/ 2 RjSj with zero average, we define the solution h.'/ of ! @' h D g, with zero average, X b g .`/ i`' e : h.'/ D .! @' /1 g WD (1.3.3) i! ` jSj `2Z
nf0g
Let E be a Banach space with norm k kE . Given a function f W ƒ WD Œ0 ; 0 R ! E we define its Lipschitz norm kf kLip WD kf kLip;ƒ WD kf kLip;E WD sup kf kE C jf jlip ; 2ƒ
jf jlip WD jf jlip;ƒ WD jf jlip;E WD
sup 1 ;2 2ƒ;1 ¤2
kf .2 / f .1 /kE : j2 1 j
(1.3.4)
z ƒ ! E is defined only on a subset ƒ z of ƒ we shall still denote If a function f W ƒ by kf kLip WD kf kLip;ƒ z WD kf kLip;E the norm in (1.3.4), where the sup-norm and z without specifying explicitly the domain the Lipschitz seminorm are intended in ƒ, z ƒ. If the Banach space E is the Sobolev space Hs then we denote more simply k kLip;Hs D k kLip;s . If E D R then k kLip;R D k kLip . If A./ is a function, operator, . . . , that depends on a parameter , we shall use the following notation for the partial quotient A A.2 / A.1 / ; WD 2 1
81 ¤ 2 :
(1.3.5)
Given a family of functions, or linear self-adjoint operators A./ on a Hilbert space z we shall use the notation H , defined for all 2 ƒ, d A./ ˇId
”
A ˇId;
z 1 ¤ 2 ; 81 ; 2 2 ƒ;
(1.3.6)
18
1 Introduction
where, for a self-adjoint operator, A ˇId means as usual .Aw; w/H ˇkwk2H ;
8w 2 H :
Given linear operators A; B we denote their commutator by AdA B WD ŒA; B WD AB BA : p p We define DV WD C V .x/ and Dm WD C m for some m > 0.
(1.3.7)
Given x 2 R we denote by dxe the smallest integer greater than or equal to x, and by Œx the integer part of x, i.e., the greatest integer smaller than or equal to x; N D f0; 1; : : :g denote the natural numbers, and NC D f1; : : :g the positive integers; Given L 2 N, we denote by ŒŒ0; L the integers in the interval Œ0; L; We use the notation a .s b to mean a C.s/b for some positive constant C.s/, and a s b means that C1 .s/b a C2 .s/b for positive constants C1 .s/; C2 .s/; Given functions a; bW .0; "0 / ! R we write a."/ b."/
”
lim
"!0
a."/ D 0: b."/
(1.3.8)
In the Monograph we denote by S, F, G, M subsets of the natural numbers N, with N D S [ Sc ;
F [ G D Sc ;
F \ G D ;;
F M:
We refer to Chapter 4 for the detailed notation of operators, matrices, decay norms, etc. For simplicity of notation we may write either 2 RS or 2 RjSj . Throughout the monograph we shall use the letter j to denote a space index: it may be j in N, when we use the L2 basis f‰j gj 2N defined in (1.1.5); j D .j1 ; : : : ; jd / in Zd , when we use the exponential basis feij x gj 2Zd .
Chapter 2
KAM for PDEs and strategy of proof In this chapter we first describe the basic ideas of a Newton–Nash–Moser algorithm to prove the existence of quasiperiodic solutions. The key step consists in the analysis of the linearized operators obtained at each step of the iteration with the aim of proving that they are approximately invertible, for most values of suitable parameters, with quantitative tame estimates for the inverse in high norms. Then we describe the two main approaches that have been developed for achieving this task for PDEs: 1. The “reducibility” approach, which we describe in Section 2.2, with its main applications to KAM theory (Section 2.3); 2. The “multiscale” approach, presented in Section 2.4. Finally, in Section 2.5, we shall provide a detailed account of the proof of Theorem 1.2.1.
2.1 The Newton–Nash–Moser algorithm The classical implicit function theorem is concerned with the solvability of the equation F .x; y/ D 0 (2.1.1) where F W X Y ! Z is a smooth map, X , Y , Z are Banach spaces, and there exists .x0 ; y0 / 2 X Y such that
F .x0 ; y0 / D 0 : If x is close to x0 , we want to solve (2.1.1) finding y D y.x/. The main assumption of the classical implicit function theorem is that the partial derivative .dy F /.x0 ; y0 /W Y ! Z possesses a bounded inverse. Note that in this case, by a simple perturbative argument, .dy F /.x; y/ possesses a bounded inverse for any .x; y/ in some neighborhood of .x0 ; y0 /. However there are many situations where .dy F /.x0 ; y0 / has an unbounded inverse ; for example due to small divisors. An approach to this class of problems has been proposed by Nash [108] for the isometric embedding problem of a Riemannian manifold. Subsequently, Moser [104]
20
2 KAM for PDEs and strategy of proof
proposed an abstract modified algorithm in scales of Banach spaces, with new applications in small divisor problems [105–107]. Further extensions and improvements have been obtained by Zehnder [128, 129], see also [109], and H¨ormander [88, 89], see also Hamilton [85] and the more recent papers [9, 26, 63]. There are many variants to present the Nash–Moser theorems, according to the applications one has in mind. The main idea in order to find a solution of the nonlinear equation F .x; y/ D 0 is to replace the usual Picard iteration method with a modified Newton iterative scheme. One starts with a sufficiently good approximate solution .x0 ; y0 /, i.e., F .x0 ; y0 / is small enough. Then, given an approximate solution y, we look for a better approximate solution of the equation F .x; y/ D 0, y0 D y C h ; by a Taylor expansion (for simplicity we omit to write the dependence on x) F y 0 D F .y C h/ D F .y/ C dy F .y/Œh C O khk2 : (2.1.2) The idea of the classical Newton iterative scheme is to define h as the solution of
F .y/ C dy F .y/Œh D 0 :
(2.1.3)
The main difficulty of this scheme is to prove that dy F .y/ is invertible, for any y in a neighborhood of the unperturbed solution y0 , in order to define the new correction h D .dy F .y//1F .y/ :
(2.1.4)
The invertibility (in a weak sense) of dy F .y/ for y close enough to y0 is not a simple consequence of the existence of an unbounded inverse of dy F .y0 /. In addition, working in scales of Banach spaces, one has to prove that the inverse .dy F .y//1 satisfies suitable bounds in high norms, with a “loss of derivatives”, of course. The loss of derivatives roughly means that to estimate .dy F .y//1 Œh in some norm, one must use a higher norm for h (and possibly for y). On the other hand, the main advantage of the scheme (2.1.2)–(2.1.4) is that it is quadratic, namely, we have F y 0 D O khk2 D O kF .y/k2 : As a consequence, the iterates will converge to the expected solution at a superexponential rate, compensating the divergences in the scheme due to the loss of derivatives. For the analytical aspects of the convergence in scales of Banach spaces of analytic functions, we refer, for example, to Zehnder [109] (Section 6.1). A Newton scheme of this kind was also used by Kolmogorov [99] and Arnold [2] to prove their celebrated persistence result of quasiperiodic solutions in nearly integrable analytic Hamiltonian systems. When one works in scales of Banach spaces of functions with finite differentiability, like C k or Sobolev spaces H s , the Newton scheme (2.1.4) will not converge because the inverse .dy F .y//1 is unbounded, say .dy F .y//1W H s 7! H s for
2.1 The Newton–Nash–Moser algorithm
21
some > 0 (it “loses” derivatives) and therefore, after finitely many steps, the approximate solutions are no longer regular. Later, Moser [104] suggested performing a smoothing of the approximate solutions obtained at each step of the Newton scheme, like a Fourier series truncation with an increasing number of harmonics, see the smoothing operators (5.1.11). The convergence of the iterative scheme is then guaranteed, requiring that the inverse .dy F .y//1 satisfies “tame” estimates. We state them precisely in the case that F acts on a scale .H s /ss0 of Sobolev spaces with norms k ks : there are constants p,
> 0 (loss of derivatives) such that, for any s 2 Œs0 ; S , for all g 2 H sC , .dy F .y//1 g C.s; kyks Cp / kgksC C kgks kyksC : (2.1.5) 0 0 s These tame estimates are sufficient for the convergence of the Nash–Moser iterative scheme, requiring that the initial datum kF .x0 ; y0 /ks1 is small enough for some s1 WD s1 .p; / > s0 . For the analytical aspects of the convergence we refer, for example, to Moser [106], Zehnder [109, Chapter 6], [21], and [26]. Remark 2.1.1 (Parameters). If the nonlinear operator F .I x; y/ depends on some parameter , tame estimates of the form (2.1.5) may hold for “most” values of . Thus the solution y.xI / of the implicit equation F .I x; y/ D 0 exists for most values of the parameters . This is the typical situation that one encounters in small divisor problems, as in the present monograph, see the nonlinear operator (6.1.2). Remark 2.1.2 (Weaker tame estimates). Weaker tame estimates of the form (2.4.5) or (2.4.6) are sufficient for the convergence of the Nash–Moser scheme, see [23, 24, 26]. In this monograph we shall indeed verify tame estimates of this weaker type. Before concluding this section we mention a variant of the above Newton–Nash– Moser scheme (2.1.2)–(2.1.4) (that we shall employ). Zehnder [128] noted that it is sufficient to find only an approximate right inverse of dy F .y/, namely a linear operator T .y/ such that dy F .y/ ı T .y/ Id D O kF .y/k : (2.1.6) Remark that, at a solution y of the equation F .y/ D 0, the operator T .y/ is an exact right inverse of dy F .y/. Then we define the new approximate solution y 0 D y C h;
h WD T .y/F .y/ ;
obtaining, by (2.1.6), that F .y 0 / D F .y/ dy F .y/ ŒT .y/F .y/ C O khk2 D O kF .y/k2 : This is still a quadratic scheme. Naturally, for ensuring the convergence of this iterative scheme in a scale of Banach spaces with finite differentiability, the approximate inverse T .y/ has to satisfy tame estimates as in (2.1.5).
22
2 KAM for PDEs and strategy of proof
The above modification of the Newton–Nash–Moser iterative scheme is useful because the invertibility of dy F .y/ may be a difficult task, but the search for an approximate right inverse T .y/ may be much simpler. In Chapter 7 we shall indeed implement this device for the nonlinear operator F defined in (6.1.2). As we outlined in this section, the core of a Nash–Moser scheme concerns the invertibility of the linearized operators that arise at each step of the iteration with tame estimates like (2.1.5), or (2.4.5) and (2.4.6) for its inverse. In the next sections we present the two major sets of ideas and techniques that have been employed so far to prove the invertibility of the linearized operators that arise in the search for quasiperiodic solutions of PDEs. These are: 1. The “reducibility” approach, which we present in Section 2.2 with its main applications to KAM theory for PDEs (Section 2.3); 2. The “multiscale” approach, described in Section 2.4, which is the basis for the proof of the main result of this monograph, Theorem 1.2.1.
2.2 The reducibility approach to KAM for PDEs The goal of this section is to present the perturbative reducibility approach for a timequasiperiodic linear operator (Subsection 2.2.2) and then describe its main applications to KAM theory for PDEs (Section 2.3).
2.2.1 Transformation laws and reducibility Consider a quasiperiodically time-dependent linear system ut C A.!t/u D 0;
u2H:
(2.2.1)
Here the phase space H may be a finite- or infinite-dimensional Hilbert space with scalar product h ; i; A (smoothly) maps any ' 2 T , 2 NC , to a linear operator A.'/ acting on H ; ! 2 R nf0g is the frequency vector. We suppose that ! is a nonresonant vector, i.e., ! ` ¤ 0, 8` 2 Z n f0g, thus any trajectory .'0 C !t/t 2R of the linear flow spanned by ! is dense in the torus T . Assume that ˆ (smoothly) maps any ' 2 T to an invertible bounded linear operator ˆ.'/W H ! H . Under the quasiperiodically time-dependent transformation u D ˆ.!t/Œv ;
(2.2.2)
vt C B.!t/v D 0
(2.2.3)
B.'/ D ˆ1 .'/.! @' ˆ/.'/ C ˆ.'/1 A.'/ˆ.'/ :
(2.2.4)
system (2.2.1) transforms into
with the new linear operator
2.2 The reducibility approach to KAM for PDEs
23
Remark 2.2.1. Suppose that H is endowed with a symplectic form defined by .u; v/ WD hJ 1 u; vi, 8u; v 2 H , where J is an antisymmetric, nondegenerate operator. If A.!t/ is Hamiltonian, namely A.!t/ D JS.!t/, where S.!t/ is a (possibly unbounded) self-adjoint operator, and ˆ.!t/ is symplectic, then the new operator B.!t/ is Hamiltonian as well, see Lemma 4.2.3. Reducibility. i.e.,
If the operator B in (2.2.3) is a diagonal, time-independent operator, B.!t/ D B D Diagj .bj /
(2.2.5)
in a suitable basis of H , then (2.2.3) reduces to the decoupled scalar linear ordinary differential equations vP j C bj vj D 0 ; (2.2.6) where .vj / denote the coordinates of v in the basis of eigenvectors of B. Then (2.2.3) is integrated in a straightforward way, vj .t/ D ebj t vj .0/ ; and all the solutions of system (2.2.1) are obtained via the change of variable (2.2.2). We say that (2.2.1) has been reduced to constant coefficients by the change of variable ˆ. Remark 2.2.2. If all the bj in (2.2.6) are purely imaginary, then the linear system (2.2.1) is stable (in the sense of Lyapunov), otherwise, it is unstable. Let GL.H / denote the group of invertible bounded linear operators of H . We shall say that system (2.2.1) is reducible if there exists a (smooth) map ˆW T ! GL.H / such that the operator B.'/ in (2.2.4) is a constant-coefficient block-diagonal operator B, i.e., bj in (2.2.5) and (2.2.6) are finite-dimensional matrices, constant in time. The spectrum of each matrix bj determines the stability/instability properties of the system (2.2.1). If ! 2 R (time-periodic forcing) and the phase space H is finite dimensional, the classical Floquet theory proves that any time-periodic linear system (2.2.1) is reducible, see e.g., [62, Chapter I]. On the other hand, if ! 2 R , 2, it is known that there exist pathological nonreducible linear systems, see, e.g., [47, Chapter 1]. If A.!t/ is a small perturbation of a constant coefficient operator, perturbative algorithms for reducibility can be implemented. In the next subsection we describe this strategy in the simplest setting. This approach was systematically adopted by Moser [107] for developing finite-dimensional KAM theory (in a much more general context).
2.2.2 Perturbative reducibility In the framework of the study of quasiperiodic solutions of a PDE, (2.2.1) stands for the linearized PDE at a quasiperiodic (approximate) solution of frequency vector !.
24
2 KAM for PDEs and strategy of proof
In a Nash–Moser scheme, one has to invert the operator ! @' C A.'/;
' 2 T ;
(2.2.7)
which acts on maps U W T ! H . We now assume that A.'/ is a perturbation of a diagonal operator D acting on H and not depending on ', namely A.'/ D D C R.'/;
D D Diag.idj /j 2Z D Op.idj /;
dj 2 R ;
(2.2.8)
where the eigenvalues idj are simple. The family of operators R.'/ is acting on the phase space H and plays the role of a small perturbation. Remark 2.2.3. We suppose that dj , j 2 Z, are real, i.e. u D 0 is an elliptic equilibrium for the linear system ut C Du D 0. If some Im d j ¤ 0, then there are hyperbolic directions. These eigenvalues do not create resonance phenomena and perturbative reducibility theory is easier. For nonlinear systems, this case corresponds to the search for whiskered tori, see, e.g., [78] for finite-dimensional systems and [68] for PDEs. We look for a transformation ˆ.'/, ' 2 T , of the phase space H , as in (2.2.2) that removes from R.'/ the angles ' up to terms of size O.jRj2 /. We present below only the algebraic aspect of the reducibility scheme, without specifying the norms in the phase space H or the operator norms of ˆ.'/; R, etc. For computational purposes, it is convenient to transform the linear system (2.2.1) under the flow ˆF .'; / generated by an auxiliary linear equation @ ˆF .'; / D F .'/ˆF .'; /;
ˆF .'; 0/ D Id ;
(2.2.9)
generated by a linear operator F .'/ to be chosen (which could also be -dependent). This amounts to computing the Lie derivative of A.'/ in the direction of the vector field F .'/. Note that if F .'/ is bounded, then the flow (2.2.9) is well posed and given by eF .'/ . This is always the case for a finite-dimensional system, but it may be an issue for infinite-dimensional systems if the generator F .'/ is not bounded. Given a linear operator A0 .'/, the conjugated operator under the flow ˆF .'; / generated by (2.2.9), A.'; / WD ˆF .'; /A0.'/ˆF .'; /1 ; satisfies the Heisenberg equation ( @ A.'; / D F .'/; A.'; / A.'; /j D0 D A0 .'/
(2.2.10)
where ŒA; B WD A ı B B ı A denotes the commutator between two linear operators A; B. Then, by a Taylor expansion, using (2.2.10), we obtain the formal Lie expansion X 1 1 Adk .A0 / D A0 .'/ C AdF A0 C Ad2F A0 C (2.2.11) A.'; /j D1 D kŠ 2 k0
2.2 The reducibility approach to KAM for PDEs
25
where AdF Œ WD ŒF; . One may expect this expansion to be certainly convergent if F and A0 are bounded in suitable norms. Conjugating (2.2.7) under the flow generated by (2.2.9) we then obtain an operator of the form ! @' C D ! @' F .'/ C ŒF .'/; D C R.'/ C smaller terms :
(2.2.12)
We want to choose F .'/ in such a way to solve the “homological” equation ! @' F .'/ C R.'/ C ŒF .'/; D D ŒR where bj .0/ ; ŒR WD Diagj R j
bj .0/ WD R j
1 .2/
Z T
(2.2.13)
Rjj .'/ d' ;
(2.2.14)
is the normal form part of the operator R.'/, independent of ', that we can not eliminate. Representing the linear operators F .'/ D .Fkj .'//j;k2Z and R.'/ D j .Rk .'//j;k2Z as matrices, and computing the commutator with the diagonal operator D in (2.2.8) we obtain that (2.2.13) is represented as j
j
j
j
! @' Fk .'/ C Rk .'/ C i.dj dk /Fk .'/ D ŒRk : Performing the Fourier expansion in ', X j b .`/ei`' ; Fkj .'/ D F k
Rkj .'/ D
`2Z
X
bj .`/ei`' ; R k
`2Z
the latter equation reduces to the infinitely many scalar equations bj .`/Ci.dj dk /F bj .`/ D ŒRj ı`;0 ; b j .`/C R i! ` F k k k k
j; k 2 Z; ` 2 Z ; (2.2.15)
where ı`;0 WD 1 if ` D 0 and zero otherwise. Assuming the so-called second-order Melnikov nonresonance conditions ˇ ˇ ˇ! ` C dj dk ˇ ; 8.`; j; k/ ¤ .0; j; j / ; (2.2.16) h`i for some ; > 0, we can define the solution of the homological equations (2.2.15) (see (2.2.14)) 8 bj .`/ ˆ R < k b j .`/ WD i.! ` C d d / 8.`; j; k/ ¤ .0; j; j / (2.2.17) F j k k ˆ :0 8.`; j; k/ D .0; j; j / : Therefore the transformed operator (2.2.12) becomes ! @' C DC C smaller terms
(2.2.18)
26 where
2 KAM for PDEs and strategy of proof
DC WD D C ŒR D idj C ŒRjj j 2Z
(2.2.19)
is the new diagonal operator, constant in '. We can iterate this step to also reduce the small terms of order O.jRj2 1 / that are left in (2.2.18), and so on. Note that if R.'/ is a bounded operator and depends smoothly enough on ', then F .'/ defined in (2.2.17), with the denominators satisfying (2.2.16), is bounded as well and thus (2.2.9) certainly defines a flow by standard Banach space ODE techniques. On the other hand, the loss of time derivatives induced on F .'/ by the divisors in (2.2.16) can be recovered by a smoothing procedure in the angles ', like a truncation in Fourier space. Remark 2.2.4. If the operator A.'/ in (2.2.7) and (2.2.8) is Hamiltonian, as defined in Remark 2.2.1 (with a symplectic form J , which commutes with D), then F .'/ is Hamiltonian, its flow ˆF .'; / is symplectic, and the new operator in (2.2.18) is Hamiltonian as well. In order to continue the iteration, one also needs to impose nonresonance conditions as in (2.2.16) at each step and therefore we need information about the perturbed j normal form DC in (2.2.19), in particular the asymptotic of ŒRj . If these steps work, after an infinite iteration one could conjugate the quasiperiodic operator (2.2.7) to a constant in ', diagonal operator of the form ! @' C Diagj .idj1 /;
idj1 D idj C ŒRj C : j
(2.2.20)
At this stage, imposing the first-order Melnikov nonresonance conditions j! ` C dj1 j
; h`i
8`; j ;
the diagonal linear operator (2.2.20) is invertible with an inverse that loses timederivatives. One then verifies that all the changes of coordinates – that have been constructed iteratively to conjugate (2.2.7) to (2.2.20) – map spaces of high regularity to themselves. In conclusion, this approach enables us to prove the existence of an inverse of the initial quasiperiodic linear operator (2.2.7) that satisfies tame estimates in high norms (with loss of derivatives). This is the essence of the Newton–Nash–Moser–KAM perturbative reducibility scheme that has been used for proving KAM results for 1-dimensional NLW and NLS equations with Dirichlet boundary conditions in [100, 112, 126], as we shall describe in the next section.
2.3 Reducibility results
27
The following questions arise naturally: 1. What happens if the eigenvalues dj are multiple? This is the common situation for 1-dimensional PDEs with periodic boundary conditions or in higher space dimensions. In such a case, it is conceivable to reduce ! @' C A.'/ to a blockj are finitediagonal normal form linear system of the form (2.2.20), where d1 dimensional matrices. 2. What happens if the operator R.'/ in (2.2.8) is unbounded? This is the common situation for PDEs with nonlinearities that contain derivatives. In such a case, the operator F .'/ defined in (2.2.17) is also unbounded and therefore (2.2.9) might not define a flow. 3. What happens if, instead of the Melnikov nonresonance conditions (2.2.16), we assume only j! ` C dj dk j
h`i hj id hkid
;
8.`; j; k/ ¤ .0; j; j / ;
(2.2.21)
for some d > 0, which induce a loss of space derivatives? This is the common situation when the dispersion relation dj j ˛ , ˛ < 1, has a sublinear growth. Also in this case, the operator F .'/ defined in (2.2.17) would be unbounded. This situation appears, for example, for pure gravity water-waves equations. We describe below some answers to the above questions. Remark 2.2.5. Another natural question is what happens if the number of frequencies is C1, namely for almost-periodic solutions. Only a few results are available so far, all for PDEs in 1 space dimension. The first result was obtained by P¨oschel [114] for a regularizing Schr¨odinger equation and almost-periodic solutions with a very fast decay in Fourier space. Later, Bourgain [41] proved the existence of almostperiodic solutions with exponential Fourier decay for a semilinear Schr¨odinger equation with a convolution potential. In both papers, the potential is regarded as an infinite-dimensional parameter.
2.3 Reducibility results We present in this section the main results about KAM theory for PDEs based on the reducibility scheme described in the previous section.
2.3.1 KAM for 1-dimensional NLW and NLS equations with Dirichlet boundary conditions The iterative reducibility scheme outlined in Section 2.2.2 has been effectively implemented by Kuksin [100] and Wayne [126] for proving the existence of quasiperiodic
28
2 KAM for PDEs and strategy of proof
solutions of 1-dimensional semilinear wave yt t yxx C V .x/y C "f .x; y/ D 0;
y.0/ D y./ D 0 ;
(2.3.1)
u.0/ D u./ D 0 ;
(2.3.2)
and Schr¨odinger equations iut uxx C V .x/u C "f juj2 u D 0;
with Dirichlet boundary conditions. These equations are regarded as a perturbation of the linear PDEs yt t yxx C V .x/y D 0;
iut uxx C V .x/u D 0 ;
(2.3.3)
which depend on the potential V .x/ used as a parameter. The linearized operators obtained at an approximate quasiperiodic solution are, for the NLS equation, h 7! i! @' h hxx C V .x/h C "q.'; x/h C "p.'; x/hN
(2.3.4)
with q.'; x/ 2 R, p.'; x/ 2 C, and, for the NLW equation, y 7! .! @' /2 y yxx C V .x/y C "a.'; x/y ;
(2.3.5)
with a.'; x/ 2 R, that, in the complex variable 1
1
h D DV2 y C iDV 2 yt ;
DV WD
p
C V .x/ ;
assumes the form " 1 1 N : h 7! ! @' h C iDV h C i DV 2 a.'; x/DV 2 .h C h/ 2
(2.3.6)
Coupling these equations with their complex conjugated component, we have to in h vert the quasiperiodic operators, acting on N , given, for the NLS equation, by h
iq.'; x/ ip.'; x/ 0 i@xx iV .x/ " (2.3.7) ! @' C ip.'; N x/ iq.'; x/ 0 i@xx C iV .x/ and, for the NLW equation,
" 12 a.'; x/ 1 0 a.'; x/ iDV ! @' C C i DV DV 2 : 0 iDV a.'; x/ a.'; x/ 2
(2.3.8)
These linear operators have the form (2.2.7) and (2.2.8) with an operator R.'/ that is bounded. Actually, note that for the NLW equation the perturbative term in (2.3.8) is also 1-smoothing. Moreover the eigenvalues 2j , j 2 Nnf0g, of the Sturm– Liouville operator @xx C V .x/ with Dirichlet boundary conditions are simple and
2.3 Reducibility results
29
the quasiperiodic operators (2.3.7) and (2.3.8) take the form (2.2.7) and (2.2.8), where the eigenvalues of D are ˙i2j ; 2j j 2 ; for the NLS equation;
˙i¯j ; ¯j j; for the NLW equation:
Then, it is not hard to impose second-order Melnikov nonresonance conditions as in (2.2.16). In view of these observations, it is possible to implement the KAM reducibility scheme presented above to prove the existence of quasiperiodic solutions for (2.3.1) and (2.3.2) (actually the KAM iteration in [100, 126] is a bit different but the previous argument catches its essence). Later, these results were extended in Kuksin and P¨oschel [103] to parameterindependent Schr¨odinger equations ( iut D uxx C f juj2 u ; (2.3.9) where f .0/ D 0; f 0 .0/ ¤ 0 ; u.0/ D u./ D 0 ; and in P¨oschel [113] to nonlinear Klein–Gordon equations yt t yxx C my D y 3 C h:o:t;
y.0/ D y./ D 0 :
(2.3.10)
The main new difficulty of these equations is that the linear equations iut D uxx ;
yt t yxx C my D 0 ;
have resonant invariant tori. Actually, all the solutions of the first equation, X 2 uj .0/eij t eijx ; (2.3.11) u.t; x/ D j 2Z
are 2-periodic in time (for this reason (2.3.9) p is called a completely resonant PDE) and the Klein–Gordon linear frequencies j 2 C m may be resonant for several values of the mass m. The new key idea in [113] and [103] is to compute precisely how the nonlinearity in (2.3.9) and (2.3.10) modulates the tangential and normal frequencies of the expected quasiperiodic solutions. In particular, a Birkhoff normal form analysis enables us to prove that the tangential frequencies vary diffeomorphically with the amplitudes of the solutions. This nondegeneracy property allows us then to prove that the Melnikov nonresonance conditions are satisfied for most amplitudes. We note, however, the following difficulty for the equations (2.3.9) and (2.3.10) that is not present for (2.3.1) and (2.3.2): the frequency vector ! may satisfy only a Diophantine condition (1.2.6) with a constant 0 that tends to 0 as the solution tends to 0 (the linear frequencies in (2.3.11) are integers), and similarly for the second-order Melnikov conditions (2.2.16). Note that the remainders in (2.2.18) have size O.jRj2 1 / and, as a consequence, careful estimates have to be performed to overcome this “singular” perturbation issue.
30
2 KAM for PDEs and strategy of proof
2.3.2 KAM for 1-dimensional NLW and NLS equations with periodic boundary conditions The above results do not apply for periodic boundary conditions x 2 T because two eigenvalues of @xx C V .x/ coincide (or are too close) and thus the second-order Melnikov nonresonance conditions (2.2.16) are violated. This is the first instance where the difficulty mentioned in item 1 appears. Historically, this difficulty was first solved by Craig and Wayne [54] and Bourgain [36] by developing a multiscale approach based on repeated use of the resolvent identity in the spirit of the work [69] by Fr¨olich and Spencer for Anderson localization. This approach does not require the second-order Melnikov nonresonance conditions. We describe it in Section 2.4. Developments of this multiscale approach are the basis of the present monograph. The KAM reducibility approach was extended later by Chierchia and You [48] for semilinear wave equations like (2.3.1) with periodic boundary conditions. Because of the near resonance between pairs of frequencies, the linearized operators (2.3.5) and (2.3.6) are reduced to a diagonal system of 2 2 self-adjoint matrices, namely of the form (2.2.20) with dj1 2 Mat.2 2I C/, by requiring at each step second-order Melnikov nonresonance conditions of the form ; 8.`; j; k/ ¤ .0; j; ˙j / : (2.3.12) j! ` C dj dk j h`i Note that we do not require in (2.3.12) nonresonance conditions for ` D 0 and k D ˙j . Since NLW equations are second-order equations, the nonlinear perturbative part of its Hamiltonian vector field is regularizing of order 1 (it gains one space derivative), see (2.3.8), and this is sufficient to prove that the perturbed frequencies of (2.3.8) satisfy an asymptotic estimate like j ."/ D j C O "jj j1 D jj j C O jj j1 as jj j ! C1, where 2j denote the eigenvalues of the Sturm–Liouville operator @xx C V .x/. Thanks to this asymptotic expansion it is sufficient to impose, for each ` 2 Z , only finitely many second-order Melnikov nonresonance conditions as (2.3.12), by requiring first-order Melnikov conditions like j! ` C hj h`i ;
(2.3.13)
for all .`; h/ 2 .Z Z/ n .0; 0/. Indeed, if jj j; jkj > C j`j 1 , for an appropriate constant C > 0, we get j! ` C j ."/ k ."/j
(2.3.13)
j! ` C jj j jkjj O.1= min.jkj; jj j// h`i O 1= min.jkj; jj j/ h`i 2
(2.3.14)
noting that jj j jkj is an integer. Moreover if jjkj jj jj C j`j for another appropriate constant C > 0 then j! ` C j ."/ k ."/j j`j. Hence, under (2.3.13), for
2.3 Reducibility results
31
` given, the second-order Melnikov conditions with time-index ` are automatically satisfied for all .j; k/ except a finite number. Remark 2.3.1. If the Hamiltonian nonlinearity does not depend on the space variable x, the equations (2.3.9)–(2.3.10) are invariant under space translations and therefore possess a prime integral by the Noether Theorem. Geng and You [71,73] were the first to exploit such conservation law, which is preserved along the KAM iteration, to fulfill the nonresonance conditions. The main observation is that such symmetry enables to prove that many monomials are a priori never present along the KAM iteration. In particular, this symmetry removes the degeneracy produced by the multiple normal frequencies. For semilinear Schr¨odinger equations like (2.3.2) and (2.3.9), the nonlinear vector field is not smoothing. Correspondingly, note that in the linearized operator (2.3.7) the remainder R.'/ is a matrix of multiplication operators. In such a case, the basic perturbative estimate for the eigenvalues gives j ."/ D 2j C O."/ ; which is not sufficient to verify second-order Melnikov nonresonance conditions like (2.3.12), in particular j! ` C j ."/ j ."/j
; h`i
8` 2 Z n f0g; j 2 N ;
for most values of the parameters. The first KAM reducibility result for NLS equations with x 2 T was proved by Eliasson and Kuksin in [61] as a particular case of a much more general result valid for tori Td of any space dimension d 1, which we discuss below. The key point is to extract, using the notion of Toeplitz–Lipschitz matrices, the first-order asymptotic expansion of the perturbed eigenvalues. For perturbations of one-dimensional Schr¨odinger equations, another recent approach to obtain the improved asymptotics of the perturbed frequencies, i.e., j ."/ D j 2 C c C O."=jj j/ ; for some constant c independent of j , is developed in Berti, Kappeler, and Montalto [29]. This expansion, which allows us to verify the second-order Melnikov nonresonance conditions (2.3.12), is obtained via a regularization technique based on pseudodifferential ideas, that we explain below. The approach in [29] applies to semilinear perturbations (also x-dependent) of any large finite gap solution of iut D @xx u C juj2 u C "f .x; u/;
x 2 T:
Let us explain the term finite gap solutions. The 1-dimensional-cubic NLS equation iut D @xx u C juj2 u;
x 2 T;
(2.3.15)
32
2 KAM for PDEs and strategy of proof
possesses global analytic action-angle variables in the form of Birkhoff coordinates, see [79], and the whole infinite-dimensional phase space is foliated by quasiperiodic – called finite gap solutions – and almost-periodic solutions. The Birkhoff coordinates are a Cartesian smooth version of the action-angle variables to avoid the singularity when one action component vanishes, i.e., close to the elliptic equilibrium u D 0. This situation generalizes what happens for a finite-dimensional Hamiltonian system in R2n that possesses n-independent prime integrals in involution. According to the celebrated Liouville–Arnold theorem (see, e.g., [4]), in suitable local symplectic angle-action variables .; I / 2 Tn Rn , the integrable Hamiltonian H.I / depends only on the actions and the dynamics are described by P D @I H.I /;
IP D 0 :
Thus the phase space is foliated by the invariant tori Tn fg, 2 Rn , filled by the quasiperiodic solutions .t/ D 0 C !./t, I.t/ D , with frequency vector !./ D .@I H /./. The analogous construction close to an elliptic equilibrium, where the action-angle variables become singular, is provided by the R¨ussmann–Vey–Ito theorem [90, 120, 122]. We refer to [97] for an introduction. Other integrable PDEs that possess Birkhoff coordinates are KdV [97] and mKdV [98], see Appendix A.4. Remark 2.3.2. The Birkhoff normal form construction of [103] discussed for the NLS equation (2.3.9) provides, close to u D 0, an approximation of the global Birkhoff coordinates of the 1-dimensional-cubic NLS equation. This is sufficient information to prove a KAM result for small-amplitude solutions.
2.3.3 Space multidimensional PDEs For space multidimensional PDEs, the reducibility approach was first worked out for semilinear Schr¨odinger equations N iut D u C V u C "@uN F .x; u; u/;
x 2 Td ;
(2.3.16)
with a convolution potential by Eliasson and Kuksin [60, 61]. This is a much more difficult situation with respect to the 1-dimensional case because the eigenvalues of C V .x/ appear in clusters of unbounded size. This is the difficulty mentioned in item 1. In such a case, the reducibility result that one could look for is to block diagonalize the quasiperiodic Schr¨odinger linear operator h 7! i! @' h h C V h C "q.'; x/h C "p.'; x/hN ; i.e., to obtain an operator as (2.2.20) with finite-dimensional blocks dj1 , which are self-adjoint matrices of increasing dimension as j ! C1. The convolution potential V plays the role of external parameters. Eliasson and Kuksin introduced in [61] the notion of Toeplitz–Lipschitz matrices in order to extract asymptotic information on the eigenvalues, and so verify the second-order Melnikov nonresonance conditions. The quasiperiodic solutions of (2.3.16) obtained in [61] are linearly stable.
2.3 Reducibility results
33
Remark 2.3.3. The reducibility techniques in [61] enable us to prove a stability result for all the solutions of the linear Schr¨odinger equation iut D u C "V .!t; x/;
x 2 Td ;
with a small quasiperiodic analytic potential V .!t; x/. For all frequencies ! 2 R , except a set of measure tending to 0 as " ! 0, the Sobolev norms of any solution u.t; / satisfy ku.t; /kH s .Td / ku.0; /kH s .Td / ;
8t 2 R :
Subsequently for the cubic NLS equation iut D u C juj2 u;
x 2 T2 ;
(2.3.17)
which is parameter-independent and completely resonant, Geng, Xu, and You [75] proved a KAM result, without reducibility, using a Birkhoff normal form analysis. We remark that the Birkhoff normal form of (2.3.17) is not integrable, unlike in space dimension d D 1, causing additional difficulties with respect to [103]. For completely resonant NLS equations in any space dimension and a polynomial nonlinearity (2.3.18) iut D u C juj2p u; p 2 N; x 2 Td ; Procesi and Procesi [116] implemented a systematic study of the resonant Birkhoff normal form and, using the notion of quasi-Toeplitz matrices developed in Procesi and Xu [119], proved in [117] and [118], the existence of reducible quasiperiodic solutions of (2.3.18). Remark 2.3.4. The resonant Birkhoff normal form of (2.3.17) is exploited in [49] and [84], to construct chaotic orbits with a growth of the Sobolev norm. This “norm inflation” phenomenon is reminiscent of the Arnold diffusion problem [3] for finitedimensional Hamiltonian systems. Similar results for the NLS equation (2.3.18) have been proved in [83]. KAM results have been proved for parameter dependent beam equations by Geng and You [72], Procesi [115], and, more recently, in Eliasson, Gr´ebert, and Kuksin [59] for multidimensional beam equations like ut t C 2 u C mu C @u G.x; u/ D 0;
x 2 Td ;
u 2 R:
We also mention the work of Gr´ebert and Paturel [80] concerning the existence of reducible quasiperiodic solutions of Klein–Gordon equations on the sphere Sd , ut t u C mu C ıM u C "g.x; u/ D 0;
x 2 Sd ;
u 2 R;
where is the Laplace–Beltrami operator and M is a Fourier multiplier.
(2.3.19)
34
2 KAM for PDEs and strategy of proof
On the other hand, if x 2 Td , the infinitely many second-order Melnikov conditions required to block diagonalize the quasiperiodic linear wave operator .! @' /2 C V C"a.'; x/;
x 2 Td ;
are violated for d 2, and no reducibility results are available so far. Nevertheless, results of “almost” reducibility have been reported in Eliasson [57] and Eliasson, Gr´ebert, and Kuksin [58]. Before concluding this subsection, we also mention the KAM result by Gr´ebert– Thomann [82] for smoothing nonlinear perturbations of the 1-dimensional harmonic oscillator and Gr´ebert and Paturel [81] in higher space dimensions.
2.3.4 1-dimensional quasi- and fully nonlinear PDEs, water waves Another situation where the reducibility approach that we described in Subsection 2.2.2 encounters a serious difficulty is when the nondiagonal remainder R.'/ is unbounded. This is the difficulty mentioned in item 2. In such a case, the auxiliary vector field F .'/ defined in (2.2.17) is unbounded as well. Therefore it may not define a flow and the iterative reducibility scheme described in Subsection 2.2.2 would formally produce remainders that accumulate more and more derivatives. KAM for semilinear PDEs with derivatives The first KAM results for PDEs with an unbounded nonlinearity were proved by Kuksin [101], and later by Kappeler and P¨oschel [97], for perturbations of finite-gap solutions of ut C uxxx C @x u2 C "@x .@u f /.x; u/ D 0;
x 2 T:
(2.3.20)
The corresponding quasiperiodic linearized operator at an approximate quasiperiodic solution u has the form h 7! ! @' h C @xxx h C @x .2uh/ C "@x .ah/;
a WD .@uu f /.x; u/ :
The key idea in [101] is to exploit the fact that the frequencies of KdV grow asymptotically as j 3 as j ! C1. Therefore one can impose second-order Melnikov nonresonance conditions like ˇ ˇ ˇ! ` C j 3 i 3 ˇ j 2 C i 2 =h`i ; i ¤ j ; which make gain two space derivatives (outside the diagonal i D j ), sufficient to compensate the loss of one space derivative produced by the vector field "@x .@u f /.x; u/. On the diagonal ` ¤ 0; i D j , one does not solve the homological equations with the consequence that the KAM normal form is '-dependent. In order to overcome this difficulty one can make use of the so-called Kuksin Lemma to invert the corresponding quasiperiodic scalar operator. Subsequently, developing an improved version of the Kuksin Lemma, Liu and Yuan in [93] proved KAM results for semilinear perturbations of Hamiltonian derivative NLS and Benjamin–Ono equations and Zhang,
2.3 Reducibility results
35
Gao, and Yuan [130] for the reversible derivative NLS equation iut C uxx D jux j2 u;
u.0/ D u./ D 0 :
These PDEs are more difficult than KdV because the linear frequencies grow like
j 2 and not j 3 , and therefore one gains only one space derivative when solving the homological equations. These methods do not apply for derivative wave equations where the dispersion relation is asymptotically linear. Such a case has been addressed more recently by Berti, Biasco, and Procesi [18, 19], who proved the existence and the stability of quasiperiodic solutions of autonomous derivative Klein–Gordon equations yt t yxx C my D g.x; y; yx ; yt /
(2.3.21)
satisfying reversibility conditions. These assumptions rule out nonlinearities like yt3 , yx3 , for which no periodic nor quasiperiodic solutions exist (with these nonlinearities all the solutions dissipate to zero). The key point in [18,19] was to adapt the notion of a quasi-Toeplitz vector field introduced in [119] to obtain the higher-order asymptotic expansion of the perturbed normal frequencies p j ."/ D j 2 C m C a˙ C O.1=j /; as j ! ˙1 ; for suitable constants a˙ (of the size a˙ D O."/ of the solution y D O."/). Thanks to this asymptotic expansion, it is sufficient to verify, for each ` 2 Z , that only finitely many second-order Melnikov nonresonance conditions hold. Indeed, using an argument as in (2.3.14), infinitely many conditions in (2.3.12) are already verified by imposing only first-order Melnikov conditions like j! ` C hj h`i ;
j! ` C .aC a / C hj h`i ;
for all .`; h/ 2 Z Z such that ! ` C h and ! ` C .aC a / C h do not vanish identically. KAM for quasilinear and fully nonlinear PDEs All the above results still concern semilinear perturbations, namely when the number of derivatives that are present in the nonlinearity is strictly lower than the order of the linear differential operator. The first existence results of small-amplitude quasiperiodic solutions for quasilinear PDEs were proved by Baldi, Berti, and Montalto in [6] for fully nonlinear perturbations of the Airy equation ut C uxxx C "f .!t; x; u; ux ; uxx ; uxxx / D 0;
x 2 T;
(2.3.22)
and in [8] for quasilinear autonomous perturbed KdV equations ut C uxxx C @x u2 C N .x; u; ux ; uxx ; uxxx / D 0 ;
(2.3.23)
36
2 KAM for PDEs and strategy of proof
where the Hamiltonian nonlinearity N .x; u; ux ; uxx ; uxxx / WD @x .@u f /.x; u; ux /@x ..@ux f /.x; u; ux // (2.3.24) vanishes at the origin as O.u4 /. See [77] when N vanishes only quadratically. The main new tool that has been introduced to solve this problem is a systematic use of pseudodifferential calculus. The key point is to reduce to constant coefficients the linear PDE ut C.1Ca3.!t; x//uxxx Ca2 .!t; x/uxx Ca1 .!t; x/ux Ca0 .!t; x/u D 0 ; (2.3.25) which is obtained by linearizing (2.3.22) at an approximate quasiperiodic solution u.!t; x/. The coefficients ai .!t; x/, i D 0; : : : ; 3 have size O."/. Instead of trying to diminish the size of the .t; x/-dependent terms in (2.3.25), as in the scheme outlined in subsection 2.2.2 – the big difficulty 2 would appear – the aim is to conjugate (2.3.25) to a system like ut C m3 uxxx C m1 ux C R0 .!t/u D 0 ;
(2.3.26)
where m3 D 1CO."/, m1 D O."/ are constants and R0 .!t/ is a zero-order operator, still time dependent. To do this, (2.3.25) is conjugated with a time-quasiperiodic change of variable (as in (2.2.2)) u D ˆ.!t/Œv D v.t; x C ˇ.!t; x// ;
(2.3.27)
induced by the composition with a diffeomorphism x 7! x Cˇ.'; x/ of Tx (requiring jˇx j < 1). The conjugated system (2.2.3) and (2.2.4) is vt C ˆ1 .!t/ .1 C a3 .!t; x//.1 C ˇx .!t; x//3 vxxx .t; x/ C lower-order operators D 0 and therefore one chooses a periodic function ˇ.'; x/ such that .1 C a3 .'; x//.1 C ˇx .'; x//3 D m3 .'/ is independent of x. Since ˇx .'; x/ has zero space average, this is possible with !3 Z 1 dx : m3 .'/ D 2 T .1 C a3 .'; x// 31 The ' dependence of m3 .'/ can also be eliminated at the highest order using a quasiperiodic reparametrization of time and, using other pseudodifferential transformations, we can also reduce the lower-order terms to constant coefficients obtaining (2.3.26). The reduction (2.3.26) implies the accurate asymptotic expansion of the perturbed frequencies j ."/ D im3 j 3 C im1 j C O."/ :
2.3 Reducibility results
37
Now it is possible to verify the second-order Melnikov nonresonance conditions required by a KAM reducibility scheme (as outlined in Subsection 2.2.2) to diagonalize R0 .!t/, completing the reduction of (2.3.26), thus (2.3.25). These techniques were then employed by Feola and Procesi [65] for quasilinear forced perturbations of Schr¨odinger equations and in [50] and [66] for the search for small-amplitude analytic solutions of autonomous PDEs. These kind of ideas have also been successfully generalized for unbounded perturbations of harmonic oscillators by Bambusi [10, 11] and Bambusi and Montalto [14]. Remark 2.3.5. The recent paper by Berti, Kappeler, and Montalto [30] also proves the persistence of large finite gap solutions of the perturbed quasilinear KdV equation (2.3.23). This answers positively a longstanding question concerning the persistence of quasiperiodic solutions, with arbitrary size, of integrable PDEs subject to strongly nonlinear perturbations. The KdV and the NLS equations are purely differential equations and the pseudodifferential tools required are essentially commutators of multiplication operators and Fourier multipliers. On the other hand, for the water-waves equations that we now present, the theory of pseudodifferential operators has to be used in full strength. Water-waves equations The water-waves equations for a perfect, incompressible, inviscid, irrotational fluid occupying the time-dependent region ˚ D WD .x; y/ 2 T RW h < y < .t; x/ ; T WD Tx WD R=2Z ; (2.3.28) under the action of gravity, and possible capillary forces at the free surface, are the Euler equations of hydrodynamics combined with conditions at the boundary of the fluid: 8 ! ˆ 1 x ˆ ˆ at y D .t; x/ @t ˆC jrˆj2 C g D @x p ˆ ˆ ˆ 2 1 C 2x < (2.3.29) ˆ D 0 in D ˆ ˆ ˆ ˆ @y ˆ D 0 at y D h ˆ ˆ : @t D @y ˆ @x @x ˆ at y D .t; x/ where g is the acceleration of gravity and is the surface tension coefficient. The unknowns of the problem (2.3.29) are the free surface y D .t; x/ and the velocity potential ˆW D ! R, i.e., the irrotational velocity field of the fluid v D rx;y ˆ. The first equation in (2.3.29) is the Bernoulli condition according to which the jump of pressure across the free surface is proportional to the mean curvature. The second equation in (2.3.29) is the incompressibility property div v D 0. The third equation expresses the impermeability of the bottom of the ocean. The last condition in (2.3.29) means that the fluid particles on the free surface y D .x; t/ remain forever on it along the fluid evolution.
Following Zakharov [127] and Craig and Sulem [53], the evolution problem (2.3.29) may be written as an infinite-dimensional Hamiltonian system in the unknowns $(\eta(t,x), \psi(t,x))$, where $\psi(t,x) = \Phi(t, x, \eta(t,x))$ is, at each instant $t$, the trace at the free boundary of the velocity potential. Given $\eta(t,x)$ and $\psi(t,x)$ there is a unique solution $\Phi(t,x,y)$ of the elliptic problem
\[
\begin{cases}
\Delta \Phi = 0 & \text{in } \{-h < y < \eta(t,x)\}, \\
\partial_y \Phi = 0 & \text{on } y = -h, \\
\Phi = \psi & \text{on } \{y = \eta(t,x)\}.
\end{cases}
\]
System (2.3.29) is then equivalent to the Zakharov–Craig–Sulem system
\[
\begin{cases}
\partial_t \eta = G(\eta)\psi, \\
\partial_t \psi = -g\eta - \dfrac{\psi_x^2}{2} + \dfrac{\big(G(\eta)\psi + \eta_x \psi_x\big)^2}{2(1+\eta_x^2)} + \kappa\, \partial_x \Big( \dfrac{\eta_x}{\sqrt{1+\eta_x^2}} \Big),
\end{cases}
\tag{2.3.30}
\]
where $G(\eta) := G(\eta; h)$ is the Dirichlet–Neumann operator defined by
\[
G(\eta)\psi := \big( \Phi_y - \eta_x \Phi_x \big)\big|_{y = \eta(t,x)}.
\tag{2.3.31}
\]
Note that $G(\eta)$ maps the Dirichlet datum $\psi$ to the (normalized) normal derivative of $\Phi$ at the top boundary. The operator $G(\eta)$ is linear in $\psi$, self-adjoint with respect to the $L^2$ scalar product, positive semidefinite, and its kernel contains only the constant functions. The Dirichlet–Neumann operator depends smoothly on the wave profile $\eta$, and it is a pseudodifferential operator with principal symbol $D \tanh(hD)$. Furthermore, the equations (2.3.30) are the Hamiltonian system
\[
\partial_t \eta = \nabla_\psi H(\eta, \psi), \qquad \partial_t \psi = -\nabla_\eta H(\eta, \psi),
\tag{2.3.32}
\]
where $\nabla$ denotes the $L^2$ gradient and the Hamiltonian
\[
H(\eta, \psi) = \frac12 \int_{\mathbb{T}} \psi\, G(\eta; h)\psi \, dx + \frac{g}{2} \int_{\mathbb{T}} \eta^2 \, dx + \kappa \int_{\mathbb{T}} \sqrt{1 + \eta_x^2}\, dx
\tag{2.3.33}
\]
is the sum of the kinetic, potential, and capillary energies expressed in terms of the variables $(\eta, \psi)$. The water-waves system (2.3.30)–(2.3.32) exhibits several symmetries. First of all, the mass
\[
\int_{\mathbb{T}} \eta(x)\, dx
\]
is a first integral of (2.3.30). Moreover, (2.3.30) is invariant under spatial translations, and Noether's theorem implies that the momentum
\[
\int_{\mathbb{T}} \eta_x(x)\, \psi(x)\, dx
\]
is a prime integral of (2.3.32). In addition, the subspace of functions that are even in $x$,
\[
\eta(x) = \eta(-x), \qquad \psi(x) = \psi(-x),
\tag{2.3.34}
\]
is invariant under (2.3.30). In this case, the velocity potential $\Phi(x,y)$ is also even and $2\pi$-periodic in $x$, and so the $x$ component of the velocity field $v = (\Phi_x, \Phi_y)$ vanishes at $x = k\pi$, for all $k \in \mathbb{Z}$. Hence there is no flow of fluid through the lines $x = k\pi$, $k \in \mathbb{Z}$, and a solution of (2.3.30) satisfying (2.3.34) describes the motion of a liquid confined between two vertical walls. We also note that the water-waves system (2.3.30)–(2.3.32) is reversible with respect to the involution $S : (\eta, \psi) \mapsto (\eta, -\psi)$, i.e., the Hamiltonian $H$ in (2.3.33) is even in $\psi$, see Appendix A. As a consequence, it is natural to look for solutions of (2.3.30) satisfying
\[
u(-t) = S u(t), \quad \text{i.e.,} \quad \eta(-t, x) = \eta(t, x), \quad \psi(-t, x) = -\psi(t, x), \qquad \forall t, x \in \mathbb{R}.
\tag{2.3.35}
\]
Solutions of the water-waves equations (2.3.30) satisfying (2.3.34) and (2.3.35) are called standing water waves. The phase space of (2.3.30) containing $(\eta, \psi)$ is (a dense subspace of) $H^1_0(\mathbb{T}) \times \dot H^1(\mathbb{T})$; $H^1_0(\mathbb{T})$ denotes the subspace of $H^1(\mathbb{T})$ of zero average functions; $\dot H^1(\mathbb{T}) := H^1(\mathbb{T})/\!\sim$ is the quotient space of $H^1(\mathbb{T})$ by the equivalence relation defined in this way: $\psi_1 \sim \psi_2$ if and only if $\psi_1(x) - \psi_2(x) = c$ is a constant. For simplicity of notation we denote the equivalence class $[\psi]$ by $\psi$. Note that the second equation in (2.3.30) is in $\dot H^1(\mathbb{T})$, as is natural because only the gradient of the velocity potential has a physical meaning. Linearizing (2.3.30) at the equilibrium $(\eta, \psi) = (0,0)$ we get
\[
\partial_t \eta = G(0)\psi, \qquad \partial_t \psi = -g\eta + \kappa\, \eta_{xx},
\tag{2.3.36}
\]
where $G(0) = D\tanh(hD)$ is the Dirichlet–Neumann operator at the flat surface $\eta = 0$. The linear frequencies of oscillations of (2.3.36) are
\[
\omega_j = \sqrt{j \tanh(hj)\,(g + \kappa j^2)}, \qquad j \in \mathbb{Z},
\tag{2.3.37}
\]
which, in the phase space of even functions (2.3.34), are simple. Also note that
\[
\kappa > 0 \ \text{(capillary-gravity waves)} \ \Longrightarrow\ \omega_j \approx \sqrt{\kappa}\, |j|^{3/2} \ \text{as } |j| \to \infty,
\]
\[
\kappa = 0 \ \text{(pure gravity waves)} \ \Longrightarrow\ \omega_j \approx \sqrt{g}\, |j|^{1/2} \ \text{as } |j| \to \infty.
\]
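The two asymptotic regimes can be seen directly from (2.3.37). The short sketch below (the values of $g$, $h$, $\kappa$ are illustrative assumptions, not data from the monograph) evaluates the linear frequencies and exhibits the $|j|^{3/2}$ growth for $\kappa > 0$ against the $|j|^{1/2}$ growth for $\kappa = 0$.

```python
import numpy as np

def omega(j, h=1.0, g=9.81, kappa=0.0):
    """Linear frequencies (2.3.37): omega_j = sqrt(j*tanh(h*j)*(g + kappa*j^2))."""
    j = np.asarray(j, dtype=float)
    return np.sqrt(j * np.tanh(h * j) * (g + kappa * j ** 2))

j = np.array([10, 100, 1000, 10000], dtype=float)

# kappa > 0: omega_j ~ sqrt(kappa)*|j|^(3/2);  kappa = 0: omega_j ~ sqrt(g)*|j|^(1/2)
print("capillary-gravity, omega_j / j^(3/2):", omega(j, kappa=0.01) / j ** 1.5)
print("pure gravity,      omega_j / j^(1/2):", omega(j, kappa=0.0) / j ** 0.5)
```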
KAM for water waves. The first existence results of small-amplitude time-periodic gravity standing wave solutions for bidimensional fluids were proved by Plotnikov and Toland [110] in finite depth and by Iooss, Plotnikov, and Toland in [91] in infinite depth. More recently, the existence of time-periodic gravity-capillary standing wave solutions in infinite depth has been proved by Alazard and Baldi [1]. The main result in [32] proves that most of the standing wave solutions of the linear system (2.3.36) that are supported on finitely many space Fourier modes can be continued to standing wave solutions of the nonlinear water-waves system (2.3.30), for most values of the surface tension parameter $\kappa \in [\kappa_1, \kappa_2]$. A key step is the reducibility to constant coefficients of the quasiperiodic operator $\mathcal{L}_\omega$ obtained by linearizing (2.3.30) at a quasiperiodic approximate solution. After the introduction of a linearized Alinhac good unknown, and using pseudodifferential calculus in a systematic way, it is possible to transform $\mathcal{L}_\omega$ into a complex quasiperiodic linear operator of the form
\[
(h, \bar h) \mapsto \omega\cdot\partial_\varphi h + \mathrm{i}\, m_{3/2}\, |D|^{1/2}\big(1 - \kappa\, \partial_{xx}\big)^{1/2} h + \mathrm{i}\, m_{1/2}\, |D|^{1/2} h + R(\varphi)\big[h, \bar h\big],
\tag{2.3.38}
\]
where $m_{3/2}, m_{1/2} \in \mathbb{R}$ are constants satisfying $m_{3/2} \approx 1$, $m_{1/2} \approx 0$ and the remainder $R(\varphi)$ is a small bounded operator. Then, a KAM reducibility scheme completes the diagonalization of the linearized operator $\mathcal{L}_\omega$. The required second-order Melnikov nonresonance conditions are fulfilled for most values of the surface tension parameter, generalizing ideas of degenerate KAM theory for PDEs [12]. One exploits the fact that the linear frequencies $\omega_j$ in (2.3.37) are analytic and nondegenerate in $\kappa$, together with the sharp asymptotic expansion of the perturbed frequencies obtained by the regularization procedure.

In the case of pure gravity water waves, i.e., $\kappa = 0$, the linear frequencies of oscillation are (see (2.3.37))
\[
\omega_j := \omega_j(h) := \sqrt{g\, j \tanh(hj)}, \qquad j \geq 1,
\tag{2.3.39}
\]
and three major further difficulties in proving the existence of time-quasiperiodic solutions are:

(i) The nonlinear water-waves system (2.3.30) (with $\kappa = 0$) is a singular perturbation of (2.3.36) (with $\kappa = 0$), in the sense that the quasiperiodic linearized operator assumes the form
\[
\omega\cdot\partial_\varphi + \mathrm{i}\, |D|^{1/2} \tanh^{1/2}(h|D|) + V(\varphi, x)\partial_x,
\]
and the term $V(\varphi, x)\partial_x$ is now a singular perturbation of the linear dispersion relation operator $\mathrm{i}\,|D|^{1/2}\tanh^{1/2}(h|D|)$ (on the contrary, for the gravity-capillary case, the transport term $V(\varphi, x)\partial_x$ is a lower-order perturbation of $|D|^{3/2}$, see (2.3.38)).

(ii) The dispersion relation (2.3.39) is sublinear, i.e., $\omega_j \sim j^{1/2}$ for $j \to \infty$, and therefore it is only possible to impose second-order Melnikov nonresonance conditions as in (2.2.21), which lose space derivatives.
(iii) The linear frequencies $\omega_j(h)$ in (2.3.39) vary with $h$ only by exponentially small quantities.

The main result in Baldi, Berti, Haus, and Montalto [5] proves the existence of pure gravity standing water-wave solutions. The difficulty (i) is solved by proving a straightening theorem for a quasiperiodic transport operator: there is a quasiperiodic change of variables of the form $x \mapsto x + \beta(\varphi, x)$ that conjugates $\omega\cdot\partial_\varphi + V(\varphi, x)\partial_x$ to the constant coefficient vector field $\omega\cdot\partial_\varphi$, for $V(\varphi, x)$ small. This perturbative rectification result is a classical small divisor problem, solved for perturbations of a Diophantine vector field at the beginning of KAM theory, see, e.g., [128, 129]. Note that, despite the fact that $\omega \in \mathbb{R}^\nu$ is Diophantine, the constant vector field $\omega\cdot\partial_\varphi$ is resonant on the higher-dimensional torus $\mathbb{T}^\nu_\varphi \times \mathbb{T}_x$. We exploit in a crucial way the symmetry induced by the reversible structure of the water-waves equations, i.e., that $V(\varphi, x)$ is odd in $\varphi$. This problem amounts to proving that all the solutions of the quasiperiodically time-dependent scalar characteristic equation $\dot x = V(\omega t, x)$, $x \in \mathbb{T}$, are quasiperiodic in time with frequency $\omega$.

The difficulty (ii) is overcome by performing a regularizing procedure that conjugates the linearized operator, obtained along the Nash–Moser iteration, to a diagonal, constant coefficient linear one, up to a sufficiently smoothing operator. In this way, the subsequent KAM reducibility scheme converges also in the presence of very weak Melnikov nonresonance conditions as in (2.2.21), which lose space derivatives. This regularization strategy is in principle applicable to a broad class of PDEs where the second-order Melnikov nonresonance conditions lose space derivatives.

The difficulty (iii) is solved by an improvement of the degenerate KAM theory for PDEs in Bambusi, Berti, and Magistrelli [12], which allows us to prove that all the Melnikov nonresonance conditions are fulfilled for most values of $h$.

Remark 2.3.6. We can introduce the space wavelength $2\pi\varsigma$ as an internal free parameter in the water-waves equations (2.3.30). Rescaling properly the time, space, and amplitude of the solution $(\eta(t,x), \psi(t,x))$, we obtain system (2.3.30) where the gravity constant is $g = 1$ and the depth parameter $h$ depends linearly on $\varsigma$. In this way [5] proves the existence results for a fixed equation, i.e., a fixed depth $h$, for most values of the space wavelength $2\pi\varsigma$.
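The straightening result for the transport operator mentioned above ultimately rests on solving, in Fourier space, cohomological equations of the form $\omega\cdot\partial_\varphi u = g$ with $g$ of zero average. The toy sketch below (a two-frequency example with an assumed frequency vector; it only illustrates the division by the small divisors $\mathrm{i}\,\omega\cdot\ell$ and is not the construction used in [5]) shows how the amplification $1/|\omega\cdot\ell|$ produces the loss of derivatives typical of such small divisor problems.

```python
import numpy as np

# Assumed frequency vector (nu = 2), built from the golden ratio for illustration.
omega = np.array([1.0, (np.sqrt(5) - 1) / 2])

# A zero-average g(phi) given through finitely many Fourier coefficients g_hat[l].
L = 20
ells = [(l1, l2) for l1 in range(-L, L + 1) for l2 in range(-L, L + 1)
        if (l1, l2) != (0, 0)]
rng = np.random.default_rng(0)
g_hat = {l: rng.normal() / (1 + l[0] ** 2 + l[1] ** 2) ** 2 for l in ells}

# Solve  omega . d_phi u = g  =>  u_hat[l] = g_hat[l] / (i omega.l),  l != 0.
u_hat = {l: g_hat[l] / (1j * (omega[0] * l[0] + omega[1] * l[1])) for l in ells}

# The amplification 1/|omega.l| quantifies the small-divisor loss of derivatives.
amp = max(1.0 / abs(omega[0] * l[0] + omega[1] * l[1]) for l in ells)
print("largest small-divisor amplification on |l| <= %d : %.2f" % (L, amp))
```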
2.4 The multiscale approach to KAM for PDEs

We now present some key ideas about another approach developed for analyzing linear quasiperiodic systems, in order to prove KAM results for PDEs, starting with the seminal paper of Craig and Wayne [54], strongly expanded by Bourgain [36, 37, 39–42]. This set of ideas and techniques – referred to as multiscale analysis – is
the basis of the present monograph. For this reason we find it convenient to present it in some detail. For definiteness we consider a quasiperiodic linear wave operator
\[
(\omega\cdot\partial_\varphi)^2 - \Delta + m + \varepsilon\, b(\varphi, x), \qquad \varphi \in \mathbb{T}^\nu, \ x \in \mathbb{T}^d,
\tag{2.4.1}
\]
acting on a dense subspace of L2 .T Td /, where m > 0 and b.'; x/ is a smooth function. A linear operator as in (2.4.1) is obtained by linearizing a NLW equation at a smooth approximate quasiperiodic solution. Remark 2.4.1. The choice of the parameters in (2.4.1) makes a significant difference. If (2.4.1) arises by linearizing a quasiperiodically forced NLW equation as (1.1.8), the frequency vector ! can be regarded as a free parameter belonging to a subset of R , independent of ". On the other hand, if (2.4.1) arises by linearizing an autonomous NLW equation like (1.1.1), or (1.1.9), the frequency ! and " are linked. In particular ! has to be "2 -close to some frequency vector ..jji j2 C m/1=2 /i D1;:::; (actually ! has to belong to the region of admissible frequencies as in (1.2.23)). This difficulty is presented in this monograph as well as in [125]. In the exponential basis fei.`'Cj x/ g`2Z ;j 2Zd the linear operator (2.4.1) is represented by the self-adjoint matrix A WD D C "T; D WD Diag.`;j /2Z Zd .! `/2 C jj j2 C m ; 0 0 ` ;j T D T`;j WD b b ``0 ;j j 0 ; (2.4.2) where b b ``0 ;j j 0 are the Fourier coefficients of the function b.'; x/; these coefficients decay rapidly to zero as j.` `0 ; j j 0 /j ! C1, exponentially fast, if b.'; x/ is analytic, polynomially, if b.'; x/ is a Sobolev function. Note that the matrix T is Toeplitz, namely it has constant entries on the diagonals .` `0 ; j j 0 / D .L; J / 2 Z Zd . Remark 2.4.2. The analytic/Gevrey setting has been considered in [36–42] and the Sobolev case in [22–27], as well as in Chapter 5 of the present monograph. The Sobolev regularity of b.'; x/ has to be large enough, as we comment in Remark 2.4.6. In KAM for PDE applications this requires that the nonlinearity (thus the solution) is sufficiently many times differentiable. For finite-dimensional Hamiltonian systems, it has been rigorously proved that, otherwise, all the invariant tori could be destroyed and only discontinuous Aubry–Mather invariant sets survive, see, e.g., [87]. The infinite set of the eigenvalues of the diagonal operator D, .! `/2 C jj j2 C m;
` 2 Z ; j 2 Zd ;
accumulate to zero (small divisors) and therefore the matrix T in (2.4.2), which represents in Fourier space the multiplication operator for the function b.'; x/, is a singular perturbation of D. As a consequence it is not obvious at all that the self-adjoint
operator $D + \varepsilon T$ still has a pure point spectrum with a basis of eigenfunctions with exponential/polynomial decay, for $\varepsilon$ small, for most values of the frequency vector $\omega \in \mathbb{R}^\nu$. This is the main problem addressed in Anderson-localization theory. Actually, for the convergence of a Nash–Moser scheme in applications to KAM for PDEs (cf. (2.1.5) and Remark 2.1.2), it is sufficient to prove the following:

1. For any large $N$, the finite-dimensional restrictions of the operator in (2.4.1),
\[
L_N := \Pi_N \big( (\omega\cdot\partial_\varphi)^2 - \Delta + m + \varepsilon\, b(\varphi, x) \big)_{|H_N},
\tag{2.4.3}
\]
are invertible for most values of the parameters, where $\Pi_N$ denotes the projection on the finite-dimensional subspace
\[
H_N := \Big\{ h(\varphi, x) = \sum_{|(\ell, j)| \leq N} h_{\ell, j}\, e^{\mathrm{i}(\ell\cdot\varphi + j\cdot x)} \Big\}.
\tag{2.4.4}
\]

2. The inverse $L_N^{-1}$ satisfies, for some $\tau > 0$, $s_1 > 0$, tame estimates as
\[
\| L_N^{-1} h \|_s \leq C(s)\, N^\tau \big( \| h \|_s + \| b \|_s \| h \|_{s_1} \big), \qquad \forall h \in H_N, \ \forall s \geq s_1,
\tag{2.4.5}
\]
where $\| \cdot \|_s$ denotes the Sobolev norm in (1.3.1).

Remark 2.4.3. Note that $\tau > 0$ represents in (2.4.5) a loss of derivatives due to the small divisors. Since the multiplication operator $h \mapsto bh$ satisfies a tame estimate like (2.4.5) with $\tau = 0$, the estimate (2.4.5) means that $L_N^{-1}$ acts, on the Sobolev scale $H^s$, somehow as an unbounded differential operator of order $\tau$. Also weaker tame estimates as
\[
\| L_N^{-1} h \|_s \leq C(s) \big( N^{\tau' + \delta s} ( N + \| b \|_s ) \| h \|_{s_1} + N^{\tau' + \delta s_1} \| h \|_s \big), \qquad \forall h \in H_N, \ \forall s \geq s_1,
\tag{2.4.6}
\]
for $\delta < 1$, are sufficient for the convergence of the Nash–Moser scheme. Note that such conditions are much weaker than (2.4.5) because the loss of derivatives $\tau' + \delta s$ increases with $s$. Conditions like (2.4.6) are essentially optimal for the convergence, compare with Lojasiewicz and Zehnder [94].

Remark 2.4.4. It would also be sufficient to prove the existence of a right inverse of the operator $[\mathcal{L}]_N^{2N} := \Pi_N \big( (\omega\cdot\partial_\varphi)^2 - \Delta + m + \varepsilon\, b(\varphi, x) \big)_{|H_{2N}}$ satisfying (2.4.5) or (2.4.6), as we do in Section 5.6. Recall that a linear operator acting between finite-dimensional vector spaces admits a right inverse if it is surjective, see Definition 4.3.11. In the time-periodic setting, i.e., $\nu = 1$, we establish the stronger tame estimates (2.4.5) whereas, in the time-quasiperiodic setting, i.e., $\nu \geq 2$, we prove the weaker tame estimates (2.4.6). Actually, in the present monograph, as in [23] and [24], we shall prove that the approximate inverse satisfies tame estimates of the weaker form (2.4.6).
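To make the objects of items 1–2 concrete, the following sketch (a toy truncation with $\nu = d = 1$; the function $b$, the parameter values, and the size $N$ are assumptions chosen only for illustration, not the monograph's setting) assembles the matrix $D + \varepsilon T$ of (2.4.2) on $H_N$ and computes the $L^2$-norm of its inverse from its smallest eigenvalue in modulus.

```python
import numpy as np

# Toy truncation of (omega*d_phi)^2 - Delta + m + eps*b(phi, x) with nu = d = 1.
N, m, eps, omega = 8, 1.0, 1e-3, 1.2357          # illustrative values

idx = [(l, j) for l in range(-N, N + 1) for j in range(-N, N + 1)]

def b_hat(L, J):
    """Fourier coefficients of a smooth b(phi, x): rapid off-diagonal decay."""
    return 1.0 / (1.0 + L ** 2 + J ** 2) ** 3

D = np.diag([-(omega * l) ** 2 + j ** 2 + m for (l, j) in idx])
T = np.array([[b_hat(l - lp, j - jp) for (lp, jp) in idx] for (l, j) in idx])
A = D + eps * T                                  # self-adjoint matrix as in (2.4.2)

eig = np.linalg.eigvalsh(A)
print("smallest |eigenvalue| :", np.min(np.abs(eig)))
# For a self-adjoint matrix the L2 operator norm of the inverse is 1/min|eigenvalue|.
print("L2 norm of the inverse:", 1.0 / np.min(np.abs(eig)))
```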
In order to achieve (2.4.5) or (2.4.6) there are two main steps:

1. ($L^2$ estimates) Impose lower bounds for the eigenvalues of the self-adjoint operator (2.4.3), for most values of the parameters. These first-order Melnikov nonresonance conditions provide estimates of the inverse $L_N^{-1}$ in $L^2$-norm.

2. ($H^s$ estimates) Prove the estimates (2.4.5) or (2.4.6) in high Sobolev norms. In the language of Anderson-localization theory, this amounts to proving polynomially fast off-diagonal decay estimates for the inverse matrix $L_N^{-1}$.

In the forced case, when the frequency vector $\omega \in \mathbb{R}^\nu$ provides independent parameters, item 1 is not too difficult a task, using results concerning the eigenvalues of self-adjoint matrices depending on parameters, as Lemmata 5.8.1 and 5.8.2. On the other hand, in the autonomous case, for the difficulties discussed in Remark 2.4.1, this is a more delicate task (that we address in this monograph). In the sequel we concentrate on the analysis of item 2. An essential ingredient is the decomposition into "singular" and "regular" sites, for some $\rho > 0$,
\[
S := \big\{ (\ell, j) \in \mathbb{Z}^\nu \times \mathbb{Z}^d \ \text{such that} \ \big| {-(\omega\cdot\ell)^2} + |j|^2 + m \big| < \rho \big\},
\]
\[
R := \big\{ (\ell, j) \in \mathbb{Z}^\nu \times \mathbb{Z}^d \ \text{such that} \ \big| {-(\omega\cdot\ell)^2} + |j|^2 + m \big| \geq \rho \big\}.
\tag{2.4.7}
\]
It is clear, indeed, that in order to achieve (2.4.5), conditions that limit the quantity of singular sites have to be fulfilled, otherwise the inverse operator $L_N^{-1}$ would be "too big" in any sense and (2.4.6) would be violated.

Remark 2.4.5. If in (2.4.1) the constant $m$ is replaced by a (not small) multiplicative potential $V(x)$ (this is the case considered in the present monograph), it is natural to define the singular sites as $| -(\omega\cdot\ell)^2 + |j|^2 + m | < \Theta$ for some $\Theta$ large depending on $V(x)$, where $m$ is the mean value of $V(x)$.

We first consider the easier case $\nu = 1$ (time-periodic solutions).
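As a toy illustration of the decomposition (2.4.7) in the case $\nu = d = 1$ (all parameter values below are assumptions made for illustration), the next sketch enumerates the singular sites in a box $[-N, N]^2$; they concentrate near the "cone" $|\omega \ell| = \sqrt{|j|^2 + m}$.

```python
import numpy as np

N, m, rho, omega = 40, 1.0, 0.5, 1.2357   # illustrative values, nu = d = 1

sites = [(l, j) for l in range(-N, N + 1) for j in range(-N, N + 1)]
d_lj = {s: -(omega * s[0]) ** 2 + s[1] ** 2 + m for s in sites}

singular = [s for s in sites if abs(d_lj[s]) < rho]     # the set S in (2.4.7)
regular  = [s for s in sites if abs(d_lj[s]) >= rho]    # the set R in (2.4.7)

print("number of sites         :", len(sites))
print("number of singular sites:", len(singular))
# The singular sites lie near the cone |omega*l| = sqrt(j^2 + m):
dev = max(abs(abs(omega * l) - np.sqrt(j ** 2 + m)) for (l, j) in singular)
print("max distance of S from the cone:", dev)
```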
2.4.1 Time-periodic case

The existence of time-periodic solutions for NLW equations on $\mathbb{T}^d$ was first obtained in [37] in an analytic setting. In the exposition below we follow [22], which works in the context of Sobolev spaces. The following "separation properties" of the singular sites (2.4.7) are sufficient for proving tame estimates as (2.4.5): the singular sites $S$ in the box $[-N, N]^{1+d}$ are partitioned into disjoint clusters $\Omega_\alpha$,
\[
S \cap [-N, N]^{1+d} = \bigcup_\alpha \Omega_\alpha,
\tag{2.4.8}
\]
satisfying

(H1) (Dyadic) $M_\alpha \leq 2\, m_\alpha$, $\forall \alpha$, where
\[
M_\alpha := \max_{(\ell, j) \in \Omega_\alpha} |(\ell, j)| \quad \text{and} \quad m_\alpha := \min_{(\ell, j) \in \Omega_\alpha} |(\ell, j)|;
\]

(H2) (Separation at infinity) There is $\delta = \delta(d) > 0$ (independent of $N$) such that
\[
d(\Omega_\alpha, \Omega_\beta) := \min_{(\ell, j) \in \Omega_\alpha,\, (\ell', j') \in \Omega_\beta} |(\ell, j) - (\ell', j')| \geq (M_\alpha + M_\beta)^\delta, \qquad \forall \alpha \neq \beta.
\tag{2.4.9}
\]

Note that in (H2) the clusters $\Omega_\alpha$ of singular sites are separated at infinity, namely the distance between distinct clusters increases when the Fourier indices tend to infinity. A partition of the singular sites as in (2.4.8), satisfying (H1)–(H2), has been proved in [22] assuming that $\omega^2$ is Diophantine.

Remark 2.4.6. We require that the function $b(\varphi, x)$ in (2.4.1) has the same (high) regularity in $(\varphi, x)$, i.e., $\|b\|_s < \infty$ for some large Sobolev index $s$, because the clusters $\Omega_\alpha$ in (2.4.9) are separated in time-space Fourier indices only. This implies, in KAM applications, that the solutions that we obtain will have the same high Sobolev regularity in time and space.

We have to solve the linear system
\[
L_N\, g = h, \qquad g, h \in H_N.
\tag{2.4.10}
\]
Given a subset $\Omega$ of $[-N, N]^{1+d} \subset \mathbb{Z} \times \mathbb{Z}^d$ we denote by $H_\Omega$ the vector space spanned by $\{ e^{\mathrm{i}(\ell\varphi + j\cdot x)}, (\ell, j) \in \Omega \}$ and by $\Pi_\Omega$ the corresponding orthogonal projector. With this notation the subspace $H_N$ in (2.4.4) coincides with $H_{[-N,N]^{1+d}}$. Given a linear operator $L$ of $H_N$ and another subset $\Omega' \subset [-N, N]^{1+d}$ we denote $L_\Omega^{\Omega'} := \Pi_{\Omega'} L_{|H_\Omega}$. For simplicity we also set $L_\Omega^{\Omega'} := \Pi_{\Omega'} (L_N)_{|H_\Omega}$. Let us for simplicity of notation still denote by $S, R$ the sets of the singular and regular sites (2.4.7) intersected with the box $[-N, N]^{1+d}$. According to the splitting $[-N, N]^{1+d} = S \cup R$, we introduce the orthogonal decomposition
\[
H_N = H_S \oplus H_R.
\]
Writing the unique decomposition $g = g_S + g_R$, $g_R \in H_R$, $g_S \in H_S$, and similarly for $h$, the linear system (2.4.10) then amounts to
\[
\begin{cases}
L_R^R\, g_R + L_S^R\, g_S = h_R, \\
L_R^S\, g_R + L_S^S\, g_S = h_S.
\end{cases}
\]
Note that, by (2.4.2), the coupling terms $L_S^R = \varepsilon\, T_S^R$, $L_R^S = \varepsilon\, T_R^S$ have polynomial off-diagonal decay.
By standard perturbative arguments, the operator LR R , which is the restriction of 1 LN to the regular sites, is invertible for " 1, and therefore solving the first equation in gR , and inserting the result in the second one, we are reduced to solve R 1 S R 1 LR gS D hS LR hR : LSS LR S LR S LR Thus the main task is now to invert the self-adjoint matrix R 1 S LR ; U W HS ! HS : U WD LSS LR S LR This reduction procedure is sometimes referred as a resolvent identity. According to (2.4.8) we have the orthogonal decomposition HS D ˚˛ H ˛ , which induces a block decomposition for the operator 1 R R ˛ ˛ ; U
˛ WD L
L
U D U
˛
L LR R : ˛;
Then we decompose U in block-diagonal and off-diagonal parts: U D D C R; D WD Diag˛ U
˛˛ ; R WD U
˛ :
(2.4.11)
˛¤
Since the matrix T has off-diagonal decay (see (2.4.2) and recall that the function b.'; x/ is smooth enough) and the matrices with off-diagonal decay form an algebra (with interpolation estimates) it easily results an off-diagonal decay of the matrix R like " C.s/ ˛ ; ˛¤: (2.4.12) U 2 1Cd L.L / d.˛ ; /s 2 In the simplest case that the ! are independent parameters (forced case), it is not too difficult to impose that each self-adjoint operator U
˛˛ is invertible for most parameters and that ˛ 1 U 2 M˛ ; 8˛ ; ˛ L.L /
for some large enough. Since, by (H1), each cluster ˛ is dyadic, this L2 estimate also implies a Hs -Sobolev bound: for all h 2 H ˛ , s ˛ 1 ˛ 1 s M s M khkL2 M˛ M khks U h M h U ˛ ˛ ˛ 2 ˛ ˛ ms ˛ s
˛
L
2s N khks by (H1) and M˛ N . As a consequence, the whole operator D defined in (2.4.11) is invertible with a Sobolev estimate kD 1 hks C.s/N khks ;
8h 2 HN :
Finally, using the off-diagonal decay (2.4.12) and the separation at infinity property (2.4.9), the operator D 1 R is bounded in L2 , and it is easy to reproduce a Neumannseries argument to prove the invertibility of U D D .I C D 1 R/ with tame estimates for the inverse U 1 in Sobolev norms, implying (2.4.5).
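The reduction to the singular sites described above can be mimicked numerically. In the sketch below (the same kind of toy $\nu = d = 1$ matrix as in the earlier sketches, with assumed parameters) we invert the restriction to the regular sites, form the Schur complement $U = L_S^S - L_S^R (L_R^R)^{-1} L_R^S$, and check that solving the reduced system reproduces the solution of the full one.

```python
import numpy as np

# Toy matrix A = D + eps*T (nu = d = 1, illustrative parameters).
N, m, eps, omega, rho = 10, 1.0, 1e-3, 1.2357, 0.5
idx = [(l, j) for l in range(-N, N + 1) for j in range(-N, N + 1)]
b_hat = lambda L, J: 1.0 / (1.0 + L ** 2 + J ** 2) ** 3
D = np.array([-(omega * l) ** 2 + j ** 2 + m for (l, j) in idx])
T = np.array([[b_hat(l - lp, j - jp) for (lp, jp) in idx] for (l, j) in idx])
A = np.diag(D) + eps * T

S = np.where(np.abs(D) < rho)[0]          # singular sites
R = np.where(np.abs(D) >= rho)[0]         # regular sites

A_RR, A_RS = A[np.ix_(R, R)], A[np.ix_(R, S)]
A_SR, A_SS = A[np.ix_(S, R)], A[np.ix_(S, S)]

# Resolvent identity: eliminate g_R and invert the Schur complement U on H_S.
invA_RR = np.linalg.inv(A_RR)             # invertible since |D| >= rho on R and eps is small
U = A_SS - A_SR @ invA_RR @ A_RS

rng = np.random.default_rng(1)
h = rng.normal(size=len(idx))
h_R, h_S = h[R], h[S]

g_S = np.linalg.solve(U, h_S - A_SR @ (invA_RR @ h_R))
g_R = invA_RR @ (h_R - A_RS @ g_S)
g = np.empty(len(idx)); g[R], g[S] = g_R, g_S

print("|S| =", len(S), " residual of the reduced solve:",
      np.max(np.abs(A @ g - h)))          # agrees with the direct solution
```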
2.4.2 Quasiperiodic case

In the quasiperiodic setting, i.e., $\nu \geq 2$, the proof that the operator $L_N$ defined in (2.4.3) is invertible and that its inverse satisfies the tame estimates (2.4.6) is more difficult. Indeed the separation at infinity property (H2) never holds in the quasiperiodic case, not even for finite-dimensional systems. For example, the operator $\omega\cdot\partial_\varphi$ is represented in the Fourier basis as the diagonal matrix $\mathrm{Diag}_{\ell \in \mathbb{Z}^\nu}(\mathrm{i}\,\omega\cdot\ell)$. If the frequency vector $\omega \in \mathbb{R}^\nu$ is Diophantine, then the singular sites $\ell \in \mathbb{Z}^\nu$ such that $|\omega\cdot\ell| < \rho$ are "uniformly distributed" in a neighborhood of the hyperplane $\omega\cdot\ell = 0$, with nearby indices at distance $O(\rho^{-\alpha})$ for some $\alpha > 0$. Therefore, unlike in the time-periodic case, the decomposition (2.4.7) into singular and regular sites of the unperturbed linear operator is not sufficient, and a finer analysis has to be performed. In the sequel we follow the exposition of [23]. Let
\[
A = D + \varepsilon T, \qquad D := \mathrm{Diag}_{(\ell, j) \in \mathbb{Z}^\nu \times \mathbb{Z}^d}\big( -(\omega\cdot\ell)^2 + |j|^2 + m \big), \qquad T := \big( \widehat b_{\ell - \ell', j - j'} \big),
\]
be the self-adjoint matrix in (2.4.2) that represents the second-order operator (2.4.1). We suppose (forced case) that $\omega$ and $\varepsilon$ are unrelated and that $\omega$ is constrained to a fixed Diophantine direction
\[
\omega = \lambda \bar\omega, \qquad \lambda \in \Big[ \frac12, \frac32 \Big], \qquad |\bar\omega\cdot\ell| \geq \frac{\gamma_0}{|\ell|^{\tau_0}}, \quad \forall \ell \in \mathbb{Z}^\nu \setminus \{0\}.
\tag{2.4.13}
\]
A way to deal with the lack of separation properties at infinity of the singular sites is to implement an inductive procedure. By introducing a rapidly increasing sequence $(N_n)$ of scales, defined by
\[
N_n = N_0^{\chi^n}, \qquad n \geq 0,
\tag{2.4.14}
\]
the aim is to obtain, for most parameters, off-diagonal decay estimates for the inverses $A_{N_n}^{-1}$ of the restrictions
\[
A_{N_n} := \Pi_{N_n} A_{|H_{N_n}}.
\tag{2.4.15}
\]
In (2.4.14) the constants $N_0$ and $\chi$ are chosen large enough. The estimates at step $n$ (for matrices of size $N_n$) partially rely on information about the invertibility and the off-diagonal decay of "most" inverses $A_{N_{n-1}, \ell_0, j_0}^{-1}$ of submatrices
\[
A_{N_{n-1}, \ell_0, j_0} := A_{|\ell - \ell_0| \leq N_{n-1},\, |j - j_0| \leq N_{n-1}}
\]
of size $N_{n-1}$. This program suggests the name multiscale analysis.
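The following quick sketch (the values of $N_0$, $\chi$, $\tau_0$ and the sample direction $\bar\omega$ are assumptions; checking the inequality on a finite box of course proves nothing about (2.4.13)) illustrates the super-exponential growth of the scales (2.4.14) and the lower bound $|\bar\omega\cdot\ell| \geq \gamma_0/|\ell|^{\tau_0}$ on a finite range of $\ell$.

```python
import numpy as np
from itertools import product

# Super-exponential scales N_n = N_0^(chi^n), with illustrative N_0 and chi.
N0, chi = 10, 3 / 2
print("scales:", [int(N0 ** (chi ** n)) for n in range(5)])

# Finite-box check of |omega_bar . l| >= gamma0 / |l|^tau0 for a sample direction.
# This only reports the best admissible gamma0 on |l| <= 30; it is not a proof.
omega_bar = np.array([1.0, np.sqrt(2.0)])
tau0 = 2.0
worst = min(abs(omega_bar @ np.array(l)) * np.linalg.norm(l) ** tau0
            for l in product(range(-30, 31), repeat=2) if l != (0, 0))
print("admissible gamma0 on the box |l| <= 30:", worst)
```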
In order to give a precise statement, we first introduce decay norms. Given a matrix $M = (M_i^{i'})_{i' \in B,\, i \in C}$, where $B, C$ are subsets of $\mathbb{Z}^{\nu + d}$, we define its $s$-norm
\[
|M|_s^2 := \sum_{n \in \mathbb{Z}^{\nu + d}} [M](n)^2\, \langle n \rangle^{2s}, \qquad \text{where } \langle n \rangle := \max(1, |n|),
\]
and
\[
[M](n) :=
\begin{cases}
\displaystyle\max_{i - i' = n,\ i \in C,\ i' \in B} |M_i^{i'}| & \text{if } n \in C - B, \\
0 & \text{if } n \notin C - B.
\end{cases}
\]
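For a Toeplitz matrix the $s$-norm just defined reduces to a weighted $\ell^2$-norm of the Fourier coefficients, which is the content of Remark 2.4.7 below. The sketch that follows (one-dimensional indices, a specific $\hat b$, and the convention $\|b\|_s^2 = \sum_n \langle n\rangle^{2s}|\hat b_n|^2$ are assumptions of this illustration) computes both quantities and checks that they coincide.

```python
import numpy as np

def s_norm(M, s):
    """Decay norm |M|_s^2 = sum_n [M](n)^2 <n>^(2s) for a matrix indexed by a block of Z."""
    B = M.shape[0]
    vals = {}
    for i in range(B):
        for ip in range(B):
            n = i - ip
            vals[n] = max(vals.get(n, 0.0), abs(M[i, ip]))
    return np.sqrt(sum(v ** 2 * max(1, abs(n)) ** (2 * s) for n, v in vals.items()))

# Toeplitz matrix of a function b with Fourier coefficients b_hat(n) (toy choice).
K = 60
b_hat = lambda n: 1.0 / (1.0 + abs(n)) ** 4
T = np.array([[b_hat(i - ip) for ip in range(-K, K + 1)] for i in range(-K, K + 1)])

s = 2.0
sobolev = np.sqrt(sum(max(1, abs(n)) ** (2 * s) * b_hat(n) ** 2
                      for n in range(-2 * K, 2 * K + 1)))
print("|T|_s   =", s_norm(T, s))
print("||b||_s =", sobolev, "  (equal up to rounding, on the same truncation)")
```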
Remark 2.4.7. The norm jT js of the matrix that represents the multiplication operator for the function b.'; x/ is equal to jT js D kbks . The product of matrices (when it makes sense) with finite s-norm, satisfy algebra and interpolation inequalities, see Appendix B.1. We now outline how to prove that the finite-dimensional matrices ANn in (2.4.15) are invertible for “most” parameters 2 Œ1=2; 3=2 and satisfy, for all n 0, ˇ 1 ˇ ˇA ˇ C.s/N 0 N &s C kbks ; & 2 .0; 1=2/; 0 > 0; 8s > s0 : (2.4.16) n n Nn s Such estimates imply the off-diagonal decay of the entries of the inverse matrix 0 &s ˇ ˇ Nn Nn C kbks ˇ 1 i ˇ ; ji j; ji 0 j Nn ; ˇ ANn i 0 ˇ C.s/ hi i 0 is and Sobolev tame estimates as (2.4.6) (by Lemma B.1.9), assuming that kbks1 C . In order to prove (2.4.16) at the initial scale N0 we impose that, for most parameters, the eigenvalues of the diagonal matrix D satisfy ˇ ˇ ˇ.! `/2 C jj j2 C mˇ N ; 8j.`; j /j N0 : 0 Then, for " small enough, we deduce, by a direct Neumann series argument, the invertibility of AN0 D DN0 C "TN0 and the decay of the inverse A1 N0 . In order to proceed at the higher scales, we use a multiscale analysis. L2 bounds. The first step is to show that, for “most” parameters, the eigenvalues of ANn are in modulus bounded from below by O.Nn / and so the L2 -norm of the inverse satisfies 1 A D O.N / where k k0 WD k kL.L2 / : (2.4.17) Nn 0 n The proof is based on eigenvalue variation arguments. Recalling (2.4.13), dividing ANn by 2 , and setting WD 1=2 , we observe that the derivative with respect to satisfies, for some c > 0, @ ANn D Diagj.`;j /jNn jj j2 C m C O "kT k0 C "k@ T k0 c ; (2.4.18)
for $\varepsilon$ small, i.e., it is positive definite. So, the eigenvalues $\mu_{\ell, j}(\xi)$ of the self-adjoint matrix $A_{N_n}$ satisfy (see Lemma 5.8.1)
\[
\partial_\xi\, \mu_{\ell, j}(\xi) \geq c > 0, \qquad \forall\, |(\ell, j)| \leq N_n,
\]
which implies (2.4.17) except in a set of ’s of measure O.Nn Cd C /. Remark 2.4.8. Monotonicity arguments for proving lower bounds for the moduli of the eigenvalues of (huge) self-adjoint matrices have been also used in [24, 61] and [23]. Note that the eigenvalues could be degenerate for some values of the parameters. This approach to verify “large deviation estimates” is very robust and the measure estimates that we perform at each step of the iteration are not inductive, as those in [42]. Multiscale Step. The bounds (2.4.16) for the decay norms of A1 Nn follow by an inductive application of the multiscale step proposition B.2.4, that we now describe. Cd , with diam.E/ 4N is called N -good if it is A matrix A 2 ME E, E Z invertible and ˇ 1 ˇ ˇA ˇ N 0 C&s ; 8s 2 Œs0 ; s1 ; s for some s1 WD s1 .d; / large. Otherwise we say that A is N -bad. The aim of the multiscale step is to deduce that a matrix A 2 ME E with diam.E/ N 0 WD N
with
1;
(2.4.19)
is N 0 -good, knowing 0
(H1) (Off-diagonal decay) jA Diag.A/js1 ‡ where Diag.A/ WD .ıi;i 0 Aii /i;i 0 2E ; (H2) (L2 bound) kA1 k0 .N 0 / , where we set k k0 WD k kL.L2 / ; and suitable assumptions concerning the N -dimensional submatrices along the diagonal of A. Definition 2.4.9 (.A; N /-bad sites). An index i 2 E is called 1. .A; N /-regular if there is F E containing i such that diam.F / 4N , d.i; EnF / N=2 and AF F is N -good; 2. .A; N /-bad if it is singular (i.e., jAii j < ) and .A; N /-regular. We suppose the following mild separation properties for the .A; N /-bad sites. (H3) (Separation properties) There is a partition of the .A; N /-bad sites B D [˛ ˛ with diam.˛ / N C1 ; for some C1 WD C1 .d; / 2.
d.˛ ; ˇ / N 2 ;
8˛ ¤ ˇ ;
(2.4.20)
The multiscale step proposition B.2.4 deduces that A is N 0 -good and (2.4.16) holds at the new scale N 0 , by (H1)-(H2)-(H3), with suitable relations between the constants , C1 , & , s1 , . Roughly, the main conditions on the exponents are C1 < &
and s1 :
The first means that the size N C1 of any bad cluster ˛ is small with respect to the size N 0 WD N of the matrix A. The second means that the Sobolev regularity s1 is large enough to “separate” the resonance effects of two nearby bad clusters ˛ , ˇ . Separation properties. We apply the multiscale step Proposition B.2.4 to the matrix ANnC1 . The key property to verify is (H3). A first key ingredient is the following covariance property: consider the family of infinite-dimensional matrices A. / WD D. / C "T; D. / WD diag.`;j /2Z Zd .! ` C /2 C jj j2 C m ; and its .2N C 1/Cd restrictions AN;`0 ;j0 . / WD Aj``0 jN;jj j0 jN . / centered at any .`0 ; j0 / 2 Z Zd . Since the matrix T in (2.4.2) is Toeplitz we have the covariance property AN;`0 ;j0 . / D AN;j0 ;0 . C ! `0 / :
(2.4.21)
In order to deduce (H3), it is sufficient to prove the separation properties (2.4.20) for the Nn -bad sites of A, namely the indices .`0 ; j0 / that are singular ˇ ˇ ˇ.! `0 /2 C jj0 j2 C mˇ ; (2.4.22) and for which there exists a site .`; j /, with j.`; j /.`0 ; j0 /j N , such that ANn ;`;j is Nn -bad. Such separation properties are obtained for all the parameters that are Nn -good, namely such that ˚ 8 j0 2 Zd ; BNn .j0 I / WD 2 RW ANn ;0;j0 . / is Nn -bad [ Iq where Iq are intervals with jIq j Nn : C.d;/
qD1;:::;Nn
(2.4.23) We first use the covariance property (2.4.21) and the “complexity” information (2.4.23) to bound the number of bad time-Fourier components. Indeed ANn ;`0;j0 is Nn -bad ” ANn ;0;j0 .! `0 / is Nn -bad ” ! `0 2 BNn .j0 I / : Then, using that ! is Diophantine, the complexity bound (2.4.23) implies that, for each fixed j0 , there are at most O.NnC.d;/ / sites .`0 ; j0 / in the larger box j`0 j NnC1 that are Nn -bad. Next, we prove that an Nn2 -chain of singular sites, i.e., a sequence of distinct integer vectors .`1 ; j1 /; : : : ; .`L ; jL / satisfying (2.4.22) and j`qC1 `q j C jjqC1 jq j Nn2 ;
q D 1; : : : ; L ;
which are also $N_n$-bad, has a "length" $L$ bounded by $L \leq N_{n+1}^{C(d, \nu)}$. This implies a partition of the $(A_{N_{n+1}}, N_n)$-bad sites as in (2.4.20) at order $N_n$. In this step we require that $\omega \in \mathbb{R}^\nu$ satisfies the quadratic Diophantine condition
\[
\Big| n + \sum_{1 \leq i \leq j \leq \nu} \omega_i\, \omega_j\, p_{ij} \Big| \geq \frac{\gamma_2}{|p|^{\tau_2}}, \qquad \forall p := (p_{ij}) \in \mathbb{Z}^{\frac{\nu(\nu+1)}{2}}, \ \forall n \in \mathbb{Z}, \ (p, n) \neq (0, 0),
\tag{2.4.24}
\]
for some positive 2 ; 2 . Remark 2.4.10. The singular sites (2.4.22) are integer vectors close to a “cone” and (2.4.24) can be seen as an irrationality condition on its slopes. For NLS equations, (2.4.24) is not required, because the corresponding singular sites .`; j / satisfy j! `Cjj j2 j C , i.e., they are close to a paraboloid. We refer to Berti and Maspero [31] to avoid the use of the quadratic Diophantine condition (2.4.24) for NLW equations. Measure and complexity estimates. In order to conclude the inductive proof we have to verify that “most” parameters are Nn -good, according to (2.4.23). We prove first that, except a set of measure O.Nn1 /, all parameters 2 Œ1=2; 3=2 are Nn -good in an L2 sense, namely o n 0 .j0 I / WD 2 RW kA1 8 j0 2 Zd ; BN Nn ;0;j0 . /k0 > Nn n [ Iq where Iq are intervals with jIq j Nn : (2.4.25) C.d;/
qD1;:::;Nn
The proof is again based on eigenvalue variation arguments as in (2.4.18), using that C m is positive definite. Finally, the multiscale Proposition step B.2.4, and the fact that the separation properties of the Nn -bad sites of A. / hold uniformly in 2 R, imply inductively that the parameters that are Nn -good in an L2 sense, are actually Nn -good, i.e., (2.4.23) holds, concluding the inductive argument.
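The positivity and monotonicity argument behind these measure estimates can be illustrated numerically. In the sketch below (a toy $\nu = d = 1$ family with assumed parameters, rescaled by $\lambda^{-2}$ and parametrized by $\xi = \lambda^{-2}$ as above), Weyl's inequality guarantees that every sorted eigenvalue moves with speed at least $m - \varepsilon\|T\|$, which is the mechanism excluding only a small set of parameters.

```python
import numpy as np

# Rescaled family M(xi) = Diag(-(omega_bar*l)^2) + xi*(Diag(j^2 + m) + eps*T), xi = lambda^(-2).
# Toy case nu = d = 1 with illustrative parameters; T as in the previous sketches.
N, m, eps, omega_bar = 8, 1.0, 1e-3, 1.2357
idx = [(l, j) for l in range(-N, N + 1) for j in range(-N, N + 1)]
b_hat = lambda L, J: 1.0 / (1.0 + L ** 2 + J ** 2) ** 3
T = np.array([[b_hat(l - lp, j - jp) for (lp, jp) in idx] for (l, j) in idx])
D0 = np.diag([-(omega_bar * l) ** 2 for (l, j) in idx])
D1 = np.diag([j ** 2 + m for (l, j) in idx])

M = lambda xi: D0 + xi * (D1 + eps * T)

xi1, xi2 = 0.80, 0.81
mu1 = np.linalg.eigvalsh(M(xi1))          # sorted eigenvalues
mu2 = np.linalg.eigvalsh(M(xi2))
speed = (mu2 - mu1) / (xi2 - xi1)

# Weyl's inequality guarantees speed >= m - eps*||T||_2, so each eigenvalue crosses
# any fixed small interval only for a parameter set of small measure.
print("minimal eigenvalue speed:", speed.min(), ">=", m - eps * np.linalg.norm(T, 2))
```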
2.4.3 The multiscale analysis of Chapter 5

In this monograph we consider autonomous NLW equations (1.1.1), whose small-amplitude quasiperiodic solutions of frequency vector $\omega$ correspond to solutions of (1.2.5). The quasiperiodic linear operator
\[
(\omega\cdot\partial_\varphi)^2 - \Delta + V(x) + \varepsilon^2 (\partial_u g)(x, u(\varphi, x)), \qquad \varphi \in \mathbb{T}^{|S|}, \ x \in \mathbb{T}^d,
\tag{2.4.26}
\]
is obtained by linearizing (1.2.5) at an approximate solution $u$. The analysis of this operator, acting in a subspace orthogonal to the unperturbed linear "tangential" solu-
tions (see (1.2.31)) ( ) X ˛j cos.'j /‰j .x/; ˛j 2 R Ker .N @' /2 C V .x/ ; j 2S
turns out to be much more difficult than in the forced case; it requires considerable extra work with respect to the previous section. Major difficulties are the following: (i) The frequency vector ! and the small parameter " are linked in (2.4.26), see the discussion in Remark 2.4.1. In particular ! has to be "2 -close to the unperturbed frequency vector N D .j /j 2S in (1.2.3). More precisely ! varies approximately according to the frequency-to-action map (1.2.11), and thus it has to belong to the region of admissible frequencies (1.2.23). Moreover the multiplicative function .@u g/.x; u.'; x// D 3a.x/.u.'; x//2 C in (2.4.26) (recall the form of the nonlinearity (1.2.2)) depends itself on !, via the approximate quasiperiodic solution u.'; x/, and, as the tangential frequency vector ! changes, the normal frequencies also undergo a significant modification. At least for finitely many modes, the shift of the normal frequencies due to the effect of 3a.x/.u.'; x//2 can be approximately described in terms of the Birkhoff matrix B in (1.2.9) as j C "2 .B /j (see (2.5.24)). Because of all these constraints, positivity properties like (2.4.18) in general fail, see (2.5.26) and Remark 2.5.3. This implies difficulties for imposing nonresonance conditions and for proving suitable complexity bounds. (ii) An additional difficulty in (2.4.26) is the presence of the multiplicative potential V .x/, which is not diagonal in the Fourier basis. In this monograph we shall still be able to use positivity arguments for imposing nonresonance conditions and proving complexity bounds along the multiscale analysis. This requires us to write (1.2.1) as a first-order Hamiltonian system and to perform a block-decomposition of the corresponding first-order quasiperiodic linear operator acting in the normal subspace. We refer to Section 2.5 for an explanation about this procedure (which is necessarily technical) and here we limit ourselves to describing the quasiperiodic operators that we shall analyze in Chapter 5 with multiscale techniques. The multiscale Proposition 5.1.5 of Chapter 5 provides the existence of a right inverse of finite-dimensional restrictions of self-adjoint linear operators like Lr in (2.4.27) and Lr; in (2.4.28), which have the following form: e ƒ, 1. for ! D .1 C "2 /!N " , 2 ƒ
Lr D J ! @' C DV C r."; ; '/ 2 ; acting on h 2 L2 TjSj Td ; R
(2.4.27)
where DV D
p
C V .x/;
0 Id J is the symplectic matrix J D , acting in .L2 .TjSj Td ; R//2 Id 0 by J.h1 ; h2 / D .h2 ; h1 /;
For each ' 2 TjSj , r."; ; '/ is a (self-adjoint) linear operator of .L2 .Td ; R//2 . As an operator acting in .L2 .TjSj Td ; R//2 , the family .r."; ; '//'2TjSj maps h to g such that g.'; / D r."; ; '/h.'; / I it is self-adjoint and the entries of its matrix representation (in the Fourier basis) are 2 2 matrices and have a polynomial off-diagonal decay. A quasiperiodic linear operator of the form Lr arises by writing the linear wave operator (2.4.26) as a first-order system as in (2.3.6) and (2.3.8) (using real variables instead of complex ones). We constrain the frequency vector ! D .1 C "2 /!N " to a straight line, because, in this way, we will be able to prove that, for “most” values of , these self-adjoint operators have eigenvalues different from zero. e ƒ, 2. For ! D .1 C "2 /!N " , 2 ƒ
Lr; D J ! @' C DV C ."; /J …G C r."; ; '/ 4 acting on h 2 L2 TjSj Td ; R ;
(2.4.28)
where J h WD J hJ and the left/right action of J on R4 are defined in (5.1.3) and (5.1.4); ."; / is a scalar; …G is the L2 -orthogonal projector in the space variable on the infinite dimensional subspace ? HG WD HS[F ; ( ) X HS[F WD .Qj ; Pj /‰j .x/; .Qj ; Pj / 2 R2 .L2 .Td //2 j 2S[F
(the set G N is defined in (1.2.15)): the action of …G in .L2 .TjSj Td ; R//4 maps h D .h.1/ ; h.2/ / 2 .L2 .TjSj Td ; R//2 .L2 .TjSj Td ; R//2 to g D .g .1/ ; g .2/ / such that g .i / .'; / D …G h.i / .'; /;
i D 1; 2 I
r."; ; '/, acting in .L2 .TjSj Td ; R//4 , is a self-adjoint operator; the entries of its matrix representation (in the Fourier basis) are 44 matrices and have a polynomial off-diagonal decay. Quasiperiodic linear operators of the form Lr; will arise in Chapter 10 in the process of block diagonalizing a quasiperiodic linear operator as Lr with respect to the splitting HS[F ˚ HG (low-high frequencies). The scalar ."; / will be (approximately) one of the finitely many eigenvalues of the restriction of Lr to the subspace HF . We explain in Section 2.5 why (and how) we perform this block diagonalization. Notice that the operators Lr and Lr; defined in (2.4.27) and (2.4.28) are defined for all e ƒ (which may shrink during the Nash–Moser belonging to a subset ƒ iteration); Lr and Lr; are first-order vector-valued quasiperiodic operators, unlike (2.4.26), which is a second-order scalar quasiperiodic operator (it acts in “configuration space”). The operator (2.4.27) arises (essentially) by writing the linear wave operator (2.4.26) as a first-order system as in (2.3.6), where the nondiagonal vector field is 1-smoothing. Therefore it is natural to require that the operators r D r."; ; '/ in (2.4.27) and (2.4.28) satisfy the decay condition ˇ 1 1ˇ p ˇ ˇ jrjC;s1 WD ˇDm2 rDm2 ˇ D O."2 / where Dm WD C m (2.4.29) s1
for some m > 0. A key property of the operators Lr and Lr; , that we shall be able to verify at each step of the Nash–Moser iteration, is the monotonicity property with respect to the parameter , in item 3 of Definition 5.1.2. We first introduce the following notation: given a family .A.//2e ƒ of linear self-adjoint operators, d A./ cId
”
A.2 / A.1 / c Id; 2 1
81 ¤ 2 ;
where A c Id means as usual .Aw; w/L2 ckwk2L2 . For Lr the condition of monotonicity takes the form
DV C r c "2 Id; c > 0 : d 1 C "2
(2.4.30)
(2.4.31)
The condition for Lr; is similar. Such property allows us to prove the measure estimates stated in properties 1–3 of Proposition 5.1.5. All the precise assumptions on the operators Lr in (2.4.27) and Lr; in (2.4.28) are stated in Definition 5.1.2 (we neglect in (2.4.27) and (2.4.28) the projector c…S , which has a purely technical role, as we comment in Remark 5.1.3).
Remark 2.4.11. We shall use the results of the multiscale Proposition 5.1.5 about the approximate invertibility of the operator Lr in (2.4.27) in Chapter 11, and for the operator Lr; defined in (2.4.28) in Chapter 10. In Section 2.5 we shall describe their role in the proof of Theorem 1.2.1. The application of multiscale techniques to the operators (2.4.27) and (2.4.28) is more involved than for the second-order quasiperiodic operator (2.4.1) presented in the previous section. We finally describe the main steps for the proof of the multiscale Proposition 5.1.5. This may serve as a road-map for the technical proof given in Chapter 5. For definiteness, we consider Lr; . In the Fourier exponential basis, the operator Lr; is represented by a self-adjoint matrix A D D! C T (2.4.32) with a diagonal part (see (5.2.4)) 0 1 hj im i! ` 0 0 0 0 B i! ` hj im C ; (2.4.33) D! D Diag.`;j /2ZjSjCd @ 0 0 hj im C i! ` A 0 0 i! ` hj im C where, for some m > 0, hj im WD
p
jj j2 C m
and WD ."; / 2 R. The matrix T in (2.4.32) is 0 0 T WD T``;j;j ; jSjCd 0 0 jSjCd .`;j /2Z
0 0 T``;j;j
WD .DV
j0 Dm / j
;.` ;j /2Z
j0
`0 ;j 0
ŒJ …S[F j C r`;j 2 Mat.4 4/ : 0
0
The matrix T is Toeplitz in `, namely T``;j;j depends only on the indices ` `0 ; j; j 0 . The infinitely many eigenvalues of the matrix D! are p jj j2 C m ˙ ˙ ! `; j 2 Zd ; ` 2 ZjSj : By Proposition 4.4.1 and Lemma 4.3.8, the matrix T satisfies the off-diagonal decay ˇ 1=2 1=2 ˇ TDm ˇs < C1 ; (2.4.34) jTjC;s1 WD ˇDm 1
cf. with (2.4.29) and (5.2.6). We introduce the index a 2 I WD f1; 2; 3; 4g to distinguish each component in the 4 4 matrix (2.4.33). Then, the singular sites .`; j; a/ 2 ZjSj Zd I of Lr; are those integer vectors such that, for some choice of the signs 1 .a/; 2.a/ 2 f1; 1g, ˇp ˇ ‚ ˇ ˇ : (2.4.35) ˇ jj j2 C m C 1 .a/ C 2 .a/! `ˇ < p jj j2 C m
In (2.4.35) the constant ‚ WD ‚.V / is chosen large enough depending on the multiplicative potential V .x/ (which is not small). Note that if .`; j; a/ is singular, then we recover the second-order bound ˇ 2 ˇ ˇjj j C m .1 .a/ C 2 .a/! `/2 ˇ C ‚ ; which is similar to (2.4.22). The invertibility of the restricted operator Lr;;N WD ˘N .Lr;/jHN and the proof of the off-diagonal decay estimates (5.1.23) is obtained in Section 5.7 by an inductive application of the multiscale step Proposition 5.3.4, which is deduced from the corresponding Proposition B.2.4. On the other hand, Section 5.6 contains the proof of the existence of a right inverse of ŒLr;2N N WD ˘N .Lr; /jH2N satisfying (5.1.20) at the small scales N < N."/ (note that the size of N depends on "). In view of the multiscale proof, we consider the family of operators
Lr; . / WD J ! @' C DV C iJ C J …G C r;
2 R;
(2.4.36)
which is represented, in Fourier basis, by a self-adjoint matrix denoted A. /. The monotonicity assumption in item 3 of Definition 5.1.2, see (2.4.31), allows us to obtain effective measure estimates and complexity bounds similar to (2.4.25). Note however that By (2.4.31) the eigenvalues of Lr;;N vary in with only a O."2 /-speed, creating further difficulties with respect to the previous section. The verification of assumption (H3) of the multiscale step Proposition 5.3.4, concerning separation properties of the N -bad sites, is a key part of the analysis. The new notion of N -bad site is introduced in Definition 5.4.2, according to the new definition of an N -good matrix introduced in Definition 5.3.1 (the difference is due to the fact that the singular sites are defined as in (2.4.35), and (2.4.34) holds). Then, in Section 5.4, we prove that a -chain of singular sites is not too long, under the bound (5.4.9) for its time components, see Lemma 5.4.8. Finally in Lemma 5.4.15 we are able to conclude, for the parameters , which are N -good, i.e., which satisfy n o [ 8 j0 2 Zd ; BN .j0 I / WD 2 RW AN;0;j0 . / is N -bad Iq ; where Iq are intervals with measure
qD1;:::;N C.d;jSj;0 / jIq j N (2.4.37)
(see Definition 5.4.4), an upper bound L N C3 (see (5.4.40)) for the length L of a N 2 -chain of N -bad sites. This result implies the partition of the N -bad sites into clusters separated by N 2 , according to the assumption (H3) of the multiscale step Proposition 5.3.4. Remark 2.4.12. Lemma 5.4.15 requires a significant improvement with respect to the arguments in [23], described in the previous section: the exponent in (2.4.37) is large, but independent of , which defines the new scale N 0 D N in the multiscale step (see Remark 5.4.6). This improvement is required by the fact that by (2.4.31) the eigenvalues of Lr;;N vary in with only a O."2 /-speed.
We finally comment on why the multiscale analysis works also for operators Lr , Lr; of the form (2.4.27) and (2.4.28), where DV is not a Fourier multiplier. The reason is similar to [24] and [23]. In Lemma 5.7.3 we have to prove that all the parameters 2 ƒ are N -good (Definition 5.4.4) at the small scales N N0 . We proceed as follows. We regard the operator Lr; . / in (2.4.36) as a small perturbation of the operator
L0; . / D J ! @' C DV C iJ C J …G ; which is '-independent. Thus a lower bound on the modulus of the eigenvalues of its restriction to HN implies an estimate in L2 -norm of its inverse, and thanks to separation properties of the eigenvalues jj j2 of , also a bound of its s-decay norm 0 as O.N C&s /. By a Neumann series perturbative argument this bound also persists for ˘N Lr; . /jHN , taking " small enough, up to the scales N N0 . The proof is done precisely in Lemmata 5.7.2 and 5.7.3. The set of such that the spectrum of ˘N L0; . /jHN is at a distance O.N / from 0 is contained into a union of intervals like (2.4.37), implying the claimed complexity bounds. The proof at higher scales follows by the induction multiscale process.
2.5 Outline of the proof of Theorem 1.2.1 In this section we present in detail the plan of proof of Theorem 1.2.1, which occupies Chapters 3–12. This section is a road map through the technical aspects of the proof. In Chapter 3 we first write the second-order wave equation (1.2.1) as the firstorder Hamiltonian system (3.1.13) and (3.1.14), 8 .jSj C d /=2 is fixed in (1.3.2) to have the algebra and interpolation properties for all the Sobolev spaces Hs , s s0 , defined in (1.3.1). We prove these properties in Section 4.5. The index s1 s0 is the one required for the multiscale Proposition 5.1.5 (see hypothesis 1 in Definition 5.1.2) to have the sufficient off-diagonal decay to apply the multiscale step Proposition 5.3.4, see in particular (5.3.6) and assumption (H1). It is the Sobolev threshold that defines the “good” matrices in Definition 5.3.1. Another condition of the type s1 s0 appears in the proof of Lemma 11.2.6. Then, the Sobolev index s2 s1 is used in the splitting step Proposition 10.1.1, see (9.2.1). This proposition is based on a Nash–Moser iterative scheme, where s2 represents a “high-norm”, see (9.2.2). The largest index s3 s2 is finally used for the convergence of the Nash–Moser nonlinear iteration in Chapter 12. Note that in Theorem 12.2.1 the divergence of the approximate solutions in in the high norm k kLip;s3 is under control. We require in particular (9.2.1) and, in Section 12.2, also stronger largeness conditions for s3 s2 . Along the iteration we shall verify (impose) that the Sobolev norms of the approximate quasiperiodic solutions remain bounded in k kLip;s1 and k kLip;s2 norms (so that the assumptions of Propositions 5.1.5, 9.2.1, and 10.1.1 are fulfilled).
Chapter 3

Hamiltonian formulation

In this chapter we write the NLW equation (1.2.1) as a first-order Hamiltonian system in action-angle and normal variables. These coordinates are convenient for the proof of Theorem 1.2.1. In Section 3.3 we provide estimates of the measure of the admissible Diophantine directions $\bar\omega_\varepsilon$ of the frequency vector $\omega = (1 + \varepsilon^2 \lambda)\bar\omega_\varepsilon$ of the quasiperiodic solutions of Theorem 1.2.1.

3.1 Hamiltonian form of NLW equation

We write the second-order NLW equation (1.2.1) as the first-order system
\[
\begin{cases}
u_t = v, \\
v_t = \Delta u - V(x)u - \varepsilon^2 g(\varepsilon, x, u),
\end{cases}
\tag{3.1.1}
\]
which is the Hamiltonian system
\[
\partial_t(u, v) = X_H(u, v), \qquad X_H(u, v) := J \nabla_{L^2} H(u, v),
\tag{3.1.2}
\]
where
\[
J := \begin{pmatrix} 0 & \mathrm{Id} \\ -\mathrm{Id} & 0 \end{pmatrix}
\tag{3.1.3}
\]
is the symplectic matrix, and $H$ is the Hamiltonian
\[
H(u, v) := \int_{\mathbb{T}^d} \Big( \frac{v^2}{2} + \frac{(\nabla u)^2}{2} + V(x)\frac{u^2}{2} + \varepsilon^2 G(\varepsilon, x, u) \Big)\, dx.
\tag{3.1.4}
\]
In the above definition of $H$, $G$ denotes a primitive of $g$ in the variable $u$:
\[
G(\varepsilon, x, u) := \int_0^u g(\varepsilon, x, s)\, ds, \qquad (\partial_u G)(\varepsilon, x, u) = g(\varepsilon, x, u).
\]
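As a sanity check of the Hamiltonian form (3.1.1)–(3.1.4), the following sketch (a toy one-dimensional discretization with a constant potential $V$, nonlinearity $g = u^3$, and a pseudo-spectral grid: all of these are illustrative assumptions, not the setting of the monograph, which works on $\mathbb{T}^d$ with a multiplicative potential) integrates the first-order system with the symplectic leapfrog scheme and monitors the conservation of the discretized Hamiltonian.

```python
import numpy as np

# Toy 1-d NLW  u_tt = u_xx - V*u - eps^2*u^3  on the torus, written as (3.1.1):
# u_t = v,  v_t = u_xx - V*u - eps^2*u^3.  Grid, V, eps, data are illustrative choices.
Nx = 256
x = 2 * np.pi * np.arange(Nx) / Nx
dx = 2 * np.pi / Nx
k = np.fft.fftfreq(Nx, d=1.0 / Nx)            # integer wavenumbers
V, eps, dt, nsteps = 1.0, 0.1, 1e-3, 5000

def lap(u):                                    # spectral Laplacian
    return np.fft.ifft(-(k ** 2) * np.fft.fft(u)).real

def accel(u):                                  # right-hand side of v_t in (3.1.1)
    return lap(u) - V * u - eps ** 2 * u ** 3

def energy(u, v):                              # discretization of H in (3.1.4), G = u^4/4
    ux = np.fft.ifft(1j * k * np.fft.fft(u)).real
    dens = 0.5 * v ** 2 + 0.5 * ux ** 2 + 0.5 * V * u ** 2 + eps ** 2 * u ** 4 / 4
    return np.sum(dens) * dx

u, v = 0.1 * np.cos(x), np.zeros(Nx)
E0 = energy(u, v)
for _ in range(nsteps):                        # symplectic leapfrog (Stoermer-Verlet)
    v = v + 0.5 * dt * accel(u)
    u = u + dt * v
    v = v + 0.5 * dt * accel(u)
print("relative energy drift:", abs(energy(u, v) - E0) / E0)
```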
For the sequel it is convenient to also highlight the fourth-order term of the nonlinearity $g(x, u)$ in (1.1.2), i.e., writing
\[
g(x, u) = a(x)u^3 + a_4(x)u^4 + g_5(x, u), \qquad g_5(x, u) = O(u^5),
\tag{3.1.5}
\]
so that the rescaled nonlinearity $g(\varepsilon, x, u)$ in (1.2.2) has the expansion
\[
g(\varepsilon, x, u) := \varepsilon^{-3} g(x, \varepsilon u) = a(x)u^3 + \varepsilon a_4(x)u^4 + \varepsilon^2 r(\varepsilon, x, u), \qquad r(\varepsilon, x, u) := \varepsilon^{-5} g_5(x, \varepsilon u),
\tag{3.1.6}
\]
and its primitive in (3.1.4) has the form
\[
G(\varepsilon, x, u) = \frac14 a(x) u^4 + \frac{\varepsilon}{5} a_4(x) u^5 + \varepsilon^2 R(\varepsilon, x, u),
\tag{3.1.7}
\]
where $(\partial_u R)(\varepsilon, x, u) = r(\varepsilon, x, u)$. The phase space of (3.1.2) is a dense subspace of the real Hilbert space
\[
L^2(\mathbb{T}^d) \times L^2(\mathbb{T}^d), \qquad L^2(\mathbb{T}^d) := L^2(\mathbb{T}^d; \mathbb{R}),
\]
endowed with the standard constant symplectic 2-form
\[
\Omega\big( (u, v), (u', v') \big) := \big( J(u, v), (u', v') \big)_{L^2 \times L^2} = (v, u')_{L^2} - (u, v')_{L^2}.
\tag{3.1.8}
\]
Notice that the Hamiltonian vector field $X_H$ is characterized by the relation $\Omega(X_H, \cdot) = dH$. System (3.1.1) is reversible with respect to the involution
\[
S(u, v) := (u, -v), \qquad S^2 = \mathrm{Id},
\tag{3.1.9}
\]
namely (see Appendix A.1)
\[
X_H \circ S = -S \circ X_H, \qquad \text{equivalently} \qquad H \circ S = H.
\]
Notice that the above equivalence is due to the fact that the involution $S$ is antisymplectic, namely the pull-back $S^*\Omega = -\Omega$. Let us write (3.1.1) in a more symmetric form. We first introduce $D_V := \sqrt{-\Delta + V(x)}$ as the closed (unbounded) linear operator of $L^2(\mathbb{T}^d)$ defined by
\[
D_V \Psi_j := \sqrt{-\Delta + V(x)}\, \Psi_j := \mu_j \Psi_j, \qquad \forall j \in \mathbb{N},
\tag{3.1.10}
\]
where $\{\Psi_j(x), j \in \mathbb{N}\}$ is the orthonormal basis of $L^2(\mathbb{T}^d)$ formed by the eigenfunctions of $-\Delta + V(x)$ in (1.1.5). Under the symplectic transformation
\[
q = D_V^{1/2} u, \qquad p = D_V^{-1/2} v,
\tag{3.1.11}
\]
the Hamiltonian system (3.1.1) becomes
2"2 C
Since M is symmetric there is an orthonormal basis V WD .v1 ; : : : ; vk / of eigenvectors of M with real eigenvalues k WD k .p/, i.e., M vk D k vk . Under the isometric change of variables D Vy we have to estimate ˇ( ˇ ˇ jRn;p j D ˇ y 2 RjSj ; jyj 1W ˇ ˇ ˇ )ˇ ˇ ˇ ˇ X 0 ˇ ˇ 2 4 2ˇ k yk ˇ < ˇ n C M N N C 2" Vy M N C " ˇ : (3.3.5) ˇ ˇ 2hpi1 ˇ 1i jSj
Since M 2 vk D 2k vk , 8k D 1; : : : ; jSj, we get jSj X
2k D Tr.M 2 / D
jSj X
(3.3.2)
Mij2
i;j D1
kD1
jpj2 : 2
p Hence there is an index k0 2 f1; : : : ; jSjg such that jk0 j jpj= 2jSj and the derivative ˇ !ˇˇ ˇ X ˇ ˇ 2 2 4 ˇ@ k yk2 ˇˇ D "4 j2k0 j ˇ yk0 n C M N N C 2" Vy M N C " ˇ ˇ 1i jSj p "4 2 jpj p : (3.3.6) jSj As a consequence of (3.3.5) and (3.3.6) we deduce the measure estimate r 0 jRn;p j . "2 : hpi1 C1 Recalling (3.3.4), and since Rn;p D ; if jnj C hpi, we have ˇ ˇ ˇ ˇ r ˇ ˇ [ X 0 ˇ ˇ 2 Rn;p ˇ . " hpi ˇ ˇ ˇ hpi1 C1 1=.0 C1/ jSj.jSjC1/ ˇ ˇ 2 n;p2Z
nf0g
jpj>
.0 for " small, by (1.2.28).
0 2"2 C
1 jSj.jSjC1/ 1 2 " 0 C1 0 C1 "
Remark 3.3.2. The measure of the set $B_\varepsilon$, estimated above by $\varepsilon$, can be made smaller than $\varepsilon^p$, for any $p$, at the expense of taking a larger Diophantine exponent $\tau_1$. We have written $|B_\varepsilon| \lesssim \varepsilon$ for definiteness.

We finally notice that, for $\omega = (1 + \varepsilon^2\lambda)\bar\omega_\varepsilon$ with a Diophantine vector $\bar\omega_\varepsilon$ satisfying (1.2.29), for any zero average function $g(\varphi)$ we have that the function $(\omega\cdot\partial_\varphi)^{-1} g$, defined in (1.3.3), satisfies
\[
\big\| (\omega\cdot\partial_\varphi)^{-1} g \big\|_{\mathrm{Lip}, s} \leq C\, \gamma_1^{-1}\, \| g \|_{\mathrm{Lip}, s + \tau_1}.
\tag{3.3.7}
\]
Chapter 4

Functional setting

In this chapter we collect all the properties of the phase spaces, linear operators, norms, and interpolation inequalities used throughout the monograph. Of particular importance for proving Theorem 1.2.1 is the result of Section 4.4.

4.1 Phase space and basis

The phase space of the NLW equation (3.1.12) is a dense subspace of the real Hilbert space
\[
H := L^2(\mathbb{T}^d) \times L^2(\mathbb{T}^d), \qquad L^2(\mathbb{T}^d) := L^2(\mathbb{T}^d; \mathbb{R}).
\tag{4.1.1}
\]
We shall denote for convenience an element of $H$ either as a row $h = (h^{(1)}, h^{(2)})$ or as a column $h = \binom{h^{(1)}}{h^{(2)}}$, with components $h^{(l)} \in L^2(\mathbb{T}^d)$, $l = 1, 2$. In the exponential basis any function of $H$ can be decomposed as
\[
(q(x), p(x)) = \sum_{j \in \mathbb{Z}^d} (q_j, p_j)\, e^{\mathrm{i} j\cdot x}, \qquad q_{-j} = \bar q_j, \quad p_{-j} = \bar p_j.
\tag{4.1.2}
\]
We will also use the orthonormal basis $\{\Psi_j(x), j \in \mathbb{N}\}$ of $L^2(\mathbb{T}^d)$ formed by the eigenfunctions of $-\Delta + V(x)$ defined in (1.1.5), with eigenvalues $\mu_j^2$. We then consider the Hilbert spaces
\[
\mathcal{H}^s_x := \Big\{ u = \sum_{j \in \mathbb{N}} u_j \Psi_j : \ \| u \|^2_{\mathcal{H}^s_x} := \big( (-\Delta + V(x))^s u, u \big)_{L^2} = \sum_{j \in \mathbb{N}} \mu_j^{2s} |u_j|^2 < \infty \Big\}.
\tag{4.1.3}
\]
Clearly $\mathcal{H}^0_x = L^2$. Actually, for any $s \geq 0$, the "spectral" norm $\| u \|_{\mathcal{H}^s_x}$ is equivalent to the usual Sobolev norm
\[
\| u \|_{\mathcal{H}^s_x} \simeq_s \| u \|_{H^s_x} := \Big( \big( (-\Delta)^s u, u \big)_{L^2} + (u, u)_{L^2} \Big)^{1/2}.
\tag{4.1.4}
\]
For $s \in \mathbb{N}$, the equivalence (4.1.4) can be directly proved noting that $V(x)$ is a lower-order perturbation of the Laplacian $-\Delta$. Then, for $s \in \mathbb{R} \setminus \mathbb{N}$, $[s] < s < [s]+1$, the equivalence (4.1.4) follows by the classical interpolation result stating that the Hilbert space $\mathcal{H}^s_x$ in (4.1.3), respectively the Sobolev space $H^s_x$, is the interpolation space between $\mathcal{H}^{[s]}_x$ and $\mathcal{H}^{[s]+1}_x$, respectively between $H^{[s]}_x$ and $H^{[s]+1}_x$, and the equivalence (4.1.4) for integers $s$.
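The equivalence (4.1.4) can be observed numerically. The sketch below (a one-dimensional periodic finite-difference discretization and a specific potential $V$, both assumptions made only for illustration) diagonalizes the discretized $-\Delta + V$, computes the "spectral" norm through its eigenvalues, and compares it with the flat Sobolev norm built on $-\Delta$: the ratio remains of order one.

```python
import numpy as np

# 1-d periodic toy check of the equivalence (4.1.4): spectral norm built on -d_xx + V(x)
# versus flat Sobolev norm built on -d_xx.  V, grid, and s are illustrative.
Nx = 200
x = 2 * np.pi * np.arange(Nx) / Nx
h = 2 * np.pi / Nx
V = 1.5 + np.sin(x) ** 2                         # V > 0

lap = (np.diag(np.full(Nx, -2.0)) + np.eye(Nx, k=1) + np.eye(Nx, k=-1)
       + np.eye(Nx, k=Nx - 1) + np.eye(Nx, k=-(Nx - 1))) / h ** 2

lamV, psiV = np.linalg.eigh(-lap + np.diag(V))   # eigenpairs of -Delta + V
lam0, psi0 = np.linalg.eigh(-lap)                # eigenpairs of -Delta
lam0 = np.clip(lam0, 0.0, None)                  # remove tiny negative round-off

def norms(u, s):
    cV = psiV.T @ u
    c0 = psi0.T @ u
    spectral = np.sqrt(np.sum(lamV ** s * cV ** 2))
    flat = np.sqrt(np.sum(lam0 ** s * c0 ** 2) + np.sum(c0 ** 2))
    return spectral, flat

rng = np.random.default_rng(2)
s = 2.0
for _ in range(3):
    u = rng.normal(size=Nx)
    a, b = norms(u, s)
    print("spectral / flat norm ratio:", a / b)   # stays within s-dependent constants
```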
Tangential and normal subspaces. Given the finite set S N, we consider the L2 orthogonal decomposition of the phase space in tangential and normal subspaces as in (3.2.1), H D HS ˚ HS? ; where
( HS D .q.x/; p.x// D
X
) .qj ; pj /‰j .x/; .qj ; pj / 2 R
2
:
j 2S
In addition, recalling the disjoint splitting N D S[F[G; where F; G N are defined in (1.2.14) and (1.2.15), we further decompose the normal subspace HS? as (4.1.5) HS? D HF ˚ HG ; where
Thus
o n HG WD .Q.x/; P .x// 2 H W .Q; P /?HS ; .Q; P /?HF ; o n HF WD .Q.x/; P .x// 2 H W .Q; P /?HS ; .Q; P /?HG : H D HS ˚ HS? D HS ˚ HF ˚ HG D HS[F ˚ HG :
(4.1.6)
(4.1.7)
2
Accordingly we denote by …S , …F , …G , …S[F , the orthogonal L projectors on HS , HF , HG , HS[F . We define …? S WD Id …S D …NnS and similarly for the other subspaces. Decomposing the finite-dimensional space HF D ˚j 2F Hj ;
(4.1.8)
we denote by …j the L2 -projectors onto Hj . In each real subspace Hj , j 2 F, we take the basis .‰j .x/; 0/; .0; ‰j .x//, namely we represent n o (4.1.9) Hj D q.‰j .x/; 0/ C p.0; ‰j .x//; q; p 2 R : Thus Hj is isometrically isomorphic to R2 .
Symplectic operator $J$. We define the linear symplectic operator $J \in \mathcal{L}(H)$ as
\[
J \begin{pmatrix} Q(x) \\ P(x) \end{pmatrix} := \begin{pmatrix} P(x) \\ -Q(x) \end{pmatrix}, \qquad J = \begin{pmatrix} 0 & \mathrm{Id} \\ -\mathrm{Id} & 0 \end{pmatrix}.
\tag{4.1.10}
\]
The symplectic operator $J$ leaves invariant the symplectic subspaces $H_S$, $H_S^\perp$, $H_F$, $H_G$. For simplicity of notation we shall still denote by $J$ the restrictions of the symplectic operator
\[
J := J_{|H_S} \in \mathcal{L}(H_S), \qquad J := J_{|H_S^\perp} \in \mathcal{L}(H_S^\perp), \qquad J := J_{|H_F} \in \mathcal{L}(H_F), \qquad J := J_{|H_G} \in \mathcal{L}(H_G).
\]
The symplectic operator $J$ is represented, in the basis of the exponentials (see (4.1.2))
\[
\big\{ (e^{\mathrm{i} j\cdot x}, 0),\ (0, e^{\mathrm{i} j\cdot x}),\ j \in \mathbb{Z}^d \big\},
\]
by the matrix
\[
J = \mathrm{Diag}_{j \in \mathbb{Z}^d} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.
\]
Note also that the symplectic operator $J$ leaves invariant each subspace $H_j$, $j \in F$, and in the eigenfunction basis (4.1.9) it is represented by the same symplectic matrix $\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$.
4.2 Linear operators and matrix representation According to the decomposition HS? D HF ˚ HG in (4.1.5) a linear operator A of HS? can be represented by a matrix of operators as
F AF AG F ; (4.2.1) AFG AG G AFF WD …F AjHF ;
AG F WD …F AjHG ;
AFG WD …G AjHF ;
AG G WD …G AjHG :
Moreover the decomposition (4.1.8) induces the splitting
L.HF ; H / D ˚j 2F L.Hj ; H / ;
(4.2.2)
namely a linear operator A 2 L.HF ; H / can be written as A D .aj /j 2F ;
aj WD AjHj 2 L.Hj ; H / :
(4.2.3)
In each space L.Hj ; H / we define the scalar product ha; bi0 WD Tr.b a/;
a; b 2 L.Hj ; H / ;
(4.2.4)
where b 2 L.H; Hj / denotes the adjoint of b with respect to the scalar product in H . Note that b a 2 L.Hj ; Hj / is represented, in the basis (4.1.9), by the 2 2 real matrix whose elements are L2 scalar products
.b.‰j ; 0/; a.‰j ; 0//H .b.‰j ; 0/; a.0; ‰j //H : (4.2.5) .b.0; ‰j /; a.‰j ; 0//H .b.0; ‰j /; a.0; ‰j //H Using the basis (4.1.9), the space of linear operators L.Hj ; H / can be identified with H H, 4 L.Hj ; H / ' H H D L2 Td ; R ; (4.2.6) identifying a 2 L.Hj ; H / with the vector .1/ .2/ .3/ .4/ a ;a ;a ;a 2H H .1/ .2/ 2 H; a.0; ‰j / DW a.3/ ; a.4/ 2 H ; a.‰j ; 0/ DW a ; a so that a q.‰j ; 0/ C p.0; ‰j / D q a.1/ ; a.2/ C p a.3/ ; a.4/ ;
(4.2.7)
8.q; p/ 2 R2 : (4.2.8)
With this identification and (4.2.5), the scalar product (4.2.4) takes the form ha; bi0 WD Tr.b a/ D
4 X .l/ .l/ a ; b L2
(4.2.9)
lD1
and the induced norm ha; ai0 D Tr.a a/ D
4 X .l/ 2 a 2
L .Td /
:
(4.2.10)
lD1
By the identification (4.2.7) and taking in H the exponential basis (4.1.2), a linear operator a 2 L.Hj ; H / can be also identified with the sequence of 2 2 matrices j .ak /k2Zd , ! .1/ .3/ X .l/ b a b a .l/ k k .C/; a D b ak eikx ; l D 1; 2; 3; 4 ; 2 Mat akj WD 2 .2/ .4/ b ak b ak d k2Z
so that, by (4.2.8), X j ak .q; p/eikx ; a q.‰j ; 0/ C p.0; ‰j / D k2Zd
8.q; p/ 2 R2 :
Similarly, using the basis f‰j .x/gj 2N , a linear operator a 2 L.Hj ; H / can also be j identified with the sequence of 2 2 matrices .ak /k2N ,
ajk
WD
a.1/ a.3/ k k
!
a.2/ a.4/ k k
;
.l/ a.l/ k WD ‰k ; a
L2
;
l D 1; 2; 3; 4 ;
(4.2.11)
8.q; p/ 2 R2 :
(4.2.12)
so that, by (4.2.8), X j ak .q; p/‰k .x/; a q.‰j ; 0/ C p.0; ‰j / D k2N
In addition, if a 2 L.Hj ; H / is identified with the vector .a.1/ ; a.2/ ; a.3/ ; a.4/ / 2 H H as in (4.2.7), then Ja 2 L.Hj ; H /, where J is the symplectic operator in (4.1.10), can be identified with Ja D a.2/ ; a.1/ ; a.4/ ; a.3/ ; (4.2.13) and aJ 2 L.Hj ; H / with aJ D a.1/ ; a.2/ ; a.3/ ; a.4/ J D a.3/ ; a.4/ ; a.1/ ; a.2/ :
(4.2.14)
Each space L.Hj ; H / admits the orthogonal decomposition
L.Hj ; H / D L.Hj ; HS[F / ˚ L.Hj ; HG / ;
(4.2.15)
defined, for any aj 2 L.Hj ; H /, by …S[F aj 2 L.Hj ; HS[F /; …G aj 2 L.Hj ; HG / ; (4.2.16) where …S[F and …G are the L2 orthogonal projectors respectively on the subspaces HS[F D HF ˚ HS and HG , see (4.1.5). A (possibly unbounded) linear operator A acting on the Hilbert space aj D …S[F aj C …G aj ;
2 .i/ H WD H D L2 Td ;
4 .ii/ H WD H H D L2 Td
(4.2.17)
j
can be represented in the Fourier basis of L2 .Td / by a matrix .Aj 0 /j;j 0 2Zd with j
Aj 0 2 Mat22 .C/ in case (i), respectively Mat44 .C/ in case (ii), by the relation A
X j 2Zd
! hj e
ij x
D
X
X
j 0 2Zd
j 2Zd
! Ajj 0 hj
with hj 2 C2 in case (i), respectively hj 2 C4 in case (ii).
eij
0 x
We decompose the space of $2 \times 2$ real matrices
\[
\mathrm{Mat}_2(\mathbb{R}) = \mathcal{M}^+ \oplus \mathcal{M}^-,
\tag{4.2.18}
\]
where $\mathcal{M}^+$, respectively $\mathcal{M}^-$, is the subspace of the $2\times 2$ matrices that commute, respectively anti-commute, with the symplectic matrix $J$. A basis of $\mathcal{M}^+$ is formed by
\[
M_1 := J = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \qquad M_2 := \mathrm{Id}_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},
\tag{4.2.19}
\]
and a basis of $\mathcal{M}^-$ is formed by
\[
M_3 := \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad M_4 := \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.
\tag{4.2.20}
\]
Notice that the matrices $\{M_1, M_2, M_3, M_4\}$ also form a basis for the $2\times 2$ complex matrices $\mathrm{Mat}_2(\mathbb{C})$. We shall denote by $\Pi_C$, $\Pi_A$ the projectors on $\mathcal{M}^+$, respectively $\mathcal{M}^-$. We shall use that, for a $2\times 2$ real symmetric matrix
\[
M = \begin{pmatrix} a & b \\ b & c \end{pmatrix}, \qquad \Pi_C(M) = \frac{a+c}{2}\, \mathrm{Id}_2 = \frac{\mathrm{Tr}(M)}{2}\, \mathrm{Id}_2.
\tag{4.2.21}
\]

$\varphi$-Dependent families of functions and operators. In this monograph we often identify a function $h \in L^2(\mathbb{T}^{|S|}; L^2(\mathbb{T}^d))$, $h : \varphi \mapsto h(\varphi) \in L^2(\mathbb{T}^d)$, with the function $h(\varphi, \cdot)(x) = h(\varphi, x)$ of time-space, i.e., $L^2(\mathbb{T}^{|S|}; L^2(\mathbb{T}^d)) \simeq L^2(\mathbb{T}^{|S|} \times \mathbb{T}^d)$. Correspondingly, we regard a $\varphi$-dependent family of (possibly unbounded) operators $A : \mathbb{T}^{|S|} \to \mathcal{L}(H_1, H_2)$,
' 7! A.'/ 2 L.H1 ; H2 / ;
acting between Hilbert spaces H1 ; H2 , as an operator A that acts on functions h.'; x/ 2 L2 .TjSj ; H1 / of space-time. When H1 D H2 D L2 .Td / we regard A as the linear operator AW L2 .TjSj Td / ! L2 .TjSj Td / defined by .Ah/.'; x/ WD .A.'/h.'; //.x/ : For simplicity of notation we still denote such an operator by A. Given a; bW TjSj ! L.Hj ; H / we define the scalar product Z ha; bi0 WD Tr b .'/a.'/ d' ; TjSj
(4.2.22)
(4.2.23)
which, with a slight abuse of notation, we denote with the same symbol as in (4.2.4). Using (4.2.7) we may identify $a,b\colon\mathbb{T}^{|S|}\to\mathcal{L}(H_j,H)$ with $a,b\colon\mathbb{T}^{|S|}\to H\times H$ defined by
$$a(\varphi)=\big(a^{(1)},a^{(2)},a^{(3)},a^{(4)}\big)(\varphi), \qquad b(\varphi)=\big(b^{(1)},b^{(2)},b^{(3)},b^{(4)}\big)(\varphi),$$
and, by (4.2.23), (4.2.9), and (4.2.10),
$$\langle a,b\rangle_0=\int_{\mathbb{T}^{|S|}}\operatorname{Tr}\big(b^*(\varphi)a(\varphi)\big)\,d\varphi=\sum_{l=1}^4\big(a^{(l)},b^{(l)}\big)_{L^2(\mathbb{T}^{|S|}\times\mathbb{T}^d)}, \qquad (4.2.24)$$
$$\|a\|_0^2=\langle a,a\rangle_0=\int_{\mathbb{T}^{|S|}}\operatorname{Tr}\big(a^*(\varphi)a(\varphi)\big)\,d\varphi=\sum_{l=1}^4\big\|a^{(l)}\big\|^2_{L^2(\mathbb{T}^{|S|}\times\mathbb{T}^d)}. \qquad (4.2.25)$$
Consider a $\varphi$-dependent family of linear operators $A(\varphi)$, acting through (4.2.22) on $\varphi$-dependent families of functions with values in the Hilbert space $\mathbf{H}$ defined in (4.2.17). We identify the family $(A(\varphi))$ with an infinite-dimensional matrix $(A^{\ell,j}_{\ell',j'})_{(\ell,j),(\ell',j')\in\mathbb{Z}^{|S|+d}}$, whose entries $A^{\ell,j}_{\ell',j'}$ are in $\mathrm{Mat}_{2\times2}(\mathbb{C})$ in case (i), and in $\mathrm{Mat}_{4\times4}(\mathbb{C})$ in case (ii); this matrix represents the operator $A$ in the Fourier exponential basis of $\big(L^2(\mathbb{T}^{|S|}\times\mathbb{T}^d)\big)^2$ and is defined by the relation
$$A\Big(\sum_{(\ell,j)\in\mathbb{Z}^{|S|+d}}h_{\ell,j}\,e^{\mathrm{i}(\ell\cdot\varphi+j\cdot x)}\Big)=\sum_{(\ell',j')\in\mathbb{Z}^{|S|+d}}\Big(\sum_{(\ell,j)\in\mathbb{Z}^{|S|+d}}A^{\ell,j}_{\ell',j'}\,h_{\ell,j}\Big)e^{\mathrm{i}(\ell'\cdot\varphi+j'\cdot x)}, \qquad (4.2.26)$$
where $h_{\ell,j}\in\mathbb{C}^2$ in case (4.2.17)-(i), respectively $h_{\ell,j}\in\mathbb{C}^4$ in case (4.2.17)-(ii). Taking in $\big(L^2(\mathbb{T}^{|S|}\times\mathbb{T}^d)\big)^2$ the basis
$$\big\{\big(e^{\mathrm{i}\ell\cdot\varphi}\Psi_j(x),0\big),\ \big(0,e^{\mathrm{i}\ell\cdot\varphi}\Psi_j(x)\big)\big\}_{\ell\in\mathbb{Z}^{|S|},\,j\in\mathbb{N}},$$
we identify $A$ with another matrix $(A^{\ell,j}_{\ell',j'})_{(\ell,j),(\ell',j')\in\mathbb{Z}^{|S|}\times\mathbb{N}}$ by the relation
$$A\Big(\sum_{(\ell,j)\in\mathbb{Z}^{|S|}\times\mathbb{N}}h_{\ell,j}\,e^{\mathrm{i}\ell\cdot\varphi}\Psi_j(x)\Big)=\sum_{(\ell',j')\in\mathbb{Z}^{|S|}\times\mathbb{N}}\Big(\sum_{(\ell,j)\in\mathbb{Z}^{|S|}\times\mathbb{N}}A^{\ell,j}_{\ell',j'}\,h_{\ell,j}\Big)e^{\mathrm{i}\ell'\cdot\varphi}\Psi_{j'}(x), \qquad (4.2.27)$$
where $h_{\ell,j}\in\mathbb{C}^2$.
Normal form. We consider a $\varphi$-dependent family of linear operators $A(\varphi)$ acting in $H_S^\perp$. According to the decomposition $H_S^\perp=H_F\oplus H_G$, each $A(\varphi)$ can be represented by a matrix as in (4.2.1),
$$A(\varphi)=\begin{pmatrix}[A(\varphi)]^F_F & [A(\varphi)]^G_F\\ [A(\varphi)]^F_G & [A(\varphi)]^G_G\end{pmatrix}. \qquad (4.2.28)$$
We denote by $\Pi_D A(\varphi)$ the operator of $H_S^\perp$ represented by the matrix
$$\Pi_D A(\varphi)=\begin{pmatrix}D^+\big(A^F_F\big) & 0\\ 0 & [A(\varphi)]^G_G\end{pmatrix}, \qquad (4.2.29)$$
where, in the basis $\{(\Psi_j,0),(0,\Psi_j)\}_{j\in F}$ of $H_F$,
$$D^+\big(A^F_F\big):=\mathrm{Diag}_{j\in F}\,\pi^+\big(\widehat A^j_j(0)\big); \qquad (4.2.30)$$
here $\pi^+\colon\mathrm{Mat}_2(\mathbb{R})\to\mathcal{M}^+$ denotes the projector on $\mathcal{M}^+$ associated to the decomposition (4.2.18), and $\widehat A^j_j(0)$ is the $\varphi$-average
$$\widehat A^j_j(0):=\frac{1}{(2\pi)^{|S|}}\int_{\mathbb{T}^{|S|}}A^j_j(\varphi)\,d\varphi.$$
We also define
$$\Pi_O A:=A-\Pi_D A. \qquad (4.2.31)$$
(4.2.31)
We remark throughout the monograph that the functions and operators may depend z ƒ in a Lipschitz way, with norm defined on a one-dimensional parameter 2 ƒ as in (1.3.4). Hamiltonian and symplectic operators. Throughout the monograph we shall preserve the Hamiltonian structure of the vector fields. Definition 4.2.1. A '-dependent family of linear operators X.'/W D .X / H ! H , defined on a dense subspace D .X / of H independent of ' 2 TjSj , is Hamiltonian if X.'/ D JA.'/ for some real linear operator A.'/ that is self-adjoint with respect to the L2 scalar product. We also say that ! @' JA.'/ is Hamiltonian. We mean that A is self-adjoint if its domain of definition D .A/ is dense in H , and .Ah; k/ D .h; Ak/, for all h; k 2 D .A/. Definition 4.2.2. A '-dependent family of linear operators ˆ.'/W H ! H , 8' 2 TjSj , is symplectic if .ˆ.'/u; ˆ.'/v/ D .u; v/;
8u; v 2 H ;
(4.2.32)
where the symplectic 2-form is defined in (3.1.8). Equivalently, ˆ .'/J ˆ.'/ D J;
8' 2 TjSj :
A Hamiltonian operator transforms into a Hamiltonian one under a symplectic transformation.
Lemma 4.2.3. Let $\Phi(\varphi)$, $\varphi\in\mathbb{T}^{|S|}$, be a family of linear symplectic transformations and $A^*(\varphi)=A(\varphi)$ for all $\varphi\in\mathbb{T}^{|S|}$. Then
$$\Phi^{-1}(\varphi)\big(\omega\cdot\partial_\varphi-JA(\varphi)\big)\Phi(\varphi)=\omega\cdot\partial_\varphi-JA^+(\varphi),$$
where $A^+(\varphi)$ is self-adjoint. Thus $\omega\cdot\partial_\varphi-JA^+(\varphi)$ is Hamiltonian.

Proof. We have that
$$\Phi^{-1}(\varphi)\big(\omega\cdot\partial_\varphi-JA(\varphi)\big)\Phi(\varphi)=\omega\cdot\partial_\varphi+\Phi^{-1}(\varphi)(\omega\cdot\partial_\varphi\Phi)(\varphi)-\Phi^{-1}(\varphi)JA(\varphi)\Phi(\varphi)=\omega\cdot\partial_\varphi-JA^+(\varphi)$$
with
$$A^+(\varphi)=-J^{-1}\Phi^{-1}(\varphi)(\omega\cdot\partial_\varphi\Phi)(\varphi)+J^{-1}\Phi^{-1}(\varphi)JA(\varphi)\Phi(\varphi)=-J^{-1}\Phi^{-1}(\varphi)(\omega\cdot\partial_\varphi\Phi)(\varphi)+\Phi^*(\varphi)A(\varphi)\Phi(\varphi), \qquad (4.2.33)$$
using that $\Phi(\varphi)$ is symplectic. Since $A(\varphi)$ is self-adjoint, the last operator in (4.2.33) is clearly self-adjoint. In order to prove that the first operator in (4.2.33) is also self-adjoint we note that, since $\Phi(\varphi)$ is symplectic,
$$\Phi^*(\varphi)J(\omega\cdot\partial_\varphi\Phi)(\varphi)+(\omega\cdot\partial_\varphi\Phi)^*(\varphi)J\Phi(\varphi)=0. \qquad (4.2.34)$$
Thus, using again that $\Phi(\varphi)$ is symplectic,
$$\big(J^{-1}\Phi^{-1}(\varphi)(\omega\cdot\partial_\varphi\Phi)(\varphi)\big)^*=-(\omega\cdot\partial_\varphi\Phi)^*(\varphi)(\Phi^*)^{-1}(\varphi)J^{-1}=(\omega\cdot\partial_\varphi\Phi)^*(\varphi)J\Phi(\varphi)\stackrel{(4.2.34)}{=}-\Phi^*(\varphi)J(\omega\cdot\partial_\varphi\Phi)(\varphi)=J^{-1}\Phi^{-1}(\varphi)(\omega\cdot\partial_\varphi\Phi)(\varphi).$$
We have proved that $A^+(\varphi)$ is self-adjoint. $\square$
4.3 Decay norms

Let $b:=|S|+d$. For $B\subseteq\mathbb{Z}^b$ we introduce the subspace
$$\mathcal{H}^s_B:=\Big\{u=\sum_{i\in\mathbb{Z}^b}u_i e_i\in\mathcal{H}^s\colon\ u_i\in\mathbb{C}^r,\ u_i=0\ \text{if}\ i\notin B\Big\}, \qquad (4.3.1)$$
where $e_i:=e^{\mathrm{i}(\ell\cdot\varphi+j\cdot x)}$, $i=(\ell,j)\in\mathbb{Z}^{|S|}\times\mathbb{Z}^d$, and $\mathcal{H}^s$ is the Sobolev space
$$\mathcal{H}^s:=H^s\big(\mathbb{T}^{|S|}\times\mathbb{T}^d;\mathbb{C}^r\big):=\Big\{u=\sum_{i\in\mathbb{Z}^b}u_i e_i\colon\ \|u\|_s^2:=\sum_{i\in\mathbb{Z}^b}|u_i|^2\langle i\rangle^{2s}<\infty\Big\}. \qquad (4.3.2)$$
Clearly $\mathcal{H}^0=L^2(\mathbb{T}^{|S|}\times\mathbb{T}^d;\mathbb{C}^r)$ and, for $s>b/2$, we have the continuous embedding $\mathcal{H}^s\subset C^0(\mathbb{T}^{|S|}\times\mathbb{T}^d;\mathbb{C}^r)$. For a Lipschitz family of functions $f\colon\Lambda\to\mathcal{H}^s$, $\lambda\mapsto f(\lambda)$, we define, as in (1.3.4),
$$\|f\|_{\mathrm{Lip},s}:=\sup_{\lambda\in\Lambda}\|f(\lambda)\|_s+\sup_{\substack{\lambda_1,\lambda_2\in\Lambda\\ \lambda_1\neq\lambda_2}}\frac{\|f(\lambda_2)-f(\lambda_1)\|_s}{|\lambda_2-\lambda_1|}. \qquad (4.3.3)$$
Remark 4.3.1. In Chapter 5 we shall distinguish the components of the vector $u_i=(u_{i,a})_{a\in\mathfrak{I}}\in\mathbb{C}^r$, where $\mathfrak{I}=\{1,2\}$ if $r=2$ and $\mathfrak{I}=\{1,2,3,4\}$ if $r=4$. In this case we also write an element of $\mathcal{H}^s$ as
$$u=\sum_{(i,a)\in\mathbb{Z}^b\times\mathfrak{I}}u_{i,a}\,e_{i,a}, \qquad u_{i,a}\in\mathbb{C}, \quad e_{i,a}:=e_a\,e_i,$$
where $e_a:=(0,\dots,1,\dots,0)$, with the $1$ in position $a\in\mathfrak{I}$, denotes the canonical basis of $\mathbb{C}^r$.

When $B$ is finite, the space $\mathcal{H}^s_B$ does not depend on $s$ and will be denoted $\mathcal{H}_B$. For $B,C\subseteq\mathbb{Z}^b$ finite, we identify the space $\mathcal{L}^B_C$ of the linear maps $L\colon\mathcal{H}_B\to\mathcal{H}_C$ with the space of matrices
$$\mathcal{M}^B_C:=\big\{M=\big(M^{i'}_i\big)_{i'\in B,\,i\in C},\ M^{i'}_i\in\mathrm{Mat}(r\times r;\mathbb{C})\big\}, \qquad (4.3.4)$$
identifying $L$ with the matrix $M$ with entries
$$M^{i'}_i=\big(M^{i',a'}_{i,a}\big)_{a,a'\in\mathfrak{I}}\in\mathrm{Mat}(r\times r;\mathbb{C}), \qquad M^{i',a'}_{i,a}:=\big(Le_{i',a'},e_{i,a}\big)_0,$$
where $(\,\cdot\,,\,\cdot\,)_0:=(2\pi)^{-b}(\,\cdot\,,\,\cdot\,)_{L^2}$ denotes the normalized $L^2$ scalar product. Following [24] we shall use $s$-decay norms that quantify the polynomial decay off the diagonal of the matrix entries.

Definition 4.3.2 ($s$-norm). The $s$-norm of a matrix $M\in\mathcal{M}^B_C$ is defined by
$$|M|_s^2:=\sum_{n\in\mathbb{Z}^b}[M(n)]^2\langle n\rangle^{2s},$$
where $\langle n\rangle:=\max(|n|,1)$ (see (4.3.2)),
$$[M(n)]:=\begin{cases}\displaystyle\sup_{i-i'=n}\big|M^{i'}_i\big| & \text{if}\ n\in C-B,\\ 0 & \text{if}\ n\notin C-B,\end{cases}$$
and $|\cdot|$ denotes a norm on the matrices $\mathrm{Mat}(r\times r;\mathbb{C})$.
We shall use the above definition also if $B$ or $C$ are not finite (with the difference that $|M|_s$ may be infinite). The $s$-norm is modeled on matrices that represent multiplication operators. The (Toeplitz) matrix $T$ that represents the multiplication operator
$$M_g\colon H^s\big(\mathbb{T}^{|S|}\times\mathbb{T}^d;\mathbb{C}\big)\to H^s\big(\mathbb{T}^{|S|}\times\mathbb{T}^d;\mathbb{C}\big), \qquad h\mapsto gh,$$
by a function $g\in H^s(\mathbb{T}^{|S|}\times\mathbb{T}^d;\mathbb{C})$, $s\geq s_0$, satisfies
$$|T|_s\leq\|g\|_s, \qquad |T|_{\mathrm{Lip},s}\leq\|g\|_{\mathrm{Lip},s}. \qquad (4.3.5)$$
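To make the definition concrete, the following self-contained Python sketch (our own illustration, not from the monograph; the truncation size and the test symbol are arbitrary choices) builds the Toeplitz matrix of a multiplication operator on the one-dimensional torus and compares its $s$-decay norm with the Sobolev norm of the symbol, in the spirit of (4.3.5).

```python
import numpy as np

N = 32                                   # Fourier truncation |k| <= N
modes = np.arange(-N, N + 1)

def sobolev_norm(g_hat, s):
    w = np.maximum(np.abs(modes), 1.0) ** s      # <k>^s with <k> = max(|k|, 1)
    return np.sqrt(np.sum((np.abs(g_hat) * w) ** 2))

# Fourier coefficients of a smooth symbol g: fast (exponential) decay
g_hat = np.exp(-0.5 * np.abs(modes)) * (1.0 + 0.3j)

# Toeplitz matrix T_{i, i'} = \hat g_{i - i'}, restricted to |i|, |i'| <= N
T = np.array([[g_hat[N + (i - ip)] if abs(i - ip) <= N else 0.0
               for ip in modes] for i in modes])

def s_decay_norm(M, s):
    # [M](n) = sup_{i - i' = n} |M_{i, i'}|,  |M|_s^2 = sum_n [M](n)^2 <n>^{2s}
    total = 0.0
    for n in range(-2 * N, 2 * N + 1):
        vals = [abs(M[N + i, N + ip]) for i in modes for ip in modes if i - ip == n]
        if vals:
            total += max(vals) ** 2 * max(abs(n), 1) ** (2 * s)
    return np.sqrt(total)

for s in (0.0, 1.0, 2.0):
    print(s, s_decay_norm(T, s), sobolev_norm(g_hat, s))   # |T|_s <= ||g||_s
```

In this scalar, truncated setting the two quantities coincide, since $[T](n)=|\widehat g_n|$; for matrix-valued symbols one only retains the inequality (4.3.5).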
The $s$-norm satisfies algebra and interpolation inequalities and controls the higher Sobolev norms as in (4.3.7) below: as proved in [24], for all $s\geq s_0>b/2$,
$$|AB|_s\lesssim_s|A|_{s_0}|B|_s+|A|_s|B|_{s_0}, \qquad (4.3.6)$$
and for any subsets $B,C\subseteq\mathbb{Z}^b$, $\forall M\in\mathcal{M}^B_C$, $w\in\mathcal{H}^s_B$, we have
$$\|Mw\|_s\lesssim_s|M|_{s_0}\|w\|_s+|M|_s\|w\|_{s_0}, \qquad (4.3.7)$$
$$\|Mw\|_{\mathrm{Lip},s}\lesssim_s|M|_{\mathrm{Lip},s_0}\|w\|_{\mathrm{Lip},s}+|M|_{\mathrm{Lip},s}\|w\|_{\mathrm{Lip},s_0}. \qquad (4.3.8)$$
The above inequalities can be easily obtained from the definition of the norms $|\cdot|_s$ and the functional interpolation inequality
$$\|uv\|_s\lesssim_s\|u\|_{s_0}\|v\|_s+\|u\|_s\|v\|_{s_0}, \qquad \forall u,v\in\mathcal{H}^s. \qquad (4.3.9)$$
Actually, (4.3.9) can be slightly improved to obtain (see Lemma 4.5.1)
$$\|uv\|_s\leq C_0\|u\|_{s_0}\|v\|_s+C(s)\|u\|_s\|v\|_{s_0}, \qquad (4.3.10)$$
where only the second constant may depend on $s$, and $C_0$ depends only on $s_0$ (we recall that $s_0$ is fixed once and for all). From (4.3.10) we can derive the following slight improvements of (4.3.6) and (4.3.8), which will be used in Chapter 11:
$$|AB|_s\leq C_0|A|_{s_0}|B|_s+C(s)|A|_s|B|_{s_0} \qquad (4.3.11)$$
and
$$\|Mw\|_{\mathrm{Lip},s}\leq C_0|M|_{\mathrm{Lip},s_0}\|w\|_{\mathrm{Lip},s}+C(s)|M|_{\mathrm{Lip},s}\|w\|_{\mathrm{Lip},s_0}. \qquad (4.3.12)$$
We also note that, denoting by $A^*$ the adjoint matrix of $A$, we have
$$|A|_s=|A^*|_s, \qquad |A|_{\mathrm{Lip},s}=|A^*|_{\mathrm{Lip},s}. \qquad (4.3.13)$$
The following lemma is the analogue of the smoothing properties of the projection operators. See Lemma 3.6 of [24], and Lemma B.1.10.
Lemma 4.3.3 (Smoothing). Let $M\in\mathcal{M}^B_C$. Then, $\forall s'\geq s\geq 0$,
$$M^{i'}_i=0 \quad \forall\,|i-i'|<N \qquad \Longrightarrow \qquad |M|_s\leq N^{-(s'-s)}|M|_{s'}, \qquad (4.3.14)$$
and similarly for the Lipschitz norm $|\cdot|_{\mathrm{Lip},s}$.

We now define the decay norm for an operator $A\colon E\to F$ defined on a closed subspace $E\subseteq L^2$ with range in a closed subspace of $L^2$.

Definition 4.3.4. Let $E,F$ be closed subspaces of $L^2\equiv L^2(\mathbb{T}^{|S|}\times\mathbb{T}^d;\mathbb{C}^r)$. Given a linear operator $A\colon E\to F$, we extend it to a linear operator $\widetilde A\colon L^2\to L^2$ acting on the whole $L^2=E\oplus E^\perp$, with image in $F$, by defining
$$\widetilde A_{|E^\perp}:=0. \qquad (4.3.15)$$
Then, for $s\geq0$, we define the (possibly infinite) $s$-decay norms
$$|A|_s:=\big|\widetilde A\big|_s \qquad (4.3.16)$$
and
$$|A|_{+,s}:=\big|D_m^{1/2}\,\widetilde A\,D_m^{1/2}\big|_s, \qquad (4.3.17)$$
where $D_m$ is the Fourier multiplier operator
$$D_m:=\sqrt{-\Delta+m}, \qquad D_m\big(e^{\mathrm{i}j\cdot x}\big):=\sqrt{|j|^2+m}\;e^{\mathrm{i}j\cdot x}, \quad j\in\mathbb{Z}^d, \qquad (4.3.18)$$
and $m>0$ is a positive constant. For a Lipschitz family of operators $A(\lambda)\colon E\to E$, $\lambda\in\Lambda$, we associate the Lipschitz norms $|A|_{\mathrm{Lip},s}$, $|A|_{\mathrm{Lip},+,s}$ according to (1.3.4).

The norm (4.3.17) is stronger than (4.3.16): since $1\lesssim_m(|j|^2+m)^{1/4}(|j'|^2+m)^{1/4}$, we have
$$|A|_s\lesssim_m|A|_{+,s}, \qquad |A|_{\mathrm{Lip},s}\lesssim_m|A|_{\mathrm{Lip},+,s}. \qquad (4.3.19)$$
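The following short Python sketch (again our own illustration, with arbitrary truncation and test operator) realizes $D_m$ as a diagonal Fourier multiplier in dimension $d=1$ and compares the plain decay norm $|A|_s$ with the weighted norm $|A|_{+,s}=|D_m^{1/2}AD_m^{1/2}|_s$ for a smoothing operator, consistently with (4.3.19).

```python
import numpy as np

N, m, s = 24, 1.0, 2.0
modes = np.arange(-N, N + 1)
Dm_half = np.diag((modes.astype(float) ** 2 + m) ** 0.25)     # D_m^{1/2}
Dm_inv = np.diag((modes.astype(float) ** 2 + m) ** -0.5)      # D_m^{-1}

def s_decay_norm(M, s):
    out = 0.0
    for n in range(-2 * N, 2 * N + 1):
        vals = [abs(M[N + i, N + ip]) for i in modes for ip in modes if i - ip == n]
        if vals:
            out += max(vals) ** 2 * max(abs(n), 1) ** (2 * s)
    return np.sqrt(out)

# A smoothing operator: A = D_m^{-1} composed with multiplication by a smooth g
offsets = np.arange(-2 * N, 2 * N + 1)
g_hat = np.exp(-np.abs(offsets))
T = np.array([[g_hat[(i - ip) + 2 * N] for ip in modes] for i in modes])
A = Dm_inv @ T

print("|A|_s     =", s_decay_norm(A, s))
print("|A|_{+,s} =", s_decay_norm(Dm_half @ A @ Dm_half, s))  # the stronger norm
```

Since each weight $(|j|^2+m)^{1/4}\geq m^{1/4}$, the weighted entries dominate the plain ones, which is exactly the mechanism behind (4.3.19).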
Lemma 4.3.5 (Tame estimates for composition). Let $A,B\colon E\to E$ be linear operators acting on a closed subspace $E\subseteq L^2$. Then the following tame estimates hold: for all $s\geq s_0>(|S|+d)/2$,
$$|AB|_s\lesssim_s|A|_{s_0}|B|_s+|A|_s|B|_{s_0}, \qquad (4.3.20)$$
$$|AB|_{\mathrm{Lip},s}\lesssim_s|A|_{\mathrm{Lip},s_0}|B|_{\mathrm{Lip},s}+|A|_{\mathrm{Lip},s}|B|_{\mathrm{Lip},s_0}, \qquad (4.3.21)$$
more precisely
$$|AB|_{\mathrm{Lip},s}\leq C_0|A|_{\mathrm{Lip},s_0}|B|_{\mathrm{Lip},s}+C(s)|A|_{\mathrm{Lip},s}|B|_{\mathrm{Lip},s_0}, \qquad (4.3.22)$$
and
$$|AB|_{+,s}+|BA|_{+,s}\lesssim_s|A|_{s_0+\frac12}|B|_{+,s}+|A|_{s+\frac12}|B|_{+,s_0}, \qquad (4.3.23)$$
$$|AB|_{\mathrm{Lip},+,s}+|BA|_{\mathrm{Lip},+,s}\lesssim_s|A|_{\mathrm{Lip},s_0+\frac12}|B|_{\mathrm{Lip},+,s}+|A|_{\mathrm{Lip},s+\frac12}|B|_{\mathrm{Lip},+,s_0}. \qquad (4.3.24)$$
Notice that in (4.3.23), resp. (4.3.24), the operator $A$ is estimated in the $|\cdot|_{s+\frac12}$ norm, resp. $|\cdot|_{\mathrm{Lip},s+\frac12}$, and not in $|\cdot|_{+,s}$, resp. $|\cdot|_{\mathrm{Lip},+,s}$.
Proof. Notice first that the operation of extension of an operator introduced in (4.3.15) commutes with composition: if $A,B\colon E\to E$ are linear operators acting in $E$, then $\widetilde{AB}=\widetilde A\,\widetilde B$. Thus (4.3.20), (4.3.21), and (4.3.22) follow from the interpolation inequalities (4.3.6) and (4.3.11). In order to prove (4.3.23) and (4.3.24) we first show that, for a linear operator $A$ acting on the whole $L^2$,
$$\big|D_m^{1/2}AD_m^{-1/2}\big|_s,\ \big|D_m^{-1/2}AD_m^{1/2}\big|_s\lesssim_s|A|_{s+\frac12}. \qquad (4.3.25)$$
We prove (4.3.25) for
$$D_m^{1/2}AD_m^{-1/2}=A+\big[D_m^{1/2},A\big]D_m^{-1/2}.$$
Since $|D_m^{-1/2}|_s\leq C(m)$ for all $s$, it is sufficient to prove that $\big|[D_m^{1/2},A]\big|_s\lesssim_s|A|_{s+\frac12}$. Since, for all $i,j\in\mathbb{Z}^d$,
$$\big|(|j|^2+m)^{1/4}-(|i|^2+m)^{1/4}\big|=\frac{\big||j|-|i|\big|\,(|j|+|i|)}{\big((|j|^2+m)^{1/4}+(|i|^2+m)^{1/4}\big)\big((|j|^2+m)^{1/2}+(|i|^2+m)^{1/2}\big)}\leq\frac{|j-i|}{(|j|^2+m)^{1/4}+(|i|^2+m)^{1/4}}\leq\sqrt{|j-i|},$$
the matrix elements of the commutator $[D_m^{1/2},A]^i_j$ satisfy
$$\big|\big[D_m^{1/2},A\big]^i_j\big|=\big|A^i_j\big|\,\big|(|j|^2+m)^{1/4}-(|i|^2+m)^{1/4}\big|\leq\big|A^i_j\big|\,\sqrt{|j-i|},$$
and therefore (4.3.25) follows. We now prove the estimates (4.3.23) and (4.3.24). We consider the extended operator $\widetilde{AB}=\widetilde A\widetilde B$ and write
$$D_m^{1/2}\,\widetilde A\widetilde B\,D_m^{1/2}=\big(D_m^{1/2}\,\widetilde A\,D_m^{-1/2}\big)\big(D_m^{1/2}\,\widetilde B\,D_m^{1/2}\big).$$
Hence (4.3.23) and (4.3.24) follow, recalling (4.3.17), from (4.3.6) and (4.3.25). $\square$
By iterating (4.3.21) we deduce that there exists $C(s)\geq1$, nondecreasing in $s\geq s_0$, such that
$$\big|A^k\big|_{\mathrm{Lip},s}\leq\big(C(s)\big)^k|A|_{\mathrm{Lip},s_0}^{k-1}|A|_{\mathrm{Lip},s}, \qquad \forall k\geq1. \qquad (4.3.26)$$

Lemma 4.3.6. Let $A\colon E\to E$ be a linear operator acting on a closed subspace $E\subseteq L^2$. Then its operatorial norm satisfies
$$\|A\|_0\lesssim_{s_0}|A|_{s_0}\lesssim_{s_0}|A|_{+,s_0}. \qquad (4.3.27)$$

Proof. Let $\widetilde A$ be the extended operator in $L^2$ defined in (4.3.15). Then, using Lemma 3.8 in [24] (see Lemma B.1.12) and (4.3.16), we get $\|A\|_0\leq\|\widetilde A\|_0\lesssim_{s_0}|\widetilde A|_{s_0}=|A|_{s_0}\lesssim_{s_0}|A|_{+,s_0}$ by (4.3.19). $\square$

We shall also use the following elementary inequality: given a matrix $A\in\mathcal{M}^B_C$, where $B$ and $C$ are included in $[-N,N]^b$, then
$$|A|_s\lesssim N\,\big|D_m^{-1/2}AD_m^{-1/2}\big|_s. \qquad (4.3.28)$$
We shall use several times the following simple lemma.

Lemma 4.3.7. Let $g(\lambda,\varphi,x)$, $\chi(\lambda,\varphi,x)$ be Lipschitz families of functions in $H^s(\mathbb{T}^{|S|}\times\mathbb{T}^d;\mathbb{C})$ for $s\geq s_0$. Then the operator $L$ defined by
$$L[h](\varphi,x):=\big(h(\varphi,\cdot),g(\varphi,\cdot)\big)_{L^2_x}\,\chi(\varphi,x), \qquad \forall h\in\mathcal{H}^0\big(\mathbb{T}^{|S|}\times\mathbb{T}^d;\mathbb{C}\big),$$
satisfies
$$|L|_{\mathrm{Lip},s}\lesssim_s\|g\|_{\mathrm{Lip},s_0}\|\chi\|_{\mathrm{Lip},s}+\|g\|_{\mathrm{Lip},s}\|\chi\|_{\mathrm{Lip},s_0}. \qquad (4.3.29)$$

Proof. We write $L=M_\chi\,P_0\,M_g$ as the composition of the multiplication operators $M_\chi$, $M_g$ for the functions $\chi$, $g$ respectively, and the mean value projector $P_0$ defined as
$$P_0h(\varphi):=\frac{1}{(2\pi)^d}\int_{\mathbb{T}^d}h(\varphi,x)\,dx, \qquad \forall h\in\mathcal{H}^0.$$
For $i=(j,\ell)$, $i'=(j',\ell')\in\mathbb{Z}^d\times\mathbb{Z}^{|S|}$, its entries are $(P_0)^{i'}_i=\delta_{j0}\,\delta_{j'0}\,\delta_{\ell\ell'}$, and therefore
$$|P_0|_{\mathrm{Lip},s}=|P_0|_s\leq1, \qquad \forall s. \qquad (4.3.30)$$
We derive (4.3.29) from (4.3.5), (4.3.30), and the tame estimates (4.3.6) for the composition of operators. $\square$
Now, given a finite set $M\subset\mathbb{N}$, we estimate the $s$-decay norm of the $L^2$ orthogonal projector $\Pi_M$ on the subspace of $L^2(\mathbb{T}^d;\mathbb{R})\times L^2(\mathbb{T}^d;\mathbb{R})$ defined by
$$H_M:=\Big\{\big(q(x),p(x)\big):=\sum_{j\in M}(q_j,p_j)\,\Psi_j(x),\quad q_j,p_j\in\mathbb{R}\Big\}. \qquad (4.3.31)$$
In the next lemma we regard $\Pi_M$ as an operator acting on functions $h(\varphi,x)$.

Lemma 4.3.8 (Off-diagonal decay of $\Pi_M$). Let $M$ be a finite subset of $\mathbb{N}$. Then, for all $s\geq0$, there is a constant $C(s):=C(s,M)>0$ such that
$$|\Pi_M|_{+,s}=|\Pi_M|_{\mathrm{Lip},+,s}\leq C(s). \qquad (4.3.32)$$
In addition, setting $\Pi^\perp_M:=\mathrm{Id}-\Pi_M=\Pi_{M^c}$, we have
$$|\Pi_M|_{\mathrm{Lip},s}\leq C(s), \qquad |\Pi_{M^c}|_{\mathrm{Lip},s}=|\Pi^\perp_M|_{\mathrm{Lip},s}\leq C(s). \qquad (4.3.33)$$
1=2 ? 1=2 Proof. To prove (4.3.32) we have to estimate j…? M jLip;C;s D jDm …M Dm jLip;s . 2 For any h D .h.1/ ; h.2/ / 2 L2 .TjSj Td / we have
0 1 1 .1/ 2 1 .'; /; D ‰ h XB m j 1 1 2 2 Lx C Dm2 …? @ A Dm2 ‰j : 1 M Dm h .'; x/ D h.2/ .'; /; Dm2 ‰j 2 j 2M
(4.3.34)
Lx
Now, using Lemma 4.3.7, the fact that each ‰j .x/ is in C 1 and M is finite, we deduce, by (4.3.34), the estimate (4.3.32). The first estimate in (4.3.33) is trivial because j…M jLip;s . j…M jLip;C;s . The second estimate in (4.3.33) follows by …M C …? M D Id and jIdjLip;s D 1. Lemma 4.3.9. Given a '-dependent family of linear operators A.'/ acting in HS? , the operators …D A and …O A defined respectively in (4.2.29) and (4.2.31) satisfy j…D AjLip;C;s C j…O AjLip;C;s C.s/jAjLip;C;s :
(4.3.35)
Proof. We consider the extension of the operator …D A defined in (4.2.29) and (4.2.30). 2 This extension, acting on L2 .TjSj Td / , is defined as A1 CA2 where the operators A1 and A2 are (4.3.36) A1 WD …G A…G ; 2 and, for any h D .h.1/ ; h.2/ / 2 L2 .TjSj Td / , ! .1/ X h .'; /; ‰j L2 j b x ‰j .x/ : .A2 h/.'; x/ WD C Aj .0/ .2/ (4.3.37) h .'; /; ‰j L2 j 2F
x
j We remark that, for j 2 F, b Aj .0/ is the 2 2 matrix
.A.‰j ; 0/; .‰j ; 0//0 .A.0; ‰j /; .‰j ; 0//0 j b ; Aj .0/ D .A.‰j ; 0/; .0; ‰j //0 .A.0; ‰j /; .0; ‰j //0
(4.3.38)
2 where . ; /0 is the (normalized) L2 scalar product in L2 .TjSj Td / . By Definition 4.3.4, we have j…D AjLip;C;s D jA1 C A2 jLip;C;s . By (4.3.36), (4.3.24), and (4.3.33) (with Mc D G) we get jA1 jLip;C;s .s j…G j2Lip;sC 1 jAjLip;C;s .s jAjLip;C;s :
(4.3.39)
2
For A2 , we apply Lemma 4.3.7. Using that, by (4.3.37), 0 1 1 .1/ 2 1 .'; /; D ‰ h X m j 1 1 2 B Lx C C b Ajj .0/ @ Dm2 A2 Dm2 h .'; x/ D A Dm2 ‰j 1 h.2/ .'; /; Dm2 ‰j 2 j 2F Lx
with b Ajj .0/ given in (4.3.38), that ‰j 2 C 1 .Td / and F is finite, we deduce by (4.3.29) that ˇ 1 1ˇ ˇ ˇ jA2 jLip;C;s D ˇDm2 A2 Dm2 ˇ .s max kAkLip;0 : (4.3.40) Lip;s
j 2F
Finally (4.3.39) and (4.3.40) imply j…D AjLip;C;s jA1 jLip;C;s C jA2 jLip;C;s C.s/jAjLip;C;s and j…O AjLip;C;s D jA …D AjLip;C;s jAjLip;C;s C j…D AjLip;C;s C.s/jAjLip;C;s :
Thus (4.3.35) is proved.
We now mention some norm equivalences and estimates that shall be used in the sequel. Let us consider a '-dependent family of operators .'/ 2 L.HS? / that, according to the splitting (4.2.1), have the form
.'/ 2 .'/ 2 L.HS? /; 1 .'/ 2 L.HF /; 2 .'/ 2 L.HF ; HG / :
.'/ D 1
2 .'/ 0 Recalling (4.2.3), we can identify each l .'/, l D 1; 2, with ( L.Hj ; HF /
l .'/ D . l;j .'//j 2F ; where l;j .'/ WD . l /jHj 2 L.Hj ; HG /
if l D 1 if l D 2 :
Moreover, according to (4.2.7), we can identify each operator l;j .'/ with the vector ( HF HF if l D 1 .1/ .2/ .3/ .4/ . l;j .'/; l;j .'/; l;j .'/; l;j .'// 2 (4.3.41) HG HG if l D 2 ; where
.1/
l;j
!
.2/
l;j
WD l
‰j ; 0
.3/
l;j
!
.4/
l;j
WD l
0 ‰j
:
(4.3.42)
We define the Sobolev norms of each l .'/, l D 1; 2, as ! 4 X .k/ k l k2s D max k l;j k2s ; j 2F
(4.3.43)
kD1
and k l kLip;s according to (1.3.4). Lemma 4.3.10. We have j 1 jLip;C;s s j 1 jLip;s s k 1 kLip;s ; j 2 jLip;C;s .s j 2 jLip;sC 1 ; j 2 jLip;s s k 2 kLip;s :
(4.3.44) (4.3.45)
2
Proof. Step 1. We first justify that, for l D 1; 2, j l jLip;s s k l kLip;s :
(4.3.46)
By Definition 4.3.4, the s-norm j l jLip;s D j l …F jLip;s . Now, recalling (4.3.42), the extended operator l …F has the form, for any h.'/ D .h.1/ .'/; h.2/ .'// 2 H , X h.1/ .'/; ‰j L2 . l …F h/.'/ D
! .1/
l;j .'/
C h.2/ .'/; ‰j L2
! .3/
l;j .'/
: .4/
l;j .'/ (4.3.47) Using that ‰j 2 C 1 .Td /, for all j 2 F, and that F is finite, we derive, from (4.3.47) and Lemma 4.3.7, the estimate j jLip;s .s k l kLip;s . The reverse inequality k l kLip;s .s j l jLip;s follows by (4.3.42) and (4.3.8), and the fact that ‰j 2 C 1 .Td /. This concludes the proof of (4.3.46) and therefore of the second equivalences in (4.3.44) and (4.3.45). j 2F
x
.2/
l;j .'/
x
Step 2. We now prove that j 1 jLip;C;s .s k 1 kLip;s :
(4.3.48)
1
By Definition 4.3.4, the norm j 1 jLip;C;s D jDm2 1 …F Dm2 jLip;s and, by (4.3.47), 1 0 1 1 2 .1/ X 1 1 D
.'/ m A h.1/ .'/; Dm2 ‰j 2 @ 1 1;j Dm2 1 …F Dm2 h .'/ D .2/ Lx .'/ Dm2 1;j j 2F 1 0 1 2 .3/ X 1 Dm .'/A : (4.3.49) C h.2/ .'/; Dm2 ‰j 2 @ 1 1;j .4/ Lx .'/ Dm2 1;j j 2F Now, applying Lemma 4.3.7, we deduce by (4.3.49) and the fact that ‰j are C 1 (and independent of ) and F is finite, that 1 ˇ 1 1ˇ ˇ ˇ .k/ j 1 jLip;C;s D ˇDm2 1 …F Dm2 ˇ .s max Dm2 1;j : (4.3.50) Lip;s
j 2F;kD1;:::;4
Lip;s
.k/
Now, by (4.3.41), each 1;j , k D 1; : : : ; 4, is a function of the form u.'/ D X 1 j 2 Fuj .'/‰j . We claim that for functions of this form, kDm2 ukLip;s s kukLip;s . Indeed X kuj kLip;H s .TjSj / ; kukLip;s s j 2F
because F is finite and ‰j is C 1 (and independent of ). Similarly, 1 X X 1 2 D uj .'/Dm2 ‰j
s kuj kLip;H s .TjSj / ; Dm u Lip;s j 2F
Lip;s
j 2F
.k/ we deduce, by (4.3.50), and the claim follows. Applying this property to u D 1;j that .k/ j 1 jLip;C;s .s max 1;j Lip;s j 2F;kD1;:::;4
and thus (4.3.48). Finally, by (4.3.46) and (4.3.19) we have k 1 kLip;s .s j 1 jLip;C;s and thus we deduce the first equivalence in (4.3.44). Step 3. We finally check j 2 jLip;C;s .s j 2 jLip;sC 1 ;
(4.3.51)
2
which is the first estimate in (4.3.44). We have (4.3.24)
(4.3.32)
j 2 jLip;C;s D j 2 …F jLip;C;s .s j 2 jLip;sC 1 j…F jLip;C;s .s j 2 jLip;sC 1 2
(applied with M D F).
2
We also revisit a standard perturbation lemma for operators that admit a right inverse.

Definition 4.3.11 (Right inverse). A matrix $M\in\mathcal{M}^B_C$ has a right inverse, which we denote by $M^{-1}\in\mathcal{M}^C_B$, if $MM^{-1}=\mathrm{Id}_C$. Note that $M$ has a right inverse if and only if $M$ (considered as a linear map) is surjective.

The following lemma is proved as in Lemma 3.9 of [24] (see also Lemma B.1.14) by a Neumann series argument.

Lemma 4.3.12. There is a constant $c_0>0$ such that, for any $C,B\subset\mathbb{Z}^b$, for any $M\in\mathcal{M}^B_C$ having a right inverse $M^{-1}\in\mathcal{M}^C_B$, and for any $P\in\mathcal{M}^B_C$ with $|M^{-1}|_{s_0}|P|_{s_0}\leq c_0$, the matrix $M+P$ has a right inverse that satisfies
$$\big|(M+P)^{-1}\big|_{s_0}\leq2\big|M^{-1}\big|_{s_0}, \qquad (4.3.52)$$
$$\big|(M+P)^{-1}\big|_s\lesssim_s\big|M^{-1}\big|_s+\big|M^{-1}\big|_{s_0}^2|P|_s, \qquad \forall s\geq s_0.$$
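The Neumann series behind Lemma 4.3.12 is easy to see on a finite-dimensional toy example. The following sketch (ours; it works with ordinary matrix norms rather than the $s$-decay norms of the lemma) builds a right inverse of a perturbed surjective matrix from the series $M^{-1}\sum_k(-PM^{-1})^k$.

```python
import numpy as np

rng = np.random.default_rng(0)
nC, nB = 6, 10                              # M maps a 10-dim space onto a 6-dim space
M = rng.standard_normal((nC, nB))
M_right = np.linalg.pinv(M)                 # one right inverse: M @ M_right = Id
P = 1e-2 * rng.standard_normal((nC, nB))    # small perturbation

X = np.zeros((nB, nC))
term = np.eye(nC)
for _ in range(30):                         # truncated Neumann series
    X += M_right @ term                     # accumulates M^{-1} (-P M^{-1})^k
    term = term @ (-P @ M_right)

print(np.max(np.abs((M + P) @ X - np.eye(nC))))   # ~ machine precision
```

Indeed $(M+P)\,M^{-1}\sum_k(-PM^{-1})^k=(\mathrm{Id}+PM^{-1})\sum_k(-PM^{-1})^k=\mathrm{Id}$ whenever $\|PM^{-1}\|<1$, which is the smallness condition $|M^{-1}|_{s_0}|P|_{s_0}\leq c_0$ in the lemma.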
Finally, we report the following lemma (related to Lemma 2.1 of [26]), which will be used in Chapter 11.

Lemma 4.3.13. Let $E$ be a closed subspace of $\mathcal{H}^0=L^2(\mathbb{T}^{|S|}\times\mathbb{T}^d)$ and let $E^s:=E\cap\mathcal{H}^s$ for $s\geq s_0$. Let $R\colon E\to E$ be a linear operator satisfying, for some $s\geq s_1\geq s_0$, $\alpha\geq0$,
$$\|Rv\|_{s_1}\leq\tfrac12\|v\|_{s_1}, \qquad \forall v\in E^{s_1}, \qquad (4.3.53)$$
$$\|Rv\|_s\leq\tfrac12\|v\|_s+\alpha\|v\|_{s_1}, \qquad \forall v\in E^s. \qquad (4.3.54)$$
Then $\mathrm{Id}+R$ is invertible as an operator of $E^{s_1}$, and
$$\big\|(\mathrm{Id}+R)^{-1}v\big\|_{s_1}\leq2\|v\|_{s_1}, \qquad \forall v\in E^{s_1}, \qquad (4.3.55)$$
$$\big\|(\mathrm{Id}+R)^{-1}v\big\|_s\leq2\|v\|_s+4\alpha\|v\|_{s_1}, \qquad \forall v\in E^s. \qquad (4.3.56)$$
Moreover, assume that $R$ depends on the parameter $\lambda\in\widetilde\Lambda$ and satisfies also
$$\|Rv\|_{\mathrm{Lip},s_1}\leq\tfrac12\|v\|_{\mathrm{Lip},s_1}, \qquad \forall v(\lambda)\in E^{s_1}, \qquad (4.3.57)$$
$$\|Rv\|_{\mathrm{Lip},s}\leq\tfrac12\|v\|_{\mathrm{Lip},s}+\alpha\|v\|_{\mathrm{Lip},s_1}, \qquad \forall v(\lambda)\in E^s. \qquad (4.3.58)$$
Then
$$\big\|(\mathrm{Id}+R)^{-1}v\big\|_{\mathrm{Lip},s_1}\leq2\|v\|_{\mathrm{Lip},s_1}, \qquad \forall v(\lambda)\in E^{s_1}, \qquad (4.3.59)$$
$$\big\|(\mathrm{Id}+R)^{-1}v\big\|_{\mathrm{Lip},s}\leq2\|v\|_{\mathrm{Lip},s}+4\alpha\|v\|_{\mathrm{Lip},s_1}, \qquad \forall v(\lambda)\in E^s. \qquad (4.3.60)$$
Proof. Since $E$ is a closed subspace of $\mathcal{H}^0$, each space $E^s=E\cap\mathcal{H}^s$, $s\geq0$, is complete. By (4.3.53) the operator $\mathrm{Id}+R$ is invertible in $E^{s_1}$ and (4.3.55) holds. In order to prove (4.3.56), let $k=(\mathrm{Id}+R)^{-1}v$, so that $k=v-Rk$. Then
$$\|k\|_s\leq\|v\|_s+\|Rk\|_s\stackrel{(4.3.54)}{\leq}\|v\|_s+\tfrac12\|k\|_s+\alpha\|k\|_{s_1}\stackrel{(4.3.55)}{\leq}\|v\|_s+\tfrac12\|k\|_s+2\alpha\|v\|_{s_1},$$
and we deduce $\|k\|_s\leq2\big(\|v\|_s+2\alpha\|v\|_{s_1}\big)$, which is (4.3.56). The proof of (4.3.59) and (4.3.60), when we assume (4.3.57) and (4.3.58), follows the same lines. $\square$
4.4 Off-diagonal decay of $\sqrt{-\Delta+V(x)}$

The goal of this section is to provide a direct proof of the fact that the matrix that represents the operator
$$D_V=\sqrt{-\Delta+V(x)},$$
defined in (3.1.10), in the exponential basis $\{e^{\mathrm{i}j\cdot x}\}_{j\in\mathbb{Z}^d}$ has off-diagonal decay. This is required by the multiscale analysis performed in Chapter 5. We compare $D_V$ with the Fourier multiplier $D_m=\sqrt{-\Delta+m}$ defined in (4.3.18).

Proposition 4.4.1 (Off-diagonal decay of $D_V-D_m$). There exists a positive constant $\Upsilon_s:=\Upsilon_s(\|V\|_{C^{n_s}})$, where $n_s:=\lceil s+\tfrac{d}{2}\rceil+1\in\mathbb{N}$, such that
$$\big|D_V-D_m\big|_{+,s}=\big|D_m^{1/2}\big(D_V-D_m\big)D_m^{1/2}\big|_s\leq\Upsilon_s. \qquad (4.4.1)$$

The rest of this section is devoted to the proof of Proposition 4.4.1. In order to prove that the matrix $(A^j_i)_{i,j\in\mathbb{Z}^d}$, which represents in the exponential basis $\{e^{\mathrm{i}j\cdot x}\}_{j\in\mathbb{Z}^d}$ a linear operator $A$ acting on a dense subspace of $L^2(\mathbb{T}^d)$, has polynomial off-diagonal decay, we shall use the following criterion. Let $\mathrm{Ad}_{\partial_{x_k}}:=[\partial_{x_k},\cdot\,]$ denote the commutator with the partial derivative $\partial_{x_k}$. Since the operator $\mathrm{Ad}^n_{\partial_{x_k}}A$ is represented by the matrix
$$\big(\mathrm{i}^n(i_k-j_k)^n A^j_i\big)_{i,j\in\mathbb{Z}^d},$$
it is sufficient that $A$ and the operators $\mathrm{Ad}^n_{\partial_{x_k}}A$, $k=1,\dots,d$, for $n$ large enough, extend to bounded operators in $L^2(\mathbb{T}^d)$. We shall use several times the following abstract lemma.
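Before turning to the proof, the following self-contained Python sketch (our own numerical illustration, not part of the argument) realizes $D_V=\sqrt{-\Delta+V(x)}$ in dimension $d=1$ by spectral calculus on a truncated Fourier basis and $D_m$ as a Fourier multiplier, and then inspects the decay of the matrix entries of $D_V-D_m$, the phenomenon quantified by Proposition 4.4.1. The truncation size, the mass $m$, and the potential are arbitrary test choices.

```python
import numpy as np

N = 32                                     # Fourier modes |j| <= N
modes = np.arange(-N, N + 1)
n = modes.size

# potential V(x) = m + a*cos(x): its multiplication matrix is Toeplitz
m, a = 1.0, 0.5
V_mat = m * np.eye(n)
for k in range(n - 1):                     # couples neighbouring modes j and j±1
    V_mat[k, k + 1] = V_mat[k + 1, k] = a / 2.0

L = np.diag(modes.astype(float) ** 2) + V_mat        # -Delta + V in the Fourier basis
evals, U = np.linalg.eigh(L)
DV = U @ np.diag(np.sqrt(evals)) @ U.T                # sqrt(-Delta + V)
Dm = np.diag(np.sqrt(modes.astype(float) ** 2 + m))   # sqrt(-Delta + m)

R = DV - Dm
for dist in (0, 1, 2, 4, 8, 16):
    off = [abs(R[i, j]) for i in range(n) for j in range(n) if abs(i - j) == dist]
    print(dist, max(off))                  # rapid decay away from the diagonal
```

For a smooth potential the printed maxima decay quickly in the distance to the diagonal, in agreement with the polynomial decay encoded in the finiteness of $|D_V-D_m|_{+,s}$.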
Lemma 4.4.2. Let $(H,\langle\cdot,\cdot\rangle)$ be a separable Hilbert space with norm $\|u\|:=\langle u,u\rangle^{1/2}$. Let $B\colon\mathcal{D}(B)\subset H\to H$ be an unbounded symmetric operator with a dense domain of definition $\mathcal{D}(B)\subset H$, satisfying:

(H1) There is $\beta>0$ such that $\langle Bu,u\rangle\geq\beta\|u\|^2$, $\forall u\in\mathcal{D}(B)$.

(H2) $B$ is invertible and $B^{-1}\in\mathcal{L}(H)$ is a compact operator.

Let $A\colon\mathcal{D}(A)\subset H\to H$ be a symmetric linear operator such that:

(H3) $\mathcal{D}(A)$, respectively in addition $\mathcal{D}(B^{1/2}AB^{1/2})$, contains $\mathcal{D}(B^p)$ for some $p\geq1$.

Moreover we assume that:

(H4) There is $\kappa\geq0$ such that $|\langle Au,Bu\rangle|\leq\kappa\|u\|^2$, $\forall u\in\mathcal{D}(A)\cap\mathcal{D}(B)$.

Then $A$, respectively $B^{1/2}AB^{1/2}$, can be extended to a bounded operator of $H$ (still denoted by $A$, respectively $B^{1/2}AB^{1/2}$) satisfying
$$\|A\|_{\mathcal{L}(H)}\leq\kappa/\beta, \qquad \text{respectively} \qquad \big\|B^{1/2}AB^{1/2}\big\|_{\mathcal{L}(H)}\leq\kappa. \qquad (4.4.2)$$
Proof. The operator B 1 2 L.H / is compact and symmetric, and therefore there is an orthonormal basis . k /k1 of H of eigenfunctions of B 1 , i.e., B 1 k D k k , with eigenvalues k 2 Rnf0g, .k / ! 0. Each k is an eigenfunction of B, i.e., B
k
D k
k
with eigenvalue k WD 1 k ;
.k / ! 1 :
By assumption (H1), each k ˇ > 0. Clearly each eigenfunction domain D .B p / of B p for any p 1. For any N 1, we consider the N -dimensional subspace EN WD Span.
1; : : : ;
k
belongs to the
N/
of H , and we denote by …N the corresponding orthogonal projector on EN . We have that EN D .B p / for any p 1, and therefore assumption (H3) implies that EN D .A/. Proof of kAkL.H / =ˇ. The operator AN WD …N AjEN is a symmetric operator on the finite-dimensional Hilbert space .EN ; h ; i/ and ˚ kAN kL.EN / D max jjI eigenvalue of AN : (4.4.3) Let be an eigenvalue of AN and u 2 EN nf0g be an associated eigenvector, i.e., AN u D u. Since B.EN / EN , the vector Bu is in EN , and we have hu; Bui D hAN u; Bui D h…N Au; Bui D hAu; Bui :
(4.4.4)
Since hu; Bui is positive by (H1), by (4.4.4) and using assumption (H4) (note that u is in EN D .A/ \ D .B/), we get jjhu; Bui D jhAu; Buij kuk2 ;
and, by assumption (H1), we deduce that jj =ˇ. By (4.4.3) we conclude that, for any N 1, kAN kL.EN / =ˇ : (4.4.5) [ EN of D .A/, we deduce by (4.4.5) that Defining the subspace E WD N 1
8.u; v/ 2 E E;
hAu; vi ˇ 1 kukkvk :
(4.4.6)
Moreover, since E is dense in H (the . k /k1 form an orthonormal Hilbert basis of H ), the inequality (4.4.6) holds for all .u; v/ 2 E H , in particular for all .u; v/ 2 E D .A/. Therefore, since A is symmetric, we obtain that 8u 2 E; 8v 2 D .A/;
hu; Avi ˇ 1 kukkvk :
(4.4.7)
By the density of E in H , the inequality (4.4.7) holds for all .u; v/ 2 H D .A/, and we conclude that 8v 2 D .A/; kAvk ˇ 1 kvk : (4.4.8) By continuity and (4.4.8), the operator A can be extended to a bounded operator on the closure D .A/ D H (that we still denote by A) with operatorial norm kAkL.H /
=ˇ, proving the first estimate in (4.4.2). 1
1
1
Proof of kB 2 AB 2 kL.H / . The linear operator B 2 is defined on the basis . by setting p 1 B 2 k WD k k :
k /k1
Notice that, since in assumption (H3) we also require that, for some p 1, we 1 1 1 1 have D .B p / D .B 2 AB 2 /, we have the inclusion EN D .B 2 AB 2 /. Since 1 1 B 2 .EN / EN , and the operators B 2 , A are symmetric on EN D .A/, then 1
1
2 is a symmetric operator of EN . Let 0 be an eigenvalue of A0N WD B 2 …N ABjE N A0N and u0 2 EN nf0g be an associated eigenvector, i.e., A0N u0 D 0 u0 . Using that 1 y WD B 2 u0 2 EN and By 2 EN , we obtain (recall that AN D …N AjEN ) D 1 E E D 1 1 1 0 hu0 ; Bu0 i D B 2 AN B 2 u0 ; Bu0 D B 2 AN y; B 2 y
D hAN y; Byi D hAy; Byi :
(4.4.9)
Since hu0 ; Bu0 i is positive by (H1), by (4.4.9) and assumption (H4) (note that u0 is in EN D .A/ \ D .B/), we get E D 1 1 (4.4.10) j0 jhu0 ; Bu0 i kyk2 D B 2 u0 ; B 2 u0 D hu0 ; Bu0 i : Since hu0 ; Bu0 i > 0 by (H1), we deduce by (4.4.10) that j0 j and thus 1 1 1 kA0N kL.EN / . Since B 2 and …N commute and EN D .B 2 AB 2 /, we have A0N D …N .A0 /jEN ;
1
1
A0 WD B 2 AB 2 ;
and arguing by density as to prove (4.4.8) for A, we deduce that for all v 2 D .A0 /, kA0 vk kvk. Hence A0 can be extended to a bounded linear operator of H with norm kA0 kL.H / . This proves the second estimate in (4.4.2). In the sequel we shall apply Lemma 4.4.2 with Hilbert space H D L2 .Td / and an operator B 2 fB1 ; B2 ; B3 g among B1 WD Dm ;
B2 WD DV ;
B3 WD Dm C DV ;
(4.4.11)
with dense domain D .Bi / WD H 1 Td : Notice that each operator Bi , i D 1; 2; 3, satisfies assumption (H1) (recall (1.1.3)) and Bi1 sends continuously L2 .Td / into H 1 .Td / (note that kDV ukL2 ' kukHx1 by
(4.1.3) and (4.1.4)). Moreover, since H 1 .Td / is compactly embedded into L2 .Td /, each Bi1 is a compact operator of H D L2 .Td /, and hence assumption (H2) also holds. Lemma 4.4.3 (L2 -bounds of DV Dm ). Consider the linear operators A0 WD DV Dm ; 1 2
(4.4.12) 1 2
A00 WD Dm .DV Dm /Dm
(4.4.13)
with domains D .A0 / WD H 1 .Td /, D .A00 / WD H 2 .Td /. Then A0 and A00 can be extended to bounded linear operators of L2 .Td / satisfying (4.4.14) kA0 kL.L2 / ; kA00 kL.L2 / C kV kL1 : 2 D C m and DV2 D C V .x/, we have Proof. Since Dm 2 D Op.V .x/ m/ ; DV .DV Dm / C .DV Dm /Dm D DV2 Dm
where Op.a/ denotes the multiplication operator by the function a.x/. Hence, for all u 2 H 2 .Td /, ˇ ˇ ˇ ˇ ˇ.DV .DV Dm /u; u/L2 C ..DV Dm /Dm u; u/L2 ˇ D ˇ..V .x/ m/u; u/L2 ˇ kV kL1 C m kuk2L2 ; which gives, by the symmetry of DV and DV Dm that ˇ ˇ 8u 2 H 2 .Td /; ˇ..DV Dm /u; .DV C Dm /u/L2 ˇ kV kL1 C m kuk2L2 :
(4.4.15)
Actually (4.4.15) holds for all u 2 H 1 .Td /, by the density of H 2 .Td / in H 1 .Td / and the fact that DV and Dm send continuously H 1 .Td / into L2 .Td /: setting A0 D DV Dm as in (4.4.12) and B3 D DV C Dm as in (4.4.11), we have ˇ ˇ (4.4.16) 8u 2 H 1 Td ; ˇ.A0 u; B3 u/L2 ˇ kV kL1 C m kuk2L2 : Thus the assumption (H4) of Lemma 4.4.2 holds with A D A0 , B D B3 , Hilbert space H D L2 .Td / and D kV kL1 C m. Applying Lemma 4.4.2 to A0 and B3 (also assumption (H3) holds since D .A0 / D D .B3 / D H 1 .Td /) we conclude that A0 can be extended to a bounded operator of L2 .Td / with norm (4.4.17) kA0 kL.L2 / C kV kL1 (depending also on the constants m and ˇ > 0 in (1.1.3)). This proves the first bound in (4.4.14). Then, recalling the definitions of A0 ; B1 ; B3 in (4.4.12) and (4.4.11), using (4.4.16) and (4.4.17), we also deduce that 8u 2 H 1 Td ;
1 j.A0 u; B1 u/L2 j D j.A0 u; .B3 A0 /u/L2 j C 0 kV kL1 kuk2L2 : 2
Thus the assumption (H4) of Lemma 4.4.2 holds with A D A0 , B D B1 . Assumption (H3) also holds since 1 1 D .A0 / D D .B1 / D H 1 Td and D B12 A0 B12 H 2 Td D D B12 : 1
1
Therefore Lemma 4.4.2 implies that A00 D B12 A0 B12 can be extended to a bounded operator of L2 .Td /, with operatorial norm kA00 kL.L2 / C.kV kL1 /. This proves the second bound in (4.4.14). Let ŒA00 ji D .ji j2 C m/1=4 ŒDV Dm ji .jj j2 C m/1=4 , i; j 2 Zd , denote the 1
1
elements of the matrix representing the operator A00 D Dm2 .DV Dm /Dm2 in the exponential basis. By Lemma 4.4.3 we have that ˇ 0 jˇ ˇŒA ˇ kA0 kL.L2 / C kV kL1 ; 8i; j 2 Zd : (4.4.18) 0 i 0 In order to also prove a polynomial off-diagonal decay for ŒA00 ji , i ¤ j , we note that, for n 1, in .ik jk /n ŒA00 ji (4.4.19) Adn@x A00 is represented by the matrix d i;j 2Z
k
and then prove that Adn@x A00 extends to a bounded operator in L2 .Td /. k
Lemma 4.4.4 ($L^2$-bounds of $\mathrm{Ad}^n_{\partial_{x_k}}D_V$). For any $n\geq1$, $k=1,\dots,d$, the operators
$$\mathrm{Ad}^n_{\partial_{x_k}}D_V \qquad \text{and} \qquad D_m^{1/2}\big(\mathrm{Ad}^n_{\partial_{x_k}}D_V\big)D_m^{1/2}$$
can be extended to bounded operators of $L^2(\mathbb{T}^d)$: there exist positive constants $C_n$ and $C_n'$ depending on $\|V\|_{C^n}$ such that
$$\big\|\mathrm{Ad}^n_{\partial_{x_k}}D_V\big\|_{\mathcal{L}(L^2)}\leq C_n, \qquad (4.4.20)$$
$$\big\|D_m^{1/2}\big(\mathrm{Ad}^n_{\partial_{x_k}}D_V\big)D_m^{1/2}\big\|_{\mathcal{L}(L^2)}\leq C_n'. \qquad (4.4.21)$$
We split the proof in two steps. 1st step. We prove by iteration that, for all n 1, there are constants Cn ; Cn00 > 0 such that (4.4.24) Adn@x DV DV C DV Adn@x DV 2 Cn00 ; k k L.L / n (4.4.25) Ad@x DV 2 Cn : k
L.L /
Clearly the estimates (4.4.25) are (4.4.20). Initialization: proof of (4.4.24) and (4.4.25) for n D 1. Applying (4.4.22) with L1 D L2 D DV and since DV2 D C V .x/, we get Ad@xk DV DV C DV Ad@xk DV D Ad@xk . C V .x// D Op.Vxk .x// ;
(4.4.26)
where Op.Vxk / is the multiplication operator by the function Vxk .x/ WD .@xk V /.x/. Hence (4.4.24) for n D 1 holds with C100 D kV kC 1 . In order to prove (4.4.25) for n D 1 we apply Lemma 4.4.2 to the operators A1 WD Ad@xk DV and B2 D DV .
Assumption (H3) holds because D .A1 / D H 2 .Td / D D .B22 /. Note that because of the L2 antisymmetry of the operator @xk , if L is L2 symmetric then so is Ad@xk L. Hence A1 D Ad@xk DV is symmetric. Assumption (H4) also holds because 8u 2 H 2 Td ; ˇ ˇ ˇ ˇ (4.4.27) j.A1 u; B2 u/L2 j D ˇ Ad@xk DV u; DV u 2 ˇ L ˇ ˇ 1ˇ ˇ D ˇ Ad@xk DV DV C DV Ad@xk DV u; u 2 ˇ L 2 C 00 1 kuk2L2 2
by (4.4.24) for n D 1. Therefore Lemma 4.4.2 implies that (4.4.25) holds for n D 1, for some constant C1 depending on kV kC 1 . Iteration: proof of (4.4.24) and (4.4.25) for n > 1. Assume by induction that (4.4.24) and (4.4.25) have been proved up to rank n 1. We now prove them at rank n. Applying (4.4.23) with L1 D L2 D DV and since DV2 D C V .x/, we get n1 X n 1 D Adn@x DV DV C DV Adn@x DV C Adn@x1 DV Adnn V @xk n1 k k k n1 D1 (4.4.28) D Adn@x . C V .x// D Op @nxk V : k
Let
Tn WD Adn@x DV DV C DV Adn@x DV : k
k
By (4.4.28) and using the inductive assumption (4.4.25) at rank n 1, we obtain kTn kL.L2 .Td // kV kC n C
n1 X n1 D1
Cn00
n n1 nn Ad@x DV 2 Ad@x 1 DV 2 n1 k k L.L / L.L /
;
(4.4.29)
where the constant Cn00 depends on kV kC n . We have that 8u 2 H nC1 Td ;
ˇ ˇ ˇ Adn@x DV u; DV u
L
k
ˇ ˇ ˇ D 2 (4.4.29)
ˇ 1 ˇˇ .Tn u; u/L2 ˇ 2 Cn00 kuk2L2 : 2
(4.4.30)
We now apply Lemma 4.4.2 to An WD Adn@x DV and B2 D DV . Assumption (H3) k
holds because D .An / D H nC1 .Td / D D .B2nC1 / by (4.1.4). Assumption (H4) holds by (4.4.30). Therefore the first inequality in (4.4.2) implies that kAn kL.L2 .Td // Cn .kV kC n /. This proves (4.4.25) at rank n. 2nd step. Proof of (4.4.21) for any n 1. In order to prove that the operator 1 1 Dm2 Adn@x DV Dm2 extends to a bounded operator of L2 .Td / we use Lemma 4.4.2 k with A D An D Adn@x DV and B D Dm . Assumption (H3) holds because k
nC1 D .An / D H nC1 Td D D .Dm / and 1 1 nC2 D Dm2 An Dm2 H nC2 Td D D Dm :
Also, assumption (H4) holds because, by (4.4.30), and the fact that An and Dm DV are bounded operators of L2 .Td / by (4.4.25) and Lemma 4.4.3, we get 8u 2 H nC1 Td ; j.An u; Dm u/L2 j j.An u; DV u/L2 j C j.An u; Dm u DV u/L2 j n kuk2L2 ; where n depends on kV kC n . (4.4.21).
Then the second inequality in (4.4.2) implies
Proof of Proposition 4.4.1 concluded. Recalling (4.4.18), the entries of the matrix 1
1
.ŒA00 i /i;j 2Zd , which represents the operator A00 WD Dm2 .DV Dm /Dm2 in the exponential basis, are bounded. Furthermore, since each @xk , k D 1; : : : ; d , commutes j
1
with Dm and Dm2 , we have that, for any n 1, 1 1 Adn@x A00 D Dm2 Adn@x DV Dm2 : k
k
By Lemma 4.4.4, this operator can be extended to a bounded operator on L2 .Td / satisfying (see (4.4.21)) n (4.4.31) Ad@x A00 2 d Cn0 kV kC n : k
L.L .T //
By (4.4.18), (4.4.19), and (4.4.31) we deduce that 8n 0; 8k D 1; : : : ; d; 8.i; j / 2 Zd Zd ; j jik jk jn jŒA00 i j Adn@x A00 2 d Cn0 kV kC n k
with the convention Ad0@x
k
sC
L.L .T //
WD Id. Hence, for s 0 and ns WD ds C
d C 1, we deduce that the s-decay norm of A00 satisfies 2 X X Kn2s 1 jA00 j2s K02 C j`j2s Kn2s DW ‡s2 2n d C2 s j`j h`i d d `2Z nf0g
d eC1 2
`2Z
for some constant $\Upsilon_s$ depending on $s$ and $\|V\|_{C^{n_s}}$ only. The proof of Proposition 4.4.1 is complete. With the same methods we also obtain the following propositions.

Proposition 4.4.5. There exists a positive constant $C(s,\|V\|_{C^{n_s}})$, where $n_s:=\lceil s+\tfrac{d}{2}\rceil+1\in\mathbb{N}$, such that
$$\big|D_V^{1/2}-D_m^{1/2}\big|_s\leq C\big(s,\|V\|_{C^{n_s}}\big).$$
Proof. Since the proof closely follows the proof of Proposition 4.4.1 we shall indicate 1
1
just the main steps. One first proves that DV2 Dm2 can be extended to a bounded operator of L2 .Td /, arguing as in Lemma 4.4.3. Writing 1 1 1 1 1 1 DV2 DV2 Dm2 C DV2 Dm2 Dm2 D DV Dm ; 1
1
1
using the symmetry of DV2 and DV2 Dm2 , the fact that DV Dm is bounded on 1
L2 .Td / by Lemma 4.4.3 (see (4.4.14)), and the density of H 1 in H 2 , we deduce that ˇ
ˇ 1 1 1 1 ˇ ˇ 1 d 2 2 2 2 ˇ C.kV kL1 /kuk2 2 : 8u 2 H 2 .T /; ˇˇ DV Dm u; DV C Dm u ˇ L L2
1
1
Then, arguing as in Lemma 4.4.3, Lemma 4.4.2 implies that DV2 Dm2 can be extended to a bounded operator of L2 .Td / satisfying 1 1 2 2 1 : D C kV k (4.4.32) DV L m 2 L.L /
1
Next, one proves that for all n 1, k D 1; : : : ; d , the operators Adn@x DV2 can be extended to bounded operators on L2 .Td / satisfying 1 n Ad@x DV2 2 Cn kV kC n : k
L.L /
k
(4.4.33)
It is sufficient to argue as in Lemma 4.4.4, applying (4.4.22) and (4.4.23) to L1 D 1
1
1
L2 D DV2 and DV2 DV2 D DV , and using that Adn@x DV are bounded operators of k
L2 .Td / satisfying (4.4.20). The proposition follows (4.4.32) and (4.4.33) as in the conclusion of the proof of Proposition 4.4.1. Proposition 4.4.6. There exists ˙ d sC C 1 2 N, such that 2 ˇ 1 ˇ ˇ 2 21 ˇ ˇDm DV ˇ ; s
a positive constant C.s; kV kC ns /, where ns WD ˇ ˇ ˇ 21 21 ˇ ˇDV Dm ˇ C.s; kV kC ns / : s
Proof. Writing 1 1 1 1 1 DV 2 Dm2 D Id DV 2 DV2 Dm2 ; 1 1 1 1 1 Dm2 DV 2 D Id DV2 Dm2 DV 2 ;
(4.4.34)
the estimates (4.4.34) follow by Proposition 4.4.5, (4.3.6), and ˇ 12 ˇ ˇD ˇ C s; kV kC ns : V s
(4.4.35)
The bound (4.4.35) follows, as in the conclusion of the proof of Proposition 4.4.1, by 1
the fact that DV 2 is a bounded operator of L2 .Td / and 1 n Ad@x DV 2 2 Cn kV kC n ; 8n 1; k D 1; : : : ; d : L.L /
k
(4.4.36)
The estimates (4.4.36) can be proved by applying (4.4.22) and (4.4.23) to L1 D L2 D 1
1
1
DV2 and DV2 DV 2 D Id. For example, applying (4.4.22), we get 1 1 1 1 Ad@xk DV 2 D DV 2 Ad@xk DV2 DV 2 ; 1
which is a bounded operator on L2 .Td / satisfying (4.4.33), and DV 2 2 L.L2 /. By iteration, the estimate (4.4.36) follows for any n 1.
4.5 Interpolation inequalities

We conclude this chapter with useful interpolation inequalities for the Sobolev spaces $\mathcal{H}^s$ defined in (4.3.2). For any $s\geq s_0>(|S|+d)/2$, we have the tame estimate for the product
$$\|fg\|_{\mathrm{Lip},s}\lesssim_s\|f\|_{\mathrm{Lip},s}\|g\|_{\mathrm{Lip},s_0}+\|f\|_{\mathrm{Lip},s_0}\|g\|_{\mathrm{Lip},s}. \qquad (4.5.1)$$
Actually we directly prove the improved tame estimate (4.5.2) below, used in [24] and [26].

Lemma 4.5.1. Let $s\geq s_0>(|S|+d)/2$. Then
$$\|uv\|_{\mathrm{Lip},s}\leq C_0\|u\|_{\mathrm{Lip},s}\|v\|_{\mathrm{Lip},s_0}+C(s)\|u\|_{\mathrm{Lip},s_0}\|v\|_{\mathrm{Lip},s}, \qquad (4.5.2)$$
where the constant $C_0>0$ is independent of $s\geq s_0$.

Proof. Denoting $y:=(\varphi,x)\in\mathbb{T}^b$, $b=|S|+d$, we expand in Fourier series
$$u(y)=\sum_{m\in\mathbb{Z}^b}u_m e^{\mathrm{i}m\cdot y}, \qquad v(y)=\sum_{m\in\mathbb{Z}^b}v_m e^{\mathrm{i}m\cdot y}.$$
Thus
ˇ ˇ2 !2 ˇ X ˇˇ X X X ˇ uk vmk ˇ hmi2s juk jjvmk j hmi2s kuvk2s D ˇ ˇ ˇ b b b b m2Z
k2Z
m2Z
k2Z
S1 C S2
(4.5.3)
where X
X
m2Zb
jmj hmi=2 > hm ki and, since s0 s 0, hmis 2s hkis D 2s
hkis0 hkis0 s 2 ; hkis0 s hm kis0 s
which implies (4.5.8). By (4.5.8) we deduce that for any m 2 Zb , X k2Zb
X X hmi2s 1 1 . C C.s0 / s 2s0 hki2s0 hm ki2s hki hm ki2s0 b b k2Z
k2Z
and therefore, by (4.5.7), we obtain kuvks C.s0 /kuks0 kvks : Recalling (4.3.3), the estimate (4.5.6) is a consequence of (4.5.9).
(4.5.9)
As in any scale of Sobolev spaces with smoothing operators, the Sobolev norms $\|\cdot\|_s$ defined in (4.3.2) admit an interpolation estimate.

Lemma 4.5.3. For any $s_1<s_2$, $s_1,s_2\in\mathbb{R}$, and $\theta\in[0,1]$, we have
$$\|h\|_{\mathrm{Lip},s}\leq2\|h\|^{\theta}_{\mathrm{Lip},s_1}\|h\|^{1-\theta}_{\mathrm{Lip},s_2}, \qquad s:=\theta s_1+(1-\theta)s_2. \qquad (4.5.10)$$
Proof. Recalling the definition of the Sobolev norm in (4.3.2), we deduce, by the H¨older inequality, X X khk2s D jhi j2 hi i2s D .jhi j2 hi i2s1 / .jhi j2 hi i2s2 /1 i 2Zb
i 2Zb
X
D
! jhi j2 hi i2s1
i 2Zb 2.1 / khk2 s1 khks2
X
!1 jhi j2 hi i2s2
i 2Zb
:
Thus $\|h\|_s\leq\|h\|^{\theta}_{s_1}\|h\|^{1-\theta}_{s_2}$, and for a Lipschitz family of Sobolev functions, see (4.3.3), the inequality (4.5.10) follows. $\square$

As a corollary we deduce the following interpolation inequality.

Lemma 4.5.4. Let $\alpha\leq a\leq b\leq\beta$ be such that $a+b=\alpha+\beta$. Then
$$\|h\|_{\mathrm{Lip},a}\|h\|_{\mathrm{Lip},b}\leq4\|h\|_{\mathrm{Lip},\alpha}\|h\|_{\mathrm{Lip},\beta}. \qquad (4.5.11)$$

Proof. Write $a=\theta\alpha+(1-\theta)\beta$ and $b=\mu\alpha+(1-\mu)\beta$, where
$$\theta=\frac{a-\beta}{\alpha-\beta}, \qquad \mu=\frac{b-\beta}{\alpha-\beta}, \qquad \theta+\mu=1.$$
By the interpolation Lemma 4.5.3, we get
$$\|h\|_{\mathrm{Lip},a}\leq2\|h\|^{\theta}_{\mathrm{Lip},\alpha}\|h\|^{1-\theta}_{\mathrm{Lip},\beta}, \qquad \|h\|_{\mathrm{Lip},b}\leq2\|h\|^{\mu}_{\mathrm{Lip},\alpha}\|h\|^{1-\mu}_{\mathrm{Lip},\beta},$$
and (4.5.11) follows by multiplying these inequalities. $\square$
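A quick numerical illustration (ours, with arbitrary data) of the interpolation estimate of Lemma 4.5.3: for a trigonometric polynomial on the one-dimensional torus, Hölder's inequality gives $\|h\|_s\leq\|h\|_{s_1}^{\theta}\|h\|_{s_2}^{1-\theta}$ even without the factor $2$, as the following sketch checks.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
modes = np.arange(-N, N + 1)
h_hat = rng.standard_normal(modes.size) + 1j * rng.standard_normal(modes.size)

def norm_s(h_hat, s):
    w = np.maximum(np.abs(modes), 1.0) ** s
    return np.sqrt(np.sum(np.abs(h_hat) ** 2 * w ** 2))

s1, s2 = 1.0, 5.0
for theta in (0.25, 0.5, 0.75):
    s = theta * s1 + (1 - theta) * s2
    lhs = norm_s(h_hat, s)
    rhs = norm_s(h_hat, s1) ** theta * norm_s(h_hat, s2) ** (1 - theta)
    print(theta, lhs <= rhs + 1e-12, lhs, rhs)
```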
We finally recall the following Moser tame estimates for the composition operator
$$(u_1,\dots,u_p)\mapsto\mathrm{f}(u_1,\dots,u_p)(\varphi,x):=f\big(\varphi,x,u_1(\varphi,x),\dots,u_p(\varphi,x)\big) \qquad (4.5.12)$$
induced by a smooth function $f$.

Lemma 4.5.5 (Composition operator). Let $f\in C^\infty(\mathbb{T}^{|S|}\times\mathbb{T}^d\times\mathbb{R}^p;\mathbb{R})$. Fix $s_0>(d+|S|)/2$, $s_0\in\mathbb{N}$. Given real-valued functions $u_i$, $1\leq i\leq p$, satisfying $\|u_i\|_{\mathrm{Lip},s_0}\leq1$, then, $\forall s\geq s_0$,
$$\|\mathrm{f}(u_1,\dots,u_p)\|_{\mathrm{Lip},s}\leq C(s,f)\Big(1+\sum_{i=1}^p\|u_i\|_{\mathrm{Lip},s}\Big). \qquad (4.5.13)$$
Assuming also that $\|v_i\|_{\mathrm{Lip},s_0}\leq1$, $1\leq i\leq p$, then
$$\|\mathrm{f}(v_1,\dots,v_p)-\mathrm{f}(u_1,\dots,u_p)\|_{\mathrm{Lip},s}\lesssim_{s,f}\sum_{i=1}^p\|v_i-u_i\|_{\mathrm{Lip},s}+\Big(\sum_{i=1}^p\|u_i\|_{\mathrm{Lip},s}+\|v_i\|_{\mathrm{Lip},s}\Big)\sum_{i=1}^p\|v_i-u_i\|_{\mathrm{Lip},s_0}. \qquad (4.5.14)$$
Proof. Let y WD .'; x/ 2 Tb , b D jSj C d . For simplicity of notation we consider only the case p D 1 and denote .u1 ; : : : ; up / D u1 D u. Step 1. For any s 0, for any function f 2 C 1 , for all kuks0 1, kf.u/ks C.s; f / 1 C kuks :
(4.5.15)
This estimate was proved by Moser in [106]. We propose here a different proof, following [21]. Note that it is enough to prove (4.5.15) for u 2 C 1 .Tb /. Initialization: (4.5.15) holds for any s 2 Œ0; s0 . For a multi-index ˛ D .˛1 ; : : : ; ˛b / 2 Nb , we denote @˛y D @˛y11 @˛ybb and we set j˛j WD ˛1 C C ˛b . Recalling that s0 is an integer, by the formula for the derivative of a composition of functions, we estimate the Sobolev norm kf.u/ks0 by .1/ .q/ C.s0 / max @ˇy @qu f .; u/ @˛y u @˛y u I 0
ˇ .1/ ˇ .q/ ˇ ˇ 0 q s0 ; ˛ C C ˛ C ˇ s0 ; where ˛ .1/ ; : : : ; ˛ .q/ ; ˇ 2 Nb . By the Sobolev embedding kukL1 .Tb / .s0 kuks0 we have ˇ q @y @u f .y; u.y// 1 b C f; kuks0 : L
.T /
Hence, in order to prove the bound kf.u/ks0 .f 1 C kuks0 , it is enough to check that, for any 0 q s0 , j˛ .1/ C C ˛ .q/ j s0 , the function .1/ .q/ satisfies kvk0 C.s0 /kukqs0 : (4.5.16) v WD @˛y u @˛y u Expanding in Fourier u.y/ D
X
uk eiky , we have
k2Zb
kvk20
ˇ X ˇˇ D ˇ ˇ k2Zb
X
.ik1 /
˛ .1/
.ikq /
˛ .q/
k1 CCkq Dk
ˇ2 ˇ ˇ uk1 ukq ˇ ; ˇ
where, for w 2 Cb and ˛ 2 Nb , we use the multi-index notation w ˛ WD
(4.5.17) b Y lD1
Since j˛ .1/ j C C j˛ .1/ j s0 , the iterated Young inequality implies j.ik1 /˛
.1/
.ikq /˛
.q/
j jk1 jj˛
.1/ j
jkq jj˛
.q/ j
hk1 is0 C C hkq is0 ;
˛
wl l .
and therefore by (4.5.17) we have kvk20
.q
q X
Si
with
Si WD
i D1
X
X
k2Zb
k1 CCkq Dk
!2 s0
hki i juk1 j jukq j
:
An application of the Cauchy–Schwarz inequality, using X hk2 i2s0 hkq i2s0 < 1 ; k2 ;:::;kq 2Zb
provides the bound S1 .s0 kuk2q s0 . The bounds for the other Si are obtained similarly. This proves (4.5.16) and therefore (4.5.15) for s D s0 . As a consequence, (4.5.15) also trivially holds for any s 2 Œ0; s0 : in fact, for all kuks0 1, we have kf.u/ks kf.u/ks0 C.f / C.f /.1 C kuks / : Induction: (4.5.15) holds for s > s0 . We proceed by induction. Given some integer k s0 , we assume that (4.5.15) holds for any s 2 Œ0; k, and for any function f 2 C 1 . We are going to prove (4.5.15) for any s 2k; k C 1. Recall that X juk j2 hki2s s kuk2L2 C max k@yi uk2s1 : (4.5.18) kuk2s D k2Zb
i D1;:::;b
Then (4.5.18)
kf.u/ks s kf .y; u.y//kL2 C max k@yi .f .y; u.y///ks1 i D1;:::;b .@yi f /.y; u.y//s1 .s;f 1 C max i D1;:::;b C .@u f /.y; u.y//.@yi u/.y/s1 :
(4.5.19)
By the inductive assumption, k.@yi f /.y; u.y//ks1 C.f /.1 C kuks1 / :
(4.5.20)
To estimate k.@u f /.y; u.y//.@yi u/.y/ks1, we distinguish two cases: 1st case: k D s0 . Thus s 2 .s0 ; s0 C 1. Since s 1 2 .s0 1; s0 we apply Lemma 4.5.2 obtaining k.@u f /.y; u.y//.@yi u/.y/ks1 .s0 k.@u f /.y; u.y//ks0 k@yi uks1 .f kuks : (4.5.21) 2nd case: k s0 C 1. For any s 2 .k; k C 1 we have s 1 s0 and, by Lemma 4.5.1 and the inductive assumption, we obtain k.@u f /.y; u.y//.@yi u/.y/ks1 .s;f .1 C kuks1 /kuks0 C1 C kuks :
By the interpolation inequality kuks1 kuks0 C1 kuks kuks0 (Lemma 4.5.4) and s s0 C 1, we conclude that k.@u f /.y; u.y//.@yi u/.y/ks1 .s;f kuks :
(4.5.22)
Finally, by (4.5.19), (4.5.20), (4.5.21), and (4.5.22), the estimate (4.5.15) holds for all s 2k; k C 1. This concludes the iteration and the proof of (4.5.15). Step 2. Proof of (4.5.13). In order to prove the Lipschitz estimate we write
f.v/.y/ f.u/.y/ D f .y; v.y// f .y; u.y// Z 1 D .@u f /.y; u.y/ C .v u/.y//.v u/.y/d : (4.5.23) 0
Then Z kf.v/ f.u/ks (4.5.1)
Z
@u f /.y; u C .v u//.v u/ d
s
1
0 1
.s
0
k.@u f /.y; u C .v u//ks kv uks0 C .@u f /.y; u C .v u//s kv uks d : 0
(4.5.24)
Specializing (4.5.24) for v D u.2 / and u D u.1 /, using (4.5.15) and kukLip;s0 1, we deduce f.u.2 // f.u.1 / .s;f kukLip;s j2 1 j; 81 ; 2 2 ƒ : (4.5.25) s The estimates (4.5.15) and (4.5.25) imply (4.5.13). Step 3. Proof of (4.5.14). By (4.5.23), Z
1
kf.v/ f.u/kLip;s (4.5.1)
k.@u f /.y; u C .v u//.v u/kLip;s d
Z
0 1
.s
0
k.@u f /.y; u C .v u//kLip;s kv ukLip;s0 C k.@u f /.y; u C .v u//kLip;s0 kv ukLip;s d
and, using (4.5.13) and kukLip;s0 ; kvkLip;s0 1, we deduce (4.5.14) for p D 1. Estimates (4.5.13) and (4.5.14) for p 2 can be obtained in exactly the same way.
Chapter 5

Multiscale Analysis

The main result of this chapter is the abstract multiscale Proposition 5.1.5, which provides invertibility properties of finite-dimensional restrictions of the Hamiltonian operator $\mathcal{L}_{r,\mu}$ defined in (5.1.9), for a large set of parameters $\lambda\in\Lambda$. This multiscale Proposition 5.1.5 will be used in Chapters 10 and 11.
5.1 Multiscale proposition

Let $H:=L^2(\mathbb{T}^d;\mathbb{R})\times L^2(\mathbb{T}^d;\mathbb{R})$ and consider the Hilbert spaces
$$\text{(i)}\ \ \mathbf{H}:=H, \qquad \text{(ii)}\ \ \mathbf{H}:=H\times H. \qquad (5.1.1)$$
Any vector $h\in\mathbf{H}$ can be written as
$$\text{(i)}\ \ h=(h_1,h_2), \qquad \text{(ii)}\ \ h=(h_1,h_2,h_3,h_4), \qquad h_i\in L^2(\mathbb{T}^d;\mathbb{R}).$$
On $\mathbf{H}$ we define the linear operator $J$ as
$$J(h_1,h_2):=(-h_2,h_1) \quad \text{in case (i)}, \qquad (5.1.2)$$
$$J(h_1,h_2,h_3,h_4):=(-h_2,h_1,-h_4,h_3) \quad \text{in case (ii)}. \qquad (5.1.3)$$
Moreover, in case (ii), we also define the right action of $J$ on $\mathbf{H}$ as
$$hJ=(h_1,h_2,h_3,h_4)J:=(h_3,h_4,-h_1,-h_2). \qquad (5.1.4)$$

Remark 5.1.1. The definitions (5.1.3) and (5.1.4) are motivated by the identification of an operator $a\in\mathcal{L}(H_j,H)$ with $(a^{(1)},a^{(2)},a^{(3)},a^{(4)})\in H\times H$ as in (4.2.6) and (4.2.7). Then the operators $Ja,aJ\in\mathcal{L}(H_j,H)$ are identified with the vectors of $H\times H$ given in (4.2.13) and (4.2.14). This corresponds to the left and right actions of $J$ in (5.1.3) and (5.1.4).

We denote by $\Pi_{S\cup F}$ the $L^2$ orthogonal projection on the subspace $H_{S\cup F}$, defined in (4.3.31) in $H$, or the analogous one in $H\times H$. Note that $\Pi_{S\cup F}$ commutes with the left (and, in case (5.1.4), right) action of $J$. We also denote $\Pi^\perp_{S\cup F}:=\mathrm{Id}-\Pi_{S\cup F}=\Pi_G$, see (1.2.14). We fix a constant $c\in(0,1)$ such that
$$\big|\bar\omega\cdot\ell+\mu_j+c\big|\geq\frac{\gamma_0}{\langle\ell\rangle^{\tau_0}}, \qquad \forall\ell\in\mathbb{Z}^{|S|},\ j\in S. \qquad (5.1.5)$$
By standard arguments, condition (5.1.5) is fulfilled by all $c\in(0,1)$ except a set of measure $O(\gamma_0)$. The purely technical role of the term $c\,\Pi_S$ in (5.1.6) and (5.1.7) below is explained in Remarks 5.1.3 and 5.6.3.

Definition 5.1.2. Given positive constants $C_1,c_2>0$, we define the class $\mathfrak{C}(C_1,c_2)$ of $L^2$ self-adjoint operators acting on $\mathbf{H}$ of the form, according to the cases (i)–(ii) in (5.1.1),
$$\text{(i)}\quad X_r:=X_r(\varepsilon,\lambda,\varphi)=D_V+c\,\Pi_S+r(\varepsilon,\lambda,\varphi), \qquad (5.1.6)$$
$$\text{(ii)}\quad X_{r,\mu}:=X_{r,\mu}(\varepsilon,\lambda,\varphi)=D_V+c\,\Pi_S+\mu(\varepsilon,\lambda)\,\mathcal{J}\,\Pi^\perp_{S\cup F}+r(\varepsilon,\lambda,\varphi), \qquad (5.1.7)$$
defined for $\lambda\in\widetilde\Lambda\subset\Lambda$, where $D_V:=\sqrt{-\Delta+V(x)}$, $\mu(\varepsilon,\lambda)\in\mathbb{R}$, and $\mathcal{J}$ is the self-adjoint operator
$$\mathcal{J}\colon H\times H\to H\times H, \qquad h\mapsto\mathcal{J}h:=J\,h\,J \qquad (5.1.8)$$
(recall the left and right actions (5.1.3) and (5.1.4)), and such that

1. $|r|_{\mathrm{Lip},+,s_1}\leq C_1\varepsilon^2$, for some $s_1>s_0$;
2. $|\mu(\varepsilon,\cdot)-\bar\mu_k|^{\mathrm{Lip}}\leq C_1\varepsilon^2$ for some $k\in F$ (set defined in (1.2.15));
3. $\partial_\lambda\dfrac{X_{r,\mu}}{1+\varepsilon^2\lambda}\geq c_2\,\varepsilon^2\,\mathrm{Id}$, see the notation (1.3.6).

We assume that the nonresonance conditions (1.2.7), (1.2.16), (1.2.17), and (5.1.5) hold. For simplicity of notation, in the sequel we shall also denote by $X_{r,\mu}$ the operator $X_r$ in (5.1.6), understanding that $X_r=X_{r,\mu}$ does not depend on $\mu$, i.e., $\mathcal{J}=0$. Note that the operator $\mathcal{J}$ defined in (5.1.8) and $\Pi^\perp_{S\cup F}=\Pi_G$ commute.
Lr; WD J ! @' C Xr; ."; /;
! D .1 C "2 /!N " ;
(5.1.9)
for a large set of 2 ƒ. For N 2 N we define the subspace of trigonometric polynomials
X i.`'Cj x/ v HN WD u.'; x/ D u`;j e ; u`;j 2 C ( where v WD
j.`;j /jN
2 in case (5.1.1)-(i) 4 in case (5.1.1)-(ii) ;
(5.1.10)
and we denote by $\Pi_N$ the corresponding $L^2$ projector:
$$\Pi_N\colon\ u(\varphi,x)=\sum_{(\ell,j)\in\mathbb{Z}^{|S|}\times\mathbb{Z}^d}u_{\ell,j}\,e^{\mathrm{i}(\ell\cdot\varphi+j\cdot x)}\ \mapsto\ \Pi_N u:=\sum_{|(\ell,j)|\leq N}u_{\ell,j}\,e^{\mathrm{i}(\ell\cdot\varphi+j\cdot x)}. \qquad (5.1.11)$$
The projectors $\Pi_N$ satisfy the usual smoothing estimates in Sobolev spaces: for any $s,\beta\geq0$,
$$\|\Pi_N u\|_{s+\beta}\leq N^\beta\|u\|_s, \qquad \|\Pi_N^\perp u\|_s\leq N^{-\beta}\|u\|_{s+\beta}, \qquad (5.1.12)$$
$$\|\Pi_N u\|_{\mathrm{Lip},s+\beta}\leq N^\beta\|u\|_{\mathrm{Lip},s}, \qquad \|\Pi_N^\perp u\|_{\mathrm{Lip},s}\leq N^{-\beta}\|u\|_{\mathrm{Lip},s+\beta}. \qquad (5.1.13)$$
We shall require that $\omega$ satisfies the following quadratic Diophantine nonresonance condition.

Definition 5.1.4 ($(\mathrm{NR})_{\gamma,\tau}$). Given $\gamma\in(0,1)$, $\tau>0$, a vector $\omega\in\mathbb{R}^{|S|}$ is $(\mathrm{NR})_{\gamma,\tau}$-nonresonant if, for any nonzero polynomial $P(X)\in\mathbb{Z}[X_1,\dots,X_{|S|}]$ of the form
$$P(X)=n+\sum_{1\leq i\leq j\leq|S|}p_{ij}X_iX_j, \qquad n,p_{ij}\in\mathbb{Z}, \qquad (5.1.14)$$
we have
$$|P(\omega)|\geq\frac{\gamma}{\langle p\rangle^{\tau}}, \qquad \langle p\rangle:=\max_{i,j=1,\dots,|S|}\{1,|p_{ij}|\}. \qquad (5.1.15)$$
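For intuition, the following Python sketch (our own illustration; the frequency vector, $\gamma$, $\tau$, and the coefficient range are arbitrary choices) brute-forces the quadratic Diophantine condition (5.1.15) over all polynomials with coefficients bounded by a small constant, minimizing over the free integer $n$ for each choice of the quadratic coefficients.

```python
import itertools
import numpy as np

omega = np.array([np.pi, np.e])                      # a sample vector, |S| = 2
gamma, tau, P_MAX = 1e-4, 4.0, 5

pairs = [(i, j) for i in range(len(omega)) for j in range(len(omega)) if i <= j]
worst = np.inf
for coeffs in itertools.product(range(-P_MAX, P_MAX + 1), repeat=len(pairs)):
    if all(c == 0 for c in coeffs):
        continue                    # then P = n and |P| >= 1 > gamma trivially
    p_norm = max(1, max(abs(c) for c in coeffs))
    quad = sum(c * omega[i] * omega[j] for c, (i, j) in zip(coeffs, pairs))
    n = -round(quad)                # the integer n that makes |P(omega)| smallest
    worst = min(worst, abs(n + quad) * p_norm ** tau / gamma)

print("condition holds on this range:", worst >= 1.0, "worst ratio:", worst)
```

Of course this only tests finitely many polynomials; in the monograph the condition is imposed on $\bar\omega_\varepsilon$ and shown to hold for a large measure set of parameters.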
The main result of this section is the following proposition. Set & WD 1=10 :
(5.1.16)
For simplicity of notation, in the next proposition Lr; also denotes J ! @' C Xr , understanding that Xr D Xr; in (5.1.6) does not depend on . Proposition 5.1.5 (Multiscale). Let !N " 2 RjSj be . 1 ; 1 /-Diophantine and satisfy property .NR/1 ;1 in Definition 5.1.4 with 1 ; 1 defined in (1.2.28). Then there are "0 > 0, 0 > 0, sN1 > s0 , NN 2 N (not depending on Xr; but possibly on the constants C1 , c2 , 0 , 0 , 1 , 1 ) and 00 > 0 (depending only on 0 ) such that the following holds: assume s1 sN1 and take an operator Xr; in C.C1 ; c2 / as in Definition 5.1.2, z Then for any " 2 .0; "0 /, there are N."/ 2 N, closed which is defined for all 2 ƒ. z subsets ƒ."I ; Xr; / ƒ, 2 Œ1=2; 1, satisfying 1. ƒ."I ; Xr; / ƒ."I 0 ; Xr; /, for all 1=2 0 1; 2. the complementary set ƒ."I 1=2; Xr; /c WD ƒ n ƒ."I 1=2; Xr;/ satisfies z . "I jƒ."I 1=2; Xr;/c \ ƒj p 3. if jr 0 rjC;s1 C j0 j ı "2 , then, for .1=2/ C ı 1, ˇ ˇ p ˇ z 0 ˇˇ . ı˛ I ˇƒ."I ; Xr 0 ;0 /c \ ƒ."I ı; Xr; / \ ƒ
(5.1.17)
(5.1.18)
such that, z the operator 1. 8NN N < N."/, 2 ƒ, ŒLr; 2N (5.1.19) N WD ˘N .Lr; /jH2N 1 W HN ! H2N satisfying, for all s s0 , has a right inverse ŒLr; 2N N ˇ ˇ !1 ˇ ˇ ˇ ŒLr; 2N ˇ 0 N ˇ ˇ C.s/N 0C1 N &s C jrjLip;C;s : (5.1.20) ˇ 1 C "2 ˇ ˇ ˇ Lip;s
z \ƒ z 0 , we have Moreover, for all 2 ƒ ˇ 1 ˇ ˇ 2N 1 ˇ 0 ;0 Œ L ˇ ŒLr;2N ˇ r N N s1 2.00 C&s1 /C1 0 .s1 N j jN 2 C jr r 0 jC;s1 where , 0 , r, r 0 are evaluated at fixed . 2. 8N N."/, 2 ƒ."I 1; Xr; /, the operator Lr;;N WD ˘N J ! @' C Xr; ."; / jH ; N
! D .1 C "2 /!N " ;
(5.1.21)
(5.1.22)
is invertible and, for all s s0 , ˇ L ˇ 0 r;;N 1 ˇ ˇ C.s/N 2. C&s1 /C3 N &.ss1 / C jrjLip;C;s : (5.1.23) ˇ ˇ 2 Lip;s 1C" z \ƒ z 0, Moreover for all 2 ƒ ˇ ˇ 1 1 2. 0 C&s1 /C1 ˇ ˇL j 0 jN 2 C jr r 0 jC;s1 r;;N Lr 0 ;0 ;N s1 C.s1 /N (5.1.24) where , 0 , r, r 0 are evaluated at fixed . Remark 5.1.6. The measure of the set ƒ."I 1=2; Xr; /c is smaller than "p , for any p, at the expense of taking a larger constant 0 (see Remark 5.8.17). We have written " in (5.1.17) for definiteness. Remark 5.1.7. Properties 1–3 for the sets ƒ."I ; Xr; / are stable under finite intersection: if .ƒ.1/ ."I ; Xr; //; : : : ; .ƒ.p/ ."I ; Xr; //
\ ƒ.k/ ."I ; Xr; // are families of closed subsets of ƒ satisfying 1–3, then the family . still satisfies these properties. 1kp The proof of Proposition 5.1.5 is based on the multiscale analysis developed in the papers [23,24], and [27] for quasiperiodically forced NLW and Schr¨odinger equations, but it is more complicated in the present autonomous setting. The proof of the multiscale Proposition 5.1.5 is given in Sections 5.2–5.8.
5.2 Matrix representation

We decompose the operator $\mathcal{L}_{r,\mu}$ in (5.1.9), with $X_{r,\mu}$ as in (5.1.7) or (5.1.6) (in the latter case we mean that $\mathcal{J}=0$), as
$$\mathcal{L}_{r,\mu}=D_\omega+T, \qquad D_\omega:=J\,\omega\cdot\partial_\varphi+D_m+\mu\,\mathcal{J}, \qquad T:=D_V-D_m-\mu\,\mathcal{J}\,\Pi_{S\cup F}+r+c\,\Pi_S, \qquad (5.2.1)$$
where $D_V:=\sqrt{-\Delta+V(x)}$ is defined in (3.1.10), $D_m:=\sqrt{-\Delta+m}$ in (4.3.18), and $\omega=(1+\varepsilon^2\lambda)\,\bar\omega_\varepsilon$. By Proposition 4.4.1 the matrix that represents the operator $D_V-D_m$ in the exponential basis has off-diagonal decay.

In what follows we identify a linear operator $A(\varphi)$, $\varphi\in\mathbb{T}^{|S|}$, acting on functions $h(\varphi,x)$, with the infinite-dimensional matrix $(A^{\ell,j}_{\ell',j'})_{(\ell,j),(\ell',j')\in\mathbb{Z}^b}$ of $2\times2$ matrices in case (5.1.1)-(i), respectively $4\times4$ in case (5.1.1)-(ii), defined by the relation (4.2.26). In this way, the operator $\mathcal{L}_{r,\mu}$ in (5.2.1) is represented by the infinite-dimensional Hermitian matrix
$$A(\varepsilon,\lambda):=A(\varepsilon,\lambda;r):=D_\omega+T, \qquad (5.2.2)$$
of 2 2 matrices in case (i), resp. 4 4 matrices in case (ii), where the diagonal part is, in case (i),
hj im i! ` ; D! D Diagi 2Zb i! ` hj im (5.2.3) p b jSj d 2 i WD .`; j / 2 Z WD Z Z ; hj im WD jj j C m : In case (ii), recalling the definition of J in (5.1.8), of the left and right action of J in (5.1.3) and (5.1.4), and choosing the basis of C4 ff1 ; f2 ; f3 ; f4 g WD fe3 e2 ; e1 C e4 ; e1 e4 ; e2 C e3 g ; 1 ; : : : ; 0/ 2 C4 ; ea WD .0; : : : ; „ƒ‚… at h
we have 0
1 hj im i! ` 0 0 0 0 B i! ` hj im C : D! D Diag.`;j /2Zb @ 0 0 hj im C i! ` A 0 0 i! ` hj im C The off-diagonal matrix 0 T WD Tii i 2Zb ;i 0 2Zb ; b D jSj C d;
0
0
(5.2.4)
0
0
Tii WD .DV Dm /jj ŒJ …S[F jj C rii ; (5.2.5)
where Tii are 2 2, resp. 4 4 matrices, satisfies, by (4.4.1), Lemma 4.3.8, property 1 of Definition 5.1.2, jTjC;s1 C.s1 / : (5.2.6) 0
Note that .Tii / D Tii 0 . Moreover, since the operator T D T .'/ in (5.2.1) is a 'dependent family of operators acting on H (as r defined in (5.1.6) and (5.1.7), which 0 0 0 is the only '-dependent operator), the matrix T is Toeplitz in `, namely Tii D T``;j;j depends only on the indices ` `0 ; j; j 0 . We introduce a further index a 2 I where I WD f1; 2g in case (5.1.1)-(i);
I WD f1; 2; 3; 4g in case (5.1.1)-(ii) ;
(5.2.7)
to distinguish the matrix elements of each 2 2, resp. 4 4, matrix 0 0 0 Tii WD Tii;a;a : 0 a;a 2I
Under the unitary change of variable (basis of eigenvectors)
1 1 1 U WD Diag.`;j /2Zb p 2 i i
(5.2.8)
the matrix D! in (5.2.3) becomes completely diagonal
0 hj im ! ` 1 D! WD U D! U D Diag.`;j /2Zb 0 hj im C ! `
(5.2.9)
and, under the unitary transformation 0 1 1 1 0 0 1 B i i 0 0 C U WD Diag.`;j /2Zb p @ A; 2 0 0 1 1 0 0 i i
(5.2.10)
the matrix in (5.2.4) becomes completely diagonal D! WD U 1 D! U D Diag.`;j /2Zb 0 1 0 0 0 hj im ! ` 0 hj im C ! ` 0 0 B C @ A: 0 0 0 hj im C ! ` 0 0 0 hj im C C ! ` (5.2.11) Under the unitary change of variable U in (5.2.8) in case (5.1.1)-(i), (5.2.10) in case (5.1.1)-(ii), the hermitian matrix A in (5.2.2) transforms in the Hermitian matrix A."; / WD A WD U 1 AU D D! C T;
0
0
Tii WD U 1 Tii U ;
(5.2.12)
where the off-diagonal term T satisfies, by (5.2.6), jT jC;s1 C.s1 / :
(5.2.13)
We introduce the one-parameter family of infinite-dimensional matrices
$$A(\varepsilon,\lambda,\theta):=A(\varepsilon,\lambda)+\theta Y:=D_\omega+\theta Y+T, \qquad \theta\in\mathbb{R}, \qquad (5.2.14)$$
where
$$Y:=\mathrm{Diag}_{i\in\mathbb{Z}^b}\begin{pmatrix}-1&0\\ 0&1\end{pmatrix} \quad \text{in case (5.1.1)-(i)}, \qquad Y:=\mathrm{Diag}_{i\in\mathbb{Z}^b}\begin{pmatrix}-1&0&0&0\\ 0&1&0&0\\ 0&0&-1&0\\ 0&0&0&1\end{pmatrix} \quad \text{in case (5.1.1)-(ii)}. \qquad (5.2.15)$$
The reason for adding $\theta Y$ is that translating the time Fourier indices $(\ell,j)\mapsto(\ell+\ell_0,j)$ in $A(\varepsilon,\lambda)$ gives $A(\varepsilon,\lambda,\theta)$ with $\theta=\omega\cdot\ell_0$. Note that the matrix $T$ remains unchanged under translation because it is Toeplitz with respect to $\ell$.
Lr; . / WD J ! @' C iJ C DV C J …? S[F C c…S C r :
(5.2.16)
M ; / WD A."; ; I 0/ the matrix that repreIn Section 5.7 we shall denote by A."; sents L0; . / WD J ! @' C iJ C DV C J …? (5.2.17) S[F C c…S : M ; / is diagonal in ` 2 ZjSj . Note that L0; . / is independent of ', thus A."; The eigenvalues of the 22 matrix D! CY , with D! in (5.2.9) and Y defined in (5.2.15)-case (i), resp. 44 matrix with D! in (5.2.11) and Y defined in (5.2.15)-case (ii), are ( hj im .! ` C / if a D 1 di;a. / D (5.2.18) hj im C .! ` C / if a D 2 ; and, in the second case, 8 ˆ hj i .! ` C / ˆ ˆ m < hj im C .! ` C / di;a . / D ˆ hj im C .! ` C / ˆ ˆ : hj im C C .! ` C /
if a D 1 if a D 2 if a D 3 if a D 4 :
(5.2.19)
The main goal of the following sections is to prove polynomial off-diagonal decay for the inverse of the $|\mathfrak{I}|(2N+1)^b$-dimensional (where $|\mathfrak{I}|=2$, resp. $4$, in case (5.1.1)-(i), resp. (5.1.1)-(ii)) submatrices of $A(\varepsilon,\lambda,\theta)$ centered at $(\ell_0,j_0)$, denoted by
$$A_{N,\ell_0,j_0}(\varepsilon,\lambda,\theta):=A^{|\ell-\ell_0|\leq N,\,|j-j_0|\leq N}(\varepsilon,\lambda,\theta), \qquad (5.2.20)$$
where
$$|\ell|:=\max\{|\ell_1|,\dots,|\ell_{|S|}|\}, \qquad |j|:=\max\{|j_1|,\dots,|j_d|\}. \qquad (5.2.21)$$
If $\ell_0=0$ we use the simpler notation
$$A_{N,j_0}(\varepsilon,\lambda,\theta):=A_{N,0,j_0}(\varepsilon,\lambda,\theta). \qquad (5.2.22)$$
If also $j_0=0$, we simply write
$$A_N(\varepsilon,\lambda,\theta):=A_{N,0}(\varepsilon,\lambda,\theta), \qquad (5.2.23)$$
and, for $\theta=0$, we denote
$$A_{N,j_0}(\varepsilon,\lambda):=A_{N,j_0}(\varepsilon,\lambda,0), \qquad A_N(\varepsilon,\lambda):=A_{N,0}(\varepsilon,\lambda,0). \qquad (5.2.24)$$

Remark 5.2.2. The matrix $A_N(\varepsilon,\lambda)$ in (5.2.24) represents the truncated self-adjoint operator
$$\Pi_N(\mathcal{L}_{r,\mu})_{|H_N}=\Pi_N\big(J\omega\cdot\partial_\varphi+X_{r,\mu}\big)_{|H_N}=\Pi_N\big(J\omega\cdot\partial_\varphi+D_V+c\,\Pi_S+\mu\,\mathcal{J}\,\Pi^\perp_{S\cup F}+r\big)_{|H_N},$$
where $H_N$ is defined in (5.1.10) and $\Pi_N$ in (5.1.11). We have the following crucial covariance property:
$$A_{N,\ell_1,j_1}(\varepsilon,\lambda,\theta)=A_{N,j_1}(\varepsilon,\lambda,\theta+\omega\cdot\ell_1). \qquad (5.2.25)$$
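The covariance property is purely structural, and a scalar toy model already exhibits it. The sketch below (our own illustration, not from the text) takes a matrix whose off-diagonal part is Toeplitz in the time index $\ell$ and whose diagonal depends on $\ell$ only through $\omega\ell+\theta$, and checks that the $N$-submatrix centered at $\ell_1$ coincides with the $N$-submatrix centered at $0$ after the shift $\theta\mapsto\theta+\omega\ell_1$.

```python
import numpy as np

omega, N, L = np.sqrt(2.0), 3, 20
rng = np.random.default_rng(2)
offsets = np.arange(-2 * L, 2 * L + 1)
t_hat = rng.standard_normal(offsets.size) * np.exp(-np.abs(offsets))
ls = np.arange(-L, L + 1)

def A(theta):
    toeplitz = np.array([[t_hat[(l - lp) + 2 * L] for lp in ls] for l in ls])
    return toeplitz + np.diag(omega * ls + theta)

def submatrix(M, center):
    idx = np.arange(center - N, center + N + 1) + L
    return M[np.ix_(idx, idx)]

theta, l1 = 0.3, 5
lhs = submatrix(A(theta), l1)                # analogue of A_{N, l1, j1}(theta)
rhs = submatrix(A(theta + omega * l1), 0)    # analogue of A_{N, j1}(theta + omega*l1)
print(np.max(np.abs(lhs - rhs)))             # zero up to floating-point round-off
```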
5.3 Multiscale step The main result of this section is the multiscale step Proposition 5.3.4, which is a variant of that proved in [24]. The constant & 2 .0; 1/ is fixed and 0 > 0, ‚ 1 are real parameters, on which we shall impose some condition in Proposition 5.3.4. Given ; 0 E Zb I, where I is defined in (5.2.7), we define diam.E/ WD sup jk k 0 j; k;k 0 2E
d.; 0 / WD
inf
k2 ;k 0 2 0
jk k 0 j ;
where, for k D .i; a/, k 0 WD .i 0 ; a0 / 2 Zb I, we set 8 ˆ if i D i 0 ; a ¤ a0 ; 1. k0 For a matrix A 2 ME E we define Diag.A/ WD .ıkk 0 Ak /k;k 0 2E . Proposition 5.3.4 (Multiscale step). Assume
0 > 2 C b C 1;
& 2 .0; 1=2/;
C1 2 ;
and, setting WD 0 C b C s0 , 0 2 b > 3. C .s0 C b/C1 / ; & > C1 ; s1 > 3 C . C b/ C C1 s0 :
(5.3.3)
(5.3.4) (5.3.5) (5.3.6)
For any given ‡ > 0, there exist ‚ WD ‚.‡; s1 / > 0 large enough (appearing in Definition 5.3.2), and N0 .‡; ‚; s1/ 2 N such that: 8N N0 .‡; ‚; s1/, 8E Zb I with diam.E/ 4N 0 D 4N (see (5.3.2)), if A 2 ME E satisfies (H1) (Off-diagonal decay) jA Diag.A/jC;s1 ‡ 1=2 1 1=2 A Dm k0 .N 0 / (H2) (L2 bound) kDm
(H3) (Separation properties) There is a partition of the .A; N /-bad sites B D [˛ ˛ with diam.˛ / N C1 ;
d.˛ ; ˇ / N 2 ; 8˛ ¤ ˇ ;
(5.3.7)
then A is N 0 -good. More precisely ˇ 1=2 1 1=2 ˇ 1 0 0 &s ˇD A Dm ˇs N 0 .N / C jA Diag.A/jC;s ; m 4 (5.3.8) and, for all s s1 , 8s 2 Œs0 ; s1 ;
ˇ 1=2 1 1=2 ˇ 0 0 &s ˇD A Dm ˇs C.s/ N 0 .N / C jA Diag.A/jC;s : m
(5.3.9)
Remark 5.3.5. The main difference with respect to the multiscale Proposition 4.1 in [24] is that, since the Definition 5.3.2 of regular sites is weaker than that in [24], we require the stronger assumption (H1) concerning the off-diagonal decay of A in j jC;s1 norm defined in (4.3.17), while in [24] we only require the off-diagonal decay of A in j js1 norm. Another difference is that we prove (5.3.8) (with the constant 1=4) for s 2 Œs0 ; s1 , and not in a larger interval Œs0 ; S for some S s1 . For larger s s1 we prove (5.3.9) with C.s/.
Proof of Proposition 5.3.4. The multiscale step Proposition 5.3.4 follows by Proposition 4.1 in [24], that we report in the Appendix B.2, see Proposition B.2.4. Set
T WD A Diag.A/;
(H1)
jT jC;s1 ‡ ;
(5.3.10)
and consider the matrix 1=2 1=2 ADm D Diag.AC / C T ; AC WD Dm
(5.3.11)
where 1=2 1=2 Diag.A/Dm ; Diag.AC / WD Dm
1=2 1=2 T WD Dm T Dm :
(5.3.12)
We apply the multiscale Proposition B.2.4 to the matrix AC . By (H2) the matrix AC is invertible and 1 A C
1=2 1 1=2 (H2) D Dm A Dm 0 .N 0 / :
(5.3.11) 0
Moreover, the decay norm jT js1
ˇ 1=2 ˇ 1=2 ˇ D ˇ Dm T Dm s
(5.3.12)
1
(4.3.17)
D jT jC;s1
(5.3.10)
‡:
Finally the .AC ; N /-bad sites according to Definition B.2.3 coincide with the .A; N /bad sites according to Definition 5.3.3 (with a ‚ replaced by c.m/‚). Hence by (H3) the separation properties required to apply Proposition B.2.4 also hold, and we deduce that ˇ ˇ ˇ ˇ 0 ˇ 1 N 0 N 0 &s C ˇAC Diag.AC /ˇ ; 8s 2 Œs0 ; s1 ; ˇA1 C s s 4 which, recalling (5.3.11), (5.3.10), and (5.3.12), implies (5.3.8). The more general estimate (5.3.9) follows by (B.2.8).
5.4 Separation properties of bad sites The aim of this section is to verify the separation properties of the bad sites required in the multiscale step Proposition 5.3.4. Let A WD A."; ; / be the infinite-dimensional matrix defined in (5.2.14). Given N 2 N and i D .`0 ; j0 /, recall that the submatrix AN;i is defined in (5.2.20). Definition 5.4.1 (N -regular/singular site). A site k WD .i; a/ 2 Zb I is: N -regular if AN;i is N -good (Definition 5.3.1). N -singular if AN;i is N -bad (Definition 5.3.1).
We also define the N -good/bad sites of A. Definition 5.4.2 (N -good/bad site). A site k WD .i; a/ 2 Zb I is: N -good if k is regular .Def. 5.3.2/ or all the sites k 0 with d.k 0 ; k/ N are N -regular : (5.4.1) N -bad if k is singular .Def. 5.3.2/ and 9k 0 with d.k 0 ; k/ N; k 0 is N -singular : (5.4.2) Remark 5.4.3. A site k that is N -good according to Definition 5.4.2, is .AE E ; N /good according to Definition 5.3.3, for any set E D E0 I containing k, where E0 Zb is a product of intervals of length N . Let
n o BN .j0 I / WD 2 RW AN;j0 ."; ; / is N -bad :
(5.4.3)
Definition 5.4.4 (N -good/bad parameters). A parameter 2 ƒ is N -good for A if [ Iq ; ˛ WD 3d C2jSj C4C3 0 ; (5.4.4) 8 j0 2 Zd ; BN .j0 I / qD1;:::;N ˛d jSj
where Iq are intervals with measure jIq j N . Otherwise, we say that is N -bad. We define the set of N -good parameters n o z is N -good for A."; / : GN WD 2 ƒW (5.4.5) The main result of this section is Proposition 5.4.5, which enables us to verify the assumption (H3) of Proposition 5.3.4 for the submatrices AN 0 ;j0 ."; ; /. Proposition 5.4.5 (Separation properties of N -bad sites). Let 1 ; 1 ; 2 ; 2 be fixed as in (1.2.28) and (5.5.9), depending on the parameters 0 ; 0 that appear in properties (1.2.6)–(1.2.8) and (1.2.16)–(1.2.19). Then there exist C1 WD C1 .d; jSj; 0 / 2,
? WD ? .d; jSj; 0 /, and NN WD NN .jSj; d; 0 ; 0 ; m; ‚/ such that, if N NN and (i) is N -good for A, (ii) ? , (iii) !N " satisfies (1.2.29) and ! D .1 C "2 /!N " satisfies .NR/2 ;2 (see Definition 5.1.4), then, 8 2 R, the N -bad sites k D .`; j; a/ 2 ZjSj Zd I of A."; ; / admit a partition [˛ ˛ in disjoint clusters satisfying diam.˛ / N C1 .d;jSj;0 / ;
d.˛ ; ˇ / > N 2 ;
8˛ ¤ ˇ :
(5.4.6)
We underline that the estimates (5.4.6) are uniform in . Remark 5.4.6. Hypothesis (ii) in Proposition 5.4.5 just requires that the constant is larger than some ? .d; jSj; 0 /. This is important in the present autonomous setting for the choice of the constants in section 5.5. On the contrary, in the corresponding propositions in [23] and [24], the constant was required to be large with the exponent in (5.3.2). The rest of this section is devoted to the proof of Proposition 5.4.5. In some parts of the proof, we may point out the dependence of some constants on parameters such as 2 ; 1 , which, by (1.2.28) and (5.5.9), amounts to a dependence on 0 . Definition 5.4.7 (-chain). A sequence k0 ; : : : ; kL 2 ZjSj Zd I of distinct integer vectors satisfying jkqC1 kq j ;
8q D 0; : : : ; L 1 ;
for some 2, is called a -chain of length L. We want to prove an upper bound for the length of a -chain of -singular integer vectors, i.e., satisfying (5.4.8) below. In the next lemma we obtain the upper bound (5.4.10) under the assumption (5.4.9). Define the functions of signs 1 ; 2 W I ! f1; C1g 1 .1/ WD 1 .2/ WD 2 .1/ WD 2 .3/ WD 1 ; 1 .3/ WD 1 .4/ WD 2 .2/ WD 2 .4/ WD 1 :
(5.4.7)
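To fix ideas, the following is a minimal illustrative sketch (not from the monograph) of the chain condition in Definition 5.4.7: consecutive sites must be distinct and within a fixed step bound. The tuple encoding and helper names are hypothetical, and all integer components of a site are compared in the sup norm, which is a simplification of the site distance fixed in Section 5.3.

```python
# Illustrative sketch of Definition 5.4.7: a finite sequence of distinct sites
# k_0, ..., k_L whose consecutive differences are bounded (in the sup norm) by a
# fixed step bound is a chain of length L.

def is_chain(sites, step_bound):
    """sites: list of tuples (ell, j, a) with ell, j tuples of ints, a an int label."""
    if len(set(sites)) != len(sites):          # the sites must be distinct
        return False
    def flatten(site):
        ell, j, a = site
        return tuple(ell) + tuple(j) + (a,)
    for k, k_next in zip(sites, sites[1:]):
        step = max(abs(x - y) for x, y in zip(flatten(k), flatten(k_next)))
        if step > step_bound:                   # consecutive sites must be close
            return False
    return True

# Example with |S| = 1, d = 1: a chain of length 2 with step bound 2.
chain = [((0,), (0,), 1), ((1,), (2,), 1), ((2,), (3,), 2)]
assert is_chain(chain, step_bound=2)
```

Lemma 5.4.8 below is a purely combinatorial statement about such sequences of singular sites.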
Lemma 5.4.8 (Length of -chain of -singular sites). Assume that ! satisfies the N m; ‚; 2; 2 / nonresonance condition .NR/2 ;2 (Definition 5.1.4). For .d; jSj d large enough, consider a -chain .`q ; jq ; aq /qD0;:::;L Z Z I of singular sites for the matrix A."; ; /, namely ˇ ˇ 8q D 0; : : : ; L; ˇhjq im hjq im C 1 .aq / C 2 .aq /.! `q C / ˇ < ‚ ; (5.4.8) with 1 ; 2 defined in (5.4.7), such that, 8|Q 2 Zd , the cardinality jf.`q ; jq ; aq /qD0;:::;L W jq D |Qgj K :
(5.4.9)
Then there is C2 WD C2 .d; 2 / > 0 such that its length is bounded by L .K/C2 :
(5.4.10)
Moreover, if ` is fixed (i.e., `q D `, 8q D 0; : : : ; L) the same result holds without assuming that ! satisfies .NR/2 ;2 .
The proof of Lemma 5.4.8 is a variant of Lemma 4.2 in [23]. We split it in several steps. First note that it is sufficient to bound the length of a -chain of singular sites when D 0. Indeed, suppose first that D ! `N for some `N 2 ZjSj . For a chain of -singular sites .`q ; jq ; aq /qD0;:::;L , see (5.4.8), the translated -chain .`q C N jq ; aq /qD0;:::;L , is formed by 0-singular sites, namely `; ˇ ˇ N ˇ < ‚: ˇhjq im hjq im C 1 .aq / C 2 .aq /.! .`q C `// For any 2 R, we consider an approximating sequence ! `Nn ! , `Nn 2 ZjSj . Indeed, by Assumption (1.2.29), ! D .1 C "2 /!N " is not colinear to an integer vector and therefore the set f! `; ` 2 ZjSj g is dense in R. A -chain of -singular sites (see (5.4.8)) is, for n large enough, also a -chain of ! `Nn -singular sites. Then we bound its length arguing as in the above case. We first prove Lemma 5.4.8 in a particular case. Lemma 5.4.9. Assume that ! satisfies .NR/2 ;2 (Definition 5.1.4). Let .`q ; jq ; aq /qD0;:::;L be a -chain of integer vectors of ZjSj Zd I satisfying, 8q D 0; : : : ; L, ˇq ˇ ˇ ˇ ˇ jjq j2 C m C 2 .aq / ! `q ˇ ‚ ; in case (i) of Def. 5.1.2 ; (5.4.11) ˇ ˇ hj i q ˇ ˇq ˇ ˇ ˇ jjq j2 C m C 1 .aq / C 2 .aq / ! `q ˇ ‚ ; in case (ii) of Def. 5.1.2 ; ˇ hj i ˇ q (5.4.12) where 1 ; 2 are defined in (5.4.7). Suppose, in case (5.4.12), that the product of the signs 1 .aq /2 .aq / is the same for any q 2 ŒŒ0; L. Then, for some constant C1 WD C1 .d; 2 / and C WD C.m; ‚; 2; d; 2 /, its length L is bounded by L C.K/C1 ;
(5.4.13)
where K is defined in (5.4.9). Moreover, if ` is fixed (i.e., `q D `, 8q), the lemma holds without assuming that ! satisfies .NR/2 ;2 . Proof. We make the proof when (5.4.12) holds, since (5.4.11) is a particular case of (5.4.12) setting D 0 (note that, in the case (5.4.11), the conclusion of the lemma follows without conditions on the signs 2 .aq /, see Remark 5.4.11). We introduce the quadratic form QW R Rd ! R defined by Q.x; y/ WD x 2 C jyj2
(5.4.14)
and the associated bilinear symmetric form ˚ W .R Rd /2 ! R defined by (5.4.15) ˚ .x; y/; .x 0 ; y 0 / WD xx 0 C y y 0 :
Note that ˚ is the sum of the bilinear forms ˚ D ˚1 C ˚2 ˚1 .x; y/; .x 0 ; y 0 / WD xx 0 ; ˚2 .x; y/; .x 0; y 0 / WD y y 0 : Lemma 5.4.10. For all q; q0 2 ŒŒ0; L, ˇ ˇ ˇ˚ .xq ; jq /; .xq xq ; jq jq / ˇ C 2 jq q0 j2 C C.‚/ ; 0 0 0 0
(5.4.16)
(5.4.17)
where xq WD ! `q . Proof. Set for brevity 1;q WD 1 .aq / and 2;q WD 2 .aq /. First note that by (5.4.12) we have ˇ ˇ ˇjjq j2 C m .1;q C 2;q ! `q /2 ˇ C ‚ : Therefore ˇ ˇ ˇ .! `q /2 C jjq j2 21;q 2;q ! `q ˇ C 0 ;
C 0 WD C ‚ C m C 2 ;
and so, recalling (5.4.14), for all q D 0; : : : ; L, jQ.xq ; jq / 21;q 2;q xq j C 0
where xq WD ! `q :
(5.4.18)
By the hypothesis of Lemma 5.4.9, 1;q 2;q D 1;q0 2;q0 ;
8q; q0 2 ŒŒ0; L ;
and, by bilinearity, we get Q.xq ; jq / 21;q 2;q xq D Q.xq0 ; jq0 / C 2˚ .xq0 ; jq0 /; .xq xq0 ; jq jq0 / C Q.xq xq0 ; jq jq0 / 21;q0 2;q0 xq0 21;q 2;q .xq xq0 / :
(5.4.19)
Recalling (5.4.14) and the Definition 5.4.7 of -chain we have, 8q; q0 2 ŒŒ0; L, ˇ ˇ ˇQ.xq xq ; jq jq / 21;q 2;q .xq xq /ˇ C 2 jq q0 j2 : (5.4.20) 0 0 0 Hence (5.4.19), (5.4.18), and (5.4.20) imply (5.4.17).
Remark 5.4.11. In the case (5.4.11), the conclusion of Lemma 5.4.10 follows without conditions on the signs 2 .aq /. This is why Lemma 5.4.9 holds without condition on the signs 2 .aq /.
Proof of Lemma 5.4.9 continued. We introduce the subspace of Rd C1 o n G WD SpanR .xq xq 0 ; jq jq 0 /W 0 q; q 0 L o n D SpanR .xq xq0 ; jq jq0 /W 0 q L
(5.4.21)
and we call g d C 1 the dimension of G. Introducing a small parameter ı > 0, to be specified later (see (5.4.38)), we distinguish two cases. Case I. 8q0 2 ŒŒ0; L, ˚ SpanR .xq xq0 ; jq jq0 /W jq q0 j Lı ; q 2 ŒŒ0; L D G :
(5.4.22)
We select a basis of G Rd C1 from .xq xq0 ; jq jq0 / with jq q0 j Lı , say fs WD .xqs xq0 ; jqs jq0 / D .! s `; s j /;
s D 1; : : : ; g ;
(5.4.23)
where .s `; s j / WD .`qs `q0 ; jqs jq0 / satisfies, by the Definition 5.4.7 of chain, j.s `; s j /j C jqs q0 j C Lı : (5.4.24) Hence
jfs j C Lı ;
8s D 1; : : : ; g :
(5.4.25)
Then, in order to derive from (5.4.17) a bound on .xq0 ; jq0 / or its projection onto G, we need a nondegeneracy property for QjG . The following lemma states it. Lemma 5.4.12. Assume that ! satisfies .NR/2 ;2 . Then the matrix 0 g WD ss s;s 0 D1 ; is invertible and
0 ss WD ˚ fs 0 ; fs ;
ˇ ˇ C3 .d;2 / ˇ 1 s 0 ˇ ; ˇ s ˇ C Lı
8s; s 0 D 1; : : : ; g ;
(5.4.26)
(5.4.27)
where the multiplicative constant C depends on 2 . Proof. According to the splitting (5.4.16) we write as WD ˚1 .fs 0 ; fs / C ˚2 .fs 0 ; fs / s;s 0 D1;:::;g D S C R
(5.4.28)
where, by (5.4.23), 0
Sss WD ˚1 .fs 0 ; fs / D .! s 0 `/.! s `/ ; 0
Rss WD ˚2 .fs 0 ; fs / D s 0 j s j :
(5.4.29)
The matrix R D .R1 ; : : : ; Rg / has integer entries (the Ri 2 Zg denote the columns). The matrix S WD .S1 ; : : : ; Sg / has rank 1 since all its columns Ss 2 Rg are colinear: Ss D .! s `/.! 1 `; : : : ; ! g `/> ;
s D 1; : : : ; g :
(5.4.30)
We develop the determinant (5.4.28)
P .!/ WD det D det.S C R/ D det.R/ det.S1 ; R2 ; ; Rg / det.R1 ; : : : ; Rg1 ; Sg / X .1/gCs Ss .R1 ^ ^ Rs1 ^ RsC1 ^ ^ Rg / D det.R/ 1sg
(5.4.31) using that the determinant of matrices with two columns Si , Sj , i ¤ j , is zero. By (5.4.30), the expression in (5.4.31) is a polynomial in ! of degree 2 of the form (5.1.14) with coefficients j.n; p/j
(5.4.29);(5.4.24)
C.Lı /2s C.Lı /2.d C1/ :
(5.4.32)
If P ¤ 0 then the nonresonance condition .NR/2 ;2 implies jdet j D jP .!/j
(5.1.15)
2 hpi2
(5.4.32)
2 ı C.L /22.d C1/
:
(5.4.33)
In order to conclude the proof of the lemma, we have to show that P .!/ is not identically zero in !. We have that P .i!/ D det ˚1 .fs 0 ; fs / C ˚2 .fs 0 ; fs / s;s 0 D1;:::g D det.fs 0 fs /s;s 0 D1;:::;g > 0 because .fs /1sg is a basis of G. Thus P is not the zero polynomial. By (5.4.33), the Cramer rule, and (5.4.25) we deduce (5.4.27).
Remark 5.4.13. As recently proved in [31] the same result also holds assuming just that ! is Diophantine, instead of the quadratic nonresonance condition .NR/2 ;2 . Proof of Lemma 5.4.9 continued. We introduce n o G ?˚ WD z 2 Rd C1 W ˚.z; f / D 0; 8f 2 G : Since is invertible (Lemma 5.4.12), ˚jG is nondegenerate, hence Rd C1 D G ˚ G ?˚ and we denote by PG W Rd C1 ! G the corresponding projector onto G.
We are going to estimate g X as 0 fs 0 : PG xq0 ; jq0 D
(5.4.34)
s 0 D1
For all s D 1; : : : ; g, and since fs 2 G, we have g (5.4.34) X as 0 ˚ fs 0 ; fs ˚ xq0 ; jq0 ; fs D ˚ PG xq0 ; jq0 ; fs D s 0 D1
that we write as the linear system 0
a D b;
1
a1 B :: C a WD @ : A ; ag
0 1 ˚ .xq0 ; jq0 /; f1 C B C B :: b WD B C : A @ ˚ .xq0 ; jq0 /; fg
(5.4.35)
with defined in (5.4.26). Lemma 5.4.14. For all q0 2 ŒŒ0; L we have ˇ ˇ ˇPG .xq ; jq /ˇ C Lı C4 .d;2 / ; 0 0
(5.4.36)
where C depends on 2 ; ‚. Proof. We have (5.4.35)
jbj
.
ˇ ˇ (5.4.23);(5.4.17) ˇ ˇ max ˇ˚ .xq0 ; jq0 /; fs ˇ C.‚/ 2 max jqs q0 j2
1sg
1sg
C.‚/.Lı /2 ;
recalling that, by (5.4.22), the indices .qs /1sg were selected such that jqs q0 j Lı . Hence, by (5.4.35) and (5.4.27), jaj D j1 bj C. 2 ; ‚/.Lı /C
(5.4.37)
for some constant C WD C.d; 2 /. We deduce (5.4.36) by (5.4.34), (5.4.37) and (5.4.25). We now complete the proof of Lemma 5.4.9 when case I holds. As a consequence of Lemma 5.4.14, for all q1 ; q2 2 ŒŒ0; L, we have ˇ ˇ ˇ ˇ ˇ xq ; jq xq ; jq ˇ D ˇPG .xq ; jq / .xq ; jq / ˇ C Lı C4 .d;2 / ; 1 1 2 2 1 1 2 2
where C depends on 2 ; ‚. Therefore, for all q1 ; q2 2 ŒŒ0; L, we have jjq1 jq2 j C.Lı /C4 .d;2 / , and so ˚ C .d; / diam jq W 0 q L C Lı 4 2 : Since all the jq are in Zd , their number (counted without multiplicity) does not exceed C.Lı /C4 .d;2 /d , for some other constant C that depends on 2 ; 2 ; d . Thus we have obtained the bound C .d; /d ˚ ] jq W 0 q L C Lı 4 2 : By assumption (5.4.9), for each q0 2 ŒŒ0; L, the number of q 2 ŒŒ0; L such that jq D jq0 is at most K, and so C .d; / L C Lı 5 2 K;
C5 .d; 2 / WD C4 .d; 2 /d :
Choosing ı > 0 such that ıC5 .d; 2 / < 1=2 ;
(5.4.38)
we get L C. C5 .d;2 / K/2 , for some multiplicative constant C that may depend on 2 ; ‚; d . This proves (5.4.13). Case II. There is q0 2 ŒŒ0; L such that ˚ dim SpanR .xq xq0 ; jq jq0 /W jq q0 j Lı ; q 2 ŒŒ0; L g 1 ; namely all the vectors .xq ; jq / stay in an affine subspace of dimension less than g 1. Then we repeat on the subchain .`q ; jq /, jq q0 j Lı , the argument of case I, to obtain a bound for Lı (and hence for L). Applying at most .d C 1/ times the above procedure, we obtain a bound for L of the form L C.K/C.d;2/ . This concludes the proof of (5.4.13) of Lemma 5.4.9. To prove the last statement of Lemma 5.4.9, note that, if ` is fixed, then (5.4.17) reduces just to jjq .jq jq0 /j C 2 jq q0 j2 and the conclusion of the lemma follows as above (see also Lemma 5.2 in [24]; actually, it is the same argument for the NLS equation in [39]). Proof of Lemma 5.4.8. Consider a general -chain .kq /qD0;:::;L D .`q ; jq ; aq /qD0;:::;L of singular sites satisfying (5.4.12). To fix ideas assume 1 .a0 /2 .a0 / > 0. Then the integer vectors kq along the chain with the same sign 1 .aq /2 .aq / > 0, say kqm , satisfy qmC1 qm C.K/C1 ; by Lemma 5.4.9 applied to each subchain of consecutive indices with 1 .aq /2 .aq / < 0. Hence we deduce that all such kqm form a 0 WD C .K/C1 -chain and all of them
have the same sign 1 .aqm /2 .aqm / > 0. It follows, again by Lemma 5.4.9, that their length is bounded by C1 C. 0 K/C1 D C C .K/C1 K D C 0 .K/C1.C1 C1/ : Hence the length of the original -chain .kq /qD0;:::;L D .`q ; jq ; aq /qD0;:::;L satisfies L C.K/C1 C 0 .K/C1.C1 C1/ .K/C2 ; where C2 WD C2 .d; 2 / D C1 .C1 C2/C1, and provided is large enough, depending on m; ‚; 2 ; 2 ; d . This proves (5.4.10). Finally, the last statement of Lemma 5.4.8 follows as well by the last statement of Lemma 5.4.9. We fix
? WD 1 .2 C C2 .3 C ˛// C 1 ;
(5.4.39)
where 1 is the Diophantine exponent of !N " in (1.2.29), C2 is the constant defined in Lemma 5.4.8, and ˛ is defined in (5.4.4). The next lemma proves an upper bound for the length of a chain of N -bad sites. Lemma 5.4.15 (Length of -chain of N -bad sites). Assume (i)–(iii) of Proposition 5.4.5 with ? defined in (5.4.39). Then, for N large enough (depending on m; ‚; 2 ; 2 ; 1 ; 1 ), any N 2 -chain .`q ; jq ; aq /qD0;:::;L of N -bad sites of A."; ; / (see Definition 5.4.2) has length L N .3C˛/C2 ;
(5.4.40)
where C2 is defined in Lemma 5.4.8 and ˛ in (5.4.4). Proof. Arguing by contradiction we assume that L > l WD N .3C˛/C2 :
(5.4.41)
We consider the subchain .`q ; jq ; aq /qD0;:::;l of N -bad sites, which, recalling (5.4.2), are singular sites. Then, for N large enough, the assumption (5.4.9) of Lemma 5.4.8 with D N 2 , and L replaced by l, can not hold with K < N 1C˛ , otherwise (5.4.10) would imply l < .N 2 N 1C˛ /C2 D N .3C˛/C2 , contradicting (5.4.41). As a consequence, there exists |Q 2 Zd , and distinct indices qi 2 ŒŒ0; l, i D 0; : : : ; M WD ŒN ˛C1 =4, such that ˛C1 N : (5.4.42) 0 qi l and jqi D |Q; 8i D 0; : : : ; M WD 4 Since the sites .`qi ; |Q; aqi /i D0;:::;M belong to an N 2 -chain of length l, the diameter of the set E WD f`qi ; aqi gi D0;:::;M ZjSj I satisfies diam.E/ N 2 l :
Moreover, each site .`qi ; |Q; aqi / is N -bad and therefore, recalling (5.4.2), it is in an N neighborhood of some N -singular site. Let K be the number of N -singular sites .`; j; a/ such that jj |Qj N and d.E; .`; a// N . Then the cardinality jEj CN jSj K. Moreover there is |Q 0 2 Zd with j|Q0 |Qj N such that there are at least K=.CN d / N -singular sites .`; |Q 0 ; a/ with distance d.E; .`; a// N . Let n o E 0 WD .`0 ; a0 / 2 ZjSj IW .`0 ; |Q0 ; a0 / is N singular and d.E; .`0 ; a0 // N : By what precedes the cardinality jEj N ˛C1d jSj > N ˛d jSj ; d CjSj C CN and diam.E 0 / N 2 l C 2N :
jE 0 j
(5.4.43)
Recalling Definition 5.4.1 and the covariance property (5.2.25), 8.`0 ; a0 / 2 E 0 ;
AN;`0 ;|Q0 ."; / D AN;|Q0 ."; ; ! `0 /
is N -bad ;
and, recalling (5.4.3), 8.`0 ; a0 / 2 E 0 ;
! `0 2 BN .|Q0 I / :
(5.4.44)
Since is N -good (Definition 5.4.4), (5.4.4) holds and therefore [ Iq where Iq are intervals with measure jIq j N : BN .|Q0 I / qD0;:::;N ˛d jSj
(5.4.45)
By (5.4.44) and (5.4.45) and since, by (5.4.43), the cardinality jE 0 j N ˛d jSj , there are two distinct integer vectors `1 , `2 2 E 0 such that ! `1 ; ! `2 belong to the same interval Iq . Therefore j! .`1 `2 /j D j! `1 ! `2 j jIq j N :
(5.4.46)
Moreover, since by (1.2.29) the frequency vectors ! D .1 C "2 /!N " , 8 2 ƒ are Diophantine, namely 1 ; 8` 2 ZjSj n f0g ; j! `j 2j`j1 we also deduce j! .`1 `2 /j
1 j`1 `2 j1
(5.4.43)
(5.4.41)
1 .diam.E 0 //1 1 .N 2 l C N /1 1 : .2N 2C.3C˛/C2 /1
(5.4.47)
The conditions (5.4.46) and (5.4.47) contradict, for N large enough, the assumption that ? where ? is defined in (5.4.39).
Proof of Proposition 5.4.5 completed. We introduce the following equivalence relation in the set n o SN WD k D .`; j; a/ 2 ZjSj Zd IW k is N -bad for A."; ; / : Definition 5.4.16. We say that x y if there is an N 2 -chain fkq gqD0;:::;L in SN connecting x to y, namely k0 D x, kL D y. This equivalence relation induces a partition of the N -bad sites of A."; ; /, in disjoint equivalent classes [˛ ˛ , satisfying, by Lemma 5.4.15, d.˛ ; ˇ / > N 2 ;
diam.˛ / N 2 N .3C˛/C2 D N C1
(5.4.48)
with C1 WD C1 .d; jSj; 0 / WD 2 C .3 C ˛/C2 . This proves (5.4.6).
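The equivalence classes just used can be computed concretely: the clusters are the connected components of the graph on the $N$-bad sites whose edges join sites at distance at most $N^2$. Below is a minimal illustrative sketch (not from the monograph); it treats sites as plain integer tuples compared in the sup norm, in place of the site distance of Section 5.3, and the function name is hypothetical.

```python
# Illustrative sketch: partition a finite set of bad sites into the equivalence
# classes of Definition 5.4.16, i.e. the connected components of the graph whose
# edges join sites at (sup-norm) distance <= N**2.

from collections import deque

def cluster_bad_sites(bad_sites, N):
    bad_sites = list(bad_sites)
    threshold = N ** 2
    def dist(u, v):
        return max(abs(x - y) for x, y in zip(u, v))
    unvisited = set(range(len(bad_sites)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        component, queue = [seed], deque([seed])
        while queue:                             # breadth-first search on the graph
            q = queue.popleft()
            neighbours = [r for r in unvisited
                          if dist(bad_sites[q], bad_sites[r]) <= threshold]
            for r in neighbours:
                unvisited.remove(r)
                component.append(r)
                queue.append(r)
        clusters.append([bad_sites[i] for i in component])
    return clusters

# Two well-separated groups of sites give two clusters when N = 2 (threshold 4).
sites = [(0, 0), (1, 3), (40, 40), (42, 41)]
assert len(cluster_bad_sites(sites, N=2)) == 2
```

By construction distinct components are more than $N^2$ apart; it is Lemma 5.4.15 that bounds the diameter of each component.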
5.5 Definition of the sets ƒ."I ; Xr;/ z appearing in the statement of Proposition In order to define the sets ƒ."I ; Xr; / ƒ 5.1.5, we first fix the values of some constants. 1. Choice of . First we fix satisfying ˚
> max ? ; 9d C 8jSj C 5s0 C 5 ;
(5.5.1)
where the constant ? is defined in (5.4.39). Thus (5.5.1) implies hypothesis (ii) of Proposition 5.4.5 about the separation properties of the bad sites. The second condition on arises in the measure estimates of Section 5.8 (see Lemma 5.8.16). Moreover the condition (5.5.1) on is also used in the proof of Proposition 5.6.1, see (5.6.47). 2. Choice of . N Then we choose a constant N such that & N > C1 ;
N > C s0 C d ;
(5.5.2)
where & WD 1=10 is fixed as in (5.1.16) and the constant C1 2 is defined in Proposition 5.4.5. The constant N is the exponent that enters in the definition in N (5.5.11) of the scales NkC1 D ŒNk along the multiscale analysis. Note that the first inequality in (5.5.2) is condition (5.3.5) in the multiscale step Proposition 5.3.4. The second condition on N arises in the measure estimates of Section 5.8 (see Lemma 5.8.13).
3. Choice of 0 . Subsequently we choose 0 large enough so that the inequalities (5.3.3) and (5.3.4) hold for all 2 Œ; N N 2 . This is used in the multiscale argument in the proof of Proposition 5.7.6. We also take
0 > Q 0 C .jSj=2/ ;
(5.5.3)
where Q 0 > is the constant provided by Lemma 5.6.5 associated to Q D . 4. Choice of s1 . Finally we choose the Sobolev index s1 large enough so that (5.3.6) holds for all 2 Œ; N N 2 . 2 We define the set of L -.N; /-good/bad parameters. Definition 5.5.1 (L2 -.N; /-good/bad parameters). Given N 2 N, 2 .0; 1, let n o 1=2 1 0 1=2 BN .j0 I ; / WD 2 RW Dm AN;j0 ."; ; /Dm > N (5.5.4) 0 n 1=2 1=2 AN;j0 ."; ; /Dm D 2 RW 9 an eigenvalue of Dm o with modulus less than 1 N ; where k k0 is the operatorial L2 -norm, and define the set of L2 -.N; /-good parameters ( [ 0 0 z 8j0 2 Zd ; BN GN; WD 2 ƒW .j0 I ; / Iq ; (5.5.5) qD1;:::;N 2d CjSjC4C30
)
where Iq are intervals with measure jIq j N : Otherwise we say that is L2 -.N; /-bad. Given N 2 N, 2 .0; 1, we also define o n 1=2 1 1=2 z Dm : G0N; WD 2 ƒW AN ."; /Dm N 0
(5.5.6)
0 Note that the sets GN; , G0N; are increasing in , namely
< 0
H)
0 0 GN; GN; 0;
G0N; G0N; 0 :
We also define the set ˚ GQ WD 2 ƒW ! D 1 C "2 !N " satisfies .NR/2 ;2
(5.5.7)
(5.5.8)
(recall Definition 5.1.4) with 2 WD
1 0 D ; 2 4
and 1 ; 1 defined in (1.2.28).
2 WD
jSj.jSj 1/ C 2. 1 C 2/ 2
(5.5.9)
Fix N0 WD N0 ."/ such that 1 "2 N0 Cs0 Cd 2 and define the increasing sequence of scales h ki N Nk D N0 ;
k 0:
(5.5.10)
(5.5.11)
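For orientation, the scales in (5.5.11) grow superexponentially: each scale is (the integer part of) the previous one raised to the fixed exponent chosen in (5.5.2). The following minimal sketch is illustrative only; the exponent is written generically as chi_bar and the numerical values are placeholders, not the ones fixed in this section.

```python
# Illustrative sketch of the scale sequence N_{k+1} = [N_k ** chi_bar] from (5.5.11);
# N0 and chi_bar below are placeholder values, not those fixed in Section 5.5.

def scales(N0, chi_bar, steps):
    Ns = [N0]
    for _ in range(steps):
        Ns.append(int(Ns[-1] ** chi_bar))   # next scale: integer part of N_k^chi_bar
    return Ns

# Superexponential growth: [10, 100, 10000, 100000000, 10000000000000000]
print(scales(N0=10, chi_bar=2, steps=4))
```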
Remark 5.5.2. Condition (5.5.10) is used in Lemma 5.7.2, see (5.7.9), and in the proof of Proposition 5.6.1, see (5.6.47). The first inequality "2 N0 Cs0 Cd in (5.5.10) is also used in Lemma 5.8.6. Finally we define, for 2 Œ1=2; 1, the sets \ \ \ 0 GN G0N; GQ ; ƒ."I ; Xr; / WD k ; k1
(5.5.12)
N N02
0 where GN; is defined in (5.5.5), the set G0N; is defined in (5.5.6), and GQ in (5.5.8). z appearing in the statement of Proposition 5.1.5. These are the sets ƒ."I ; Xr; / ƒ By (5.5.7) these sets clearly satisfy the property 1 listed in Proposition 5.1.5. We shall prove the measure properties 2 and 3 in Section 5.8.
Remark 5.5.3. The second intersection in (5.5.12) is restricted to the indices N N02 for definiteness: we could have set N N0˛. / for some exponent ˛. /, which increases linearly with . Indeed, the right invertibility properties of ˘N ŒLr;jH2N at the scales N N02 are deduced in Section 5.6 by the unperturbed Melnikov nonresonance conditions (1.2.7), (1.2.16), (1.2.17), (5.1.5), and a perturbative argument ˛. / that holds for N N0 with ˛. / linear in , see (5.6.47).
5.6 Right inverse of $[\mathcal{L}_{r,\mu}]^{2N}_N$ for $\bar N \le N < N_0^2$
The goal of this section is to prove the following proposition, which implies item 1 of Proposition 5.1.5 with N."/ WD N02 and N0 D N0 ."/ satisfying (5.5.10). Proposition 5.6.1. There are NN and "0 > 0 such that, for all " 2 .0; "0 /, for all NN N < N02 ."/, the operator ŒLr;2N N D ˘N ŒLr; jH2N ; z has a right inverse ŒLr;2N 1 W HN ! H2N which is defined for all 2 ƒ, N satisfying, for all s s0 , ˇ !1 ˇˇ ˇ ˇ ˇ ŒLr; 2N 0 N ˇ ˇ C.s/N 0 C1 N &s C jrjLip;C;s ; (5.6.1) ˇ ˇ 1 C "2 ˇ ˇ Lip;s
where 00 is a constant depending only on 0 . Moreover (5.1.21) holds.
The proof of Proposition 5.6.1 is given in the rest of this section. We decompose the operator Lr; in (5.1.9), with Xr; defined in (5.1.6) and (5.1.7), as Lr; D Ld C ."; ; '/ (5.6.2) with
Ld WD J N @' C DV C k J …? S[F C c…S
."; ; '/ WD J.! / N @' C .
k /J …? S[F
C r."; ; '/ :
(5.6.3) (5.6.4)
Note that the operator Ld is independent of ."; / and ', and is small in " since ! D 1 C "2 !N " D N C "2 C N C "2 D N C O "2 ; j!jlip D O "2 ; (5.6.5) where N is the vector defined in (1.2.3) (see (1.2.25)), D k C O."2 /
for some k 2 F;
jjlip D O."2 / ;
(5.6.6)
by item 2 of Definition 5.1.2, and jrjLip;C;s1 D O."2 / by item 1 of Definition 5.1.2. In order to prove Proposition 5.6.1 we shall first find a right inverse of ŒLd 2N N , thanks to the nonresonance conditions (1.2.7), (1.2.16), (1.2.17) and (5.1.5), and we shall prove that it has off-diagonal decay estimates, see (5.6.43). Then we shall deduce the existence of a right inverse for ŒLr; 2N N by a perturbative Neumann series argument. Note that the space ei`' H, where H is defined in (5.1.1), is invariant under Ld and 1=2 1=2 i`' Dm L d Dm (5.6.7) e h D ei`' .M` h/; 8h D h.x/ 2 H ; where M` is the operator, acting on functions h.x/ 2 H of the space variable x only, defined by 1=2 1=2 M` WD Dm (5.6.8) iN ` J C DV C k J …? S[F C c…S Dm : Lemma 5.6.2 (Invertibility of M` ). For all ` 2 ZjSj , j`j < N02 , the self-adjoint operator M` of H is invertible and 1 M . 1 h`i0 : (5.6.9) 0 ` 0 1=2 1=2 M`0 Dm where Proof. We write M` D Dm
M`0 WD iN `J C DV C k J …? S[F C c…S ;
(5.6.10)
and we recall that, in case (i) of (5.1.1), M` acts on H , and we have set J D 0 (see (5.1.6)), while in case (ii), M` acts on H H and J .h1 ; h2 ; h3 ; h4 / D .h4 ; h3 ; h2 ; h1 /, by (5.1.8), (5.1.3), and (5.1.4).
In case (i) of (5.1.1), M`0 is represented, in the Hilbert basis ..‰j ; 0/; .0; ‰j //j 2N 0 of H D H , by the block-diagonal matrix Diagj 2N M`;j where 0 M`;j
!
j
j C cıS WD iN `
iN ` j C cıSj
( j ıS
and
WD
1 0
if j 2 S if j … S :
0 are The eigenvalues of M`;j
(
˙N ` C j C c if j 2 S if j … S : ˙N ` C j
In case (ii) of (5.1.1), M`0 is represented, in the Hilbert basis
‰j ‰j ‰j ‰j ‰j ‰j ‰j ‰j 0; p ; p ; 0 ; p ; 0; 0; p ; p ; 0; 0; p ; 0; p ; p ; 0 2 2 2 2 2 2 2 2 j 2N 0 of H D H H , by the block-diagonal matrix Diagj 2N M`;j , where 0 WD M`;j 0
B B @
j j C cıSj k ı.S[F/ c
i N `
i N ` j
j
j C cıS k ı.S[F/c
0
0
0
0
j C cıSj
0
0
0
0
j C k ı.S[F/ c
i N `
1 C C A
i N ` j
j
j C cıS C k ı.S[F/c
0 and, in this case, the eigenvalues of M`;j are
8 ˆ s0 C 0 C 2 in (5.6.20), we deduce by (5.6.21) that for N NN large enough, 8h 2 HN , kM hk0 kM` hk0 M .M` /jH h `;N
`;N
0
0 cN khk0 C.s/N & 0 N 0 khk0
0
N
0 2
khk0
and therefore 2 h; h 0 D M`;N h0 & 02 N 20 khk20 : M`;N M`;N is invertible and (5.6.17) Since HN is of finite dimension, we conclude that M`;N M`;N holds (we do not track anymore the dependence with respect to the constant 0 ). 1 defined in (5.6.18) is a right inverse of M`;N . Moreover, for It is clear that M`;N all h 2 HN , 1 2 M h D M`;N M M`;N M 1 h; M`;N M 1 h `;N `;N `;N `;N 0 0 1 D h; M`;N M`;N h 0 1 M`;N M`;N khk20 0
1 and (5.6.17) implies that kM`;N k0 . N 0 . The second inequality of (5.6.19) is an obvious consequence of the latter.
Our aim is now to obtain upper bounds of the j js -norms, for s s0 , of the right 1 inverse operator M`;N defined in (5.6.18). We write M`;N D …N .M` /jH2N D …N M` …N C …N M` …? N jH D DN C R1 C R2 ;
2N
(5.6.22)
where, recalling (5.6.14),
DN WD …N DjHN ;
1=2 1=2 D WD Dm ; iN `J C Dm C k J Dm
(5.6.23)
1=2 N 1=2 R1 WD …N .Dm RDm …N /jH2N 1=2 N 1=2 1=2 N 1=2 ? R2 WD …N .Dm RDm /jH? \H2N D …N .Dm RDm …N /jH2N : N
By (5.6.16) we have jR1 js C.s/;
jR2 js C.s/;
8s s0 :
(5.6.24)
We decompose accordingly the adjoint operator as M`;N D …2N .M` /jHN D …N .M` /jHN C …? N …2N .M` /jHN
D DN C R1 C R2 : satisfies the estimate By (5.6.23), (4.3.13), and (5.6.24), M`;N ˇ ˇ ˇ ˇ ˇ ˇ ˇ ˇ ˇM ˇ ˇDN ˇ C ˇR ˇ C ˇR ˇ CN 2 C C.s/ .s N 2 : 1 s 2 s `;N s s
(5.6.25)
(5.6.26)
1 defined in (5.6.18), satisfies, by (4.3.6) and (5.6.26), Thus the right inverse M`;N
ˇ ˇ ˇ ˇ ˇ ˇ ˇ 1 ˇ ˇ ˇ ˇM ˇ .s ˇM ˇ ˇ M`;N M 1 ˇ C ˇM ˇ ˇ M`;N M 1 ˇ `;N s `;N s0 `;N `;N s `;N s s0 ˇ 1 ˇ 2ˇ ˇ : .s N M`;N M`;N (5.6.27) s /1 js by a multiscale argument. In Lemma 5.6.6 below we shall bound j.M`;N M`;N Without any loss of generality, we consider the case (ii) of (5.1.1). We first give a general result that is a reformulation of the multiscale step Proposition 4.1 of [24] with stronger assumptions. Here I WD f1; 2; 3; 4g.
Lemma 5.6.5. Given & 2 .0; 1=2/, C1 2, Q > 0, there are Q 0 > Q (depending only on Q and d ), s > s0 , > 0, NQ 1 with the following property. For any N NQ , z N , for any A D D C R 2 MEz with D for any finite Ez Zd I with diam.E/ z E diagonal, assume that (i) (L2 -bound) kA1 k0 N Q , (ii) (Off-diagonal decay) jRjs , (iii) (Separation properties of the singular sites) There is a partition of the sinz jDii j < 1=4g [˛ ˛ with gular sites WD fi 2 EW diam.˛ / N C1 &=.C1 C1/ ; Then
d.˛ ; ˇ / N 2&=.C1 C1/ ;
ˇ 1 ˇ ˇA ˇ C.s/N Q 0 N &s C jRjs ; s
8s s0 :
8˛ ¤ ˇ :
(5.6.28)
As usual, what is interesting in bound (5.6.28) is its “tamed” dependence with respect to N (& < 1); the constant C.s/ may grow very strongly with s, but it does not depend on N . Lemma 5.6.6. There exist t0 , depending only on 0 , and NN such that for all NN N < N02 , j`j N , we have that, 8s s0 , ˇ 1 ˇˇ ˇ (5.6.29) ˇ C.s/N t0C&s : ˇ M`;N M`;N s
Proof. We identify the operator DN in (5.6.23) with the diagonal matrix Diagj 2ŒN;N d Djj ; where each Djj is in L.C4 / (we are in case (ii) of (5.1.1)). Using the unitary basis j
of C4 defined in (5.2.10), we identify each Dj with the 4 4 diagonal matrix (see (5.2.11)) .j;a/ Djj D Diag D.j;a/ ; a2f1;2;3;4g .j;a/ D.j;a/ WD hj im hj im C 1 .a/k C 2 .a/ N ` (5.6.30) with signs 1 .a/; 2.a/ defined as in (5.4.7). N D k , D 0 Note that the singular sites of M` are those in (5.4.8) with ! D , and ` fixed. N By Lemma 5.4.8, which we can apply with K D 4 (` being fixed), for .‚/, C2 every -chain of singular sites for M` is of length smaller than .4/ . Let us take C1 D 2C2 C 3;
D N 2=
with
D & 1 .C1 C 1/ :
(5.6.31)
As a consequence, arguing as at the end of Section 5.4, we deduce that for N large enough (depending on ‚), the set of the singular sites for M`;N can be partitioned as D [˛ ˛ ; with diam.˛ / .4/C2 N C1 = ;
d.˛ ; ˇ / > N 2= :
(5.6.32)
We now multiply M`;N M`;N
(5.6.22);(5.6.25)
D
2 DN C DN R1 C R1 DN C R1 R1 C R2 R2
(5.6.33)
(notice that DN R2 , R2 DN , R1 R2 , R2 R1 are zero) on both sides by the diagonal matrix 1 1 d‚;N WD jDN j C ‚IdN where (5.6.34) j;a jDN j WD Diagjj jN;a2I jDj;a j jj jN ; IdN WD …N IdjHN : We write
1 1 z N C %‚;N P`;N WD d‚;N M`;N M`;N d‚;N DD
(5.6.35)
where, by (5.6.33), z N WD d 1 D 2 d 1 ; D ‚;N N ‚;N 1 1 DN R1 C R1 DN C R1 R1 C R2 R2 d‚;N : %‚;N WD d‚;N
(5.6.36)
We apply the multiscale Lemma 5.6.5 to P`;N with C1 defined in (5.6.31) and Q WD 2 0 C 5. Let us verify its assumptions. The operator P`;N defined in (5.6.35) is (Lemma 5.6.4) and, by the definition of d‚;N in (5.6.34) invertible as M`;N M`;N 2 and (5.6.30), for N0 > N NN large enough (depending on ‚), for all j`j N , and using (5.6.17), we obtain 1 1 k0 D d‚;N M`;N M`;N d‚;N kP`;N 0 1 4 . N M`;N M`;N 0
N 20C5 D N Q
(5.6.37)
by the definition of Q WD 2 0 C 5. Thus Assumption (i) of Lemma 5.6.5 holds. Then we estimate the j js -decay norm of the operator %‚;N in (5.6.36). By 1 1 (4.3.20), (5.6.24), and jd‚;N js ‚1 , jd‚;N DN js 1, which directly follow by the definition (5.6.34), we get, for all s s0 , ˇ ˇ 1 ˇ ˇ 1 ˇ ˇ ˇ C ˇd 1 ˇ2 jR1 js jR1 js C jR2 js jR2 js DN ˇs jR1 js ˇd‚;N j%‚;N js .s ˇd‚;N 0 0 ‚;N s s .s ‚1 :
(5.6.38)
In particular, provided that ‚ has been chosen large enough (depending on &; C1 ; 0 ), Assumption ii) of Lemma 5.6.5 is satisfied. z N in To check Assumption iii) it is enough to note that, by the definition of D 1 d (5.6.36), and of d‚;N in (5.6.34), for all i 2 ŒN; N I, we have i 2 z i D jDi j ; D i 2 jDii j C ‚
ˇ ˇ ˇ iˇ z ˇ 1=4 : and ˇDii ˇ ‚ ” ˇD i
As a consequence, the separation properties for the singular sites of M`;N proved in (5.6.32) (with D & 1 .C1 C 1/), imply that Assumption iii) of Lemma 5.6.5 is satisfied, provided that N is large enough (depending only on ‚). Lemma 5.6.5 implies that there is t0 , depending only on 0 , such that (see (5.6.28)) ˇ 1 ˇ ˇP ˇ .s N t0 N &s C j%‚;N js ; `;N s 1 which gives, using (5.6.35) and (4.3.20), jd‚;N js 1, ˇ ˇ ˇ ˇ ˇ 1 ˇ2 ˇˇ 1 ˇˇ 1 ˇ ˇ 1 1 1 ˇ ˇ ˇ ˇP ˇ P`;N d‚;N ˇ .s ˇd‚;N ˇ D ˇd‚;N ˇ M`;N M`;N `;N s s s &s ˇ s ˇ t0 .s N N C ˇ%‚;N ˇs :
Finally, estimate (5.6.29) follows by (5.6.39) and (5.6.38).
(5.6.39)
Proof of Proposition 5.6.1 concluded. Recalling the decomposition (5.6.2), the first goal is to define a right inverse of the operator ˇ 1=2 1=2 ˇ Ld;N W H2N ! HN ; Ld;N WD ˘N Dm L d Dm ; (5.6.40) H 2N
where Ld is defined in (5.6.3). Recalling (5.6.7) and (5.6.13) and Lemma 5.6.4, the linear operator L1 d;N W HN ! H2N defined by i`' 1 L1 g D ei`' M`;N .g/; d;N e
8` 2 ŒN; N jSj ; 8g 2 HN ;
is a right inverse of Ld;N . Using that L1 d;N is diagonal in time it results ˇ 1 ˇ ˇ ˇ ˇL ˇ N jSj=2 max ˇM 1 ˇ : d;N s `;N s
(5.6.41)
By (5.6.27) and Lemma 5.6.6, we have the bound ˇ 1 ˇ ˇM ˇ .s N t0 C&sC2 ; 8s s0 : `;N s
(5.6.42)
j`jN
Therefore, by (5.6.41) and (5.6.42) and the second estimate in (5.6.19), we get ˇ 1 ˇ ˇL ˇ .s N jSj 2 Ct0 C&sC2 ; 8s s0 ; d;N s ˇ 1 ˇ b ˇL ˇ . N 2 C0 Cs0 ; b D d C jSj : d;N s 0
(5.6.43)
Finally we use a perturbative Neumann series argument to prove the existence of a right inverse of 2N 1=2 1=2 LC Lr; N Dm r;;N WD Dm
(5.1.19)
D
(5.6.2);(5.6.40)
D
Here
1=2 1=2 ˘N Dm Lr; ."; /Dm jH2N
Ld;N C N :
1=2 1=2
."; ; '/Dm
N WD ˘N Dm jH2N
(5.6.44) (5.6.45)
satisfies, by (5.6.4), (5.6.5), (5.6.6), Lemma 4.3.8, and item 1 of Definition 5.1.2, j N jLip;s .s "2 N 2 C jrjLip;C;s ;
j N jLip;s1 .s1 "2 N 2 :
(5.6.46)
Using the second estimates in (5.6.43) and (5.6.46), N N02 , (5.5.10) and (5.5.1), we get ˇ 1 ˇ ˇL ˇ j N jLip;s .s N b2 C0 Cs0 C2 "2 N bC20 C2s0 C4 "2 1 : (5.6.47) d;N s 0 1 0 0
C Hence IdHN C N L1 d;N is invertible and the operator Lr;;N in (5.6.44) has the right inverse 1 1=2 C 1 1=2 1 1 Lr;;N D Dm Dm WD L1 ; (5.6.48) ŒLr; 2N d;N IdHN C N Ld;N N
which satisfies the following tame estimates (see Lemma 4.3.12) for all s s0 , ˇ ˇ ˇ 1 ˇ2 ˇ ˇ ˇ C 1 ˇ ˇ ˇ ˇ .s ˇL1 (5.6.49) ˇ Lr;;N ˇ d;N Lip;s C Ld;N Lip;s j N jLip;s : 0
Lip;s
1 1 Since L1 d;N does not depend on , jLd;N jLip;s D jLd;N js , and (5.6.48), (5.6.49), (5.6.43), and (5.6.46) imply ˇ 1 1=2 ˇˇ 0 ˇ 1=2 Dm ˇ .s N 0 N &s C jrjLip;C;s ŒLr;2N ˇDm N Lip;s
where ˚
00 WD max .jSj=2/ C t0 C 2; b C 2s0 C 2 0 C 4 : Using (4.3.28), the estimate (5.6.1) follows. Let us now prove (5.1.21). Calling 0 the operator defined in (5.6.4) associated to 0 .0 ; r 0 / we have 0 D . 0 /J …? S[F C r r and, by (5.6.48), 1 1=2 1=2 2N 1 0 ;0 Dm Dm Œ L ŒLr;2N r N N h 1 i 1 0 D L1 IdHN C N L1 IdHN C N L1 d;N d;N d;N 1 1 0 0 1 1 . N N /L1 D L1 d;N IdHN C N Ld;N d;N IdHN C N Ld;N 1 0 1 D LC : (5.6.50)
N N LC r;;N r 0 ;0 ;N In conclusion, by (5.6.50) (4.3.20), (5.6.49), and jL1 d;N js0 j N jLip;s1 1, we get ˇ 1=2 ˇˇ ˇ 1=2 1 2N 1 0 ;0 / .Œ L / (5.6.51) .ŒLr; 2N Dm ˇ ˇDm r N N s1 ˇ ˇ2 ˇ ˇ 0 ˇ ˇ ˇ .s1 ˇL1 d;N s1 N N s1 0 .s1 N 2.0 C&s1 / j 0 jN 2 C jr r 0 jC;s1 which, using (4.3.28), implies (5.1.21). This completes the proof of Proposition 5.6.1. Remark 5.6.7. In this section we have proved directly the Lipschitz estimate of 1 instead of arguing as in Proposition 5.7.6, because for a right inverse .ŒLr;2N N / we do not have the formula (5.7.23).
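For the reader's convenience, the invertibility used in (5.6.48) rests on the standard Neumann expansion; the following is a minimal sketch, writing $\rho_N$ for the perturbation operator introduced in (5.6.45):
\[
\big(\mathrm{Id}_{H_N} + \rho_N L_{d,N}^{-1}\big)^{-1}
= \sum_{n\ge 0} \big(-\rho_N L_{d,N}^{-1}\big)^n ,
\qquad
\big\|\big(\mathrm{Id}_{H_N} + \rho_N L_{d,N}^{-1}\big)^{-1}\big\|
\le \frac{1}{1-\big\|\rho_N L_{d,N}^{-1}\big\|} .
\]
The series converges because, by (5.6.47), the relevant norm of $\rho_N L_{d,N}^{-1}$ is strictly less than $1$ for $\varepsilon$ small; the quantitative tame estimates in the $|\cdot|_s$ norms are then provided by Lemma 4.3.12.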
5.7 Inverse of $\mathcal{L}_{r,\mu,N}$ for $N \ge N_0^2$

In Proposition 5.6.1 we proved item 1 of Proposition 5.1.5. The aim of this section is to prove item 2 of Proposition 5.1.5, namely that for all $N \ge N_0^2$ and $\lambda \in \Lambda(\varepsilon; 1, X_{r,\mu})$ the operator $\mathcal{L}_{r,\mu,N}$ defined in (5.1.22) is invertible and its inverse $\mathcal{L}_{r,\mu,N}^{-1}$ has off-diagonal decay, see Proposition 5.7.6. The proof is based on inductive applications of the multiscale step Proposition 5.3.4, thanks to Proposition 5.4.5 about the separation properties of the bad sites.

5.7.1 The set $\Lambda(\varepsilon; 1, X_{r,\mu})$ is good at any scale

We first prove the following proposition.

Proposition 5.7.1. The set $\Lambda(\varepsilon; 1, X_{r,\mu})$ in (5.5.12) satisfies
\[
\Lambda(\varepsilon; 1, X_{r,\mu}) \subset \bigcap_{N \le N_0} G_N \cap \bigcap_{k \ge 1} G_{N_k} \tag{5.7.1}
\]
where the sets GN are defined in (5.4.5). We first consider the small scales N N0 . M ; / be the matrix in (5.2.14) corresponding to r D 0 (see Lemma 5.7.2. Let A."; 1 Remark 5.2.1). There is NN such that for all NN N N0 .2"2 / Cs0 Cd (see z 8j0 2 Zd , 2 R, (5.5.10)), 8 2 ƒ, 1=2 1 1=2 D N H) AMN;j0 ."; ; /Dm m 0 (5.7.2) ˇ 1=2 1 ˇ 1=2 ˇ 0 C&s ˇD AN;j0 ."; ; /Dm N ; 8s 2 Œs ; s ; 0 1 m s namely the matrix AN;j0 ."; ; / is N -good according to Definition 5.3.1. Proof. For brevity, the dependence of the operators with respect to ."; / is kept implicit. Step 1. We first prove that there is NN such that 8N NN , 8j0 2 Zd , 8 2 R, 1=2 1 1=2 D N H) AMN;j0 . /Dm m 0 (5.7.3) ˇ ˇ 1=2 1 0 1=2 ˇ ˇD C.s/N 1 C&s ; 8s s0 ; AMN;j0 . /Dm m s where 10 WD Q 0 C .jSj=2/ and Q 0 > is the constant provided by Lemma 5.6.5 associated to Q D . M / represents the L2 self-adjoint operator L0; . / deRecall that the matrix A. fined in (5.2.17), which is independent of '. For all ` 2 ZjSj , h 2 H, we have that 1=2 1=2 i`' L0; . /Dm .e h/ D ei`' M` . /h; Dm
M` . / D D` . / C T ;
where 2 D` . / WD i.! ` C /JDm C Dm C J Dm ; 1=2 1=2 T WD Dm DV Dm J …S[F C c…S Dm :
Note that by (4.4.1), Lemma 4.3.8, and since D O.1/ (item 2 of Definition 5.1.2), it results jT js C.s/; 8s s0 : (5.7.4) In order to prove (5.7.3), since ˇ ˇ ˇ ˇ 1=2 1 1=2 ˇ jSj=2 ˇ.M` . //1 ˇ ; ˇD N max AMN;j0 . /Dm m N;j s 0 s j`jN
(5.7.5)
it is sufficient to bound the j js -norms of .M` . //1 N;j0 . We apply the multiscale Lemma 5.6.5. j We identify as usual the operator D` . / with the diagonal matrix Diagj ..D` . //j / where .j;a/ /a2f1;2;3;4g ; .D` . //jj D Diag.ŒD` . /.j;a/ .j;a/ D` . / .j;a/ WD hj im hj im C 1 .a/ C 2 .a/.! ` C /
with signs 1 .a/; 2.a/ defined as in (5.4.7). Note that the singular sites of D` . / are those in (5.4.8) with ` fixed. By Lemma 5.4.8, which we use with K D 4, ` N being fixed, there is C2 (independent of ‚) such that, for > .‚/, any -chain of C2 singular sites has length L .4/ . As in (5.6.31), we can apply this result with C1 D 2C2 C 3, D & 1 .C1 C 1/ and D N 2= (for any N NN large enough), and we find that the operator ‚1 .M` . //N;j0 satisfies Assumption iii) of Lemma 5.6.5, where we take Q D . Assumption ii) is also satisfied, provided that ‚ has been chosen large enough, more precisely ‚ 1 C.s /, with the constant C.s / of (5.7.4). By Lemma 5.6.5, there is Q 0 > such that 8N NN , 8j0 2 Zd , 8 2 R, 8` 2 ZjSj , ˇ ˇ .M` . //1 N H) 8s s0 ; ˇ.M` . //1 ˇ .s N Q 0 N &s C jT js : N;j0 0 N;j0 s (5.7.6) Since 1=2 1 1=2 D D max .M` . //1 AMN;j0 . /Dm m N;j0 0 ; 0 j`jN
the premise in (5.7.3) implies the premise in (5.7.6) and therefore (5.7.5), (5.7.6), and (5.7.4) imply (5.7.3) since 10 WD Q 0 C jSj=2. Step 2. We now apply a perturbative argument to the operator 1=2 1=2 1=2 M 1=2 AN;j0 . /Dm D Dm C N;j0 ; AN;j0 . /Dm Dm
(5.7.7)
1=2 1=2 where is the matrix that represents Dm rDm . By item 1 of Definition 5.1.2,
j N;j0 js1 jrjC;s1 C1 "2 :
(5.7.8)
1=2 M1 1=2 k0 N then If kDm AN;j0 . /Dm
ˇ ˇ ˇ ˇ 1=2 1 1=2 ˇ ˇ ˇ . N Cs0 C.d=2/ "2 . N Cs0 C.d=2/ "2 1 ˇD AMN;j0 . /Dm
m 0 s0 N;j0 s1 (5.7.9) 1
since N0 .2"2 / Cs0 Cd (see (5.5.10)). Then Lemma 4.3.12 implies that 1=2 1=2 AN;j0 . /Dm in (5.7.7) is invertible, and 8s 2 Œs0 ; s1 Dm ˇ 1=2 1 ˇ 1=2 ˇ ˇD AN;j0 . /Dm m s ˇ 1=2 1 ˇ 1=2 1 ˇ ˇ 1=2 ˇ 1=2 ˇ2 . s1 ˇDm AMN;j0 . /Dm C ˇDm j N;j0 js1 AMN;j0 . /Dm s s (5.7.9)
.s1
ˇ 1=2 1 ˇ (5.7.3) 0 C&s 0 1=2 ˇ ˇD .s1 N 1 N C&s ; AMN;j0 . /Dm m s
because 0 > 10 D Q 0 C .jSj=2/ (see (5.5.3)) and for NN large enough.
0
z is N -good. At small scales N N0 , any 2 ƒ Lemma 5.7.3 (Initialization). For all N N0 and " small, the set GN defined in z (5.4.5) is GN D ƒ. z 8j0 2 Zd , the set BN .j0 I / defined in Proof. Lemma 5.7.2 implies that 8 2 ƒ, (5.4.3) satisfies n o 1=2 1 0 1=2 BN .j0 I / BM N WD 2 RW Dm > N =2 : (5.7.10) AMN;j0 ."; ; /Dm 0 0 z it is sufficient to show that the set BM N in Thus, in order to prove that GN D ƒ, 1=2 1=4 (5.7.10) satisfies the complexity bound (5.4.4). Note that since kDm k0 m , we have n o p 0 BM N 2 RW AM1 m=2 (5.7.11) N;j0 ."; ; / 0 > CN ; C WD o n D 2 RW 9 an eigenvalue of AMN;j0 ."; ; / with modulus less than N =C :
Let …N;j0 denote the L2 projector on the subspace ) ( X ij x : .qj ; pj /e HN;j0 WD .q.x/; p.x// D jj j0 jN
Since AMN;j0 ."; ; / represents the operator L0; . / in (5.2.17), which does not depend on ' (see Remark 5.2.1), the spectrum of AMN;j0 ."; ; / is formed by j D 1; : : : ; .2N C 1/d ; ` 2 ZjSj ; ˇj eigenvalue of …N;j0 DV C J …? S[F C c…S …N;j0 ; ˙.! ` C / ˇj ;
and, by (5.7.11), we have 0 BM N
R`;j
[
R`;j ;
j`jN;j D1;:::;.2N C1/d ;D˙
˚
WD 2 RW j . C ! `/ ˇj j N =C :
(5.7.12)
0 is included in the union of N jSjCd C1 intervals Iq of length 2N =C . It follows that BM N 0 is included in the union of By eventually dividing the intervals Iq we deduce that BM N d C2CjSj intervals Iq of length N . N
Lemma 5.7.4. For all k 0 we have \ \ 0 GNk GN GQ GNkC1 ; ;1 kC1
(5.7.13)
0 in (5.5.5), and GQ in (5.5.8). where the set GN is defined in (5.4.5), GN; 0 Proof. Let 2 GNk \ GN \ GQ . In order to prove that 2 GNkC1 (Definition kC1 ;1 0 5.4.4), since 2 GNkC1 ;1 (set defined in (5.5.5)), it is sufficient to prove that the sets 0 BNkC1 .j0 I / in (5.4.4) and BN .j0 I ; 1/ in (5.5.4) satisfy: kC1
8j0 2 Zd ;
0 BNkC1 .j0 I / BN .j0 I ; 1/ ; kC1
or equivalently, that 1=2 1 1=2 ANkC1 ;j0 ."; ; /Dm k0 NkC1 kDm 1=2 1 1=2 ANkC1 ;j0 ."; ; /Dm js jDm
0 C&s NkC1 ;
H) (5.7.14) 8s 2 Œs0 ; s1 :
We prove (5.7.14) applying the multiscale step Proposition 5.3.4 to the matrix ANkC1 ;j0 . By (5.2.13) the assumption (H1) holds. The assumption (H2) is the premise in (5.7.14). Let us verify (H3). By Remark 5.4.3, a site k 2 E WD .0; j0 / C ŒNnC1 ; NnC1 b I ; (5.7.15) which is Nk -good for A."; ; / WD Lr; C Y (see Definition 5.4.2 with A D A."; ; /) is also .ANnC1 ;j0 ."; ; /; Nk /-good
(see Definition 5.3.3 with A D ANnC1 ;j0 ."; ; /). As a consequence we have the inclusion n o .ANnC1 ;j0 ."; ; /; Nk /-bad sites n o (5.7.16) Nk -bad sites of A."; ; / with j`j NkC1 and (H3) is proved if the latter Nk -bad sites (in the right hand side of (5.7.16)) are contained in a disjoint union [˛ ˛ of clusters satisfying (5.3.7) (with N D Nk ). This is a consequence of Proposition 5.4.5 applied to the infinite-dimensional matrix A."; ; /. Since 2 GNk then assumption (i) of Proposition 5.4.5 holds with N D Nk . Assumption (ii) holds by (5.5.1). Assumption (iii) of Proposition 5.4.5 holds because 2 GQ , see (5.5.8). Therefore the Nk -bad sites of A."; ; / satisfy (5.4.6) with N D Nk , and therefore (H3) holds. Then the multiscale step Proposition 5.3.4 applied to the matrix ANkC1 ;j0 ."; ; / implies that if 1=2 1 1=2 D ANkC1 ;j0 ."; ; /Dm NkC1 m 0 then ˇ
ˇ 1=2 1 ˇD A m
1=2 ˇ NkC1 ;j0 ."; ; /Dm s
1 0 &s NkC1 NkC1 C jT jC;s 4
(5.2.13)
0
C&s NkC1 ;
8s 2 Œs0 ; s1 ;
(5.7.17)
proving (5.7.14). Corollary 5.7.5. For all n 1 we have n \
0 GN k ;1
\
GQ GNn :
(5.7.18)
kD1
Proof. For n D 1 the inclusion (5.7.18) follows by (5.7.13) at k D 0 and the fact that z by Lemma 5.7.3. Then we argue by induction. Supposing that (5.7.18) GN0 D ƒ holds at the step n then nC1 \
0 GN k ;1
\
0 GQ D GN nC1 ;1
n \ \
kD1
0 GN k ;1
\ (5.7.18)n \ \ 0 GQ GN G GQ N n nC1 ;1
kD1 (5.7.13)n
GNnC1
proving (5.7.18) at the step n C 1. Proof of proposition 5.7.1 concluded. Corollary 5.7.5 implies that \ \ \ 0 GN GQ GNn : k ;1 k1
n1
(5.7.19)
Then we conclude that the set ƒ."I 1; Xr; / defined in (5.5.12) satisfies ƒ."I 1; Xr; / D
\
0 GN k ;1
k1
\
G0N;1
\
GQ
(5.7.19)
N N02
\
GNn
n1
z for all N N0 , by Lemma 5.7.3. proving (5.7.1), since GN D ƒ,
5.7.2 Inverse of Lr;;N for N N02 We can finally prove the following proposition. Proposition 5.7.6. For all N N02 , 2 ƒ."I 1; Xr; /, the operator Lr;;N defined in (5.1.22) is invertible and satisfies (5.1.23). Moreover (5.1.24) holds. Proof. Let 2 ƒ."I 1; Xr; /. For all N N02 there is M 2 N such that N D M , for some 2 Œ; N N 2 and 2 GM . In fact 1. If N N1 , then N 2 ŒNnC1 ; NnC2 for some n 2 N, and we have N D Nn for some 2 Œ; N N 2 . Moreover if 2 ƒ."I 1; Xr; / then 2 GNn by (5.7.1). 2. If N02 N < N1 , it is enough to write N D M N for some integer M < N0 . Moreover if 2 ƒ."I 1; Xr; / then 2 GM by (5.7.1). We now apply the multiscale step Proposition 5.3.4 to the matrix AN ."; / (which represents Lr;;N as stated in Remark 5.2.2), for N N02 , with E D ŒN; N b I N, N M . The assumptions (5.3.3)–(5.3.6) hold, for all 2 Œ; N N 2 , and N 0 0 by the choice of the constants , N , s1 at the beginning of Section 5.5. Assumption (H1) holds by (5.2.13). Assumption (H2) holds because 2 ƒ."I 1; Xr; / G0N;1 for N N02 , see (5.5.12). Moreover, arguing as in Lemma 5.7.4 – for the matrix A."; ; / with D 0, j0 D 0 – the hypothesis (H3) of Proposition 5.3.4 holds. Then the multiscale step Proposition 5.3.4 implies that, 8 2 ƒ."I 1; Xr; / GM \ G0N;1 , we have ˇ 1=2 1 1=2 ˇ (5.3.9) 0 ˇD AN Dm ˇs C.s/N N &s C "2 jrjC;s m (5.7.20) 0 C.s/N N &s C jrjC;s : We claim the following direct consequence: on the set ƒ."I 1; Xr; / we have ˇ ˇ
1 ˇ ˇ L 0 ˇ 1=2 r;;N 1=2 ˇ D C.s/N 2. C&s1 C1/ N &.ss1 / C jrjLip;C;s : ˇDm ˇ m 2 ˇ ˇ 1C" Lip;s
(5.7.21) For all in the set ƒ."I 1; Xr; /, the operator
Lr;;N 1 1=2 1=2 U."; / WD Dm Dm 1 C "2
satisfies, by (5.7.20) and jrjC;s1 C1 "2 (see item 1 of Definition 5.1.2), the estimates 0
jU."; /js C.s/N .N &s C jrjC;s /;
jU."; /js1 C.s1 /N
Moreover, for all 1 ; 2 2 ƒ."I 1; Xr; /, we write (using that of )
0 C&s 1
:
(5.7.22)
! is independent 1 C "2
U 1 .2 / U 1 .1 / U.2 / U.1 / D U.2 / U.1 / 2 1 2 1
1 Xr;;N .2 / Xr;;N .1 / 1=2 1=2 Dm D U.2 /Dm U.1 / 2 1 1 C "2 2 1 C "2 1 (5.7.23) where Xr;;N WD ˘N .Xr; /jHN . Decomposing
Xr;;N .2 / Xr;;N .1 / Xr;;N .2 / Xr;;N .1 / 1 D 2 2 2 1 1 C " 2 1 C " 1 .2 1 /.1 C "2 2 / "2 Xr;;N .1 / .1 C "2 2 /.1 C "2 1 / we deduce by (5.7.23) and (5.7.22) and D O.1/, jjlip D O."2 /, jrjC;s1 ; jrjlip;C;s1 C1 "2 , the estimates 0 jU jlip;s .s N 2. C&s1 C2/ N &.ss1 / C jrjC;s C jrjlip;C;s ; (5.7.24) 0 jU jlip;s1 .s1 N 2. C&s1 C1/ : Finally, (5.7.22) and (5.7.24) imply (5.7.21). The inequalities (4.3.28) and (5.7.21) imply (5.1.23). We finally prove (5.1.24). Denoting A0N the matrix that represents Lr 0 ;0 ;N as in 0 Remark 5.2.2, we have that AN A0N represents ˘N .. 0 /J …? S[F C r r /jHN . Then it is enough to write ˇ 1=2 ˇˇ ˇ 1=2 1 AN .A0N /1 Dm ˇ ˇDm s ˇ ˇ ˇ1 ˇ ˇ ˇ ˇ 1=2 1 1=2 ˇ ˇ ˇ 1=2 0 1 1=2 ˇ .s1 ˇDm AN Dm ˇ AN A0N ˇC;s ˇDm .AN / Dm ˇ 1 s1 s1 ˇ ˇ ˇ ˇ 0 2. C&s1 / ˇ 0ˇ 2 0 .s N N C ˇr r ˇ 1
C;s1
using (5.7.20) at s D s1 and the bounds jrjC;s1 ; jr 0 jC;s1 . "2 . This estimate and (4.3.28) imply (5.1.24).
5.8 Measure estimates

The aim of this section is to prove the measure estimates (5.1.17) and (5.1.18) in Proposition 5.1.5.
5.8.1 Preliminaries We first give several lemmas on basic properties of eigenvalues of self-adjoint matrices, which are a consequence of their variational characterization. Lemma 5.8.1. Let A./ be a family of self-adjoint matrices in ME E , E finite, defined z R, satisfying, for some ˇ > 0, for 2 ƒ d A./ ˇId (recall the notation (1.3.6)). We list the eigenvalues of A./ in nondecreasing order 1 ./ q ./ jE j ./ according to their variational characterization q ./ WD inf
max
F 2Fq y2F ;kyk0 D1
hA./y; yi0 ;
(5.8.1)
where Fq is the set of all subspaces F of CjE j of dimension q. Then d q ./ ˇ > 0;
8q D 1; : : : ; jEj :
(5.8.2)
z Proof. By the assumption d A./ ˇId, we have, for all 2 > 1 , 1 ; 2 2 ƒ, y 2 F , kyk0 D 1, that hA.2 /y; yi0 > hA.1 /y; yi0 C ˇ.2 1 / : Therefore max
y2F ;kyk0 D1
hA.2 /y; yi0
max
y2F ;kyk0 D1
hA.1 /y; yi0 C ˇ.2 1 /
and, by (5.8.1), for all q D 1; : : : ; jEj, q .2 / q .1 / C ˇ.2 1 / :
Hence d q ./ ˇ.
Lemma 5.8.2. i) Let A./ be a family of self-adjoint matrices in ME E , E finite, z R, satisfying, for some ˇ > 0, Lipschitz with respect to 2 ƒ d A./ ˇId :
Then there are intervals .Iq /1qjE j in R such that o n [ z A1 ./ ˛ 1 Iq ; jIq j 2˛ˇ 1 ; 2 ƒW 0
(5.8.3)
1qjE j
and in particular the Lebesgue measure ˇn oˇ ˇ z A1 ./ ˛ 1 ˇˇ 2jEj˛ˇ 1 : ˇ 2 ƒW 0
(5.8.4)
ii) Let A./ WD Z C W be a family of self-adjoint matrices in ME E , Lipschitz z R, with W invertible and with respect to 2 ƒ ˇ1 Id Z ˇ2 Id with ˇ1 > 0. Then there are intervals .Iq /1qjE j such that o n [ z A1 ./ ˛ 1 Iq ; 2 ƒW 0 jIq j
1qjE j
2˛ˇ2 ˇ11 W 1 0
:
(5.8.5)
Proof. Proof of i). Let .q .//1qjE j be the eigenvalues of A./ listed as in Lemma 5.8.1. We have n o o [ n z A1 ./ ˛ 1 D z q ./ 2 Œ˛; ˛ : 2 ƒW 2 ƒW 0
1qjE j
By (5.8.2) each
˚ z q ./ 2 Œ˛; ˛ Iq WD 2 ƒW
is included in an interval of length less than 2˛ˇ 1 . Proof of ii). Let U WD W 1 Z and consider the family of self-adjoint matrices Q A./ WD A./U D .Z C W /U D ZW 1 Z C Z : We have the inclusion o n n o z AQ1 ./ ˛kU k0 1 : z A1 ./ ˛ 1 2 ƒW 2 ƒW 0 0
(5.8.6)
Q Since d A./ ˇ1 Id we derive by item i) that n o z AQ1 ./ ˛kU k0 1 2 ƒW 0
(5.8.7)
[
Iq ;
1qjE j
where Iq are intervals with measure jIq j 2˛kU k0 ˇ11 2˛ W 1 0 kZk0 ˇ11 2˛ W 1 0 ˇ2 ˇ11 : Then (5.8.6), (5.8.7), and (5.8.8) imply (5.8.5).
(5.8.8)
The variational characterization of the eigenvalues also implies the following lemma. Lemma 5.8.3. Let A, A1 be self-adjoint matrices ME E , E finite. Then their eigenvalues, ranked in nondecreasing order, satisfy the Lipschitz property ˇ ˇ ˇq .A/ q .A1 /ˇ A A1 ; 8q D 1; : : : ; jEj : 0 We finish this section stating a simple perturbative lemma, proved by a Neumann series argument. b Lemma 5.8.4. Let B; B 0 2 ME E with E WD ŒN; N I, and assume that 1=2 1 1=2 D B Dm 0 K1 ; B 0 B 0 ˛ : (5.8.9) m
If 4.N 2 C m/1=2 ˛K1 1, then 1=2 0 1 1=2 1=2 2 D .B / Dm 0 K1 C 4 N 2 C m ˛K1 : m
(5.8.10)
5.8.2 Measure estimate of ƒ n GQ We estimate the complementary set ƒ n GQ , where GQ is the set defined in (5.5.8). Lemma 5.8.5. jƒ n GQ j "2 . Proof. Since !N " WD !N satisfies (1.2.30) we have ˇ ˇ ˇ ˇ ˇ ˇ X ˇ ˇ ˇ 2 2 n C 1 C " p ! N ! N ˇ ij i j ˇ ˇn C ˇ ˇ ˇ 1i j jSj
X 1i j jSj
ˇ ˇ ˇ pij !N i !N j ˇ C jj"2 jpj ˇ
1 =2 1 C 0 "2 jpj 1 hpi hpi1 for all jpj
1 1 C1 . 2C 0 "2 1
Moreover, if n D 0 then, for all 2 ƒ, we have ˇ ˇ ˇ ˇ X 2 1 ˇ ˇ 2 2 pij !N i !N j ˇ 1 c"2 : ˇ 1C" ˇ ˇ hpi1 1i j jSj
As a consequence, recalling the definition of GQ in (5.5.8), Definition 5.1.4 and 2 WD 1 =2, we have
1C1 [ 1 1 Q ƒnG Rn;p ; N WD n ¤ 0; jpj > ; jnj C jpj ; 2C 0 "2 .n;p/2N
(5.8.11)
where
2 2 ; Rn;p WD 2 ƒW jfn;p ./j < hpi2 X n fn;p ./ WD pij !N i !N j : 2 C 1 C "2 1i j jSj
Since @ fn;p ./ WD
2n"2 , if jnj ¤ 0, we have jRn;p j . "2 2 hpi2 and .1 C "2 /3
by (5.8.11) jƒ n GQ j .0
X
jpj>
1 2C 0 "2
jpj
1 1 C1
Z C1 "2 1 2 . " 1C1 2 C 2 jSj.jSj1/ d 0 1 jpj2 1 0 2 2C "
.0 "
2.2 .jSj.jSj1/=2/21 / 1 C1
"2
for 2 large with respect to 1 , i.e., 2 defined as in (5.5.9), and " small.
z n G0 1 for N N 2 5.8.3 Measure estimate of ƒ 0 N; 2
z n G0 1 , where G0N; is defined in (5.5.6). We estimate the complementary set ƒ N; 2
Lemma 5.8.6. If N N02 then ˇ ˇ ˇƒ z n G0 1 ˇ "2 N Cd CjSjC1 N 2 C2d CjSjCs0 : N;
(5.8.12)
2
Proof. By (5.5.6) we have the inclusion n o z n G0 1 2 ƒW z PN1 ./ > N =4 ; ƒ 0 N; 2
where (see Remark 5.2.2) PN ./ WD
1=2 AN ."; / 1=2 Dm Dm 2
1C"
D
1=2 Dm ˘N
Xr; ."; / J !N " @' C 1 C "2
jHN
1=2 Dm :
By assumptionp3 of Definition 5.1.2, the matrix PN ./ satisfies d PN ./ c"2 with c WD c2 m. The first inequality in (5.8.12) follows by Lemma 5.8.2-i) (in particular (5.8.4)) applied to PN ./ with E D ŒN; N d CjSj I, ˛ D 4N , ˇ D c"2 , taking N large. The second inequality in (5.8.12) follows because, by (5.5.10), "2 N0 Cd Cs0 N for N N02 .
Cd Cs0 2
z n G0 5.8.4 Measure estimate of ƒ N
1 k; 2
z n G0 We estimate the complementary set ƒ N
for k 1
1 k; 2
0 , where GN; is defined in (5.5.5).
0 0 z has measure Proposition 5.8.7. For all N N1 WD N0 N the set BN; 1 WD ƒ n G N; 21 2 ˇ ˇ ˇ 0 ˇ (5.8.13) ˇBN; 1 ˇ N 1 : 2
0 .j0 I ; 1=2/ defined in (5.5.4). We first obtain complexity estimates for the set BN N and jj0 j < 3.1 C jj/N N . We argue differently for jj0 j 3.1 C jj/N
z for all jj0 j 3.1 C jj/N Lemma 5.8.8. For all 2 ƒ, N , we have 0 .j0 I ; 1=2/ BN
CjSjC1 N d[
Iq ;
(5.8.14)
qD1
where jIq j are intervals with length jIq j N . Proof. Recalling (5.2.22), (5.2.14), and (5.2.15), we have 1=2 1=2 1=2 1=2 1=2 1=2 Dm AN;j0 ."; ; /Dm D Dm AN;j0 ."; /Dm C Dm YN;j0 Dm : (5.8.15)
N and N NN .V; d; jSj/ is large, then We claim that, if jj0 j 3.1 C jj/N jj0 j2 1=2 1=2 AN;j0 ."; /Dm 4jj0 j2 Id : (5.8.16) Id Dm 10 Indeed, by (5.2.12), (5.2.18), and (5.2.19), the eigenvalues `;j of AN;j0 ."; / satisfy ˙ 1=2 1=2 C O Dm TN;j0 ."; /Dm `;j D ı`;j 0 (5.8.17) ˙ where ı`;j WD hj im hj im ˙ ! ` ˙ : N , for N Since j`j N and jj j jj0 j N we see that, for jj0 j 3.1 C jj/N NN .V; d; jSj/ large enough, 2jj0 j2 ˙ 3jj0 j2 : (5.8.18) ı`;j 9 Hence (5.8.17), (5.8.18), and (5.2.13) imply (5.8.16). As a consequence, Lemma 1=2 1=2 AN;j0 ."; /Dm , ˛ D 5.8.2-ii) applied to the matrix in (5.8.15) with Z D Dm 2 2 1=2 1=2 1 2N , ˇ1 D jj0 j =10, ˇ2 4jj0 j , W D Dm YN;j0 Dm , kW k0 C , imply that 0 BN .j0 I ; 1=2/
d CjSj 4.2N C1/ [
Iq
with
jIq j CN :
qD1
Dividing these intervals further we obtain (5.8.14).
We now consider the case jj0 j 3.1 C jj/N N . We can no longer argue directly as in Lemma 5.8.8. In this case the aim is to bound the measure of n o 1=2 1 0 1=2 B2;N .j0 I / WD 2 RW Dm AN;j0 ."; ; /Dm > N =4 : (5.8.19) 0 The continuity property of the eigenvalues (Lemma 5.8.3) allows us then to de0 0 rive a complexity estimate for BN .j0 I ; 1=2/ in terms of the measure jB2;N .j0 I /j (Lemma 5.8.10). Lemma 5.8.11 is devoted to the estimate of the bidimensional Lebesgue measure ˇn oˇ ˇ ˇ 0 z RW 2 B2;N .j0 I / ˇ : ˇ .; / 2 ƒ Such an estimate is then used in Lemma 5.8.12 to justify that the measure of the 0 section jB2;N .j0 I /j has an appropriate bound for “most” (by a Fubini-type argument). 0 We first show that, for jj0 j 3.1 C jj/N N , the set B2;N .j0 I / is contained in an interval of size O.N / centered at the origin. z we have Lemma 5.8.9. 8jj0 j < 3.1 C jj/N N , 8 2 ƒ, 0 .j0 I / IN WD 5 1 C jj N N; 5 1 C jj N N : B2;N 1=2 1=2 AN;j0 ."; ; /Dm satisfy Proof. The eigenvalues `;j . / of Dm ˙ `;j . / D ı`;j . / C O.jT jC;s1 / ˙ where ı`;j WD hj im hj im ˙ .! ` C / ˙ / :
(5.8.20)
If j j 5.1 C jj/N N then, also using (5.2.13), each eigenvalue satisfies j`;j . /j 0 .j0 I / defined in 1, and therefore belongs to the complementary of the set B2;N (5.8.19). z we have Lemma 5.8.10. 8jj0 j 3.1 C jj/N N , 8 2 ƒ, [ 0 BN .j0 I ; 1=2/ Iq qD1;:::;ŒCO MN C1 0 where Iq are intervals with jIq j N and M WD jB2;N .j0 I /j. 0 0 .j0 I ; 1=2/, where BN .j0 I ; / is defined in (5.5.4). Proof. Suppose that 2 BN 1=2 1=2 Then there exists an eigenvalue of Dm AN;j0 ."; ; /Dm with modulus less than 2N . Now, by (5.8.15), and since jj0 j 3.1 C jj/N N , we have 1=2 1=2 1=2 1=2 YN;j0 Dm Dm AN;j0 ."; ; C / AN;j0 ."; ; / Dm D j jDm 0 0 j j 5 1 C jj N N:
0 Hence, by Lemma 5.8.3, if 5.1 C jj/N N j j N then C 2 B2;N .j0 I / because AN;j0 ."; ; C / has an eigenvalue with modulus less than 4N . Hence i h 0 .j0 I / : cN . C1/ ; C cN . C1/ B2;N 0 .j0 I ; 1=2/ is included in an union of intervals Jm with disjoint interiTherefore BN ors, [ 0 0 BN .j0 I ; 1=2/ Jm B2;N .j0 I /; with length jJm j 2cN . C1/ m
(5.8.21) (if some of the intervals Œ cN . C1/ ; C cN . C1/ overlap, then we glue them together). We decompose each Jm as a union of (nonoverlapping) intervals Iq of length between cN . C1/ =2 and cN . C1/ . Then, by (5.8.21), we get a new covering [ 0 0 BN .j0 I ; 1=2/ Iq B2;N .j0 I / qD1;:::;Q
with
cN
. C1/
=2 jIq j cN . C1/ N
and, since the intervals Iq do not overlap, QcN . C1/ =2
Q X
ˇ 0 ˇ jIq j ˇB2;N .j0 I /ˇ DW M :
qD1
As a consequence, Q CO M N C1 , proving the lemma.
In the next lemma we use the crucial sign condition assumption 3 of Definition 5.1.2. Lemma 5.8.11. 8jj0 j < 3.1 C jj/N N , the set B02;N .j0 / WD B02;N .j0 I "/ o n 1=2 1 1=2 z RW Dm AN;j0 ."; ; /Dm > N =4 (5.8.22) WD .; / 2 ƒ 0 has measure
ˇ ˇ 0 ˇB .j0 /ˇ C "2 N Cd CjSjC1 : 2;N
(5.8.23)
z IN . In order to estimate the “bad” Proof. By Lemma 5.8.9, the set B02;N .j0 / ƒ 1=2 1=2 .; / for which at least one eigenvalue of Dm AN;j0 ."; ; /Dm has modulus less than 4N , we introduce the variable # WD
1 C "2
where # 2 2IN ;
(5.8.24)
and we consider the self-adjoint matrix (recall that ! D .1 C "2 /!N " ) AN;j0 ."; ; / 1=2 Dm 1 C "2
ŒXr; ."; /N;j0 1=2 1=2 D Dm J !N " @' C : C #YN;j0 Dm 1 C "2
1=2 PN;j0 ./ WD Dm
(5.8.25)
By the assumption 3 of Definition 5.1.2 we get d PN;j0 ./ c"2 ;
p c WD c2 m :
z such that at least one eigenvalue By Lemma 5.8.2-i), for each fixed #, the set of 2 ƒ 2 Cd CjSj /. Then, integrating on # 2 IN , is 4N has measure at most O." N whose length is jIN j D O.N /, we deduce (5.8.23). 0 As a consequence of Lemma 5.8.11 for “most” the measure of B2;N .j0 I / is small.
Lemma 5.8.12. 8jj0 j < 3.1 C jj/N N , the set o n ˇ 0 ˇ z ˇB2;N .j0 I /ˇ "2 CO 1 N C2d CjSjC3 FN .j0 / WD 2 ƒW
(5.8.26)
where CO is the positive constant of Lemma 5.8.10, has measure jFN .j0 /j CN d 2 : Proof. By Fubini theorem, recalling (5.8.22) and (5.8.19), we have Z ˇ ˇ ˇ 0 ˇ 0 ˇB .j0 /ˇ D ˇB .j0 I /ˇ d : 2;N 2;N z ƒ
(5.8.27)
(5.8.28)
Let WD 2d jSj 3. By (5.8.28) and (5.8.23), Z 2 Cd CjSjC1 0 jB2;N .j0 I /j d C" N z ƒ ˇn oˇ ˇ ˇ 0 z jB2;N "2 CO 1 N ˇ 2 ƒW .j0 I /j "2 CO 1 N ˇ WD "2 CO 1 N jFN .j0 /j
whence (5.8.27). As a corollary we get
N , Lemma 5.8.13. Let N N1 WD ŒN0 N , see (5.5.11). Then 8jj0 j < 3.1 C jj/N 8 … FN .j0 /, we have [ 0 .j0 I ; 1=2/ Iq (5.8.29) BN qD1;:::;N 2d CjSjC5
with Iq intervals satisfying jIq j N .
Proof. By the definition of FN .j0 / in (5.8.26), for all … FN .j0 /, we have ˇ 0 ˇ ˇB .j0 I /ˇ < "2 CO 1 N C2d CjSjC3 : 2;N Then Lemma 5.8.10 implies that 8jj0 j < 3.1 C jj/N N , [ 0 .j0 I ; 1=2/ BN
Iq :
qD1;:::;"2 N 2d CjSjC4 N
For all N N1 D ŒN0 we have "2 N 2d CjSjC4
(5.5.10)
N0 Cs0 Cd N 2d CjSjC4 CN
Cs0 Cd N
C2d CjSjC4
N 2d CjSjC5
by (5.5.2). This proves (5.8.29).
Proof of Proposition 5.8.7 concluded. By Lemmata 5.8.8 and 5.8.13, for all N z N1 , 2 ƒ, [ 0 … FN .j0 / H) 2 GN; 1 2
jj0 j 11 wkLip;s .s " kwkLip;s C kIkLip;sC kwkLip;s0 :
(7.1.17)
(7.1.18) (7.1.19) (7.1.20)
(Linearized operator in the new coordinates) The linearized operator
ω·∂_φ − d_{(θ,η,w)} X_K(φ, 0, 0)
is
1 0 0 1 b ! @' b K20 ./b K> ./b w @ K10 ./Œb 11 C B @b C Œ@ K10 ./>b bA: C @ K00 ./Œb C Œ@ K01 ./> w A 7! @! @'b w b b J f@ K01 ./Œb C K11 ./b C K02 ./b wg ! @' w (7.1.21)
In order to find an approximate inverse of the linear operator in (7.1.21) it is sufficient to invert the operator 1 0 0 1 0 1 b b K> w K20 ./b ! @' b 11 ./b C B D @b (7.1.22) A; A WD D.i/ @b A WD @ ! @'b b w b w b ! @' w b J K02./b w J K11 ./ which is obtained by neglecting in (7.1.21) the terms @ K10 , @ K00 , @ K00 , @ K01 , which vanish if Z D 0 by (7.1.16). The linear operator D.i/ can be inverted in a “triangular” way. Indeed the second component in (7.1.22) for the action variable is decoupled from the others. Then, one inverts the operator in the third component, i.e., the operator L! WD L! .i/ WD …? (7.1.23) S ! @' J K02 ./ jH ? ; S
and finally the first one. The invertibility properties of L! will be obtained in Proposition 12.1.1 using the results of Chapters 8–11. We now provide the explicit expression of L! .i/. Lemma 7.1.2 (Linearized operator in the normal directions). The linear operator K02 ./ WD K02 .iI / has the form
K02 ./ D DV C "2 B./ C r" ./ ;
(7.1.24)
where B WD B."; / is the self-adjoint operator
1=2 ? 1=2 2 3 Q … D C "4a .x/.v.; 0; // /D Q 3a.x/.v.; 0; // 4 S V V WD B P 0 (7.1.25) with the function v defined in (3.2.9), the functions a.x/ and a4 .x/ are in (3.1.5), the vector D ./ 2 Œ1; 2jSj in (1.2.27), and r" WD r" .I/ is a self-adjoint remainder satisfying jr" jLip;C;s1 .s1 "4 ; jr" jLip;C;s .s "2 "2 C kIkLip;sC2 :
(7.1.26) (7.1.27)
Moreover, given another torus i 0 D .'; 0; 0/ C I0 satisfying (7.1.4), we have jr" r0" jC;s1 .s1 "2 kI I0 ks1 C2 :
(7.1.28)
Chapters 8–11 will be devoted to obtaining an approximate right inverse of L_ω(i), as stated in Proposition 12.1.1. In Chapter 8 we shall conjugate L_ω(i) to an operator (see (8.3.4), (8.3.5), and (12.1.8)) which is in a suitable form for applying Proposition 9.2.1, thus proving Proposition 12.1.1. Proposition 9.2.1 is proved in Chapters 10 and 11. The rest of this chapter is devoted to the proof of Proposition 7.1.1 and Lemma 7.1.2.
7.2 Proof of Proposition 7.1.1
By (1.2.29), for all λ ∈ Λ the frequency vector ω = (1+ε²)ω̄_ε satisfies the Diophantine condition
|ω·ℓ| ≥ γ₂ / ⟨ℓ⟩^{τ₁},   ∀ℓ ∈ ℤ^{|S|} ∖ {0},   where γ₂ := γ₁/2 = γ₀/4.   (7.2.1)
We recall that the constant γ₀ in (1.2.6) depends only on the potential V(x) and is considered as a fixed O(1) quantity; thus we shall not track its dependence in the estimates. An invariant torus i for the Hamiltonian vector field X_K, supporting a Diophantine flow, is isotropic (see, e.g., Lemma 1 in [25] and Lemma C.1.2), namely the pull-back 1-form i^*ϰ is closed, where ϰ is the Liouville 1-form defined in (3.2.15). This is equivalent to saying that the 2-form i^*W = i^*dϰ = d(i^*ϰ) vanishes, where W = dϰ is defined in (3.2.10). Given an “approximately invariant” torus i, the 1-form i^*ϰ is only “approximately closed”. In order to make this statement quantitative we consider
i^*ϰ = Σ_{k=1,…,|S|} a_k(φ) dφ_k ,
1 ak .'/ WD Œ@' .'/> y.'/ k C .@'k z.'/; J z.'//L2.Tx / 2
(7.2.2)
and we quantify the size of the pull-back 2-form X i W D d i ~ D Akj .'/d'k ^ d'j ; k;j D1;:::;jSj;k ;
mj ./ WD .@j zQ /..// D .@' /> ./rz./ j :
The tame estimate (7.2.7) implies kM kLip;s .s 1 C kIkLip;sC1
(7.2.8)
and using (4.5.1) and (7.1.4) we have kmj kLip;s D .@j zQ /..//Lip;s .s kzkLip;sC1 C kzkLip;s0 C1 k#kLip;sC1 .s kIkLip;sC1 :
(7.2.9)
Now 0 1 0 1 b b a.'/ .'/ {.'/ WD DGı .'; 0; 0/ @b DGı .'; 0; 0/Œb b.'/A ; .'/ A D @b b c.'/ w b.'/ where ; b a WD @' .'/Œb b b WD @' yı .'/Œb C M.'/Œb
jSj X j D1
.mj .'/; J b w/L2x e j ;
(7.2.10)
C w b b c WD @' z.'/Œb and 0 1 b ˛ .'/ { 1 .'/;b { 2 .'/ D @b D 2 Gı .'; 0; 0/Œb ˇ.'/A ; b .'/ where 1; b 2 ; b ˛ WD @2' Œb b ˇ WD @2' yı Œb 1; b 2 C @' M Œb 1 b 2 C @' M Œb 2 b 1
jSj X
1 ; J b w2 @' mj Œb
j D1
L2 x
C @' mj Œb 2 ; J b w 1 L2 e j ;
(7.2.11)
x
1; b 2 : b WD @2' zŒb The tame estimates (7.1.11) and (7.1.12) are a consequence of (7.2.10), (7.2.11), (4.5.1), (7.1.4), (7.2.6), (7.2.8), and (7.2.9). Then we consider the Taylor expansion (7.1.14) of the Hamiltonian K at the trivial torus .; 0; 0/. Notice that the Taylor coefficient K00 ./ 2 R, K10 ./ 2 RjSj , K01 ./ 2 HS? , K20 ./ is a jSj jSj real matrix, K02 ./ is a linear self-adjoint operator of HS? and K11 ./ 2 L.RjSj ; HS? /.
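For the reader's orientation we display, in schematic form, the structure of the Taylor expansion (7.1.14) to which the coefficients just listed refer; the grouping shown below only fixes the roles of these coefficients, the precise expansion being the one given with (7.1.14), and K₃ collects the terms of order at least three in (η, w):
\[
K(\theta,\eta,w) \;=\; K_{00}(\theta) + K_{10}(\theta)\cdot \eta + \big(K_{01}(\theta), w\big)_{L^2}
+ \tfrac12\, K_{20}(\theta)\,\eta\cdot\eta + \big(K_{11}(\theta)\eta, w\big)_{L^2}
+ \tfrac12\,\big(K_{02}(\theta)w, w\big)_{L^2} + K_{3}(\theta,\eta,w) .
\]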
The Hamilton equations associated to (7.1.14) are 8 > P ˆ C @ K3 .; ; w/ ˆ ˆ D K10 ./ C K20 ./ C K11 ./w ˆ > ˆ P D @ K00 ./ Œ@ K10 ./ Œ@ K01 ./> w ˆ ˆ ˆ ˆ 1 < @ K20 ./ C .K11 ./; w/L2 .Tx / 2
ˆ ˆ 1 ˆ ˆ ˆ C .K02 ./w; w/L2 .Tx / C K3 .; ; w/ ˆ ˆ 2 ˆ ˆ :wP D J K ./ C K ./ C K ./w C r K .; ; w/ 01 11 02 w 3
(7.2.12)
where @ K> 10 is the jSj jSj transposed matrix and @ K> 01 ./;
? jSj K> 11 ./W HS ! R ;
8 2 TjSj ;
are defined by the duality relation ; w/L2x D b Œ@ K01 > w; .@ K01 Œb
8b 2 RjSj ; w 2 HS? ;
(7.2.13)
where denotes the usual scalar product in RjSj . The transposed operator K> 11 is similarly defined and it turns out to be the following operator: for all w 2 HS? , and denoting e k the k-th vector of the canonical basis of RjSj , X K> K> 11 ./w D 11 ./w e k e k kD1;:::;jSj
D
X
w; K11 ./ek L2 .Tx / e k 2 RjSj :
(7.2.14)
kD1;:::;jSj
The terms ∂_θ K_{00}, K_{10} − ω, K_{01} in the Taylor expansion (7.1.14) vanish if Z = 0.
Lemma 7.2.4. (7.1.15) and (7.1.16) hold.
Proof. Formula (7.1.15) is proved in Lemma 8 of [25] (see Lemma C.2.6). Then (7.1.4), (7.1.6), (7.1.7), and (7.1.11) imply (7.1.16).
Notice that, if F(i) = 0, namely i(φ) is an invariant torus for the Hamiltonian vector field X_K, supporting quasi-periodic solutions with frequency ω, then, by (7.1.15) and (7.1.16), the Hamiltonian in (7.1.14) simplifies to the KAM (variable coefficients) normal form
K = const + ω·η + (1/2) K_{20}(θ)η·η + (K_{11}(θ)η, w)_{L²} + (1/2)(K_{02}(θ)w, w)_{L²} + K_3 .   (7.2.15)
We now estimate K_{20}, K_{11} in (7.1.14).
Lemma 7.2.5. (7.1.17)–(7.1.20) hold. Proof of (7.1.17) and (7.1.18). By Lemma 9 of [25] (see Lemma C.2.7) and the form of K in (3.2.7), we have
K20 ./ D Œ@ ./1 @yy K.iı .//Œ@' ./> D "2 Œ@ ./1 @yy R.iı .//Œ@ ./> 2
D " @yy R.i0 .// C r20 ;
(7.2.16) (7.2.17)
where i0 ./ D .; 0; 0/ and r20 WD "2 Œ@ ./1 @yy R.iı .//Œ@ ./> @yy R.iı .// C "2 @yy R.iı .// @yy R.i0 .// : By Lemma 4.5.5, (7.1.4), and (7.2.6) we have k@yy R.iı .// @yy R.i0 .//kLip;s0 C.s1 /"2 and, using also k.@ .//1 IdjSj kLip;s0 kIkLip;s0 C1 .s1 "2 , that Œ@ ./1@yy R.iı .//Œ@ ./> @yy R.iı .// C.s1 /"2 : Lip;s 0
Therefore
kr20 kLip;s0 C.s1 /"4 :
(7.2.18)
Moreover, by (7.2.16) and Lemma 4.5.5 the norm of K20 (which is the sum of the norms of its jSj jSj matrix entries) satisfies kK20 kLip;s .s "2 .1 C kIkLip;sC / and (7.1.18) follows by the tame estimates (4.5.1) for the product of functions. Next, recalling the expression of R in (3.2.8), with G as in (3.1.7), by computations similar to those in Section 6.2, it turns out that the average with respect to of @yy R.i0 .// is h@yy R.i0 .//i D A C r
with
krkLip C "2 ;
(7.2.19)
where A is the twist matrix defined in (1.2.9) (in particular there is no contribution from a4 .x/u4 ). The estimate (7.1.17) follows by (7.2.17), (7.2.18), and (7.2.19). Proof of (7.1.19) and (7.1.20). By Lemma 9 of [25] (see Lemma C.2.7) and the form of K in (3.2.7) we have
K11 ./ D @y rz K.iı .//Œ@' ./> C J.@ zQ /..//.@yy K/.iı .//Œ@' ./> D "2 @y rz R.iı .//Œ@' ./>C "2 J.@ zQ /..//.@yy R/.iı .//Œ@' ./> ; (7.2.20)
and using (7.1.4) and (7.2.6), we deduce (7.1.19). The bound (7.1.20) for K₁₁^⊤ follows from (7.2.14) and (7.1.19). Finally, formula (7.1.21) is obtained by linearizing (7.2.12).
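The linearization behind (7.1.21) can be summarized schematically as follows (this is only an illustration of the general principle; the precise formula is (7.1.21) itself): for an embedded torus of the Hamiltonian vector field X_K, the linearized operator at the trivial torus (φ, 0, 0), acting on a variation ı̂ = (θ̂, η̂, ŵ), is
\[
\hat\imath \;\longmapsto\; \omega\cdot\partial_\varphi\,\hat\imath \;-\; d_{(\theta,\eta,w)} X_K(\varphi,0,0)\,[\hat\imath\,] ,
\]
and the entries of (7.1.21) are obtained by differentiating the right-hand sides of the Hamilton equations (7.2.12). The terms involving ∂_θ K₁₀, ∂_θ K₀₀, ∂_θ K₀₁ are kept because the torus is only approximately invariant; they are dropped in the operator D(i) of (7.1.22), since they vanish when Z = 0 by (7.1.16).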
7.3 Proof of Lemma 7.1.2
We have to compute the quadratic term (1/2)(K₀₂(θ)w, w)_{L²(T_x)} in the Taylor expansion (7.1.14) of the Hamiltonian K(θ, 0, w). The operator K₀₂(θ) is
K02 ./ D @w rw K.; 0; 0/ D @w rw .K ı Gı /.; 0; 0/ D DV C "2 @w rw .R ı Gı /.; 0; 0/ ;
(7.3.1)
where the Hamiltonian K is defined in (3.2.7) and Gı is the symplectic diffeomorphism (7.1.9). Differentiating with respect to w the Hamiltonian .R ı Gı /.; ; w/ D R../; yı ./ C L1 ./ C L2 ./w; z./ C w/ where, for brevity, we set L1 ./ WD Œ@ ./T ;
L2 ./ WD Œ@ zQ ..//> J;
zQ . / D z 1 . / ; (7.3.2)
(see (7.1.9)), we get rw .R ı Gı /.; ; w/ D L2 ./> @y R.Gı .; ; w// C rz R.Gı .; ; w// : Differentiating such an identity with respect to w, and recalling (7.1.10), we get @w rw .R ı Gı /.; 0; 0/ D @z rz R.iı .// C r./
(7.3.3)
with a self-adjoint remainder r./ WD r1 ./ C r2 ./ C r3 ./ given by r1 ./ WD L2 ./> @yy R.iı .//L2 ./ ; r2 ./ WD L2 ./> rz @y R.iı .// ; r3 ./ WD @y rz R.iı .//L2 ./ :
(7.3.4)
Each operator r1 ; r2 ; r3 is the composition of at least one operator with “finite rank RjSj in the space variable”, and therefore it has the “finite-dimensional” form X .l/ .l/ rl ./Œh D h ; gj .; / L2 j .; / ; 8h 2 HS? ; l D 1; 2; 3 ; (7.3.5) j D1;:::;jSj
x
for functions gj .; /; j .; / 2 HS? . Indeed, writing the operator L2 ./W HS? ! RjSj as X h ; L2 ./> Œe j L2 e j ; 8h 2 HS? ; L2 ./Œh D .l/
.l/
x
j D1;:::;jSj
we get, by (7.3.4), X r1 ./Œh D
h ; L2 ./> Œe j
L2 x
j D1;:::;jSj
r2 ./Œh D
X
j D1;:::;jSj
r3 ./Œh D
X
h; A> 2 Œe j
L2 x
(7.3.6) L2 ./> Œej ;
h; L2 ./> Œe j
j D1;:::;jSj
A1 WD L2 ./> @yy R.iı .// ;
A1 Œe j ;
L2 x
A3 Œe j ;
A2 WD @z @y R.iı .// ; (7.3.7) A3 WD @y rz R.iı .// D A> 2 : (7.3.8)
Lemma 7.3.1. For all l D 1; 2; 3, j D 1; : : : ; jSj, we have, for all s s0 , .l/ g C .l/ .s 1 C kIkLip;sC1 j j Lip;s Lip;s n o min gj.l/ Lip;s ; .l/ .s kIkLip;sC1 : j Lip;s
(7.3.9)
Proof. Recalling the expression of L2 in (7.3.2) and (7.3.6)–(7.3.8) we have: > Q /..//. Therefore by i) gj.1/ ./ D gj.3/ ./ D .2/ j ./ D L2 ./ Œe j D J.@j z (7.2.9), .3/ .2/ .1/ g D gj Lip;s D j Lip;s .s kIkLip;sC1 : j Lip;s
ii) .1/ D L2 ./> @yy R.iı .//Œej . Then, recalling (7.3.2) and using Lemma j 4.5.5, we have .1/ j
Lip;s
.s
@ zQ ..// .1 C kIı kLip;s0 / Lip;s kIı kLip;s C @ zQ ..// Lip;s0
(7.1.4)(7.2.9)(7.2.6)
.s
kIkLip;sC1 :
.2/ iii) .3/ j D gj D @yj rz R.iı ..//. Then, using Lemma 4.5.5 and (7.2.6), we get .2/ .3/ D gj Lip;s .s 1 C kIı kLip;s .s 1 C kIkLip;sC1 : j Lip;s
Items i)–iii) imply the estimates (7.3.9).
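Before doing so, we illustrate on the simplest model how the bounds (7.3.9) are used. Consider a single rank-one operator of the type appearing in (7.3.5), say R[h] = (h, g)_{L²_x} χ with g, χ ∈ H_S^⊥ (generic placeholders, not the specific functions of Lemma 7.3.1). For such finite-rank operators the tame estimates give, schematically,
\[
|R|_{s} \;\lesssim_s\; \|g\|_{s}\,\|\chi\|_{s_0} \;+\; \|g\|_{s_0}\,\|\chi\|_{s} ,
\]
so that estimates of the two families of functions in the norms ‖·‖_s and ‖·‖_{s₀} translate directly into estimates of the decay norms of r₁, r₂, r₃; this is the mechanism exploited in the chain (7.3.10)–(7.3.12) below.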
We now use Lemmata 4.3.7 and 7.3.1 to derive bounds on the decay norms of the remainders rl defined in (7.3.5). Recalling Definition 4.3.4, we have to estimate 1=2 1=2 rl …? the norm jrl jLip;C;s D jDm S Dm jLip;s of the extended operators, acting on the 0 2 d jSj 2 whole H D L .T T I C /, defined by X .l/ 1=2 1=2 1=2 1=2 .l/ rl …? …? D Dm S Dm h WD S Dm h ; gj m j 2 j D1;:::;jSj
X
D
1=2 .l/ h ; Dm gj
j D1;:::;jSj
Lx
L2 x
1=2 .l/ Dm j ;
h 2 H0 ; (7.3.10)
where we used that gj 2 HS? . Then the decay norm of r D r1 C r2 C r3 satisfies .l/
jrjLip;C;s
.
1=2 1=2 max jDm rl …? S Dm jLip;s
lD1;2;3
(7.3.10);(4.3.29)
.s
max
lD1;2;3;j D1;:::;jSj
1=2 .l/ 1=2 .l/ D g D m m j j Lip;s Lip;s
1=2 .l/ 1=2 .l/ gj Lip;s Dm j Lip;s C Dm
0
(7.3.11)
0
(7.3.9);(7.1.4)
.s
kIkLip;sC2 :
(7.3.12)
Finally, by (7.3.1), (7.3.3) we have
K02 ./ D DV C "2 @z rz R.iı .// C "2 r./ D DV C "2 @z rz R.; 0; 0/ C "2 @z rz R.iı .// @z rz R.; 0; 0/ C "2 r./ D DV C "2 B C r" ;
(7.3.13)
where B is defined in (7.1.25) and r" WD "2 @z rz R.; 0; 0/ "2 B C "2 @z rz R.iı .// @z rz R.; 0; 0/ C "2 r./ :
(7.3.14)
This is formula (7.1.24). We now prove that r" satisfies (7.1.27). In (7.3.12) we have already estimated jrjLip;C;s . Recalling Definition 4.3.4, and the expression of R in (3.2.8) (see also (3.2.14)) we have to estimate the decay norm of the extended operator, acting on the whole H0 D L2 .Td TjSj I C2 /, defined as h1 @z rz R.iı .//…? S h2 ! 1 21 1=2 ? ? 2 D .@ g/."; x; v../; y ./; / C D Q.//D … h … u ı S S 1 V V V ; WD 0 (7.3.15)
where g."; x; u/ D @u G."; x; u/ is the nonlinearity in (1.2.2). Hence, by Proposition 4.4.6 and Lemma 4.3.8, we have j@z rz R.iı .// @z rz R.; 0; 0/jLip;C;s .s .@u g/."; x; v../; yı ./; / C D 1=2 Q.// .@u g/."; x; v.; 0; // V
Lemma 4.5.5;(7.1.4)
.s
Lip;s
(7.2.6)
kIı kLip;s .s kIkLip;sC1 :
(7.3.16)
Moreover, recalling (3.1.6) and the definition of B in (7.1.25), we have j@z rz R.; 0; 0/ BjLip;C;s .s "2 :
(7.3.17)
In conclusion, the operator r" defined in (7.3.14) satisfies, by (7.3.17), (7.3.16), and (7.3.12), the estimate (7.1.27). In particular, (7.1.26) holds by (7.1.4). There remains to prove (7.1.28). Let i 0 D .'; 0; 0/ C I0 be another torus embedding satisfying (7.1.4) and let r0" be the associated remainder in (7.3.14). We have r0" ./ r" ./ D "2 @z rz R.iı0 .// @z rz R.iı .// C "2 .r 0 ./ r.// : By Lemma 4.5.5 and the expression (7.3.15) of @z rz R, ˇ ˇ ˇ@z rz R i 0 ./ @z rz R iı ./ ˇ ı
(7.1.8)
C;s1
.s1 kI0ı Iı ks1 .s1 kI0 Iks1 C1 : (7.3.18)
Moreover let g 0 j and 0 j be the functions obtained in (7.3.5)–(7.3.8) from i 0 . Using again Lemma 4.5.5 as well as (7.1.8) and (7.2.6), we obtain, for any j D 1; 2; 3, j D 1; : : : ; jSj, .l/
.l/
0 .l/ g g .l/ C 0 .l/ .l/ .s kI0 IksC1 C kIksC1 kI0 Iks C1 : j j 0 j j s s Hence, by Lemma 4.3.7, jr 0 rjC;s1 .s1 kI0 Iks1 C2 . With (7.3.18), this gives (7.1.28).
Chapter 8
Splitting of low-high normal subspaces up to O(ε⁴)
The main result of this chapter is Proposition 8.3.1. Its goal is to transform the linear operator L_ω in (7.1.23) into a form (see (8.3.5)) suitable for applying Proposition 9.2.1 in the next chapter, which will enable us to prove the existence of an approximate right inverse of L_ω for most values of the parameter λ. In the next section we fix the set M in the splitting H_S^⊥ = H_M ⊕ H_M^⊥.
8.1 Choice of M
First recall that, by Lemma 7.1.2, the linear operator L_ω defined in (7.1.23), acting in the normal subspace H_S^⊥, has the form
L_ω = ω·∂_φ − J∘(D_V + ε²B + r_ε),   ω = (1+ε²)ω̄_ε,   (8.1.1)
where B = B(ε,λ) is the self-adjoint operator in (7.1.25) and the self-adjoint remainder r_ε satisfies (7.1.26). Recalling (7.1.25), (3.2.9), and (1.2.27), we derive, for ‖a₄‖_{L∞} ≤ 1, the following bounds for the L² operator norm of B:
‖B‖₀ ≤ C(‖a‖_{L∞} + ε),   ‖B‖_{lip,0} ≤ C(‖a‖_{L∞} + ε)‖𝒜⁻¹‖,   (8.1.2)
where ‖𝒜⁻¹‖ is some norm of the inverse twist matrix 𝒜⁻¹. Dividing (8.1.1) by 1+ε², we now consider the operator
L_ω/(1+ε²) = ω̄_ε·∂_φ − J∘( A + r_ε/(1+ε²) ),   A := A(λ) = D_V/(1+ε²) + ε²B/(1+ε²).   (8.1.3)
We denote by ρ the self-adjoint operator
ρ := ε²B/(1+ε²)   (8.1.4)
and, according to the splitting H_S^⊥ = H_M ⊕ H_M^⊥, and taking in H_M the basis {(Ψ_j(x), 0), (0, Ψ_j(x))}_{j∈M},
we represent A as (recall that DV ‰j D j ‰j and the notation (4.2.1)) 0 1 j M c Diagj 2M 0 Id 2 2 %M %M B C 1 C " M AD@ DV A C %Mc %Mcc : M M 0 1 C "2
(8.1.5)
Recalling (7.1.25), since the functions a; a4 2 C 1 .Td /, using (3.2.9), (1.2.27), and Proposition 4.4.6, we derive that the operator % defined in (8.1.4) satisfies j%jLip;C;s C.s/"2 :
(8.1.6) c
In the next lemma, we fix the subset of indices M. We recall the notation AM Mc D …HMc AjHMc D …H ? AjH ? . M
M
Lemma 8.1.1 (Choice of M). Let j ."; / WD ŒBA 1 !N " j C
j ŒBA 1 N j ; 2 1C"
j 2 M;
(8.1.7)
where A , B are the “Birkhoff” matrices defined in (1.2.9) and (1.2.10), the j are the unperturbed frequencies defined in (1.1.5), and the vectors , N !N " are defined respectively in (1.2.3) and (1.2.25). There is a constant C > 0 such that, if M N contains the subset F defined in (1.2.15), i.e., F M, and minc j C 1 C "2 kakL1 C kakL1 kA 1 k C "kA 1 k ; (8.1.8) j 2M
then
c 2 max j@ j C " @ AM Id : c j M j 2F
(8.1.9)
Proof. Differentiating the expression of A in (8.1.3) we get @ A WD
"4 B "2 DV "2 @ B C : .1 C "2 /2 1 C "2 .1 C "2 /2 c
Then by (8.1.2) and the fact that ŒDV M Mc minc j , we get j 2M
"2 c 4 2 1 @ AM c 2 minc j C C " kakL1 C " C C " kakL1 C " kA k : M 2 j 2M 1C" (8.1.10) Now, recalling (8.1.7), we have @ j ."; / D
"2 1 Œ BA N j j .1 C "2 /2
(8.1.11)
and so (8.1.10) and (8.1.11) imply c
ˇ ˇ "2 ˇj ŒBA 1 ˇ min max N j j j 2F .1 C "2 /2 j 2Mc C C "4 .kakL1 C "/ C C "2 .kakL1 C "/kA 1 k :
@ AM Mc C max j@ j j j 2F
Thus (8.1.9) holds taking M large enough such that (recall that j ! C1) ˇ ˇ N j ˇ CC 1C"2 kakL1 CkakL1 kA 1 kC"kA 1 k : minc j max ˇj ŒBA 1 j 2M
j 2F
By (1.2.15) and (1.2.13), the fact that j
(8.1.12)
p
ˇ (see (1.1.5)), we get n o ˇ ˇ N j ˇ g D maxc BA 1 N j j max ˇj ŒBA 1 j 2F j 2S C kakL1 kA 1 k C 1
(8.1.13)
j
for some C WD C.ˇ; S/, having used that by (1.2.10), we have sup jGk j C kakL1 ˇ. k2S
By (8.1.12), (8.1.13), and (8.1.8) we deduce (8.1.9).
In the rest of the monograph the subset M is kept fixed. Note that condition (8.1.8) can be fulfilled by taking M large enough, because μ_j → +∞ as |j| → +∞. In the next part of the chapter we perform one step of averaging to eliminate, as much as possible, the O(ε²) terms ρ_M^M, ρ_M^{Mᶜ}, ρ_{Mᶜ}^M of ρ in (8.1.5); the general mechanism is summarized schematically below.
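Schematically, the averaging performed in Sections 8.2 and 8.3 follows the standard Lie-series pattern. Writing X₀ := ω̄_ε·∂_φ − J∘A and conjugating by the symplectic map e^{JS} (cf. the expansion (8.3.9) below),
\[
e^{-JS}\, X_0\, e^{JS}
\;=\; X_0 \;+\; \mathrm{Ad}_{JS}(X_0) \;+\; \sum_{k\ge 2}\frac{1}{k!}\,\mathrm{Ad}^{k}_{JS}(X_0) ,
\]
and S is chosen, through the homological equations of Section 8.2, so that the first-order term cancels the projected part Π_O ρ of the O(ε²) perturbation (see (8.2.3) and (8.2.6)). Since S itself is of size O(ε²) (see (8.3.2)), the terms with k ≥ 2 are of size O(ε⁴), which is the precision reached in Proposition 8.3.1.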
8.2 Homological equations
According to the splitting H_S^⊥ = H_M ⊕ H_M^⊥ we consider the linear map
S 7! J N @' S C ŒJ S; JDV ; where N 2 RjSj is the unperturbed tangential frequency vector defined in (1.2.3), and
d.'/ a.'/ 2 L HS? ; 8' 2 TjSj ; S.'/ D a.'/ 0 (8.2.1) ? d.'/ D d .'/ 2 L HM ; a.'/ 2 L HM ; HM is self-adjoint. Since DV and J commute, we have J N @' S C ŒJ S; JDV J N @' d C DV d C J dJDV D J N @' a C DV a C J aJDV
J N @' a C DV a C J a JDV 0
:
(8.2.2)
Recalling the definition of …D in (4.2.29) (with F M) and of …O in (4.2.31) we decompose the self-adjoint operator % D "2 B.1 C "2 /1 defined in (8.1.4) as % D …D % C …O % :
(8.2.3)
%1 .'/ %2 .'/ 2 L HS? ; …O %.'/ D %2 .'/ 0 ? %1 .'/ 2 L HM ; %2 .'/ 2 L HM ; HM ;
(8.2.4)
The term …O % has the form
where %1 .'/ D %1 .'/, and, recalling (4.2.30) and (4.2.18), the '-average Z 1 j j .%b1 /j .'/ d' 2 M ; 8 j 2 M : Œ%b1 j .0/ D jSj jSj .2/ T
(8.2.5)
The aim is to solve the “homological” equation J N @' S C ŒJ S; JDV D J …O % ;
(8.2.6)
which, recalling (8.2.2) and (8.2.4), amounts to solving the decoupled pair of equations J N @' d C DV d C J dJDV D J%1 J N @' a C DV a C J aJDV D J%2 :
(8.2.7) (8.2.8)
Note that taking the adjoint equation of (8.2.8), multiplying by J on the left and the right, and since J and DV commute, we also obtain the equation in top right in (8.2.6) (see (8.2.2) and (8.2.4)). The arguments of this section are similar to those developed in Section 10.3, actually simpler because the equation (8.2.8) has constant coefficients in ', unlike the corresponding equation (10.2.6), where the operator V0 depends on ' 2 TjSj . Thus in the sequel we shall often refer to Section 10.3. We first find a solution d of the equation (8.2.7). We recall that a linear operator d.'/ 2 L.HM / is represented by a finite-dimensional square matrix .dji .'//i;j 2M j with entries di .'/ 2 L.Hj ; Hi / ' Mat2 .R/. To solve (8.2.7) we use the secondorder Melnikov nonresonance conditions (1.2.16)–(1.2.19) that depend just on the unperturbed linear frequencies defined by (1.1.5). Lemma 8.2.1 (Homological equation (8.2.7)). Assume the second-order Melnikov nonresonance conditions (1.2.16)–(1.2.19). Then the equation (8.2.7) has a solution d.'/ D .dji .'//i;j 2M, d.'/ D d .'/, satisfying j d i
Lip;H s .TjSj /
j C .%1 /i Lip;H sC20 .TjSj / ;
8i; j 2 M :
(8.2.9)
Proof. Since the symplectic operator J leaves invariant each subspace Hj and recalling (3.1.10), the equation (8.2.7) is equivalent to j
j
j
j
J N @' di .'/ C i di .'/ C j J di .'/J D J.%1 .'//i ;
0 1 8i; j 2 M; J D ; 1 0 and, by a Fourier series expansion with respect to the variable ' 2 TjSj writing X j j j j j b d i .`/ei`' ; b d i .`/ D b di .'/ D d i .`/ 2 Mat2 .C/ ; b d i .`/ ; `2ZjSj
(8.2.7) is equivalent to j di .`/ C ib dji .`/ C j J b dji .`/J D J Œ%b1 ji .`/; i.N `/J b
8i; j 2 M; ` 2 ZjSj : (8.2.10) Using the second-order Melnikov nonresonance conditions (1.2.16)–(1.2.19), and j since, by (8.2.5), the Fourier coefficients Œ%b1 j .0/ 2 M , the equations (8.2.10) can be solved arguing as in Lemma 10.3.2 and the estimate (8.2.9) follows by standard arguments. ? We now solve the equation (8.2.8) in the unknown a 2 L.HM ; HM /. We use again the second-order Melnikov nonresonance conditions (1.2.16)–(1.2.19).
Lemma 8.2.2 (Homological equation (8.2.8)). Assume the second-order Melnikov nonresonance conditions (1.2.16)–(1.2.19). Then the homological equation (8.2.8) ? // satisfying has a solution a 2 L2 .TjSj ; L.HM ; HM jajLip;s C.s/k%2 kLip;sC20 :
(8.2.11)
? Proof. Writing aj .'/ WD a.'/jHj , %2 .'/ WD .%2 .'//jHj 2 L.Hj ; HM / and recalling (3.1.10), the equation (8.2.8) amounts to j
J N @' aj .'/ C DV aj .'/ C j J aj .'/J D J%j2 .'/;
8j 2 M :
(8.2.12)
Writing, by a Fourier series expansion with respect to ' 2 TjSj , Z X 1 aj .'/ D b aj .`/ei`' ; b aj .`/ D aj .'/ei`' d' ; jSj jSj .2/ T `2ZjSj X j j Jb %2 .`/ei`' ; %2 .'/ D `2ZjSj
the equation (8.2.12) amounts to j
aj .`/ C DV b aj .`/ C j Jb aj .`/J D Jb %2 .`/; iN `Jb
8j 2 M; ` 2 ZjSj : (8.2.13)
? According to the L2 orthogonal splitting HM D ˚j 2Mc Hj the linear operator b aj .`/, ? which maps Hj into the complexification of HM and satisfies b aj .`/ D b aj .`/, is c identified (as in (4.2.11) and (4.2.12) with index k 2 M ) with a sequence of 2 2 matrices b ajk .`/ ; b ajk .`/ 2 Mat2 .C/; b ajk .`/ D b ajk .`/ : (8.2.14) c k2M
Similarly b %j2 .`/ .Œb %2 jk .`//k2Mc . Thus (8.2.13) amounts to the following sequence of equations j
j
j
j
ak .`/ C kb ak .`/ C j Jb ak .`/J D J Œb %2 .`/ ; iN `Jb
k 0 1 : j 2 M; k 2 Mc ; ` 2 ZjSj ; J D 1 0
(8.2.15)
Note that the equation (8.2.15) is like (8.2.10). Since j ¤ k (indeed j 2 M, k 2 Mc ), by the second-order Melnikov nonresonance conditions (1.2.16)–(1.2.19), each equation (8.2.15) has a unique solution for any %2 , the reality condition (8.2.14) holds, and j b ak .`/ C h`i0 Œb (8.2.16) %2 jk .`/ : We now estimate jajs . By Lemma 4.3.10 we have jajs 's kaks (we identify each ? ? ? aj 2 L.Hj ; HM / with a function of HM HM as in (4.2.6) and (4.2.7)). Given a function X ? u`;k ei`' ‰k .x/ 2 HM u.'; x/ D `2ZjSj ;k2Mc
the Sobolev norm kuks defined in (4.3.2) is equivalent, using (4.1.4) and (4.1.3), to kuk2s 's kuk2L2 .H s \H ? / C kuk2H s .L2 \H ? / ' x ' x M M X 2s 2s 2 's k C h`i ju`;k j :
(8.2.17)
`2ZjSj ;k2Mc
In conclusion, (8.2.17), (8.2.16), and the fact that, by the Young inequality,
⟨ℓ⟩^{2τ₀} k^{2s} ≤ (1/p)⟨ℓ⟩^{2τ₀ p} + (1/q) k^{2sq} ≲_{s,τ₀} ⟨ℓ⟩^{2(τ₀+s)} + k^{2(τ₀+s)},
with p := 2(τ₀+s)/(2τ₀), q := 2(τ₀+s)/(2s),
imply |a|_s ≤ C(s)‖ρ₂‖_{s+τ₀}. The estimate (8.2.11) for the Lipschitz norm follows as usual.
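Schematically, the way (8.2.16), (8.2.17), and the Young inequality above combine is the following chain, for a fixed j ∈ M, with the sum running over ℓ ∈ ℤ^{|S|} and k ∈ Mᶜ, and with constants depending on s, τ₀ suppressed:
\[
|a^j|_s^2 \;\lesssim\; \sum_{\ell,k}\big(k^{2s}+\langle\ell\rangle^{2s}\big)\big|\hat a^j_k(\ell)\big|^2
\;\lesssim\; \sum_{\ell,k}\big(k^{2s}+\langle\ell\rangle^{2s}\big)\langle\ell\rangle^{2\tau_0}\big|[\hat\rho_2]^j_k(\ell)\big|^2
\;\lesssim\; \sum_{\ell,k}\big(k^{2(s+\tau_0)}+\langle\ell\rangle^{2(s+\tau_0)}\big)\big|[\hat\rho_2]^j_k(\ell)\big|^2
\;\lesssim\; \|\rho_2\|_{s+\tau_0}^2 .
\]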
8.3 Averaging step
We consider the family of invertible symplectic transformations
P(φ) := e^{J S(φ)},   P^{-1}(φ) := e^{-J S(φ)},   φ ∈ T^{|S|},   (8.3.1)
where S WD S.'/ is the self-adjoint operator in L.HS? / of the form (8.2.1) with d.'/, a.'/ defined in Lemmata 8.2.1 and 8.2.2. By Lemma 4.3.10, the estimates (8.2.9), (8.2.11), and (8.1.6) imply jSjLip;C;s .s jSjLip;sC 1 .s k%kLip;sC20 C1 C.s/"2 2
(8.3.2)
and the transformation P(φ) in (8.3.1) satisfies, for ε small, the estimates
|P|_{Lip,s₁} ≤ 2,   |P|_{Lip,s}, |P^{-1}|_{Lip,s} ≤ C(s),   ∀ s ≥ s₁.   (8.3.3)
In the next proposition, which is the main result of this chapter, we conjugate the whole operator ω̄_ε·∂_φ − J∘(A + r_ε/(1+ε²)) defined in (8.1.3) by P(φ).
Proposition 8.3.1 (Averaging). Assume the second-order Melnikov nonresonance conditions (1.2.16)–(1.2.19), where the set M is fixed in Lemma 8.1.1. Let P(φ) be the symplectic transformation (8.3.1) of H_S^⊥, where S(φ) is the self-adjoint operator of the form (8.2.1) with d(φ), a(φ) defined in Lemmata 8.2.1 and 8.2.2. Then the conjugated operator
r" P1 .'/ !N " @' J A C P.'/ D !N " @' J AC (8.3.4) 1 C "2 acting in the normal subspace HS? , has the following form, with respect to the split? , ting HS? D HM ˚ HM
DV D."; / 0 C C c ; (8.3.5) A D A0 C % ; A0 WD C …D % D AM 0 1 C "2 Mc where, in the basis f.‰j ; 0/; .0; ‰j /gj 2M of HM , the operator D."; / is represented by the diagonal matrix
1 0 D."; / D Diagj 2M j ."; /Id2 ; Id2 D (8.3.6) 0 1 with j ."; / defined in (8.1.7), A is defined in (8.1.3), and j%C jLip;C;s .s "4 C jr" jLip;C;s :
(8.3.7)
Moreover, given another self-adjoint operator r0" satisfying (7.1.26), we have that ˇ C C 0 ˇ ˇ ˇ ˇ% % ˇ ˇr" r0 ˇ . : (8.3.8) s " 1 C;s C;s 1
1
The rest of this chapter is dedicated to proving Proposition 8.3.1. We first study the conjugated operator
P1 .'/.!N " @' J A/P.'/ ; where A is defined in (8.1.3). We have the Lie series expansion
P1 .'/.!N " @' J A/P.'/ D .!N " @' J A/ C Ad.J S/ .X0 / C
X 1 . X0 / ; Adk kŠ .J S/
k2
(8.3.9) where X0 WD !N " @' J A. Recalling that A D .1 C "2 /1 DV C % by (8.1.3) and (8.1.4), we expand the commutator as Ad.J S/ .X0 /
D
!N " @' .J S/ C ŒJ S; J A
D
J.!N " @' S/ C .1 C "2 /1 ŒJ S; JDV C ŒJ S; J%
D
J. N @' S/ C ŒJ S; JDV "2 .1 C "2 /1 ŒJ S; JDV N @' S C ŒJ S; J% C J.!N " /
(8.2.6);(1.2.25)
D
J …O % "2 .1 C "2 /1 ŒJ S; JDV C ŒJ S; J% C "2 .J @' /S :
(8.3.10)
As a consequence of (8.3.9), (8.3.10), (8.2.3), (8.1.3), and (8.1.4) we deduce (8.3.4) with 1 AC D A0 C %C ; A0 WD 1 C "2 DV C …D % (8.3.11) and 1 J%C WD "2 1 C "2 ŒJ S; JDV C ŒJ S; J% C "2 .J @' /S X 1 1 1 C "2 P1 J r"P C . X0 / : Adk kŠ .J S/
(8.3.12)
k2
The estimate (8.3.7) for %C follows by (8.3.2), (8.3.3), (8.1.6), and the same arguments used in Lemmata 10.4.2 and 10.4.3. Similarly, we deduce (8.3.8). Now, recalling (4.2.29), (4.2.30) (with F M, G Mc ), (8.1.3), and (8.1.4), we have that the operator A0 in (8.3.11) can be decomposed, with respect to the splitting HM ˚ HMc , as
DV D."; / 0 c C …D % D ; A0 D AM 0 1 C "2 Mc
where, taking in HM the basis f.‰j .x/; 0/; .0; ‰j .x//gj 2M, we have ŒDV M j M Œb % .0/ C Diag C j 2M j 1 C "2
1 (8.1.4) j 0 C "2 Diagj 2M D Diag j 2M 0 j 1 C "2
D."; / D
Bj .0/ C Œb j
1 C "2
! : (8.3.13)
We now prove that D."; / has the form (8.3.6). Lemma 8.3.2 (Shifted normal frequencies). The operator D."; / in (8.3.13) has the form (8.3.6) with j ."; / defined in (8.1.7). Proof. By (8.3.13) and recalling the definition of C in (4.2.21), the operator D."; / is j C "2 bj 1 j Id Tr b ; b WD Bj .0/ ; (8.3.14) D."; / D Diagj 2M 2 j 1 C "2 2 and, by the definition of B in (7.1.25), we have Z 3 j 1=2 2 1=2 b ‰ a.x/.v.'; 0; // Tr Bj .0/ D ; D D ‰ d' j j V V L2 .Td / .2/jSj TjSj (8.3.15) Z 4" d' : C ‰j ; DV1=2 a4 .x/.v.'; 0; //3DV1=2 ‰j L2 .Td / .2/jSj TjSj (8.3.16) We first compute the term (8.3.15). Expanding the expression of v in (3.2.9) we get Z 3 1=2 1=2 D ‰ .x/ a.x/.v.'; 0; //2DV ‰j .x/ d'dx (8.3.15) D j V .2/jSj TjSjCd !2 Z X 1=2p 3 ‰j .x/ ‰j .x/ D a.x/ k 2k cos 'k ‰k .x/ p d'dx p jSj jSjCd j j .2/ T k2S q q X Z 3 1=2 1=2 a.x/ 2 2k2 cos 'k1 cos 'k2 D k 1 k1 k2 .2/jSj TjSjCd k1 ;k2 2S
‰k1 .x/‰k2 .x/
‰j2 .x/
d'dx Z 2 2 1 a.x/ 1 2 ‰ .x/‰ .x/ dx cos2 'k d' k k j j k j
Z 3 X .2/jSj Td TjSj k2S Z Z X 6 1 1 2 2 jSj1 k k a.x/ ‰k .x/‰j .x/dx .2/ cos2 d D .2/jSj j Td T k2S X 1 1 k ‰j2 ; a.x/‰k2 L2 k D 2.B /j ; (8.3.17) D 3j D
k2S
using (1.2.9) and (1.2.10). On the other hand, the term in (8.3.16) is equal to zero, because Z Z 3 ‰j .x/ 4" ‰j .x/ (8.3.16) D a4 .x/ v.'; 0; / p d'dx (8.3.18) p jSj j j .2/ Td TjSj and, by (6.2.9), the integral Z Z 3 v.'; 0; / d' D TjSj
TjSj
3 v.' C ; E 0; / d' D
Z TjSj
3 v.'; 0; / d' ;
is equal to zero. In conclusion, we deduce by (8.3.14), (8.3.15), (8.3.16), (8.3.17), and (8.3.18) that bj D .B /j , and, inserting the value WD ./ D "2 A 1 ..1 C "2 /!N " / N defined in (1.2.27), we get that the eigenvalues of D."; / in (8.3.14), are equal to j C "2 bj D j ."; / 1 C "2 with j ."; / defined in (8.1.7).
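The vanishing of (8.3.16), used in the proof above, rests on the elementary parity argument contained in (6.2.9), which we record schematically (here π⃗ := (π, …, π) ∈ ℝ^{|S|}, and ξ denotes the amplitude parameter of (3.2.9)):
\[
v(\varphi+\vec\pi, 0, \xi) = -\,v(\varphi, 0, \xi)
\quad\Longrightarrow\quad
\int_{\mathbb T^{|S|}} v(\varphi,0,\xi)^3\, d\varphi
= \int_{\mathbb T^{|S|}} v(\varphi+\vec\pi,0,\xi)^3\, d\varphi
= -\int_{\mathbb T^{|S|}} v(\varphi,0,\xi)^3\, d\varphi ,
\]
whence the integral vanishes; indeed each component of v, proportional to √(2ξ_k)\,cos(φ_k)\,Ψ_k(x), changes sign under the shift φ ↦ φ + π⃗.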
The new operator ω̄_ε·∂_φ − J∘A⁺ obtained in Proposition 8.3.1 is in a suitable form to admit, for most values of the parameter λ, an approximate right inverse, according to Proposition 9.2.1 in the next chapter.
Chapter 9
Approximate right inverse in normal directions
The goal of this chapter is to state the crucial Proposition 9.2.1 for the existence of an approximate right inverse for a class of quasi-periodic linear Hamiltonian operators, acting in the normal subspace H_S^⊥, of the form ω̄_ε·∂_φ − J∘(A₀ + Θ), where A₀ and Θ are self-adjoint operators, A₀ is a split admissible operator according to Definition 9.1.1, and Θ is “small” (see (9.2.2)). We shall use Proposition 9.2.1 to construct the sequence of approximate solutions along the iterative nonlinear Nash–Moser scheme of Chapter 12, more precisely to prove Proposition 12.1.1.
9.1 Split admissible operators We first define the following class of split admissible operators A0 . Definition 9.1.1 (Split admissible operators). Let C1 ; c1 ; c2 > 0 be constants. We denote by C .C1 ; c1 ; c2 / the class of self-adjoint operators A0 ."; ; '/ D
DV C R0 ."; ; '/; 1 C "2
DV D . C V .x//1=2 ;
(9.1.1)
z ƒ, that satisfy acting on HS? , defined for all 2 ƒ 1. jR0 jLip;C;s1 C1 "2 , 2. A0 is block-diagonal with respect to the splitting HS? D HF ˚ HG , i.e., A0 has the form (see (4.2.1))
0 D0 ."; / ; (9.1.2) A0 WD A0 ."; ; '/ D 0 V0 ."; ; '/ and, moreover, in the basis of the eigenfunctions f.‰j ; 0/; .0; ‰j /gj 2F (see (4.1.9)), the operator D0 is represented by the diagonal matrix D0 WD D0 ."; / D Diagj 2F j ."; /Id2 ;
j ."; / 2 R :
The eigenvalues j ."; / satisfy ˇ ˇ ˇj ."; / j ˇ C1 "2 ;
(9.1.3)
(9.1.4)
where j are defined in (1.1.5), and d .i j /."; / c2 "2 or d .i j /."; / c2 "2 ;
i ¤ j ; (9.1.5)
(9.1.6) d .i C j /."; / c2 "2 or d .i C j /."; / c2 "2 ; 2 1 2 1 2 8j 2 F; c2 " d j ."; / c2 " or c2 " d j ."; / c2 "2 ; (9.1.7) and, in addition, for all j 2 F, ( d V0 ."; / C j ."; /Id c1 "2 d V0 ."; / j ."; /Id c1 "2 :
(9.1.8)
We now verify that the operator A0 defined in (8.3.5) is a split admissible operator according to Definition 9.1.1. Lemma 9.1.2 (A0 is split admissible). There exist positive constants C1 ; c1 ; c2 such that the operator A0 defined in (8.3.5) and (8.3.6) belongs to the class C .C1 ; c1 ; c2 / of split admissible operators introduced in Definition 9.1.1. Notice that A0 is defined for all 2 ƒ. Proof. Recall that the decomposition in (8.3.5) refers to the splitting HS? D HM ˚ HMc , where the finite set M F has been fixed in Lemma 8.1.1, whereas the decomposition in (9.1.2) concerns the splitting HS? D HF ˚ HG . By (8.3.5) the operator A0 has the form (9.1.1) with R0 D …D % and (4.3.35) and (8.1.6) imply jR0 jLip;C;s1 D j…D %jLip;C;s1 C1 "2 : In addition, with respect to the splitting HS? D HF ˚ HG , and taking in HF the basis of the eigenfunctions f.‰j ; 0/; .0; ‰j /gj 2F , the operator A0 in (8.3.5) and (8.3.6) has the form (9.1.2) and (9.1.3) (recall that F M) with D0 ."; / D Diagj 2F j ."; /Id2 ;
j ."; / D j ."; / ;
and the operator V0 , which acts in HG , admits the decomposition, with respect to the splitting HG D HMnF ˚ HMc , and taking in HMnF the basis of the eigenfunctions f.‰j ; 0/; .0; ‰j /gj 2MnF ,
0 Diagj 2MnF j ."; /Id2 c : (9.1.9) V0 D AM 0 Mc By (8.1.7) and (1.2.25) we have, for all j 2 M, j ."; / D j .0; / C O."2 /, where j .0; / D j are the unperturbed linear frequencies defined in (1.1.5), and the derivative of the functions j ."; / (which are defined for all 2 ƒ) is "2 1 BA N ; 8j 2 M : (9.1.10) @ j ."; / D j j .1 C "2 /2
Then, by (9.1.10) and the definition of F and G in (1.2.15), for any j 2 M n F G, we have @ j C max j@ j j j 2F
ˇ ˇˇ ˇ 1 1 D j BA N j max ˇj BA N j ˇ 2 j 2F 1 C "2 "2
"2 < 2 c 1 C "2
(9.1.11)
for some c > 0. By (9.1.11) (recall that j ."; / D j ."; /) and (8.1.9) we conclude that, for " small, property (9.1.8) holds for some c1 > 0. The other properties (9.1.5)–(9.1.7) follow, for " small, by (9.1.10) and the assumptions (1.2.21) and (1.2.22).
9.2 Approximate right inverse
The main result of this chapter is the following proposition, which provides an approximate solution of the linear equation Lh = g, where
L := ω̄_ε·∂_φ − J∘(A₀ + Θ) .
Proposition 9.2.1 (Approximate right inverse in normal directions). Let !N " 2 RjSj be . 1 ; 1 /-Diophantine and satisfy property .NR/1 ;1 in Definition 5.1.4 with 1 ; 1 fixed in (1.2.28). Fix s1 > s0 according to Proposition 5.1.5 (more precisely Proposition 5.3.4), and s1 < s2 < s3 such that .i/ s2 s1 > 300. 0 C 3s1 C 3/;
.ii/ s3 s1 3.s2 s1 / ;
(9.2.1)
where the constant 0 appears in the multiscale Proposition 5.1.5 and we assume that
0 > 2 C 3. Then there is "0 > 0 such that, 8" 2 .0; "0 /, for each self-adjoint operator A0 D
DV C R0 1 C "2
acting in HS? , belonging to the class C .C1 ; c1 ; c2 / of split admissible operators (see z Definition 9.1.1), for any self-adjoint operator 2 L2 .TjSj ; L.HS? //, defined in ƒ ƒ, satisfying j jLip;C;s1 "3 ;
jR0 jLip;C;s2 C j jLip;C;s2 "1 ;
z 1=2 5=6, satisfying there are closed subsets ƒ."I ; A0 ; / ƒ,
(9.2.2)
1. ƒ."I ; A0 ; / ƒ."I 0 ; A0 ; /, for all 1=2 0 5=6, ˇ ˇ z ˇ b."/ where lim b."/ D 0, 2. ˇŒƒ."I 1=2; A0 ; /c \ ƒ "!0
3. if
A00
2
D .1 C " /
1
2 C .C1 ; c1 ; c2 / and 0 satisfy ˇ 0 ˇ ˇ ˇ 0 ˇ ˇ ˇR R0 ˇ C ı "3 ; 0 C;s C;s DV C
R00
1
(9.2.3)
1
z \ƒ z 0 ƒ, then, for all .1=2/ C ı2=5 5=6, for all 2 ƒ ˇ ˇˇ c ˇz0 \ ƒ "I ı2=5 ; A0 ; ˇ ı˛=3 I ˇƒ \ ƒ "I ; A00 ; 0
(9.2.4)
and, for any 2 .0; "/, there exists a linear operator s 1 ? 3 L1 approx WD Lapprox; 2 L H \ HS z ! Hs3 \ HS? satisfying such that, for any function gW ƒ jR0 jLip;C;s3 C j jLip;C;s3 C kgkLip;s3 "2 1 ;
kgkLip;s1 "2 ;
(9.2.5)
s3 ? the function h WD L1 approx g, hW ƒ."I 5=6; A0 ; / ! H \ HS satisfies 4
11
khkLip;s3 "2 10 ;
khkLip;s1 "2 5 ;
(9.2.6)
and .!N " @' J.A0 C //h D g C r with
(9.2.7)
krkLip;s1 "2 3=2 : 0
0
(9.2.8) 0
Furthermore, setting Q WD 2. C &s1 / C 3 (where & D 1=10 and is given by 0 Proposition 5.1.5), for all g 2 Hs0 CQ \ HS? , 1 L (9.2.9) approx g Lip;s .s1 kgkLip;s0 CQ0 : 0
The required bounds in (9.2.2)–(9.2.5) will be verified along the nonlinear Nash–Moser scheme of Chapter 12. Proposition 9.2.1 will be applied to the operator ω̄_ε·∂_φ − J∘A⁺, where A⁺ = A₀ + ρ⁺ is defined in Proposition 8.3.1, to prove Proposition 12.1.1. Notice that A₀ is split admissible by Lemma 9.1.2 and the coupling operator ρ⁺ satisfies |ρ⁺|_{Lip,+,s₁} ≲_{s₁} ε⁴ ≤ ε³ by (8.3.7) and (7.1.26). Proposition 9.2.1 is proved in Chapter 11 using the results of Chapters 5 and 10. We remark that the value of ε₀ given in Proposition 9.2.1 may depend on s₃ because, in the estimates of the proof, there are quantities of the form C(s₃)ε^a, a > 0, and we choose ε small so that C(s₃)ε^a < 1. However, such terms appear multiplied by quantities C(s₃)γ^{a₁}, a₁ > 0, and so we may require C(s₃)γ^{a₁} < 1 assuming only γ small enough, i.e., γ ≤ γ̃(s₃) (see Remark 11.4.1). As a result, the following more specific statement holds.
Proposition 9.2.2. The conclusion of Proposition 9.2.1 can be modified from (9.2.4) in the following way. For any s s3 , there is z.s/ > 0 such that: for any 0 < < min."; z.s//, there exists a linear operator s 1 ? L1 approx WD Lapprox;;s 2 L H \ HS z ! Hs \ HS? satisfying such that, for any function gW ƒ kgkLip;s1 "2 ;
jR0 jLip;C;s C j jLip;C;s C kgkLip;s "2 1 ;
(9.2.10)
s ? the function h WD L1 approx g, hW ƒ."I 5=6; A0 ; / ! H \ HS satisfies 4
khkLip;s1 "2 5 ;
11
khkLip;s "2 10 ;
(9.2.11)
and with
.!N " @' J.A0 C //h D g C r
(9.2.12)
krkLip;s1 "2 3=2 :
(9.2.13)
Furthermore, setting Q0 WD 2. 0 C &s1 / C 3 (where & D 1=10 and 0 is given by 0 Proposition 5.1.5), for all g 2 Hs0 CQ \ HS? , 1 L (9.2.14) approx g Lip;s .s1 kgkLip;s0 CQ0 : 0
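To indicate schematically how an approximate right inverse of this kind is used (the symbols 𝓕 and 𝓛 below are generic placeholders for the Nash–Moser functional of Chapter 12 and its linearization; the rigorous statement is Proposition 12.1.1): if at an approximate solution i_n one solves the linearized equation only up to a remainder r,
\[
\mathcal L(i_n)\,h = -\,\mathcal F(i_n) + r
\quad\Longrightarrow\quad
\mathcal F(i_n + h) = \mathcal F(i_n) + \mathcal L(i_n)\,h + Q(i_n,h) = r + Q(i_n,h) ,
\]
so the new error is the sum of the remainder r, controlled by estimates such as (9.2.8) and (9.2.13), and of a term Q quadratic in h; this is why the quadratic Nash–Moser scheme tolerates an inverse which is only approximate.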
Chapter 10
Splitting between low-high normal subspaces
The main result of this chapter is Corollary 10.1.2 below. Its goal is to block-diagonalize a quasi-periodic Hamiltonian operator of the form ω̄_ε·∂_φ − J∘(A₀ + Θ) according to the splitting H_S^⊥ = H_F ⊕ H_G, up to a very small coupling term. See the conjugation (10.1.34), where Θ_n is small according to (10.1.32).
10.1 Splitting step and corollary
The proof of Corollary 10.1.2 is based on an iterative application of the following proposition.
Proposition 10.1.1 (Splitting step). Let ω̄_ε ∈ ℝ^{|S|} be (γ₁, τ₁)-Diophantine and satisfy property (NR)_{γ₁,τ₁} in Definition 5.1.4 with γ₁, τ₁ fixed in (1.2.28). Assume s₂ − s₁ > 120(τ′ + 3s₁ + 3), i.e., condition (9.2.1)-(i). Then, given C₁ > 0, c₁ > 0, c₂ > 0, there is ε₀ > 0 such that, for all ε ∈ (0, ε₀), for each self-adjoint operator
DV 0 D0 ."; / A0 D D C R 0 0 V0 ."; ; '/ 1 C "2 belonging to the class C .C1 ; c1 ; c2 / of split admissible operators of Definition 9.1.1 z with (see (9.1.1) and (9.1.2)), defined in ƒ, D0 ."; / D Diagj 2F j ."; /Id2 ;
j ."; / 2 R ;
as in (9.1.3), there are z 1=2 1, satisfying the properties Closed subsets ƒ."I ; A0 / ƒ, 1. ƒ."I ; A0 / ƒ."I 0 ; A0 / for all 1=2 0 1, ˇ ˇ z ˇ b."/ where lim b."/ D 0, 2. ˇŒƒ."I 1=2; A0 /c \ ƒ "!0
3.
if A00 5=2
D .1 C " / 2 C .C1 ; c1 ; c2 / p with jR00 R0 jC;s1 ı z \ ƒ , then, for all .1=2/ C ı 1, " for all 2 ƒ p ˇ 0 ˇ ˇƒ z \ Œƒ."I ; A00 /c \ ƒ."I ı; A0 /ˇ ı˛ ; ˛ > 0 ; (10.1.1) 2
1
DV C R00 z0
z satisfying and, for each self-adjoint operator acting in HS? , defined for 2 ƒ, j jLip;C;s1 ı1 "3 ; there exist
ı1 .jR0 jLip;C;s2 C j jLip;C;s2 / " ;
(10.1.2)
A symplectic linear invertible transformation eJ S .'/ 2 L HS? ; ' 2 TjSj ;
(10.1.3)
defined for all 2 ƒ."I 1; A0 /, where S .'/ WD S ."; /.'/ is a self-adjoint operator in L.HS? /, satisfying the estimates (10.1.7) and (10.1.8) below; A self-adjoint operator AC of the form C
DV 0 D0 ."; / C C C C R0 C D C C ; (10.1.4) A D 0 V0C ."; ; '/ 1 C "2 defined for all 2 ƒ."I 1; A0 /, with R0C ; C L2 -self-adjoint;
D0C ."; / D Diagj 2F C j ."; /Id2 ;
(10.1.5)
and C j ."; / 2 R; such that, for all 2 ƒ."I 1; A0 /, !N " @' JA0 J eJ S .'/ D eJ S .'/ !N " @' JAC ;
(10.1.6)
and the following estimates hold: The self-adjoint operator S satisfies 7
jS jLip;s1 C1 ı18 ;
14
jS jLip;s2 C1 ı1
3 jR0 jLip;C;s2 C j jLip;C;s2 C ı1 4 ; (10.1.7)
and more generally, for all s s2 , ss 3& s s2 14 3 4 jS jLip;sC1 C.s/ ı1 jR0 jLip;C;s C j jLip;C;s C ı1 ı1 2 1 I (10.1.8) The operators in (10.1.4) and (10.1.5) satisfy 3=4 jC D o "2 ; j ."; / j ."; /jLip ı1 kV0C V0 kLip;0 ı13=4 D o "2 ;
(10.1.9) (10.1.10)
and jR0C R0 jLip;C;s1
jR0C jLip;C;s2 C j C jLip;C;s2
(10.1.2)
j C jLip;C;s1 ı1 =2 ; (10.1.11) ı11=4 jR0 jLip;C;s2 C j jLip;C;s2 C ı13=4 (10.1.12) 3=4
3=2
ı1 ;
3=2
"ı1
;
(10.1.13)
and, more generally, for all s s2 , 1 4
ss2 3 3& s s jR0 jLip;C;s C j jLip;C;s C ı1 4 ı1 2 1 (10.1.14) where & D 1=10 is fixed in (5.1.16).
jR0C jLip;C;s C j C jLip;C;s .s ı1
At last, given A00 D .1 C "2 /1 DV C R00 2 C .C1 ; c1 ; c2 / and 0 satisfying (10.1.2), then, for all 2 ƒ."I 1; A0 / \ ƒ."I 1; A00 /, we have the estimates ˇ 0C ˇ 0 ˇ 0 ˇ ˇ ˇ ˇR RC ˇ ˇ ˇ ˇ ˇ 0 0 C;s1 R0 R0 C;s1 C C C;s1 ˇ 0C ˇ ˇ 1=2 ˇ 1=20 0 ˇ C ˇ ı1 ˇR00 R0 ˇC;s C ı1 j jC;s1 : C;s 1
1
(10.1.15) (10.1.16)
Note that, according to (10.1.6), (10.1.4), and (10.1.11), the new coupling term Θ⁺ is much smaller than Θ in the low norm |·|_{Lip,+,s₁}. Moreover, by (10.1.13), the new Θ⁺ also satisfies the second assumption in (10.1.2) with δ₁^{3/2} instead of δ₁. Notice also that the Cantor sets Λ(ε; ·, A₀) depend only on A₀ and not on Θ. Finally, we point out that (10.1.15) and (10.1.16) will be used for the measure estimates of the Cantor sets of “good” parameters λ, in relation with property 3 (see (10.1.1)): for this application, an estimate of the low norms |·|_{+,s₁}, without control of the Lipschitz dependence, is enough. Applying Proposition 10.1.1 iteratively, we deduce the following corollary.
DV C R0 1 C "2
be a self-adjoint operator in the class of split admissible operators C .C1 ; c1 ; c2 / (see z Definition 9.1.1) and be a self-adjoint operator in L.HS? /, defined for 2 ƒ, satisfying (9.2.2). Then there exist z 1=2 5=6, satisfying the properties Closed sets ƒ1 ."I ; A0 ; / ƒ, 1. ƒ1 ."I ; A0 ; / ƒ1 ."I 0 ; A0 ; / for all 1=2 0 5=6, ˇ ˇ c z ˇ b1 ."/ where lim b1 ."/ D 0, 2. ˇ ƒ1 ."I 1=2; A0; / \ ƒ "!0
3. if
A00
2
D .1 C " /
1
DV C
R00
2 C .C1 ; c1 ; c2 / and 0 satisfy
jR0 R00 jC;s1 C j 0 jC;s1 ı "2
(10.1.17)
z \ƒ z 0 , then, for all .1=2/ C ı2=5 5=6, for all 2 ƒ ˇ ˇ 0 ˇƒ z \ Œƒ1 ."I ; A00 ; 0 /c \ ƒ1 ."I ı2=5 ; A0 ; /ˇ ı˛=2 I (10.1.18)
A sequence of symplectic linear invertible transformations
P0 WD Id;
Pn WD Pn ."; /.'/ D eJ S1 .";/.'/ eJ Sn .";/.'/ ;
defined for all 2 ƒ1 ."I 5=6; A0 ; /, acting in HS? , satisfying, for ı1 D "3 ;
n 1; (10.1.19) (10.1.20)
the estimates .3=2/n1 3 4
˙1 jLip;C;s1 ı1 for n 1; jPn˙1 Pn1
;
3 4
jPn˙1 jLip;C;s2
jPn˙1 IdjLip;C;s1 2ı1 ; 12 .3=2/n1 3 4 .C.s2 //n ı1 "ı1 C 1 ;
(10.1.21) (10.1.22)
and, more generally, for all s s2 , n1
. 3 / . 34 C˛.s// jPn˙1 jLip;C;s .C.s//nı1 2 1 C 2˛.s/ 3 jR0 jLip;C;s C j jLip;C;s ı12 C1 ;
where ˛.s/ WD 3&
s s2 I s2 s1
(10.1.23)
(10.1.24)
A sequence of self-adjoint block-diagonal operators of the form
DV 0 Dn ."; / An D C Rn WD ; n 1; 0 Vn ."; ; '/ 1 C "2
(10.1.25)
defined for 2 ƒ1 ."I 5=6; A0 ; /, belonging to the class C .2C1 ; c1 =2; c2 =2/ of split admissible operators, with Dn ."; / D Diagj 2F .n/ j ."; /Id2 ;
(10.1.26)
satisfying 3=4 .n/ D o "2 ; jj ."; / j ."; /jLip D O ı1 3=4 D o "2 ; kVn V0 kLip;0 D O ı1 2
jRn jLip;C;s1 C1 " C jRn jLip;C;s2
3=4 2ı1
(10.1.27) (10.1.28) 2
2C1 " ; 3 n1 3 12 n . 2 / 4 "ı1 C 1 ; .C.s2 // ı1
(10.1.29) (10.1.30)
and, more generally, for all s s2 , . 3 /n1 . 3 C˛.s//
4 jRn jLip;C;s .C.s//n ı1 2 12 C 2˛.s/ 3 jR0 jLip;C;s C j jLip;C;s ı1 C1 ;
(10.1.31)
where ˛.s/ is defined in (10.1.24); a sequence of L2 self-adjoint operators n 2 L.HS? /, n 1, defined for 2 ƒ1 ."I 5=6; A0; /, satisfying n n1 h i 3 3 3 1 2 4 n 2 j n jLip;C;s1 ı1 "ı1 2 C 1 ; ; j n jLip;C;s2 .C.s2 // ı1 (10.1.32) and, more generally, for all s s2 , n1 j n jLip;C;s
.C.s//n ı1
3 2
h
3 4 C˛.s/
1 C 2˛.s/ i jR0 jLip;C;s Cj jLip;C;s ı12 3 C1 ; (10.1.33)
where ˛.s/ is defined in (10.1.24); such that, for all 2 ƒ1 ."I 5=6; A0 ; /, !N " @' JA0 J Pn .'/ D Pn .'/ !N " @' JAn J n :
(10.1.34)
At last, given A00 D
DV C R00 2 C .C1 ; c1 ; c2 / 1 C "2
and a self-adjoint operator 0 2 L.HS? / satisfying (9.2.2), if jA00 A0 jC;s1 C j 0 jC;s1 ı "3 ; then, for all 2 ƒ1 ."I 5=6; A0 ; / \
z \ƒ z0 ; 8 2 ƒ
ƒ1 ."I 5=6; A00 ; 0 /,
(10.1.35)
for all n 2 N,
jA0n An jC;s1 ı4=5 :
(10.1.36)
Note that each operator A_n in (10.1.25) is block-diagonal according to the splitting H_S^⊥ = H_F ⊕ H_G in (4.1.5), i.e., it has the same form as A₀ in (9.1.2), but the coupling term Θ_n in (10.1.34) is much smaller than Θ (in the low norm |·|_{Lip,+,s₁}): compare the first inequality in (10.1.32) (where δ₁ = ε³ by (10.1.20)) with the first inequality in (9.2.2). The tame estimates (10.1.23) and (10.1.33), for all s ≥ s₂, will be used in the proof of Proposition 9.2.1, which provides the approximate right inverse required in the Nash–Moser nonlinear iteration of Chapter 12.
Proof. Let us define the sequences of real numbers (δ_n)_{n≥1} and (σ_n)_{n≥0} by
δ_n := δ_1^{(3/2)^{n-1}},   δ_1 = ε³,   and   σ_0 := 0,   σ_{n+1} := σ_n + δ_{n+1}^{3/8} = σ_n + δ_1^{(3/8)(3/2)^n}.   (10.1.37)
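Note that, since δ₁ = ε³, the shifts σ_n just introduced remain below 1/6 for ε small, as required below after (10.1.38); indeed the exponents (3/8)(3/2)^k grow geometrically, so that, once δ₁^{3/16} ≤ 1/2,
\[
\sigma_n \;=\; \sum_{k=0}^{n-1} \delta_{k+1}^{3/8}
\;=\; \sum_{k=0}^{n-1} \delta_1^{\frac38\left(\frac32\right)^k}
\;\le\; \delta_1^{3/8} \sum_{j\ge 0} \big(\delta_1^{3/16}\big)^{j}
\;\le\; 2\,\delta_1^{3/8} \;=\; 2\,\varepsilon^{9/8} \;\le\; \tfrac16 ,
\]
because the ratio of two consecutive terms of the sum is δ₁^{(3/16)(3/2)^k} ≤ δ₁^{3/16}.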
We shall prove by induction the following statements: for any n 2 N .P/n there exist: (i) Symplectic linear invertible transformations P0 ; : : : ; Pn of the form (10.1.19), defined respectively for in decreasing subsets z; ƒn ."I 5=6; A0 ; / ƒ1 ."I 5=6; A0 ; / ƒ0 WD ƒ satisfying (10.1.21)–(10.1.23) at any order k D 0; : : : ; n. The sets ƒn ."I ; A0 ; /, 1=2 5=6, are defined inductively by z and ƒn ."I ; A0 ; / WD ƒ0 WD ƒ
n1 \
ƒ."I C k ; Ak /;
n 1;
kD0
(10.1.38) where the sets ƒ."I C k ; Ak / are those defined by Proposition 10.1.1. Notice that for " small enough, k 1=6 for any k 0 (and so C k 1). (ii) Self-adjoint split admissible operators A0 ; : : : ; An as in (10.1.25) and (10.1.26), in the class C .2C1 ; c1 =2; c2 =2/ (see Definition 9.1.1), satisfying (10.1.27)–(10.1.31) at any order k D 0; : : : ; n, and such that the conjugation identity (10.1.34) holds for all in ƒn ."I 5=6; A0 ; /, with
k satisfying (10.1.32) and (10.1.33) for k D 0; : : : ; n. (iii) Moreover we have, 8n 1, n1 3
3
jAn An1 jLip;C;s1 ın4 D ı14
3 2
on ƒn ."I 5=6; A0; / : (10.1.39)
Initialization. The statement .P/0 -(i) holds with P0 WD Id, and (10.1.21)–(10.1.23) trivially hold. The conjugation identity (10.1.34) at n D 0 trivially holds with
0 WD . Then, in order to prove .P/0 -(ii), it is sufficient to notice that the self-adjoint operator A0 2 C .C1 ; c1 ; c2 / has the form (10.1.25) and (10.1.26) with .0/ j ."; / D j ."; /, and (10.1.27)-(10.1.29) hold. The estimate (10.1.30) (which for n D 0 is 1=2 1=2 C 1) and (10.1.32) are consequences of Assumption jR0 jLip;C;s2 ı1 Œ"ı1 3 (9.2.2), since ı1 D " . Finally notice that (10.1.31) and (10.1.33) are trivially satisfied. Induction. Next assume that .P/n holds. In order to define PnC1 and AnC1 we apply the “splitting step” Proposition 10.1.1 with A0 ; replaced by An ; n . In fact, by the inductive assumption .P/n , the operator An in (10.1.25) belongs to the class C .2C1 ; c1 =2; c2 =2/ of split admissible operators, according to Definition 9.1.1. Moreover, by (10.1.32) and (10.1.30), we have n 3 (10.1.37) j n jLip;C;s1 ı1 2 D ınC1 ; ınC1 jRn jLip;C;s2 C j n jLip;C;s2 " ; (10.1.40) which is (10.1.2) with n , Rn , ınC1 instead of , R0 , ı1 .
We define, for 1=2 5=6, the set ƒnC1 ."I ; A0 ; / WD ƒn ."I ; A0 ; / \ ƒ "I C n ; An
(10.1.41)
in agreement with (10.1.38) at n C 1. By (10.1.40), Proposition 10.1.1 implies the existence of a self-adjoint operator SnC1 2 L.HS? /, defined for 2 ƒnC1 ."I 5=6; A0 ; / ƒ "I 1; An , satisfying (see (10.1.7) and (10.1.8)) 7
7 3 n . /
8 jSnC1 jLip;s1 C1 ınC1 D ı18 2 ; 14 34 jSnC1 jLip;s2 C1 ınC1 ; jRn jLip;C;s2 C j n jLip;C;s2 C ınC1 1 ss 3& s s2 4 3 4 jSnC1 jLip;sC1 C.s/ ınC1 jRn jLip;C;s C j n jLip;C;s C ınC1 ınC1 2 1 ;
(10.1.42) such that, for any 2 ƒnC1 ."I 5=6; A0 ; /, we have !N " @' JAn J n eJ SnC1 D eJ SnC1 !N " @' JAnC1 J nC1 ; (10.1.43) where the operator DV (10.1.44) C RnC1 1 C "2 is block-diagonal as in (10.1.25) and (10.1.26). By (10.1.11)–(10.1.13), and (10.1.14), we have AnC1 D
3=4
3 3 n . /
jRnC1 Rn jLip;C;s1 ınC1 D ı14 2 ; 1 . 3 /nC1 ; j nC1 jLip;C;s1 ı1 2 2 1 34 4 ; jRn jLip;C;s2 C j n jLip;C;s2 C ınC1 jRnC1 jLip;C;s2 C j nC1 jLip;C;s2 ınC1 ss2 1 34 3& s2 s1 4 jRnC1 jLip;C;s C j nC1 jLip;C;s .s ınC1 ınC1 : jRn jLip;C;s C j n jLip;C;s C ınC1 (10.1.45) In particular, we have the first bound in (10.1.32) at the step n C 1. The symplectic transformation PnC1 . We define, for 2 ƒnC1 ."I 5=6; A0 ; /, the symplectic linear invertible transformation
PnC1 WD Pn eJ SnC1
(10.1.19)
D
eJ S1 eJ Sn eJ SnC1 ;
(10.1.46)
which has the form (10.1.19) at order n C 1. By the inductive assumption .P/n , the conjugation identity (10.1.34) holds for all 2 ƒn ."I ; A0 ; /, and we deduce, by
(10.1.43), that, for all in the set ƒnC1 ."I ; A0 ; / defined in (10.1.41), we have !N " @' JA0 J PnC1
!N " @' JA0 J Pn eJ SnC1 (10.1.34) D Pn !N " @' JAn J n eJ SnC1 (10.1.43) D Pn eJ SnC1 !N " @' JAnC1 J nC1 (10.1.46) D PnC1 !N " @' JAnC1 J nC1 ;
(10.1.46)
D
which is (10.1.34) at the step n C 1. We have PnC1 Pn D Pn .eJ SnC1 Id/. By (10.1.42), using (4.3.44), (4.3.45), and the definition of ınC1 in (10.1.37), 7=8 : jeJ SnC1 IdjLip;C;s1 . jJ SnC1 jLip;C;s1 . jJ SnC1 jLip;s1 C1 . ınC1
(10.1.47)
Moreover, by the second inequality of (10.1.21), jPn jLip;C;s1 2. Hence 3=4 ; jPnC1 Pn jLip;C;s1 ınC1
which is the first inequality in (10.1.21) at the step n C 1 (we obtain in the same way 1 ). As a consequence, the estimate for PnC1 jPnC1 IdjLip;C;s1
nC1 X
3=4
ık
3=4
2ı1
;
kD1
which is the second inequality in (10.1.21) at the step n C 1 (we obtain in the same 1 way the estimate for PnC1 ). Estimates (10.1.22) and (10.1.23) at the step n C 1 are proved below. AnC1 in (10.1.44) is a split admissible operator in C .2C1 ; c1 =2 ; c2 =2/. See Definition 9.1.1. By (10.1.45) we have n 3
3
3 2
4 jAnC1 An jLip;C;s1 D jRnC1 Rn jLip;C;s1 ınC1 D ı14
on ƒnC1 ."I 5=6; A0 ; /, which is (10.1.39) at the step n C 1. As a consequence, jRnC1 R0 jLip;C;s1 D jAnC1 A0 jLip;C;s1
nC1 X
3 3 k1 .2/
ı14
3=4
2ı1
(10.1.48)
kD1
and (10.1.29) at order n C 1 follows: in fact, jR0 jLip;C;s1 C1 "2 by item 1 of Definition 9.1.1 and 3=4
jRnC1 jLip;C;s1 jR0 jLip;C;s1 C 2ı1
C1 "2 C 2"9=4 2C1 "2
for " small enough, since ı1 D "3 . Recalling (10.1.25), the estimate (10.1.28) at order n C 1 is also a direct consequence of (10.1.48), as well as 3=4
kDnC1 D0 kLip;0 D O.ı1 / D o."2 / ; which implies (10.1.27) at order n C 1. Now, since A0 2 C .C1 ; c1 ; c2 / (Definition 9.1.1), either d .i j /."; / c2 "2 or d .i j /."; / c2 "2 , see (9.1.5). Thus, by (10.1.27) at order n C 1, we .nC1/ /."; / c2 "2 =2 in the first case and d ..nC1/ deduce that d ..nC1/ i j i .nC1/
/."; / c2 "2 =2 in the second case, for " small enough. In a similar way (10.1.27) provides properties (9.1.6) and (9.1.7) with constant c2 =2 instead of c2 at order n C 1, and (10.1.27) and (10.1.28) provide (9.1.8) with constant c1 =2 instead of c1 at order n C 1.
j
Estimates of jRnC1 jLip;C;s and jnC1 jLip;C;s . We first consider the case s D s2 . By (10.1.45), (10.1.30), (10.1.32), and recalling the definition of ınC1 in (10.1.37), 14 3 4 jRn jLip;C;s2 C j n jLip;C;s2 C ınC1 jRnC1 jLip;C;s2 C j nC1 jLip;C;s2 ınC1 1 3 4 .C.s2 //nC1 ınC1 "ı1 2 C 1 ; which gives (10.1.30) and the second bound in (10.1.32) at the step n C 1. To estimate the s-norms for any s s2 , let us introduce the notation s s2 as in (10.1.24) : un .s/ WD jRn jLip;C;s C j n jLip;C;s and ˛.s/ WD 3& s2 s1 (10.1.49) By (10.1.45), we have the inductive bound h 1 i 3 ˛.s/ 4 4 unC1 .s/ D jRnC1 jLip;C;s C j nC1 jLip;C;s C 0 .s/ ınC1 un .s/ C ınC1 ınC1 : Then, since (10.1.31) and (10.1.33) hold at order n, we obtain i h 1 1 3 ˛.s/ C 2˛.s/ 3 ˛.s/ 4 3 4 u0 .s/ı12 .C.s//n ın 4 C 1 C ınC1 ınC1 unC1 .s/ C 0 .s/ ınC1 1 3 C 2˛.s/ 4 ˛.s/ .C.s//nC1 ınC1 u0 .s/ı12 3 C 1 for C.s/ large enough, and using that ınC1 D ın3=2 . Hence the estimates (10.1.31) and (10.1.33) are proved also at order n C 1. ˙1 Estimates of jPnC1 jLip;C;s . We first consider the case s D s2 . By (4.3.26) and the first estimate in (10.1.42) we obtain X .C.s2 //k1 jSnC1 jk1 jeJ SnC1 jLip;s2 C1 1 C C.s2 / Lip;s1 jSnC1 jLip;s2 C1 kŠ k1
0
1 C C .s2 /jSnC1 jLip;s2 C1 :
(10.1.50)
Hence by (10.1.46) and (4.3.24), we get jPnC1 jLip;C;s2 C.s2 / jPn jLip;C;s2 jeJ SnC1 jLip;s1 C1 CjPn jLip;C;s1 jeJ SnC1jLip;s2 C1 (10.1.47);(10.1.50) C.s2 / jPn jLip;C;s2 C jPn jLip;C;s1 jSnC1 jLip;s2 C1 (10.1.51) for some new constant C.s2 /. Therefore, by (10.1.51), properties (10.1.22), (10.1.21) at order n, the second inequality in (10.1.42), and recalling the definition of ın in (10.1.37), we have 1 3 jPnC1 jLip;C;s2 C.s2 / .C.s2 //n ın 4 "ı1 2 C 1 1 3 4 4 jRn jLip;C;s2 C j n jLip;C;s2 C ınC1 C 2ınC1 1 (10.1.30);(10.1.32) 3 4 .C.s2 //nC1 ınC1 "ı1 2 C 1 provided the constant C.s2 / is large enough, proving (10.1.22) at the step n C 1. We 1 obtain in the same way the estimate for PnC1 . Now, for any s s2 , we derive by the last inequality in (10.1.42), (10.1.49), (10.1.31), and (10.1.33), i h 1 3 4 4 ˛.s/ jSnC1 jLip;sC1 C.s/ ınC1 un .s/ C ınC1 1 34 ˛.s/ C 2˛.s/ 3 C1 : (10.1.52) .C.s//nC1 ınC1 u0 .s/ı12 As in the case s D s2 (see (10.1.50)) we obtain jeJ SnC1 jLip;sC1 1 C C.s/jSnC1 jLip;sC1 : ˙1 jLip;C;s , exactly as Since PnC1 D Pn eJ SnC1 we derive the bound (10.1.23) on jPnC1 in the case s D s2 , using the interpolation inequality (4.3.24), the inductive assumptions (10.1.23), (10.1.21), (10.1.52), and taking the constant C.s/ of (10.1.23) large enough.
This completes the iterative proof of .Pn /n0 . The Cantor-like sets ƒ1 ."I ; A0 ; /. ƒ1 ."I ; A0 ; / WD
1 \ nD0
We define, for 1=2 5=6, the set
ƒn ."I ; A0 ; / D
1 \
ƒ."I C k ; Ak / ;
(10.1.53)
kD0
where ƒn ."I ; A0 ; / are defined in (10.1.38) and ƒ."I C k ; Ak / are defined by Proposition 10.1.1. We recall that the sequence . k / is defined in (10.1.37). The sets ƒ1 ."I ; A0 ; / satisfy property 1 of Corollary 10.1.2 as the sets ƒ."I C k ; Ak / satisfy property 1 of Proposition 10.1.1.
Proof of property 2 for the sets ƒ1 ."I ; A0 ; /. The complementary set of ƒ1 ."I ; A0 ; / may be decomposed as (the sets ƒn ."I ; A0 ; / in (10.1.38) are decreasing in n) ƒ1 ."I ; A0 ; /c 1 (10.1.53) [ D ƒn ."I ; A0 ; /c nD0 z ƒ0 Dƒ
D
(10.1.38)
D
1 [ ƒk ."I ; A0 ; / \ ƒkC1 ."I ; A0 ; /c
zc [ ƒ
kD0 1 [
zc [ ƒ zc [ ƒ
D
kD0 1 [
c ƒk ."I ; A0 ; / \ ƒk ."I ; A0 ; / \ ƒ."I C k ; Ak / ƒk ."I ; A0 ; / \ ƒ."I C k ; Ak /c
kD0 z ƒ0 Dƒ
D
z c [ ƒ."I ; A0 /c [ ƒ
1 [
ƒk ."I ; A0 ; / \ ƒ."I C k ; Ak /c :
kD1
(10.1.54) By property 2 of Proposition 10.1.1 we have z b."/ jƒ."I 1=2; A0 /c \ ƒj
with lim b."/ D 0 :
(10.1.55)
"!0
Moreover, for k 1, we have, by the definition of ƒk ."I ; A0 ; / in (10.1.38), c \ ƒ "I C k ; Ak D ƒk "I ; A0 ; c \ \ ƒ "I C k1 ; Ak1 : ƒ "I C k ; Ak (10.1.56) ƒk "I ; A0 ; Now, since, for all k 1 we have jAk Ak1 jC;s1 ık3=4 on ƒk (see (10.1.39)) and k D k1 C ık3=8 (see (10.1.37)), we deduce by (10.1.1) and (10.1.56) the Lebesgue measure estimate, ˇ
c ˇ
\ ˇ ˇ 1 ˇƒk "I 1 ; A0 ; ƒ "I C k ; Ak ˇˇ ık3˛=4 ; 8k 1 : (10.1.57) ˇ 2 2 In conclusion, by (10.1.54), (10.1.55), and (10.1.57) we obtain, recalling (10.1.37), k1 1 3˛ 3 X 4 2 c z b."/ C ı jƒ1 ."I 1=2; A0 ; / \ ƒj 1
kD1 3˛
b."/ C 2ı14 D b."/ C 2" since ı1 D "3 . Property 2 of Corollary 10.1.2 is proved.
9˛ 4
DW b1 ."/
Proof of (10.1.36). Actually we prove that, if A00 2 C .C1 ; c1 ; c2 /, .R00 ; 0 / satisfy (9.2.2) as .R0 ; / and .A00 ; 0 / satisfy (10.1.35), then jA0n An jC;s1 ı4=5 ;
8 2 ƒn ."I 5=6; A0; / \ ƒn ."I 5=6; A00 ; 0 / ; (10.1.58)
where the sets ƒn are defined in (10.1.38). Let 0 .1/ n WD jRn Rn jC;s1 ;
0 .2/ n WD j n n jC;s1 ;
.2/ n WD .1/ n C n : (10.1.59)
z ƒ z 0 . We recall that .AnC1 ; nC1 / The assumption (10.1.35) means that 0 ı on ƒ\ 0 0 (resp. .AnC1 ; nC1 /) is built applying Proposition 10.1.1 to .An ; n / (resp. .A0n ; n0 /) instead of .A0 ; 0 / (resp. .A00 ; 00 /). Hence, by (10.1.15) and (10.1.16) (with ı1 replaced by ınC1 ), for all n 2 N, for all 2 ƒnC1 ."I 5=6; A0; /\ƒnC1 ."I 5=6; A00; 0 /, we have the iterative inequalities .1/ .2/ .1/ nC1 n C Cn
1=2 1=20 .2/ .1/ and .2/ nC1 ınC1 n C ınC1 n :
(10.1.60)
1
19 As a consequence nC1 ınC1 n and, recalling the definition of ın in (10.1.37), we deduce that, for any n 1, for all 2 ƒn ."I ; A0 ; / \ ƒn ."I ; A00 ; 0 /, we have n1 n
n .ın ı1 /
1 19
1 19 1CC
0 ı1
Let n0 1 be the integer such that n0 ı1
3 2
3 2
2 19
ı ı1
3 2
ı:
(10.1.61)
n0 1 3
ı < ı1 2
:
(10.1.62)
Thus, by (10.1.61) and the second inequality in (10.1.62), we get n0 8n n0 ;
2 19
n ı1
3 2
3
16
ı ı 19 ı D ı 19 :
(10.1.63)
On the other hand, recalling (10.1.59) we bound, using the first estimate in (10.1.32), n 3
8n n0 C 1;
2 0 .2/ n j n jC;s1 C j n jC;s1 2ı1
:
(10.1.64)
Applying iteratively the first inequality in (10.1.60) we get, 8n n0 C 1, jA0n An jC;s1 D .1/ n
(10.1.63);(10.1.64)
(10.1.62)
.2/ .2/ .1/ n0 C C n0 C C n1 n0 n1 ! 3
16
3
ı 19 C 2C ı1 2 ı
16 19 16
C C ı1 2
n0 3
C 3C ı1 2 4
ı 19 C 3C ı ı 5
for ı small enough. In conclusion (10.1.63) and (10.1.65) imply (10.1.58).
(10.1.65)
Proof of property 3 for the sets ƒ1 ."I ; A0 ; /. By (10.1.54) (with A00 ; 0 instead of A0 ; ) and (10.1.53) we deduce, for all .1=2/ C ı2=5 5=6, the inclusion ! [ [ 0 0 0 c 2=5 z \ ƒ1 ."I ; A0 ; / \ ƒ1 ."I ı ; A0 ; / M0 M WD ƒ Mn n1
(10.1.66) where z 0 \ ƒ."I ; A00 /c \ ƒ."I ı2=5 ; A0 / M0 WD ƒ and
Mn WD ƒn ."I ; A00 ; 0 /
\
ƒ."I C n ; A0n /c \ \ ƒn "I ı2=5 ; A0 ; ƒ "I C n ı2=5 ; An :
z \ƒ z 0 , and therefore By the assumption (10.1.17) we have jR00 R0 jC;s1 ı on ƒ property 3 of Proposition 10.1.1 and the fact that ı1=2 ı2=5 imply that, for all .1=2/ C ı2=5 5=6, z 0 \ ƒ."I ; A00 /c \ ƒ."I ı1=2 ; A0 /j ı˛ : jM0 j jƒ
(10.1.67)
For n 1, we have, by (10.1.38), the inclusion
Mn ƒn ."I ; A00 ; 0 / \ ƒ."I C n1 ; A0n1 / \ ƒ."I C n ; A0n /c 0 and therefore, since jRn0 Rn1 jC;s1 D jA0n A0n1 jC;s1 ın3=4 on ƒn ."I ; A00 ; 0 / by (10.1.39), the estimate (10.1.1) and n D n1 C ın3=8 , imply
jMn j ın3˛=4 :
(10.1.68)
On the other hand, by (10.1.58), jAn A0n jC;s1 ı4=5 for any \ ƒn ."I ; A0 ; / 2 Mn ƒn "I ; A00 ; 0 and we deduce, by (10.1.1), the measure estimate \ ˇ ˇ ƒ."I C n ı2=5 ; An /ˇ ı4˛=5 : jMn j ˇƒ."I C n ; A0n /c
(10.1.69)
Finally (10.1.66), (10.1.67), (10.1.68), and (10.1.69) imply the measure estimate X min.ın3˛=4 ; ı4˛=5 / ı˛=2 jMj ı˛ C n1
for ı small, proving (10.1.18). The proof of Corollary 10.1.2 is complete. The rest of the chapter is devoted to the proof of Proposition 10.1.1.
10.2 The linearized homological equation

We consider the linear map
$$S\ \longmapsto\ J\,\bar\omega_\varepsilon\cdot\partial_\varphi S+[JS,JA_0],$$
where $S:=S(\varphi)$, $\varphi\in\mathbb T^{|S|}$, has the form
d.'/ a.'/ S .'/ D 2 L.HS? /; a.'/ 0
d.'/ D d .'/ 2 L.HF /;
(10.2.1)
a.'/ 2 L.HF ; HG / ;
and is self-adjoint. Recalling (9.1.2) and using that D0 and J commute, J 2 D Id, we have J !N " @' S C ŒJ S ; J A0 D
J !N " @' d C D0 d C JdJD0 J !N " @' a C Ja J V0 C D0 a : J !N " @' a J V0 Ja C JaJD0 0
(10.2.2)
The key step in the proof of the splitting Proposition 10.1.1 (see Section 10.4) is, given .'/ of the form
1 .'/ 2 .'/ 2 L.HS? / ;
.'/ D
2 .'/ 0 (10.2.3)
1 .'/ D 1 .'/ 2 L.HF /; 2 .'/ 2 L.HF ; HG / ; to solve (approximately) the “homological” equation J !N " @' S C ŒJ S ; J A0 D J :
(10.2.4)
The equation (10.2.4) amounts to the pair of decoupled equations J !N " @' d C D0 d C JdJD0 D J 1 ; J !N " @' a J V0 Ja C JaJD0 D J 2 :
(10.2.5) (10.2.6)
Note that taking the adjoint equation of (10.2.6), multiplying by J on the left and the right, since V0 and D0 are self-adjoint and JD0J D D0 , J D J , we obtain J !N " @' a C Ja J V0 C D0 a D J 2 ; which is the equation in the top right in (10.2.4), (10.2.2), and (10.2.3). Remark 10.2.1. We shall solve only approximately the homological equation (10.2.4) up to terms that are Fourier supported on high frequencies. The main reason is that the multiscale Proposition 5.1.5 provides tame estimates of the inverses of finitedimensional restrictions of infinite-dimensional operators. This is sufficient for proving Proposition 10.1.1.
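To illustrate the mode-by-mode mechanism behind the (approximate) solution of such homological equations, here is a toy scalar model in Python: a scalar equation with constant coefficients is solved in Fourier space, keeping only the modes below a threshold, in the spirit of Remark 10.2.1. The frequency vector, the constant coefficient and the truncation below are made-up data, not the objects of this chapter.

```python
# Toy scalar model of a homological equation solved mode by mode in Fourier
# space, keeping only low frequencies (cf. Remark 10.2.1).  All data are
# illustrative and unrelated to the actual operators of the text.
import numpy as np

omega = np.array([1.0, np.sqrt(2.0)])    # hypothetical frequency vector
mu = 0.3                                 # hypothetical constant coefficient
N = 5                                    # truncation threshold

rng = np.random.default_rng(0)
modes = [(l1, l2) for l1 in range(-N, N + 1) for l2 in range(-N, N + 1)]
f_hat = {l: rng.normal() + 1j * rng.normal() for l in modes}   # rhs coefficients

# solve (omega . d_phi + i*mu) u = f, i.e. i*(omega.l + mu) u_hat(l) = f_hat(l)
u_hat = {}
for l, fl in f_hat.items():
    divisor = omega @ np.array(l) + mu
    if abs(divisor) > 1e-8:              # a crude non-resonance condition
        u_hat[l] = fl / (1j * divisor)

# residual on the retained modes (zero up to rounding errors)
residual = max(abs(1j * (omega @ np.array(l) + mu) * u_hat[l] - f_hat[l]) for l in u_hat)
print(len(u_hat), residual)
```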
We shall decompose an operator of the form (10.2.3), as well as S in (10.2.1), in the following way. The operator 1 2 L.HF / can be represented as a finitej j dimensional self-adjoint square matrix .. 1 /i /i;j 2F with entries . 1 /i 2 L.Hj ; Hi /. Using in each subspace Hj , j 2 F, the basis ..‰j ; 0/; .0; ‰j // (see (4.1.9)), we j identify each operator . 1 /i .'/ 2 L.Hj ; Hi / with a 2 2-real matrix that we still denote by . 1 /ji .'/ 2 Mat2 .R/. We shall also Fourier expand X j j j j j Œ b1 i .`/ei`' ; Œ b1 i .`/ 2 Mat2 .C/ ; Œ b1 i .`/ D Œ b1 i .`/ ; . 1 /i .'/ D `2ZjSj
(10.2.7) where Œ b1 ji .0/ is the average j Œ b1 i .0/
1 D .2/jSj
Z TjSj
j
. 1 /i .'/ d' 2 Mat2 .R/ :
(10.2.8)
The operator 2 2 L.HF ; HG / is identified, as in (4.2.3), with . 2j /j 2F , where 2j 2 L.Hj ; HG /, which, using in Hj the basis ..‰j ; 0/; .0; ‰j //, can be identified with a vector of HG HG . We also recall that for a '-dependent family of operators .'/, ' 2 TjSj , of the form (10.2.3), we have the estimates (4.3.44) and (4.3.45). The next lemma provides an approximate solution of the homological equation (10.2.4). Lemma 10.2.2 (Homological equations). Given C1 > 0, c1 > 0, c2 > 0, there is "1 > 0 such that 8" 2 .0; "1 /, for each split admissible operator A0 2 C .C1 ; c1 ; c2 / z z there are closed subsets ƒ."I ; A0 / ƒ, (see Definition 9.1.1), defined for 2 ƒ, 1=2 1, satisfying the properties 1–3 of Proposition 10.1.1, such that, if 2 z and satisfies L.HS? / has the form (10.2.3), is L2 self-adjoint, defined for 2 ƒ, 9
j jLip;C;s1 ı110 ;
ı1 "3 ;
11
ı110 .jR0 jLip;C;s2 C j jLip;C;s2 / " ; z; Œ b1 jj .0/ 2 M .recall (10.2.8) and (4.2.18)/; 8j 2 F; 8 2 ƒ
(10.2.9) (10.2.10)
then there is a linear self-adjoint operator S WD S ."; /.'/ 2 L.HS? / of the form (10.2.1), defined for all 2 ƒ."I 1; A0 /, satisfying (10.1.7) and (10.1.8), such that 7
jJ !N " @' S C ŒJ S ; J A0 J jLip;C;s1 ı14 ; 14
jJ !N " @' S C ŒJ S ; J A0jLip;C;s2 ı1
(10.2.11)
3
jR0 jLip;C;s2 C j jLip;C;s2 C ı1 4 ; (10.2.12)
and, more generally, for all s s2 , ˇ ˇ ˇJ !N " @' S C ŒJ S ; J A0ˇ Lip;C;s ss2 1 3 3& s s C.s/ ı1 4 jR0 jLip;C;s C j jLip;C;s C ı1 4 ı1 2 1 :
(10.2.13)
At last, denoting by SA0 ; and SA00 ;0 the operators defined as above associated, respectively, to .A0 ; / and .A00 ; 0 /, we have, for all 2 ƒ."I 1; A0 / \ ƒ."I 1; A00 /, 3ˇ 1 ˇ ˇ ˇ ˇ ˇ 30 0ˇ 4ˇ ˇ 0 ˇ ˇSA ; SA0 ;0 ˇ ı A C ı ; A 0 0 0 1 1 C;s C;s C;s 0 1
and
ˇ ˇ.J !N " @' SA
0 ;
1
(10.2.14)
1
C ŒJ SA0 ; ; J A0 J / ˇ J !N " @' SA00 ;0 C ŒJ SA00;0 ; J A0 J 0 ˇC;s
(10.2.15)
1
3 4
ı1 jA0 A00 jC;s1 C
1 ı1 30 j
0 jC;s1 :
The proof of Lemma 10.2.2 is given in the next section.
10.3 Solution of homological equations: proof of Lemma 10.2.2

Step 1: approximate solution of the homological equation (10.2.5). We represent a linear operator $d(\varphi)\in\mathcal L(H_F)$ by a finite-dimensional square matrix $(d_i^j(\varphi))_{i,j\in F}$ with entries $d_i^j(\varphi)\in\mathcal L(H_j,H_i)\simeq\mathrm{Mat}_2(\mathbb R)$. Since the symplectic operator $J$ leaves invariant each subspace $H_j$ and $D_0=\mathrm{Diag}_{j\in F}\,\mu_j(\varepsilon,\lambda)\mathrm{Id}_2$ (see (9.1.3)), the equation (10.2.5) is equivalent to
$$J\,\bar\omega_\varepsilon\cdot\partial_\varphi d_i^j(\varphi)+\mu_i(\varepsilon,\lambda)\,d_i^j(\varphi)+\mu_j(\varepsilon,\lambda)\,J d_i^j(\varphi)J=J(\Theta_1)_i^j(\varphi),\qquad\forall i,j\in F,\quad\text{where}\quad J=\begin{pmatrix}0&-1\\1&0\end{pmatrix},$$
and, by a Fourier series expansion with respect to the variable $\varphi\in\mathbb T^{|S|}$, writing
$$d_i^j(\varphi)=\sum_{\ell\in\mathbb Z^{|S|}}\hat d_i^j(\ell)\,e^{i\ell\cdot\varphi},\qquad \hat d_i^j(\ell)\in\mathrm{Mat}_2(\mathbb C),\qquad \hat d_i^j(\ell)=\overline{\hat d_i^j(-\ell)},\tag{10.3.1}$$
to
$$i(\bar\omega_\varepsilon\cdot\ell)\,J\hat d_i^j(\ell)+\mu_i(\varepsilon,\lambda)\,\hat d_i^j(\ell)+\mu_j(\varepsilon,\lambda)\,J\hat d_i^j(\ell)J=J[\hat\Theta_1]_i^j(\ell),\qquad\forall i,j\in F,\ \ell\in\mathbb Z^{|S|}.\tag{10.3.2}$$
In order to solve (10.3.2) we have to study the linear operator Tij ` W Mat2 .C/ ! Mat2 .C/;
d 7! i!N " ` J d Ci ."; /d Cj ."; /J dJ : (10.3.3)
In the basis $(M_1,M_2,M_3,M_4)$ of $\mathrm{Mat}_2(\mathbb C)$ defined in (4.2.19) and (4.2.20) the linear operator $T_{ij\ell}$ is represented by the following self-adjoint matrix (recall that $JM_l=M_lJ$ for $l=1,2$, $JM_l=-M_lJ$ for $l=3,4$, and $JM_1=M_2$, $JM_3=M_4$)
$$\begin{pmatrix}
(\mu_i-\mu_j)(\varepsilon,\lambda) & -i\,\bar\omega_\varepsilon\cdot\ell & 0 & 0\\
i\,\bar\omega_\varepsilon\cdot\ell & (\mu_i-\mu_j)(\varepsilon,\lambda) & 0 & 0\\
0 & 0 & (\mu_i+\mu_j)(\varepsilon,\lambda) & -i\,\bar\omega_\varepsilon\cdot\ell\\
0 & 0 & i\,\bar\omega_\varepsilon\cdot\ell & (\mu_i+\mu_j)(\varepsilon,\lambda)
\end{pmatrix}.\tag{10.3.4}$$
As a consequence, the eigenvalues of $T_{ij\ell}$ are
$$\pm\,\bar\omega_\varepsilon\cdot\ell+\mu_i(\varepsilon,\lambda)-\mu_j(\varepsilon,\lambda),\qquad \pm\,\bar\omega_\varepsilon\cdot\ell+\mu_i(\varepsilon,\lambda)+\mu_j(\varepsilon,\lambda).\tag{10.3.5}$$
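As a quick sanity check of the spectrum listed in (10.3.5), the following small Python computation builds a Hermitian matrix with the block structure displayed above, using arbitrary numerical stand-ins for $\bar\omega_\varepsilon\cdot\ell$, $\mu_i$, $\mu_j$, and compares its eigenvalues with the four values above.

```python
# Numerical sanity check of (10.3.5): a Hermitian matrix with the block
# structure of (10.3.4) has eigenvalues mu_i -+ mu_j +- omega.l.
# The numbers below are arbitrary stand-ins.
import numpy as np

wl, mu_i, mu_j = 0.7, 2.3, 1.1             # stand-ins for omega_eps.l, mu_i, mu_j

def block(shift):
    # 2x2 Hermitian block with eigenvalues shift +- wl
    return np.array([[shift, -1j * wl], [1j * wl, shift]])

T = np.block([[block(mu_i - mu_j), np.zeros((2, 2))],
              [np.zeros((2, 2)), block(mu_i + mu_j)]])

eigs = np.sort(np.linalg.eigvalsh(T))
expected = np.sort([mu_i - mu_j - wl, mu_i - mu_j + wl,
                    mu_i + mu_j - wl, mu_i + mu_j + wl])
print(np.allclose(eigs, expected))          # True
```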
To impose nonresonance conditions we define the sets, for $1/2\le\eta\le 1$,
$$\Lambda^1(\varepsilon;\eta,A_0):=\Big\{\lambda\in\widetilde\Lambda:\ |\bar\omega_\varepsilon\cdot\ell\pm\mu_j(\varepsilon,\lambda)\pm\mu_i(\varepsilon,\lambda)|\ge\frac{\eta\,\gamma_1}{\langle\ell\rangle^{\tau}},\ \ \forall(\ell,i,j)\in\mathbb Z^{|S|}\times F\times F,\ (\ell,i,j)\ne(0,j,j)\Big\},\tag{10.3.6}$$
where the constant $\gamma_1=\gamma_0/2$ (recall that $\gamma_0$ is fixed in (1.2.6)) and
$$\tau:=(3/2)\,\tau_1+3+|S|,\tag{10.3.7}$$
where $\tau_1$ is defined in (1.2.28). The inequalities in (10.3.6) are second-order Melnikov nonresonance conditions, concerning only a finite number of normal frequencies.

Remark 10.3.1. Since $\mu_j(\varepsilon,\lambda)>c_0>0$ for any $j\in\mathbb N$, if $\gamma_1\le 2c_0$, then the inequality $|\bar\omega_\varepsilon\cdot\ell+\mu_j(\varepsilon,\lambda)+\mu_i(\varepsilon,\lambda)|\ge\eta\gamma_1/\langle\ell\rangle^{\tau}$ in (10.3.6) holds for $\ell=0$ and $j=i$, for all $\lambda\in\Lambda$, $\eta\in[1/2,1]$.

In the next lemma we find a solution $d_N$ of the projected homological equation
$$J\,\bar\omega_\varepsilon\cdot\partial_\varphi d_N+D_0 d_N+J d_N J D_0=\Pi_N J\Theta_1,\tag{10.3.8}$$
where here the projector $\Pi_N$ applies to functions depending only on the variable $\varphi$, namely
$$\Pi_N:\ h(\varphi)=\sum_{\ell\in\mathbb Z^{|S|}}h_\ell\,e^{i\ell\cdot\varphi}\ \longmapsto\ (\Pi_N h)(\varphi):=\sum_{|\ell|\le N}h_\ell\,e^{i\ell\cdot\varphi}.\tag{10.3.9}$$
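For concreteness, the action of a truncation operator of this type on a Fourier series can be sketched in a few lines of Python; the coefficient sequence below is an arbitrary example.

```python
# Minimal sketch of a Fourier truncation projector acting on a function of a
# single angle: only the coefficients with |l| <= N are kept (cf. (10.3.9)).
def project(h_hat, N):
    """Keep only the Fourier coefficients h_hat[l] with |l| <= N."""
    return {l: c for l, c in h_hat.items() if abs(l) <= N}

h_hat = {l: 1.0 / (1 + l * l) for l in range(-10, 11)}   # sample coefficients
print(sorted(project(h_hat, 3)))                          # modes -3,...,3 survive
```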
Lemma 10.3.2 (Homological equation (10.2.5)). Let be a self-adjoint operator of z satisfying (10.2.9), and assume that the average the form (10.2.3), defined for 2 ƒ, j Œ b1 j .0/ 2 M , 8j 2 F, 8, i.e., (10.2.10) holds. Let h 3 i 3 s s s s N 2 ı1 2 1 1; ı1 2 1 C 1 :
(10.3.10)
Then, for all 2 ƒ1 ."I 1; A0 / (set defined in (10.3.6)) there exists dN WD dN .I '/ 2 L.HF /, dN D dN , satisfying 7
jdN jLip;s1 C1 ı18 ;
1 40
jdN jLip;sC1 ı1
j jLip;s ;
8s s1 ;
(10.3.11)
which solves the projected homological equation (10.3.8), and is an approximate solution of the homological equation (10.2.5) in the sense that jJ !N " @' dN C D0 dN C JdN JD0 J 1 jLip;s1 ı17=4 :
(10.3.12)
At last, denoting by dN0 the operator associated to D00 ; 0 as above, we have, for all 2 ƒ1 ."I 1; A00 / \ ƒ1 ."I 1; A0 /, 7
1 40
jdN0 dN js1 ı18 jD00 D0 js1 C ı1 and
j 10 1 js1
(10.3.13)
ˇ ˇ J !N " @' d 0 C D 0 d 0 C Jd 0 JD 0 J 0 N 0 N N 0 1 ˇ J !N " @' dN C D0 dN C JdN JD0 J 1 ˇs
1
j 10 1 js1 :
(10.3.14)
Proof. We split the proof into steps. Step 1. For any .`; i; j / 2 ZjSj FF, for all 2 ƒ1 ."I 1; A0 /, there exists a solution j b d .`/ of the equation (10.3.2), satisfying the reality condition (10.3.1) and i
j b d i .`/ C h`i Œb
1 ji .`/ ;
(10.3.15)
where k k denotes some norm in Mat2 .C/. Recalling (10.3.5) and the definition of ƒ1 ."I 1; A0 / in (10.3.6), we have that for all 2 ƒ1 ."I 1; A0 /, the operators Tij ` defined in (10.3.3) are isomorphisms of Mat2 .C/ for ` ¤ 0 or i ¤ j , and 1 T C h`i ; (10.3.16) ij ` where k k denotes some norm in L.Mat2 .C// and C WD C. 1 /. Hence, in this case, j there exists a unique solution b d .`/ D T 1 J Œb
1 j .`/ of the equation (10.3.2). i
ij `
i
Let us consider the case ` D 0 and i D j . By (10.3.4), the linear operator Tjj 0 is represented, in the basis .M1 ; M2 ; M3 ; M4 / of Mat2 .C/, by the matrix
$$\begin{pmatrix}0_2 & 0\\ 0 & 2\mu_j(\varepsilon,\lambda)\,\mathrm{Id}_2\end{pmatrix}.$$
Moreover there is c0 > 0 such that j ."; / > c0 for any j 2 N, " 2 .0; "0 /, 2 ƒ. Hence the range of Tjj 0 , is the subspace M D Span.M3 ; M4 / of Mat2 .C/ defined in (4.2.18), (4.2.20), and 8z
2 M ; 9 Š dz 2 M solving Tjj 0 dz D z with dz . z I with a little abuse of notation, we denote dz by Tjj10 .z
/. By the assumption Œ b1 jj .0/ 2 M and the fact that M is stable under the action of J , we have that J Œ b1 jj .0/ is in j d j .0/ 2 M of the equation (10.3.2). M and there is a unique solution b j To conclude the proof of Step 1, it remains to show that the matrices b d i .`/ 2 Mat2 .C/ satisfy the reality condition (10.3.1). It is a consequence of Tij.`/ D Tij ` , that Œb
1 ji .`/ satisfies the reality condition (10.2.7) and the uniqueness of the solutions j d .`// D J Œb
1 j .`/ (with the condition d j .`/ 2 M if ` D 0 and i D j ). of Tij ` .b i
i
i
Step 2. Definition of dN and proof that dN D dN . We define dN WD ˘N d.'/, where ˘N is the projector defined in (10.3.9) and X j j b d ji .`/ D Tij1 d.'/ D dij .'/ i;j 2F ; dij .'/ D d i .`/ei`' ; b
1 i .`/ ; ` J b `2ZjSj
(10.3.17) is the unique solution of (10.2.5). Note that taking the adjoint equation of (10.2.5), multiplying by J on the left and the right, since V0 and D0 are self-adjoint, the operators D0 and J commute, J D J , and using that 1 is self-adjoint, we obtain J !N " @' d C D0 d C Jd JD0 D J 1 : By uniqueness d D d is the unique solution of (10.2.5). Then also dN D dN . Step 3. Proof of (10.3.11). By (10.3.15) (and (4.3.44) and (5.1.13)) we have jdN jsC1 . N C1 j 1 js :
(10.3.18)
We also claim that we have the estimate jdN jlip;sC1 . N 2 C1 j 1 jLip;s ;
(10.3.19)
which implies, together with (10.3.18), jdN jLip;sC1 . N 2 C1 j 1 jLip;s :
(10.3.20)
To prove (10.3.19) notice that, by (10.3.16), 1 T .1 / T 1 .2 / D T 1 .1 / Tij ` .2 / Ti;j;`.1 / T 1 .2 / ij ` ij ` ij ` ij ` ˇ ˇ . h`i2 Tij ` .2 / Tij ` .1 / . h`i2 "2 ˇ2 1 ˇ (10.3.21)
recalling the definition of Tij ` in (10.3.3) and (9.1.7). By (10.3.16) and (10.3.21) we j j estimate the lip seminorm of di .`/ D Tij1
1 j .`/ as ` Œb j j j C h`i Œb d .`/ . h`i2 "2 Œb .`/ .`/
1 1 i j j lip lip and (10.3.19) follows. The estimates (10.3.20), (10.3.10), and (9.2.1)-(i) finally imply 1 40
jdN jLip;sC1 CN 2 C1 j 1 jLip;s ı1
j 1 jLip;s ;
which gives the second inequality in (10.3.11). By (10.2.9) we deduce 1 40
jdN jLip;s1 C1 ı1
9
7
ı110 D ı18 ;
which is the first inequality in (10.3.11). Step 4. Proof of (10.3.12). By the definition of dN we have J !N " @' dN C D0 dN C JdN JD0 J 1 D ˘N? J 1 :
(10.3.22)
Then, recalling (4.3.44), jJ !N " @' dN C D0 dN C JdN JD0 J 1 jLip;s1
j˘N? J 1 jLip;s1
D (5.1.13);(10.2.9)
11 10
N .s2 s1 / ı1
.s2
(10.3.10)
.s2
11 10
ı13 ı1
ı17=4 :
Step 5. Proof of (10.3.13) and (10.3.14). We denote by Tij0 ` the linear operator as in (10.3.3) associated to A00 , i.e., with 0j ."; / instead of j ."; /. Arguing as for (10.3.21) we get that, if 2 ƒ1 ."I 1; A0 / \ ƒ1 ."I 1; A00 /, then ˇ ˇ 1 T .T 0 /1 C h`i2 max ˇj ."; / 0 ."; /ˇ : (10.3.23) j ij ` ij ` j 2F
Let us denote dN WD dN;D0 ; and dN0 WD dN;D00 ;0 , where we highlight the dependence of dN with respect to D0 and . Using that dN depends linearly on , actually only on 1 , see (10.3.17), we get jdN;D0 ; dN;D00 ;0 js1
jdN;D0 ; dN;D00 ; js1 C jdN;D00 ;0 js1
(10.3.23);(10.3.18)
.
N 2 j 1 js1 max jj ."; / 0j ."; /j j 2F
CN
C1
j 1 10 js1
and therefore, by (10.2.9), (10.3.10), (9.2.1)-(i), and j 1 jC;s ' j 1 js we conclude that 1 40
jdN;D0 ; dN;D00 ;0 js1 ı1
4
9
1 40
ı110 jD0 D00 js1 C ı1 1 40
ı15 jD0 D00 js1 C ı1
j 1 10 js1
j 1 10 js1
proving (10.3.13). Finally (10.3.14) is an immediate consequence of (10.3.22).
We now prove measure estimates for the sets ƒ1 ."I ; A0 / defined in (10.3.6). z " for all 1=2 1. Lemma 10.3.3. jŒƒ1 ."I ; A0 /c \ ƒj Proof. For ` 2 ZjSj , 2 Œ1=2; 1, we define the set 1 z j!N " ` ˙ j ."; / ˙ i ."; /j 1 ; ƒ` ."I ; A0 / WD 2 ƒW 2 h`i
8.i; j / 2 F F with j`j C ji j j ¤ 0 ;
(10.3.24)
where 1 D 0 =2 and > 1 , see (10.3.7). By the unperturbed second-order Melnikov nonresonance conditions (1.2.16) and (1.2.17), the Diophantine property (1.2.29), since !N " D N C "2 (see (1.2.25)) and j ."; / D j C O."2 / (see (9.1.4)), we have "2 h`i0 C1 01 c small enough Hence, for " small enough, \ ƒ1` ."I ; A0 / D ƒ1 ."I ; A0 / D `2ZjSj
H)
z: ƒ1` ."I 1=2; A0/ D ƒ
\
ƒ1` ."I ; A0 / ;
(10.3.25)
j`j>c1 "2=0
where c1 WD .c 0 /1=0 . Now, using (9.1.5) and (9.1.6), we deduce that the complementary set Œƒ1` ."I ; A0 /c is included in the union of 4f2 intervals of length 1 , where f denotes the cardinality of F. Hence its measure satisfies h`i c2 "2 ˇ ˇ ˇ 1 z ˇˇ ˇŒƒ` ."I ; A0 /c \ ƒ
4 1 f2 ; h`i c2 "2
and, for any L > 0, 1=2 1, > jSj, we have X ˇ X ˇ ˇŒƒ1 ."I ; A0 /c ˇ ` j`jL
j`jL
C 4 1 f2 jSj 2 : 2 h`i c2 " L "
(10.3.26)
The lemma follows by (10.3.25) and (10.3.26) with L D c1 "2=0 and (10.3.7).
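The counting argument of this proof can also be visualised numerically: each resonant zone removed from the parameter set has width of order $\gamma/\langle\ell\rangle^{\tau}$, and these widths are summable in $\ell$ when $\tau$ is large. The following crude Monte-Carlo experiment uses a toy one-parameter frequency map, with invented constants, and measures the excluded fraction.

```python
# Crude Monte-Carlo illustration (toy model, not the sets of this section):
# the union over l of the zones |omega(lam).l + mu| < gamma/<l>^tau has small
# measure because the zone widths are summable when tau is large.
import numpy as np

rng = np.random.default_rng(1)
gamma, tau, mu = 1e-3, 3.0, 0.5
lams = rng.uniform(0.0, 1.0, 20000)          # sample parameters lambda in [0,1]

bad = np.zeros_like(lams, dtype=bool)
for l1 in range(-20, 21):
    for l2 in range(-20, 21):
        if (l1, l2) == (0, 0):
            continue
        width = gamma / (1.0 + np.hypot(l1, l2)) ** tau
        # toy frequency map omega(lam) = (1 + lam, sqrt(2))
        vals = lams * l1 + (l1 + np.sqrt(2.0) * l2 + mu)
        bad |= np.abs(vals) < width

print(bad.mean())    # excluded fraction: O(gamma) for tau large
```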
Remark 10.3.4. The measure of the set Œƒ1 ."I ; A0 /c can be made smaller than "p as " tends to 0, for any p, if we take the exponent large enough. This is analogous to the situation described in Remark 5.8.17. z \ƒ z 0 . Then, for 2 Lemma 10.3.5. Assume that jA00 A0 jC;s1 ı "3 on ƒ p Œ.1=2/ C ı; 1, we have p ˇ ˇ 0 1 ˇƒ z \ Œƒ1 ."I ; A00 /c \ ƒ1 ."I ı; A0 /ˇ ı 12 : (10.3.27)
Proof. We first prove the estimate ˇ ˇ \ p jSj 1 ˇ ˇ 1 ƒ1 "I ı; A0 ˇ . min "2 ı 2 2 ; " : ˇŒƒ ."I ; A00 /c
(10.3.28)
Denoting ƒ1` ."I ; A00 / the set (10.3.24) associated to A00 , we claim that there exists c.s0 / > 0 such that p p Œƒ1` ."I ; A00 /c \ ƒ1 ."I ı; A0 / D ;; if h`i c.s0 / 1 =. ı/ : (10.3.29) Indeed, denoting by 0j ."; / 2 R the eigenvalues of D00 D …F A00 jHF , since kA00 A0 k0 C.s0 /jA00 A0 jC;s0 C.s0 /ı (see (4.3.27)), we have ˇ ˇ 0 ˇ ."; / j ."; /ˇ C.s0 /ı; 8j 2 F : (10.3.30) j If … ƒ1` ."I ; A00 / then, recalling (10.3.24), there exist i; j 2 F (with i ¤ j if ` D 0), signs i D ˙1 and j D ˙1 such that ˇ ˇ ˇ!N " ` C i 0 ."; / C j 0 ."; /ˇ < i j
1 2 h`i
and so, by (10.3.30), ˇ ˇ ˇ!N " ` C i i ."; / C j j ."; /ˇ
c0 :
(11.4.4)
Remark 11.4.1. In the sequel we suppose that is small enough, possibly depending on s3 (the aim is to obtain estimates without any multiplicative constant depending on s3 ). Since 2 .0; "/, any smallness condition for is satisfied if " is small enough, possibly depending on s3 , which is a large, but fixed, constant. For this reason, often we will not explicitly indicate such dependence. However it is useful to point out that, for " fixed, our estimates still hold for small enough. In fact, in Proposition 9.2.2, we extend some estimates on the s3 -norm to estimates on the s-norm for any s s3 , with a smallness condition on " that does not depend on s but only on s3 , while the smallness condition on depends on s.
In the sequel of this section the notation a b means that a=b ! 0 as ! 0. By (11.4.3), for small enough, N NN . By (11.4.3) and (9.2.1) and recalling that Q0 D 2. 0 C &s1 / C 3 and & D 1=10, we have 0
1
2
N Q C1 20 ; N
Q0 C1.s3 s1 /
5
N &.s3 s1 / 5 ;
1 20
N .1&/.s3 s1 / 2 ;
N .s3 s1 / .s3
1 20
3
11 4
:
(11.4.5) (11.4.6)
Moreover, by (11.4.3), (11.4.4), and (9.2.1), we have . 3 /n
ı1 2
3Q0
1
. s3 s1 20 ; ln C.s/
ŒC.s/n D en ln C.s/ e ln.3=2/ ln ln and, for n 1,
. 32 /n1
ı1
1 ln ln 1 ; ln.3=2/ ln C.s/ ln 1 ln.3=2/ ;
n 1
3Q0
1
. s3 s1 20 :
(11.4.7)
(11.4.8)
Recalling (11.3.1), we introduce the notation Un .s/ D UAn ;n .s/ D jRn jLip;C;s C j n jLip;C;s : The bounds (10.1.31) and (10.1.33), where ˛.s/ D 3& provide the estimate
(11.4.9)
s s2 , (11.4.7), and (11.4.8) s2 s1
ln C.s3 / 3Q0 3 C3& s3 s2 s2 s1 Un .s3 / .s3 2 ln 1 ln.3=2/ s3 s1 4 3/ 12 C 2˛.s 3 jR0 jLip;C;s3 C j jLip;C;s3 ı1 C1 (9.2.5)
.s3
.s3 (9.2.1)
ln C.s3 / 1 ln 3=2 3Q0 3 C3& s3 s2 1 2 s3 s1 4 s2 s1 2 ln " C1 ln C.s3 / 0 1 ln 3=2 9Q0 9Q & 4.s3 s1 / s2 s1 1 "2 C 1 2 ln 11
C.s3 / 10 :
(11.4.10)
The estimate (10.1.23) similarly provides 11
jPn˙1 jLip;C;s3 C.s3 / 10 :
(11.4.11)
Solution of (9.2.7). By (11.4.1) and (11.4.4), the smallness condition (11.3.2) is satisfied by % D n . Then Proposition 11.3.1 applies to the operator L WD J !N " @' C An C n , implying the existence of an approximate right inverse I WD IN;n , defined
for 2 ƒ."I ; A0 ; / (see (11.4.2)), satisfying (11.3.4)–(11.3.7). The operator I satisfies (11.3.5)
0
kI gzkLip;s1 .s1 N Q kz g kLip;s1
(11.4.5)
1
20 kz g kLip;s1
(11.4.12)
and 0 0 CN Q kz g kLip;s3 C C.s3 / N &.s3 s1 / C N Q Un .s3 / kz g kLip;s1 2 (11.4.5);(11.4.10) 1 1 11 20 kz g kLip;s3 C 5 C 20 10 kz g kLip;s1
kI gzkLip;s3
(11.3.6)
1
6
20 kz g kLip;s3 C 5 kz g kLip;s1 :
(11.4.13)
In addition estimates (11.3.4), (11.4.5), (11.4.6), and (11.4.10) give 2 11 1 11 gkLip;s1 4 kz k.LI Id/z g kLip;s3 C 5 C 20 10 kz g kLip;s1
11 4
3
kz g kLip;s3 C 2 kz g kLip;s1 :
(11.4.14)
Now let g 2 HS? satisfy the assumption (9.2.5) and consider the function g 0 D Pn1 .'/g introduced in (11.1.3). We define, for any in ƒ."I ; A0 ; /, the approximate solution h0 D I .Jg 0 / of the equation (11.1.7), where I is the approximate right inverse, obtained in Proposition 11.3.1, of the operator L D J !N " @' C An C n , and we consider the “approximate solution” h of the equation (11.1.1) as h WD Pn .'/h0
where h0 WD I .Jg 0/;
g 0 D Pn1 .'/g :
(11.4.15)
It means that we define the approximate right inverse of !N " @' J.A0 C / as 1 : L1 approx WD Pn .'/I J Pn .'/
(11.4.16)
We claim that the function h defined in (11.4.15) is the required solution of (9.2.7) with a remainder r satisfying (9.2.8). Lemma 11.4.2. The approximate solution h defined in (11.4.15) satisfies (9.2.6) and (9.2.9). Proof. We have khkLip;s1
(11.4.15)
D
(11.4.12)
Pn I .J P 1 g/ n Lip;s
.s1
1 20
1 P g n Lip;s
1
(10.1.21);(4.3.8)
1
(10.1.21)
.s1
.s1
1 20
I .J P 1g/ n
Lip;s1 19
kgkLip;s1 .s1 "2 20
(11.4.17)
by the assumption kgkLip;s1 "2 , see (9.2.5). Therefore khkLip;s1 "2 4=5 , for small enough (depending on s1 ), proving the first inequality in (9.2.6).
Let us now estimate khkLip;s3 . We have kg 0 kLip;s3
(11.4.15)
D
1 P g n
Lip;s3
ˇ ˇ .s3 jPn1 jLip;s3 kgkLip;s1 C ˇPn1 ˇLip;s kgkLip;s3
(4.3.8)
1
(11.4.11);(10.1.21)
.s3
11 10
1 .s3 "2 10
(9.2.5)
kgkLip;s1 C kgkLip;s3 C 1 .s3 "2 1 :
(11.4.18)
Then (11.4.13), (11.4.18), and kg 0 kLip;s1 .s1 kgkLip;s1 .s1 "2 , imply 1 21 2 20 1 6 I .Jg 0 / 5 . " C .s3 "2 20 s 3 Lip;s 3
(11.4.19)
and finally, using (4.3.8), (10.1.21), (11.4.11), (11.4.19), and kI .Jg 0 /kLip;s1 .s1 19 "2 20 (see (11.4.17)), we conclude that 3 21 11 (11.4.20) khkLip;s3 D kPn I .Jg 0/kLip;s3 .s3 "2 20 C 20 "2 10 for small enough (depending on s3 ). This proves the second inequality in (9.2.6). 0 At last, by (10.1.21) and (11.3.7), for any g 2 Hs0 CQ \ HS? , 1 1 1 L approx g Lip;s D Pn I J Pn g Lip;s .s1 J Pn g Lip;s CQ0 .s1 kgkLip;s0 CQ0 0
0
0
using that s0 C Q0 s1 . This proves (9.2.9).
We finally estimate the error r in the equation (9.2.7). Lemma 11.4.3. krkLip;s1 "2 3=2 , where r D !N " @' J.A0 C / h g. Proof. By (11.1.2), (11.4.15), and recalling that L D J !N " @' C An C n we have r D !N " @' J.A0 C / h g D Pn !N " @' J.An C n / h0 g 0 D Pn J Lh0 Jg 0 (11.4.21) D Pn J .LI Id/Jg 0 : Now, by the estimates in the proof of Lemma 11.4.2, see (11.4.18), we know that g 0 D Pn1 g satisfies 1 kg 0 kLip;s3 .s3 "2 1 C 10 .s3 "2 1 ; kg 0 kLip;s1 . "2 : (11.4.22)
Hence by (11.4.21), (4.3.8), and (10.1.21), we get krkLip;s1 .
k.LI Id/Jg 0 kLip;s1
(11.4.14)
11 4
3
kg 0 kLip;s3 C 2 kg 0 kLip;s1 7 (11.4.22) 5 3 .s3 "2 4 C 2 "2 2 .
proving the lemma. The next lemma completes the proof of Proposition 9.2.1.
Lemma 11.4.4 (Measure estimates). The sets ƒ."I ; A0 ; /, 1=2 5=6, defined in (11.4.2) satisfy properties 1–3 of Proposition 9.2.1. Proof. Property 1 follows immediately because the sets ƒ1 defined in Corollary 10.1.2, and ƒ in Proposition 11.2.1 are increasing in . For the proof of properties 2 and 3, we observe that whereas A0 is defined for z by Corollary 10.1.2 the operators An , n 1, are defined for 2 any 2 ƒ, z Hence for n 1, the set ƒ."; ; An / (1=2 1) is ƒ1 ."I 5=6; A0 ; / ƒ. considered as a subset of ƒ1 ."I 5=6; A0 ; /, and we shall apply Proposition 11.2.1 (more precisely Lemma 11.2.8) in this setting. Proof of property 2 of Proposition 9.2.1. Defining 8n 1 ƒn ."I ; A0 ; / WD ƒ1 ."I ; A0 ; /
\ \ n1
! ƒ."I C k ; Ak /
;
(11.4.23)
kD0
we write the set in (11.4.2) (recall that 0 D 0) as ƒ."I ; A0 ; / D ƒ1 ."I ; A0 ; /
\
ƒ."I ; A0 /
\
\
! ƒn ."I ; A0 ; / :
n1
(11.4.24) Then its complementary set may be decomposed as (arguing as in (10.1.54)) [ ƒ."I ; A0 ; /c D ƒ1 ."I ; A0 ; /c ƒ."I ; A0 /c ! [ [ \ c ƒ."I C n ; An / ƒn ."I ; A0 ; / : (11.4.25) n1
By property 2 of Corollary 10.1.2 we have ˇ ˇ ˇƒ1 ."I 1=2; A0 ; /c \ ƒ z ˇ b1 ."/ with
lim b1 ."/ D 0 :
"!0
(11.4.26)
By Lemma 11.2.8,
ˇ ˇ ˇƒ."I 1=2; A0 /c \ ƒ zˇ . ":
(11.4.27)
Moreover, for n 1, by (11.4.23), we have ƒn ."I ; A0 ; / \ ƒ."I C n ; An /c ƒ1 ."I 5=6; A0; / \ ƒ."I C n1 ; An1 / \ ƒ."I C n ; An /c : Then, since, for all n 1, we have jAn An1 jC;s1 ın3=4 on ƒ1 ."I 5=6; A0 ; / (see (10.1.39)) and C n1 D C n ın3=8 (see (10.1.37)), we deduce by Lemma 11.2.8 that, for any 1=2 5=6, ˇ ˇ \ ˇ ˇ ƒ."I C n ; An /c ˇ ın3˛=4 : (11.4.28) ˇƒn ."I ; A0 ; / In conclusion, by (11.4.25), (11.4.26), (11.4.27), and (11.4.28) at D 1=2, we deduce ˇ ˇ X ˇ z ˇˇ b1 ."/ C C " C ın3˛=4 ˇƒ."I 1=2; A0 ; /c \ ƒ n1
b1 ."/ C C " C cı13˛=4 b."/ with lim b."/ D 0, since ı1 D "3 . This proves property 2 of Proposition 9.2.1 for the "!0
sets ƒ."I ; A0 ; /. Proof of property 3 of Proposition 9.2.1. By (11.4.25) (with A00 ; 0 instead of A0 ; ) and (11.4.24) we deduce, for all .1=2/ C ı2=5 5=6, the inclusion \ \ z0 ƒ."I ; A00 ; 0 /c ƒ."I ı2=5 ; A0 ; / M WD ƒ ! [ [ [ (11.4.29) M1 M0 Mn ; n1
where
M1 WD ƒz0 \ Œƒ1 ."I ; A00 ; 0 /c \ ƒ1 ."I ı2=5 ; A0 ; / z 0 \ ƒ."I ; A00 /c \ ƒ."I ı2=5 ; A0 / M0 WD ƒ Mn WD ƒn ."I ; A00 ; 0 / \ ƒ."I C n ; A0n /c \ ƒ."I C n ı2=5 ; An /; for n 1 : By (9.2.3) and noting that ı1=2 ı2=5 , we deduce by (10.1.18) that, for all .1=2/ C ı2=5 5=6, (11.4.30) jM1 j ı˛=2 : Moreover, Lemma 11.2.8 implies that ˇ 0 ˇ z \ ƒ."I ; A00 /c \ ƒ."I ı1=2 ; A0 /ˇ ı˛ : jM0 j ˇƒ
(11.4.31)
For n 1, we have, by (11.4.28) (applied to .A00 ; 0 /) ˇ ˇ jMn j ˇƒn ."I ; A00 ; 0 / \ ƒ."I C n ; A0n /c ˇ ın3˛=4 :
(11.4.32)
On the other hand, assumption (9.2.3) implies (10.1.35), and so \(10.1.36) holds, i.e., 0 4=5 0 0 for any in Mn ƒ1 ."I 5=6; A0; / ƒ1 ."I 5=6; A0 ; /, jAn An jC;s1 ı and we deduce, by Lemma 11.2.8, the measure estimate ˇ ˇˇ ˇ 0 0 0 c 2=5 jMn j ˇƒ1 "I 5=6; A0 ; \ ƒ "I C n ; An \ ƒ "I C n ı ; An ˇ . ı4˛=5 :
(11.4.33)
Finally (11.4.29), (11.4.30), (11.4.31), (11.4.33), and (11.4.32) imply the measure estimate X min ı4˛=5 ; ın3˛=4 : (11.4.34) jMj ı˛=2 C ı˛ C C n1 . 32 /n1
Now, using that ın D ı1 ı Hence X
min ı
4˛ 5
4˛ 5
with ı1 D "3 , there is a constant C > 0 such that 3˛
ın4
3˛ 4
; ın
H)
n C ln ln ı1 : X
4˛ C ln ln ı1 ı 5 C
n1
3˛
n1;ın4 0, if s3 has been chosen large enough and n is small enough (depending on s3 and ), the inequality holds. In particular, since n 1 C.s1 /"2 , the inequality holds for " small enough (depending on s3 ). Step 1. Regularization. We first consider the regularized approximate torus {Mn .'/ WD .'; 0; 0/ C IM n .'/;
IM n WD ˘Nn In ;
(12.2.9)
where $\Pi_N$ is the Fourier projector defined in (5.1.11) and
$$N_n\in\big[\epsilon_n^{-\frac{3}{s_3-s_1}}-1,\ \epsilon_n^{-\frac{3}{s_3-s_1}}+1\big].\tag{12.2.10}$$
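The two elementary inequalities behind this regularization step, namely that truncation at frequency $N$ costs at most a factor $N^{a}$ in a higher norm and gains a factor $N^{-(s_3-s_1)}$ on the discarded tail, can be checked on a model sequence of Fourier coefficients; the sequence and exponents below are illustrative only.

```python
# Model check of the smoothing/tail inequalities used for the truncated torus:
# ||Pi_N u||_{s+a} <= N^a ||u||_s   and   ||(Id-Pi_N) u||_{s1} <= N^{-(s3-s1)} ||u||_{s3}.
# The coefficient sequence is an arbitrary example.
import numpy as np

s1, s3, N, a = 2.0, 8.0, 20, 1.0
l = np.arange(1, 2000)
u = 1.0 / l ** (s3 + 1)                    # sample decaying Fourier coefficients

def norm(c, s, ll):
    return np.sqrt(np.sum((ll ** s * c) ** 2))

low, high = l <= N, l > N
print(norm(u[low], s3 + a, l[low]) <= N ** a * norm(u, s3, l))
print(norm(u[high], s1, l[high]) <= N ** (-(s3 - s1)) * norm(u, s3, l))
```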
Lemma 12.2.2. The torus {Mn .'/ WD .'; 0; 0/ C IM n .'/ satisfies IM n
Lip;s1 C2
C.s1 /"2 ;
IM n
Lip;s2 C2
";
IM n
4
Lip;s3 C C6
"2 n 5 ; (12.2.11)
4 in {Mn "2 n2 ; in {Mn Lip;s C2 "2 n 5 ; Lip;s1 C2 3 4 2 2 F .M{n / 2" n ; F .M{n / Lip;s C2C "2 n 5 ; Lip;s 1
(12.2.12) (12.2.13)
3
where > 0 can be taken arbitrarily small taking s3 s1 large enough. Proof. The first two estimates in (12.2.11) follow by (12.2.5) and (5.1.13). The third estimate in (12.2.11) follows by IM n Lip;s
(5.1.13)
3 C C6
C4
Nn
(12.2.10);(12.2.5)
for 3. C 4/=.s3 s1 / < .
kIn kLip;s3 C2
Cn
3. C4/ s3 s1
4
4
"2 n 5 "2 n 5
Proof of (12.2.12). We have kin {Mn kLip;s1 C2
D ˘N?n In kLip;s1 C2
(5.1.13)
(12.2.9)
(12.2.10);(12.2.5)
.s3
Nn.s3 s1 / kIn kLip;s3 C2 4
n3 "2 n 5 ;
which implies the first bound in (12.2.12). Similarly (12.2.9), (5.1.13), and (12.2.5) imply the second estimate in (12.2.12). Proof of (12.2.13). Recalling the definition of the operator F in (6.1.2), and using Lemma 4.5.5, (12.2.5), and (12.2.11), we have kF .M{n /kLip;s1
kF .in /kLip;s1 C C.s1 /kin {Mn kLip;s1 C2
(12.2.8);(12.2.12)
2"2 n
proving the first inequality in (12.2.13). Finally Lemma 4.5.5 and (12.2.11) imply kF .M{n /kLip;s3 C2C
kF .i0 /kLip;s3 C2C C C.s3 /IM n Lip;s
3 C4C
(12.2.1);(12.2.11)
.s3
54
"2 C "2 n
54 2
"2 n
for " small, proving the second inequality in (12.2.13).
Step 2. Isotropic torus. To the regularized torus {Mn defined in (12.2.9) corresponds the isotropic torus in;ı defined by Proposition 7.1.1 with i D {Mn . Notice that by (12.2.11), the torus {Mn satisfies the assumption (7.1.4) required in Proposition 7.1.1. The torus in;ı is an approximate solution “essentially as good” as in (compare (12.2.8) and (12.2.16)). Lemma 12.2.3. The isotropic torus in;ı .'/ D .'; 0; 0/ C In;ı .'/ defined by Proposition 7.1.1 with i D {Mn , satisfies kin;ı {Mn kLip;s0 C2 .s0 "2 n ; 54 2
kin;ı {Mn kLip;s3 C4 "2 n and
kin;ı {Mn kLip;s1 C4 "2 n1 ; 54 2
kIn;ı kLip;s3 C4 "2 n
;
(12.2.14) ;
(12.2.15)
kF .in;ı /kLip;s0 .s0 "2 n ; kF .in;ı /kLip;s1 C2
"2 n1 ;
kF .in;ı /kLip;s3 C2
4 2 "2 n 5
(12.2.16) :
Proof. Proof of (12.2.14). Since s0 C 2 C s1 , we have by (12.2.11), IM n C.s1 /"2 : Lip;s C2C 0
Hence by (7.1.6) (with s D s0 C 2), we have kin;ı {Mn kLip;s0 C2 .s0 F .M{n /Lip;s
.s0 F .M{n /Lip;s
(12.2.13)
.s0 "2 n ; (12.2.17) which is the first estimate in (12.2.14). Similarly, since s1 C 2 C s2 , we have by the second estimate in (12.2.11), IM n IM n Lip;s C2 " : Lip;s C4C 0 C2C
1
1
2
Hence by (7.1.6) (with s D s1 C 4), kin;ı {Mn kLip;s1 C4 .s1 kF .M{n /kLip;s1 C4C :
(12.2.18)
Now, by the interpolation inequality (4.5.10) we have kF .M{n /kLip;s1 C4C .s3 kF .M{n /kLip;s1 kF .M{n /k1 Lip;s3 ;
(12.2.19)
where
4C
4C
; 1 WD : s3 s1 s3 s1 Therefore (12.2.19), (12.2.13), and (12.2.20) imply WD 1
(12.2.20)
2 54 2 1 " n "2 n1 kF .M{n /kLip;s1 C4C .s3 "2 n
(12.2.21)
for s3 s1 large enough and n small. Inequalities (12.2.18) and (12.2.21) prove the second estimate in (12.2.14). Proof of (12.2.15). By (7.2.6) we have In;ı IM n Lip;s
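The interpolation inequality (4.5.10) invoked above is the usual convexity of weighted $\ell^2$ norms; on a sequence model it reduces to Hölder's inequality and can be verified directly (the coefficient sequence below is arbitrary sample data).

```python
# Verification on a sequence model of the interpolation inequality
# ||u||_s <= ||u||_{s1}^theta * ||u||_{s3}^(1-theta) for s = theta*s1+(1-theta)*s3
# (Hoelder's inequality in weighted l^2).  The coefficients are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
l = np.arange(1, 500)
u = rng.uniform(size=l.size) / l ** 6

def norm(s):
    return np.sqrt(np.sum((l ** s * u) ** 2))

s1, s3, theta = 2.0, 10.0, 0.6
s = theta * s1 + (1 - theta) * s3
print(norm(s) <= norm(s1) ** theta * norm(s3) ** (1 - theta))   # True
```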
3 C4
.s3 IM n Lip;s
(12.2.11)
3 C5
4 2
"2 n 5
for n small enough, proving the first inequality in (12.2.15). The second inequality in (12.2.15) follows as a consequence of the first one and the third estimate in (12.2.11). Proof of (12.2.16). Recalling the definition of the operator F in (6.1.2) and using Lemma 4.5.5, and (12.2.11), we have kF .in;ı /kLip;s kF .M{n /kLip;s C C.s/kin;ı {Mn kLip;sC2 both for s D s0 and s D s1 C 2, and therefore kF .in;ı /kLip;s0 kF .in;ı /kLip;s1 C2
(12.2.13)(12.2.14)
.s0
(12.2.21)(12.2.14)
"2 n ; "2 n1 ;
proving the first two estimates in (12.2.16). Finally kF .in;ı /kLip;s3 C2 kF .i0 /kLip;s3 C2 C kF .in;ı / F .i0 /kLip;s3 C2 (12.2.1);Lemma 4.5.5
.s3
(12.2.15)
"2 C kIn;ı kLip;s3 C4 4 2
"2 n 5
(12.2.22)
for n small, which is the third inequality in (12.2.16).
Step 3. Symplectic diffeomorphism. We apply the symplectic change of variables Gn;ı defined in (7.1.9) with ../; yı ./; z.// D in;ı ./, which transforms the isotropic torus in;ı into (see (7.1.10)) 1 Gn;ı .in;ı .'// D .'; 0; 0/ :
(12.2.23)
It conjugates the Hamiltonian vector field XK associated to K defined in (3.2.7), with the Hamiltonian vector field (see (7.1.13)) XKn D .DGn;ı /1 XK ı Gn;ı
where
Kn WD K ı Gn;ı :
(12.2.24)
Denote by u WD .; ; w/ the symplectic coordinates induced by the diffeomorphism Gn;ı in (7.1.9). Under the symplectic map Gn;ı , the nonlinear operator F in (6.1.2) is transformed into
Fn .u.'// WD ! @' u.'/ XKn .u.'// D .DGn;ı .u.'///1F .Gn;ı .u.'/// :
(12.2.25)
By (12.2.25) and (12.2.23) (see also (7.1.15)) we have that
Fn .'; 0; 0/ D .!; 0; 0/ XKn .'; 0; 0/ D .DGn;ı .'; 0; 0//1F in;ı .'/ : (12.2.26)
Hence by (7.1.11), for s s0 , kFn .'; 0; 0/kLip;s
.s (12.2.16)
.s
kF .in;ı /kLip;s C IM n Lip;sC2 kF .in;ı /kLip;s0 kF .in;ı /kLip;s C IM n Lip;sC2 "2 n :
(12.2.27)
By (12.2.27), (12.2.16), and (12.2.11), we have kFn .'; 0; 0/kLip;s0 .s0 "2 n kFn .'; 0; 0/kLip;s1 C2
"2 n12
kFn .'; 0; 0/kLip;s3 C2
4 3 "2 n 5
(12.2.28) :
Step 4. Approximate solution of the linear equation associated to a Nash–Moser step in new coordinates. In order to look for a better approximate zero
unC1 .'/ D .'; 0; 0/ C hnC1 .'/
(12.2.29)
of the operator Fn .u/ defined in (12.2.25), we expand Fn .unC1 / D Fn .'; 0; 0/ C ! @' du XKn .'; 0; 0/ hnC1 C Qn .hnC1 / ; (12.2.30) where Qn .hnC1 / is a quadratic remainder, and we want to solve (approximately) the linear equation Fn .'; 0; 0/ C ! @' du XKn .'; 0; 0/ hnC1 D 0 : (12.2.31) This is the goal of the present step. Notice that the operator ! @' du XKn .'; 0; 0/ is provided by (7.1.21), and we decompose it as ! @' du XKn .'; 0; 0/ D Dn C RZn ;
(12.2.32)
where Dn is the operator in (7.1.22) with i replaced by {Mn , i.e., 0 1 0 1 .n/ .n/ b ! @' b ŒK11 > .'/b w K20 .'/b B C Dn @ b A ! @'b A WD @ .n/ .n/ b w b L! w b J K11 .'/ .n/
(12.2.33)
.n/
with L.n/ WD L! .M{n / (see (7.1.23)), the functions K20 .'/, K11 .'/ are the Taylor ! coefficients in (7.1.14) of the Hamiltonian Kn , and 1 0 0 1 b b @ K.n/ 10 .'/Œ B .n/ .n/ >b > C : (12.2.34) RZn @ b C Œ@ K.n/ bA A WD @@ K00 .'/Œb 10 .'/ C Œ@ K01 .'/ w b w b .'/Œ g J f@ K.n/ 01 In the next proposition we define hnC1 as an approximate solution of the linear equation Fn .'; 0; 0/ C Dn hnC1 D 0. Proposition 12.2.4 (Approximate right inverse of Dn ). For all in the set ƒnC1 WD ƒn \ƒ "I n ; IM n ;
1 WD 1=2;
1
4 n WD n1 Cn1 ; n 2 ; (12.2.35)
where ƒ."I ; I/ is defined in Proposition 12.1.1, there exists hnC1 satisfying 4
khnC1 kLip;s1 C2 .s1 n C "2 n5 such that
5
;
11 2
khnC1 kLip;s3 C2 "2 n 10
rnC1 WD Fn .'; 0; 0/ C Dn hnC1
;
(12.2.36) (12.2.37)
satisfies 3
krnC1 kLip;s1 "2 n2
6
:
(12.2.38)
Proof. Recalling (12.2.33) we look for an approximate solution hnC1 D .b ; b ; w b/ of 1 0 h i 0 1 > 0 1 .n/ .n/ b b b f.n/ wC 1 B! @' K20 .'/ K11 .'/b B C D @f.n/ C Dn @ b ; (12.2.39) b A D B 2 A A @ ! @' .n/ .n/ w b f3 L.n/ b J K11 .'/b ! w .n/
.n/
.n/
where .f1 ; f2 ; f3 / WD Fn .'; 0; 0/. Since XKn is a reversible vector field, its .n/ .n/ .n/ components f1 , f2 , f3 satisfy the reversibility property .n/ .n/ .n/ .n/ .n/ f.n/ .'/ D f .'/ ; f .'/ D f .'/ ; f .'/ D S f .'/ : 1 1 2 2 3 3 (12.2.40) We solve approximately (12.2.39) in a triangular way. D Step 1. Approximate solution of the second equation in (12.2.39), i.e., ! @'b .n/ f2 . We solve the equation D ˘Nn f.n/ ! @'b 2 .'/
(12.2.41)
where Nn is defined in (12.2.10) and the projector ˘Nn applies to functions depending only on the variable ' as in (10.3.9). Notice that ! is a Diophantine vector by (7.2.1) for all 2 ƒ, and that, by (12.2.40), the '-average of ˘Nn f.n/ 2 is zero. The solutions of (12.2.41) are h i h i .n/ b b Db 0 C b ; WD .! @' /1 ˘Nn f2 ; b 0 2 RjSj ; (12.2.42) where .! @' /1 is defined in (1.3.3) and b 0 is a free parameter that we fix below in (12.2.54). By (12.2.42), (7.2.1), and (3.3.7), we have h i (5.1.13) b 1 .n/ . ˘Nn f.n/ . N f (12.2.43) n 2 2 Lip;s
Lip;sC1
Lip;s
( 2 D 0 =4 is considered as a fixed constant). Therefore, using (12.2.43), (12.2.28), and (12.2.10), and taking s3 s1 large enough, we have for n small enough (depending on s3 ) h i h i 4 4 b "2 n13 ; b "2 n 5 : (12.2.44) Lip;s1 C2
Lip;s3 C2
Step 2. Approximate solution of the third equation in (12.2.39), i.e., h i .n/ .n/ .n/ 0 C J K11 b : L.n/ b D f3 C J K11 b ! w
(12.2.45)
3
For some 2 .0; " 2 / such that IM n
Lip;s3 C4
9
" 10
(12.2.46)
and which will be chosen later in (12.2.58), we consider the operator L1 appr; defined in Proposition 12.1.1 (with I Ý IM n ). Notice that, by (12.2.11), the assumption kIM n kLip;s2 C2 " required in Proposition 12.1.1 with I Ý IM n also holds. Moreover L1 b appr; satisfies (12.1.5) (independently of ). We define the approximate solution w of (12.2.45), h i .n/ .n/ b .n/b w b WD L1 C L1 f C J K (12.2.47) appr; appr; J K11 0 : 3 11 Step 3. Approximate solution of the first equation in (12.2.39). With (12.2.47), the first equation in (12.2.39) can be written as 0 C gn ; ! @' b D Tnb where
(12.2.48)
i> h .n/ .n/ .n/ L1 Tn WD K20 C K11 appr J K11 ; h i h i> h i .n/ b .n/ .n/ .n/ b L1 gn WD f.n/ : appr f3 C J K11 1 C K20 C K11
(12.2.49)
We have to choose the constant b 0 2 RjSj such that the right-hand side in (12.2.48) has zero average. By (7.1.17), the matrix Z D E jSj K.n/ K.n/ WD .2/ 20 20 .'/d' TjSj
is close to the invertible twist matrix "2 A (see (1.2.12)) and therefore by (1.2.12) there is a constant C such that D .n/ E1 K C "2 : (12.2.50) 20 By (12.2.49) the operator hTn iW 0 2 RjSj 7! hTn 0 i 2 RjSj is decomposed as E h i> D .n/ .n/ .n/ (12.2.51) J K hTn i WD K20 C K11 L1 appr 11 : Now, for 0 2 RjSj , ˇh ˇ ˇ ˇ .n/ i> 1 .n/ ˇ ˇ K ˇ 11 Lappr J K11 0 ˇ
h .n/ i> 1 .n/ K11 Lappr J K11 0 Lip Lip;s0 (7.1.20);(12.2.11) .n/ "2 L1 appr J K11 0 Lip;s0 (12.1.5) .n/ C.s1 /"2 K11 0 Lip;s0 CQ0 (7.1.19) C.s1 /"4 j0 jLip C IM n
Lip;s0
(12.2.11)
C.s1 /"4 j0 jLip
j j C CQ0 0 Lip (12.2.52)
using the fact that s0 C C Q0 s1 . By (12.2.51), (12.2.50), and (12.2.52), we deduce that, for " small enough, hTn i is invertible and hTn i1 2C "2 : (12.2.53) Thus, in view of (12.2.48), we define b 0 WD hTn i1 hgn i 2 RjSj :
(12.2.54)
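The role of the average $\langle T_n\rangle$ is purely finite dimensional: it is a small perturbation of the invertible twist matrix $\varepsilon^2 A$, so the free constant is obtained by solving a $|S|\times|S|$ linear system that makes the average of the right-hand side vanish (up to sign conventions). A minimal linear-algebra sketch, with invented matrices, is the following.

```python
# Minimal sketch of the finite-dimensional step: <T_n> is a perturbation of the
# invertible twist matrix eps^2*A, so the free constant is obtained by solving a
# small linear system making the average of the right-hand side vanish.
# A, the perturbation and the average of g_n are made-up data.
import numpy as np

eps = 0.1
A = np.array([[2.0, 1.0], [1.0, 3.0]])                    # hypothetical twist matrix
Tn_avg = eps**2 * A + eps**4 * np.array([[0.3, -0.1], [0.2, 0.4]])
gn_avg = np.array([0.05, -0.02])

zeta0 = -np.linalg.solve(Tn_avg, gn_avg)
print(np.allclose(Tn_avg @ zeta0 + gn_avg, 0.0))           # zero average achieved
print(np.linalg.norm(np.linalg.inv(Tn_avg), 2)
      <= 2 / eps**2 * np.linalg.norm(np.linalg.inv(A), 2))  # inverse of size O(eps^-2)
```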
We now estimate the function gn defined in (12.2.49). By (7.1.18), (7.1.20), (12.2.11), and (12.1.5), and using that s1 > s0 C Q0 C , h i b kgn kLip;s0 f.n/ C K.n/ 1 20 Lip;s0 Lip;s0 h h i .n/ i> 1 .n/ .n/ b C K11 Lappr f3 C J K11 .s0
h i .n/ C "2 b f1 Lip;s0 Lip;s0 h i .n/ .n/ b C "2 f3 C K 11 0 Lip;s0 CQ
Lip;s0
Lip;s0 CQ0
:
Hence by (12.2.44), (7.1.19), and (12.2.11), 4 13 2 .n/ kgn kLip;s0 .s0 f.n/ C " C " f3 Lip;s n 1 Lip;s
0 CQ
0
0
C "4 n13
.s1 "2 n C "4 n13 C "2 "2 n12 C "4 n13
(12.2.28)
.s1 "2 n C "4 n13 :
(12.2.55)
Then the constant b 0 defined in (12.2.54) satisfies, by (12.2.53) and (12.2.55), ˇ ˇ ˇb ˇ (12.2.56) ˇ 0 ˇ .s1 n C "2 n13 : Lip
Using (12.2.28), (7.1.19), (12.2.56), (12.2.44), and (12.2.11), we have h i .n/ .n/ b b C J K.n/ C J K "2 n13 ; f3 0 11 11 Lip;s1 Lip;s1 Lip;s1 (12.2.57) h i 4 5 .n/ .n/ .n/ 0 C J K11 b C J K11 b "2 n 5 f3 Lip;s3 C2
Lip;s3 C2
Lip;s3 C2
for n small enough. We now choose the constant in (12.2.46) as WD n13
(12.2.58)
so that 2 .0; "3=2 / and, by the third inequality in (12.2.11), condition (12.2.46) is satisfied, provided is chosen small enough (and hence s3 is large enough). We apply .n/b .n/ b M Proposition 12.1.1 with g D f.n/ 3 C J K11 0 C J K11 Œ (and I Ý In ) noting that (12.2.57) implies (12.1.2) with defined in (12.2.58), again provided is chosen small enough. As a consequence, by (12.1.3) the function w b defined in (12.2.47) satisfies (for n small enough) 4
kb w kLip;s1 C.s1 /"2 n5 kb w kLip;s3 C2
.13/
4
"2 n5
11 .13/ C.s3 /"2 n 10
3
;
11 C3 "2 n 10
(12.2.59) :
By (12.2.59) and interpolation inequality (4.5.10), arguing as above (12.2.21), we obtain 4 4 kb w kLip;s1 C2 "2 n5 ; (12.2.60) for s3 large enough and n small enough (depending on s3 ). Moreover, by (12.1.4), h i .n/ .n/b .n/ b L.n/ w b f C J K C J K (12.2.61) D rnC1 ; ! 3 11 0 11 where rnC1 satisfies 3
krnC1 kLip;s1 C.s1 /"2 n2
.13/
3
"2 n2
5
:
(12.2.62)
0 C gn has zero mean value, Now, since b 0 has been chosen in (12.2.54) so that Tnb the equation (12.2.48) has the approximate solution b 0 C gn : (12.2.63) WD .! @' /1 ˘Nn Tnb Recalling the definition of gn and Tn in (12.2.49) we have h i> .n/b .n/ 0 D f.n/ w b; gn C Tnb 1 C K20 C K11
(12.2.64)
where w b is defined in (12.2.47). By (12.2.64) we estimate 0 gn C Tnb
(7.1.18);(7.1.20);(12.2.11)
Lip;s1
.n/ 2 b f C C.s /" 1 1 Lip;s1
Lip;s1
(12.2.28);(12.2.44);(12.2.56);(12.2.59)
"2 n12
2
C C.s1 /" 4
"2 n5
3
C kb w kLip;s1
n C
"2 n13
2
4 5 3
C " n
(12.2.65)
12.2 Nash–Moser iteration
285
and, similarly, by (7.1.18), (7.1.20),(12.2.11), (12.2.28), (12.2.44), (12.2.56), and (12.2.59), we get .n/ gn C Tnb 0 Lip;s f1 Lip;s C C.s3 /"2 kb kLip;s3 C kb w kLip;s3 3 3 kLip;s0 C kb w kLip;s0 C C.s3 /"2 IM n Lip;s C kb 3
4 3 "2 n 5
C
11 C3 C.s3 /"2 n 10
4
C C.s3 /"2 n 5
11
"2 n 10 :
4
n5
3
(12.2.66)
By (7.2.1), (3.3.7), (5.1.13), (12.2.10), (12.2.65), and (12.2.66), the function b satisfies, for s3 s1 large enough 4 11 b b 2 5 4 " ; "2 n 10 : (12.2.67) n Lip;s3 C2
Lip;s1
Using again the interpolation inequality (4.5.10), we deduce by (12.2.67), the estimate 4 5 b "2 n5 ; (12.2.68) Lip;s1 C2
for s3 large enough and n small enough. Moreover the remainder 0 C gn WD ˘N?n Tnb r.nC1/ 3 satisfies .nC1/ r3
Lip;s1
(5.1.13)
0 C gn Nn.s3 s1 / Tnb
(12.2.10);(12.2.66)
Lip;s3
(12.2.69)
3
"2 n2 : (12.2.70)
; b ; w b/ defined in (12.2.63), (12.2.42), Step 4. Conclusion. We set hnC1 WD .b (12.2.54), and (12.2.47). The estimates in (12.2.36) follow by (12.2.67), (12.2.68), (12.2.44), (12.2.56), (12.2.59), (12.2.60), and recalling that n 1 D C.s1 /"2 . Finally, by (12.2.39), (12.2.42), (12.2.54), and (12.2.61), we have that 0 1 0 .n/ 1 0 .nC1/ 1 b f r3 B 1.n/ C @ A; Dn b (12.2.71) A @f2 A D @˘N?n f.n/ 2 .n/ w b rnC1 f3 is defined in (12.2.69) and rnC1 in (12.2.61). The estimate (12.2.38) where r.nC1/ 3 follows by (12.2.71), (12.2.70), the bound 3 (5.1.13) (12.2.10);(12.2.28) ? .n/ Nn.s3 s1 / f.n/ "2 n2 ; ˘Nn f2 2 Lip;s1
and (12.2.62).
Lip;s3
The function hnC1 defined in Proposition 12.2.4 is an approximate solution of equation (12.2.31), according to the following corollary. Corollary 12.2.5. The function
r0nC1 WD Fn .'; 0; 0/ C ! @' du XKn .'; 0; 0/ hnC1
satisfies
3 0 2 2 8 r : nC1 Lip;s " n
(12.2.72) (12.2.73)
1
Proof. The term r0nC1 in (12.2.72) is, by (12.2.32) and (12.2.37),
r0nC1 D rnC1 C RZn hnC1 :
(12.2.74)
By the expression of RZn in (12.2.34), using the tame estimate (4.5.1) for the products of functions and (7.2.13), we get .n/ .n/ RZ hnC1 . K ! C K s n 1 10 01 Lip;s1 Lip;s1 C1 Lip;s1 C1 .n/ khnC1 kLip;s1 C @ K00 Lip;s1 C1
(7.1.16);(12.2.11)
.s1
kFn .'; 0; 0/kLip;s1 C1C khnC1 kLip;s1
(12.2.75)
having used that s1 C 1 C s2 and the estimate kIM n kLip;s2 " in (12.2.11). Now, by (12.2.28) and the interpolation inequality (4.5.10), we get, for s3 s1 large enough, kFn .'; 0; 0/kLip;s1 C1C "2 n13 :
(12.2.76)
So we derive, by (12.2.75), (12.2.76), and (12.2.36), the estimate 9
kRZn hnC1 kLip;s1 "2 n5
8
:
In conclusion (12.2.74), (12.2.38), and (12.2.77) imply (12.2.73).
(12.2.77)
Step 5. Approximate solution inC1 . Finally, for all in the set ƒnC1 introduced in (12.2.35), we define the new approximate solution of the Nash–Moser iteration in the original coordinates as inC1 WD in;ı C hnC1 ;
hnC1 WD DGn;ı .'; 0; 0/hnC1 ;
(12.2.78)
where in;ı is the isotropic torus defined in Lemma 12.2.3 and hnC1 is the function defined in Proposition 12.2.4. By (7.1.11) and (12.2.11) coupled with the inequality s1 C 4 s2 C 2 and (12.2.36), we have 4
5
khnC1 kLip;s1 C2 .s1 khnC1 kLip;s1 C2 .s1 n C "2 n5 ; (12.2.79) 11 3 khnC1 kLip;s3 C2 .s3 khnC1 kLip;s3 C2 C IM n Lip;s C4 khnC1 kLip;s0 "2 n 10 3 (12.2.80) for n small enough.
Lemma 12.2.6. The term %nC1 WD F .in;ı / C di F .in;ı /hnC1
(12.2.81)
satisfies 3
k%nC1 kLip;s1 "2 n2
9
:
(12.2.82)
Proof. Differentiating the identity (see (12.2.25)) DGn;ı .u.'//Fn.u.'// D F .Gn;ı .u.'///
(12.2.83)
we obtain D 2 Gn;ı .u.'//Œh; Fn.u.'// C DGn;ı .u.'//du Fn .u.'//Œh D di F .Gn;ı .u.'///DGn;ı .u.'//Œh : For u.'/ D .'; 0; 0/ and h D hnC1 this gives, recalling (12.2.23) and (12.2.78), D 2 Gn;ı .'; 0; 0/ŒhnC1; Fn .'; 0; 0/ C DGn;ı .'; 0; 0/duFn .'; 0; 0/ŒhnC1 (12.2.84) D di F .in;ı .'//hnC1 : By (12.2.83) we have F .in;ı / D DGn;ı .'; 0; 0/Fn.'; 0; 0/ and therefore, by (12.2.84),
F .in;ı / C di F .in;ı /hnC1
D DGn;ı .'; 0; 0/ Fn .'; 0; 0/ C du Fn .'; 0; 0/ŒhnC1 C D 2 Gn;ı .'; 0; 0/ŒhnC1; Fn .'; 0; 0/
(12.2.72);(12.2.30)
D
DGn;ı .'; 0; 0/r0nC1 C D 2 Gn;ı .'; 0; 0/ŒhnC1; Fn .'; 0; 0/ :
In conclusion, the term %nC1 in (12.2.81) satisfies k%nC1 kLip;s1 kDGn;ı .'; 0; 0/r0nC1kLip;s1 C D 2 Gn;ı .'; 0; 0/ŒhnC1; Fn .'; 0; 0/ (7.1.11);(7.1.12)
.s1
Lip;s1
1 C IM n Lip;s C2 kr0nC1 kLip;s1 1 C 1 C IM n Lip;s C3 khnC1 kLip;s1 kFn .'; 0; 0/kLip;s1 1
(12.2.11)
.s1 kr0nC1 kLip;s1 C khnC1 kLip;s1 kFn .'; 0; 0/kLip;s1 ;
(12.2.85)
where we use that s1 C 3 s2 C 2. Hence by (12.2.85), (12.2.73), (12.2.36), and (12.2.28), we get
3 4 2 2 8 2 5 5 k%nC1 kLip;s1 .s1 " n "2 n12 ; C n C " n which gives (12.2.82).
Lemma 12.2.7. $\|F(i_{n+1})\|_{\mathrm{Lip},s_1}\le\varepsilon^2\,\epsilon_n^{3/2-10\sigma}$.
Proof. By (12.2.78) we have
F .inC1 / D F .in;ı ChnC1 / D F .in;ı /Cdi F .in;ı /hnC1 CQ.in;ı ; hnC1 / (12.2.86) and, by the form of F in (6.1.2), Q.in;ı ; hnC1 / D Fz .in;ı C hnC1 / Fz .in;ı / di Fz .in;ı /hnC1 Z 1 D .1 / Di2 Fz .in;ı C hnC1 /ŒhnC1 ; hnC1 d ; 0
where Fz is the “nonlinear part” of F defined as 1 0 "2 .@y R/.i.'/; / Fz .i / WD @ "2 .@ R/.i.'/; / A "2 0; .rQ R/.i.'/; / and R is the Hamiltonian defined in (3.2.8) and (3.2.9). We have kQ.in;ı ; hnC1 /kLip;s1 .s1 "2 1 C kIn;ı kLip;s1 C khnC1 kLip;s1 khnC1 k2Lip;s1 (12.2.11);(12.2.14)
.s1
(12.2.79)
.s1
"2 khnC1 k2Lip;s1 8
"2 n5
10
:
(12.2.87)
The lemma follows by (12.2.86), Lemma 12.2.6 and (12.2.87).
Step 6. End of the Induction. We are now ready to prove that the approximate solution inC1 defined in (12.2.78) for all in the set ƒnC1 ƒn introduced in (12.2.35) satisfies the estimates (12.2.5)–(12.2.7) and (12.2.8) at step n C 1. By Lemma 12.2.7 and (12.2.3) we obtain 3
kF .inC1 /kLip;s1 "2 n2
10
D "2 nC1
with ? WD 10
(12.2.88)
(see the definition of (n ) in (12.2.3)). This proves (12.2.8)nC1 . Moreover we have kinC1 in kLip;s1 C2 kinC1 in;ı kLip;s1 C2 C kin;ı {Mn kLip;s1 C2 C kM{n in kLip;s1 C2 (12.2.78);(12.2.79);(12.2.14);(12.2.12)
.s1
4
n C "2 n5 4
C.s1 /n C "2 n5
6
5
C "2 n1 C "2 n2 (12.2.89)
proving (12.2.6)nC1 since ? WD 10 . Similarly we get kinC1 {Mn kLip;s3 C2 kinC1 in;ı kLip;s3 C2 C kin;ı {Mn kLip;s3 C2 (12.2.78);(12.2.80);(12.2.15)
(12.2.3);(12.2.88)
11 4
"2 n 10
(12.2.90)
54
"2 nC1
(12.2.91)
and, by (12.2.91) and (12.2.12), 4
5 kinC1 in kLip;s3 C2 kinC1 {Mn kLip;s3 C2 C kM{n in kLip;s3 C2 "2 nC1 (12.2.92)
provided is chosen small enough. The bound (12.2.7)nC1 for kinC1 in kLip;s2 follows by interpolation: setting s2 C 2 D s1 C .1 /s3 ;
D
s3 s2 2 ; s3 s1
1 D
s2 C 2 s1 ; s3 s1
we have, using n D O."2 /, (4.5.10)
kinC1 in kLip;s2 C2 .s1 ;s3 kinC1 in kLip;s1 kinC1 in k1 Lip;s3
1 (12.2.89);(12.2.92) 4 11 2 5 6 2 10 4 max n ; " n " n .s1 ;s3 .s1 ;s3
1 1 1 3 3 11 3 3 4 10 2 2 " n " n " 2 n4
(12.2.93)
for s3 large enough. The bounds for InC1 follow by a telescoping argument. Using the first estimate in (12.2.5)1 and (12.2.6)k for all 1 k n C 1 we get kInC1 kLip;s1 C2 kI1 kLip;s1 C2 C
nC1 X
4
kIkC1 Ik kLip;s1 C2 .s1 "2 C "2 15
?
kD1
C.s1 /"2
since 1 D C.s1 /"2 . This proves the first inequality in (12.2.5)nC1 . Moreover kInC1 kLip;s3 C2
(12.2.11)(12.2.90)
IM n Lip;s
3 C2
4
"2 n 5
2"2 n 10
C kinC1 {Mn kLip;s3 C2 11 4
C "2 n 10
11 4 (12.2.3)
4
5 "2 nC1 ;
(12.2.94)
which is the third estimate in (12.2.5)nC1 . The second estimate in (12.2.5)nC1 follows similarly by (6.3.4) and (12.2.7)k , for all k n C 1. In order to complete the proof of Theorem 12.2.1 it remains to prove the measure estimates (12.2.4).
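Before turning to the measure estimates, it may help to visualise the convergence mechanism just established: an error law of the schematic form "new error behaves like the old error to the power $3/2$ minus a small loss" decays super-exponentially, so the corrections, which are of comparable size, form a convergent telescoping series. The toy iteration below, with made-up constants, illustrates this.

```python
# Schematic illustration (made-up constants) of the super-linear convergence
# driving the Nash-Moser scheme: with e_{n+1} ~ e_n**(3/2 - sigma) and sigma small,
# the errors decay super-exponentially and their sum converges.
e, sigma = 1e-2, 0.05
errors = []
for _ in range(8):
    errors.append(e)
    e = e ** (1.5 - sigma)                 # one iteration step of the toy model

print(errors)                               # super-exponential decay
print(sum(errors))                          # the telescoping series converges
```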
Lemma 12.2.8. The sets ƒn defined iteratively in (12.2.35) with ƒ1 WD ƒ, satisfy (12.2.4). Proof. The estimate jƒ1 n ƒ2 j b."/ with lim b."/ D 0 follows by item 2 of "!0
Proposition 12.1.1 with ƒI D ƒIM 1 D ƒ. In order to prove (12.2.4) for n 3 notice that, by the definition of ƒnC1 (and ƒn ) in (12.2.35), we have \ \h ic ƒ "I n1 ; IM n1 ; n 2 : (12.2.95) ƒ "I n ; IM n ƒn n ƒnC1 D ƒn1 In addition, for all 2 ƒn , we have, for ˇ WD IM n IM n1 s
3 , 4
D ˘Nn In ˘Nn IM n1 s
(12.2.9)
1 C2
1 C2
k˘Nn .In In1 /ks1 C2 C ˘Nn In1 IM n1 s
1 C2
(5.1.12)
Nn2 kin in1 ks1 C kin1 {Mn1 ks1 C2
6 4 (12.2.6);(12.2.12) s s 2 5 ? 2 3 1 C.s1 /n1 C " n1 C "2 n1 n
3
ˇ n1 "2;
8n 2 ;
(12.2.96) 1
2
ˇ
4 5 since n1 C.s1 /"2 . Now by (12.2.35), we have n n1 D n1 n1 . Hence the estimates (12.1.1) and (12.2.96) imply that the set of (12.2.95) satisfies the measure estimate
ˇ ˛=3 ˛=4 jƒn n ƒnC1 j n1 n1 ;
which proves (12.2.4). Proof of Theorem 6.1.2. The torus embedding i1 WD i0 C .i1 i0 / C
X
.in in1 /
n2
is defined for all in C1 WD \n1 ƒn , and, by (6.3.4), (12.2.6), and (12.2.7), it is convergent in k kLip;s1 and k kLip;s2 -norms with ki1 i0 kLip;s1 C.s1 /"2 ;
ki1 i0 kLip;s2 " ;
proving (6.1.9). By (12.2.8) we deduce that 8 2 C1 D \n1 ƒn ;
F .I i1 .// D 0 :
Finally, by (12.2.4) we deduce (6.1.8), i.e., that C1 is a set of asymptotically full measure. Remark 12.2.9. The previous result holds if the nonlinearity g.x; u/ and the potential V .x/ in (1.1.1) are of class C q for some q large enough, depending on s3 .
12.3 C 1 solutions In this section we prove the last statement of Theorem 6.1.2 about C 1 solutions. By Proposition 9.2.2 and a simple modification of the proof of Proposition 12.1.1, which substitutes any s s3 to s3 (see Remark 12.1.2), we obtain the following result. Proposition 12.3.1. We use the same assumptions as in Proposition 12.1.1. From (12.1.1), the conclusion can be modified in the following way: for any s s3 , there is z.s/ such that the following holds. 3 9 For any 2 .0; " 2 /\.0; z.s// such that kIkLip;sC4 " 10 , there exists a linear 1 s ? operator L1 appr WD Lappr;;s such that, for any function gW ƒI ! H \ HS satisfying kgkLip;s1 "2 ;
9
kgkLip;s "2 10 ;
(12.3.1)
s ? the function h WD L1 appr g, hW ƒ."I 5=6; I/ ! H \ HS satisfies 4
khkLip;s1 C.s1 /"2 5 ;
11
khkLip;sC2 C.s/"2 10 ;
(12.3.2)
and, setting L! WD L! .i/, we have 3
kL! h gkLip;s1 C.s1 /"2 2 :
(12.3.3)
Furthermore, setting Q0 WD 2. 0 C &s1 / C 3 (where & D 1=10 and 0 are given by 0 Proposition 5.1.5), for all g 2 Hs0 CQ \ HS? , 1 L g .s1 kgkLip;s0 CQ0 : (12.3.4) appr Lip;s 0
Note that in Proposition 12.3.1 the sets ƒ."I 5=6; I/ do not depend on s and are the same as in Proposition 12.1.1. Thanks to Proposition 12.3.1 we can modify the Nash–Moser scheme in order to keep along the iteration the control of higher and higher Sobolev norms. This new scheme relies on the following result, which is used in the iteration. We recall that the sequence .n / is defined in (12.2.3) and the sequence . n / in (12.2.35). Proposition 12.3.2. For any s s3 , there is z0 .s/ > 0 with the following property. Assume that for some n such that n z0 .s/, there is a map In W 7! In ./, defined for in some set ƒn , such that, kIn kLip;s1 C2 C.s1 /"2 ;
kIn kLip;s2 C2 ";
4
kIn kLip;sC2 "2 n 5 ;
and kF .in /kLip;s1 "2 n
where in .'/ D .'; 0; 0/ C In .'/ :
(12.3.5)
Let 3 3 ss ss Nn;s 2 n 1 1; n 1 C 1
and IM n;s WD ˘Nn;s In :
(12.3.6)
Then there exists a map 7! InC1 ./, defined for 2 ƒn \ ƒ."I n ; IM n;s /, which satisfies 4
kinC1 in kLip;s1 C2 C.s1 /n C "2 n5
?
;
1
kinC1 in kLip;s2 "2 n5 ; (12.3.7)
4
5 kInC1 kLip;sC3 "2 nC1
(12.3.8)
and kF .inC1 /kLip;s1 "2 nC1 ; where ? is chosen as in Theorem 12.2.1. Proof. The proof follows exactly the steps of the inductive part in the proof of Theo3 ss
rem 12.2.1, with Nn;s n 1 and with the same small exponent . We have just to observe that in Step 1 (regularization) IM n
(5.1.13) Lip;sC C7
(12.3.6);(12.3.5)
C5
Nn;s kIn kLip;sC2
C.s/n
3.C5/ ss1
4
4
"2 n 5 "2 n 5
;
for 3. C 5/=.s3 s1 / < and n small enough (depending on s). As a result 4 2 5 IM n " ; n Lip;sC1C C6
i.e., in the third estimate of (12.2.11) in Lemma 12.2.2, s C 1 can be substituted to s3 . From this point, we use the substitution s3 Ý s C 1 in all the estimates of the induction, where s3 appears, except the second estimate of (12.2.12) and (12.2.92), where we keep s3 . Note that these last two estimates are useful only to obtain the bound (12.2.93) on the norm kinC1 in kLip;s2 C2 by an interpolation argument. All the estimates where we apply the substitution s3 Ý s C 1 hold provided that n is smaller than some possibly very small, but positive constant, depending on s. In particular, the analogue of Proposition 12.2.4 uses Proposition 12.3.1, and the second estimate of (12.2.36) is replaced by 11 2
khnC1 kLip;sC3 "2 n 10
:
To prove (12.3.8), we use (12.2.94) with the substitution $s_3\rightsquigarrow s+1$. Note that in this estimate we need only a bound for $\|\breve I_n\|_{\mathrm{Lip},s+3}$, not for $\|I_n\|_{\mathrm{Lip},s+3}$.
We now consider a nondecreasing sequence .pn / of integers with the following properties: (i) p1 D 0 and lim pn D 1; n!1 0
(ii) 8n 1; n z .s3 C pn /; (iii) 8n 1; pnC1 D pn or pnC1 D pn C 1. The sequence .pn / can be defined iteratively in the following way: p1 WD 0, so that property (ii) is satisfied for n D 1, provided " is small enough (the smallness condition depending on s3 ). Once pn is defined (satisfying (ii)), we choose ( pn C 1 if nC1 z0 .s3 C pn C 1/ pnC1 WD otherwise : pn Then property (ii) is satisfied at step n C 1 in both cases, because .n / is decreasing. The sequence .pn / satisfies (i) because lim n D 0 (so that the sequence cannot be n!1 stationary) and (iii) by definition. Starting, as in Section 12.2, with the torus i1 .'/ defined in Lemma 6.3.1 for all 2 ƒ D ƒ1 and using repeatedly Proposition 12.3.2, we obtain the following theorem. Theorem 12.3.3 (Nash–Moser C 1 ). Let !N " 2 RjSj be . 1 ; 1 /-Diophantine and satisfy property .NR/1 ;1 in Definition 5.1.4 with 1 ; 1 fixed in (1.2.28). Assume (9.2.1) and s2 s1 C 2, s1 s0 C 2 C C Q0 , where is the loss of derivatives defined in Proposition 7.1.1 and Q0 WD 2. 0 C &s1 / C 3 is defined in Proposition 11.2.1. Then, for all 0 < " "0 small enough, for all n 1, there exist 1. A subset ƒn ƒn1 , ƒ1 WD ƒ0 WD ƒ, satisfying jƒ1 n ƒ2 j b."/ with ˛ jƒn1 n ƒn j n2 ;
lim b."/ D 0 ;
"!0
(12.3.9)
8n 3 ;
where ˛ D ˛=4 and ˛ > 0 is the exponent in (12.1.1), 2. A torus in .'/ D .'; 0; 0/ C In .'/, defined for all 2 ƒn , satisfying kIn kLip;s1 C.s1 /"2 ;
kIn kLip;s2 C2 "; 4
?
5 kin in1 kLip;s1 C2 C.s1 /n1 C "2 n1 ; 1 5
kin in1 kLip;s2 "n1 ; and
4
kIn kLip;s3 Cpn C2 "2 n 5 ; (12.3.10)
for n 2 ;
kF .in /kLip;s1 "2 n :
for n 2 ;
(12.3.11) (12.3.12) (12.3.13)
As stated at the end of Section 12.2, (12.3.11) and (12.3.12) imply that the sequence .in / converges in k kLip;s1 and k kLip;s2 norms to a map i1 W 7! i1 ./, defined on
C1 D \n1 ƒn ; which satisfies 8 2 C1 ;
F .I i1 .// D 0 :
There remains to justify that kI1 kLip;s D ki1 i0 kLip;s < 1;
8s > s2 :
For a given s > s2 , let us fix n large enough, so that s
1 3 s1 C s 4 4
with
s WD s3 C pn C 2 :
(12.3.14)
Then, by (12.3.10), for all k n C 1, 4
kIk Ik1 kLip;s kIk kLip;s C kIk1 kLip;s 2"2 k 5 :
(12.3.15)
Hence, by interpolation, kIk Ik1 kLip;s
(12.3.14)
kIk Ik1 kLip; 3s1 Cs 4
3 4
(4.5.10) (12.3.11);(12.3.15)
1
4 C.s/kIk Ik1 kLip;s1 kIk Ik1 kLip;s
3
? 1
3
?
5 10 C.s/k1 k 5 C.s/k1
(12.3.16)
q using that k D k1 , with q < 3=2 (see (12.2.3)). Moreover, recalling that s WD s3 C pn C 2 (see (12.3.14)), we deduce by (12.3.10) that
kIn kLip;s < 1 :
(12.3.17)
In conclusion, (12.3.17) and (12.3.16) imply, since .3=10/ ? > 0, that X kIn kLip;s C kIk Ik1 kLip;s < 1 ; knC1
and the sequence .In /nn converges in H's H's .Hs \ HS? / to I1 D i1 i0 . Since s is arbitrary, we conclude that i1 ./ is C 1 for any 2 C1 .
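The mechanism that upgrades convergence to $C^\infty$ can be isolated in a toy computation: the low norm of the corrections decays super-exponentially, the high norm is allowed to grow like a negative power of the same small sequence, and interpolation with weights $3/4$ and $1/4$ still produces a summable bound. All exponents and constants below are illustrative stand-ins, not the ones of the text.

```python
# Toy computation behind the C^infinity argument: interpolating a rapidly
# decaying low norm against a slowly growing high norm (weights 3/4 and 1/4)
# still yields a summable series.  All exponents/constants are stand-ins.
eps, q = 1e-2, 1.4                               # a growth exponent q < 3/2

def small(n):
    # a super-exponentially decreasing sequence, small(1) = eps^2
    return (eps**2) ** (q ** (n - 1))

total = 0.0
for k in range(2, 10):
    low = small(k - 1) ** 0.8                    # stand-in for the low-norm bound
    high = eps**2 * small(k) ** (-4.0 / 5.0)     # stand-in for the high-norm bound
    total += low ** 0.75 * high ** 0.25          # interpolation step
print(total)                                      # finite
```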
Chapter 13

Genericity of the assumptions

The aim of this chapter is to prove the genericity result stated in Theorem 1.2.3.
13.1 Genericity of nonresonance and nondegeneracy conditions

We fix $s>d/2$, so that we have the compact embedding $H^s(\mathbb T^d)\hookrightarrow C^0(\mathbb T^d)$. We denote by $B(w,r)$ the open ball of center $w$ and radius $r$ in $H^s(\mathbb T^d)$. Recalling Definition 1.2.2 of $C^\infty$-dense open sets, it is straightforward to check the following lemma.

Lemma 13.1.1. The following properties hold:
1. A finite intersection of $C^\infty$-dense open subsets of $U$ is $C^\infty$-dense open in $U$;
2. A countable intersection of $C^\infty$-dense open subsets of $U$ is $C^\infty$-dense in $U$;
3. Given three open subsets $W\subset V\subset U$ of $H^s(\mathbb T^d)$ (resp. of $H^s(\mathbb T^d)\times H^s(\mathbb T^d)$), if $W$ is $C^\infty$-dense in $V$ and $V$ is $C^\infty$-dense in $U$, then $W$ is $C^\infty$-dense in $U$.

Moreover we have the following useful result.

Lemma 13.1.2. Let $U$ be a connected open subset of $H^s(\mathbb T^d)$ and let $f:U\to\mathbb R$ be a real analytic function. If $f\not\equiv 0$, then
$$Z(f)^c:=\{\,w\in U:\ f(w)\ne 0\,\}$$
is a $C^\infty$-dense open subset of $U$.

Proof. Since $f:U\to\mathbb R$ is continuous, $Z(f)^c$ is an open subset of $U$. Arguing by contradiction, we assume that $Z(f)^c$ is not $C^\infty$-dense in $U$. Then there are $w_0\in U$, $s'\ge s$ and $\delta>0$ such that: for all $h\in C^\infty(\mathbb T^d)$ satisfying $\|h\|_{H^{s'}}<\delta$, we have that $f(w_0+h)=0$. Let $\rho>0$ be such that the ball $B(w_0,\rho)\subset U$. We claim that $f$ vanishes on $B(w_0,\rho)\cap C^\infty(\mathbb T^d)$. Indeed, if $h\in C^\infty(\mathbb T^d)$ satisfies $\|h\|_{H^s}<\rho$, we have the segment $[w_0,w_0+h]\subset U$, and the map $\varphi:t\mapsto f(w_0+th)$ is real analytic on an open interval of $\mathbb R$ which contains $[0,1]$. Moreover $\varphi(t)$ vanishes on the whole interval $|t|<\delta\|h\|_{H^{s'}}^{-1}$. Hence $\varphi$ vanishes everywhere, and $f(w_0+h)=\varphi(1)=0$. Now, by the fact that $C^\infty(\mathbb T^d)$ is a dense subset of $H^s(\mathbb T^d)$, and $f$ is continuous, we conclude that $f$ vanishes on the whole ball $B(w_0,\rho)$.
Let
$$V:=\{\,w\in U:\ f\ \text{vanishes on some open neighborhood of }w\,\}.$$
From the previous argument, V is not empty, and it is by definition an open subset of U . Let us prove that it is closed in U too. Assume that the sequence .wn / H s converges to some w and satisfies: wn 2 V for all n. Then there is r > 0 such that, for n large enough, the ball B.wn ; r/ contains w and is included in U . Using the same argument as before (with wn instead of w0 ), we can conclude that f vanishes on the whole ball B.wn ; r/, hence w 2 V . Since U is connected, we finally obtain V D U , which contradicts the hypothesis f 6 0. The proof of Theorem 1.2.3 uses results and arguments provided by Kappeler and Kuksin [96] that we recall below. We first introduce some preliminary information. For a real-valued potential V 2 H s .Td /, we denote by .j .V //j 2N the sequence of the eigenvalues of the Schr¨odinger operator C V .x/, written in increasing order and counted with multiplicity 0 .V / < 1 .V / 2 .V / : These eigenvalues are Lipschitz-continuous functions of the potential, namely jj .V1 / j .V2 /j kV1 V2 kL1 .Td / . kV1 V2 kH s .Td / :
(13.1.1)
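The set of potentials with simple low eigenvalues introduced next can be explored numerically: in dimension $d=1$, truncating $-\Delta+V$ in the Fourier basis gives a finite Hermitian matrix whose low eigenvalues approximate those of the Schrödinger operator, and one observes that a generic trigonometric potential splits the double eigenvalues of $-\Delta$. The potential and the cutoff below are arbitrary choices, and the computation is only a numerical illustration, not part of the argument.

```python
# Numerical illustration (d = 1, truncated Fourier basis): the eigenvalues k^2 of
# -Delta on the torus are double, while a generic trigonometric potential splits
# them, so that the low eigenvalues of -Delta + V become simple.
import numpy as np

K = 30                                           # Fourier cutoff (approximation only)
ks = np.arange(-K, K + 1)
x = np.linspace(0.0, 2 * np.pi, 512, endpoint=False)
V = 1.0 + 0.5 * np.cos(x) + 0.4 * np.cos(2 * x)  # an arbitrary smooth positive potential

Vhat = np.fft.fft(V) / x.size                    # Fourier coefficients of V
H = np.diag(ks.astype(float) ** 2) + np.array(
    [[Vhat[(ki - kj) % x.size] for kj in ks] for ki in ks]
)
eigs = np.sort(np.linalg.eigvalsh(H))[:8]
print(eigs)
print(np.diff(eigs).min())                       # strictly positive: simple low eigenvalues
```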
Step 1. The construction of Kappeler and Kuksin [96]. For $J \subset \mathbb{N}$, define the set of potentials $V := V(x)$ in $H^s(\mathbb{T}^d)$ such that the eigenvalues $\lambda_j(V)$, $j \in J$, of $-\Delta + V(x)$ are simple, i.e.,
$$ E_J := \big\{ V \in H^s(\mathbb{T}^d) : \lambda_j(V) \text{ is a simple eigenvalue of } -\Delta + V(x), \ \forall j \in J \big\}. $$
The set $E_{[\![0,N]\!]}$ will be simply denoted by $E_N$. Since the eigenvalues $\lambda_j(V)$ of $-\Delta + V(x)$ are simple on $E_J$, it turns out that each function
$$ \lambda_j : E_J \to \mathbb{R}, \quad j \in J, $$
is real analytic. Moreover the corresponding eigenfunctions $\Psi_j := \Psi_j(V)$, normalized with $\|\Psi_j\|_{L^2} = 1$, can be locally expressed as real analytic functions of the potential $V \in E_J$. By [96, Lemma 2.2], we have that, for any finite $J \subset \mathbb{N}$,
1. $E_J$ is an open, dense and connected subset of $H^s(\mathbb{T}^d)$;
2. $H^s(\mathbb{T}^d) \setminus E_J$ is a real analytic variety. This implies that for all $V \in H^s(\mathbb{T}^d) \setminus E_J$, there are $r > 0$ and real analytic functions $f_1, \ldots, f_s$ on the open ball $B(V,r)$ such that
$$ (i)\ \ B(V,r) \setminus E_J \subset \bigcup_{i=1}^{s} f_i^{-1}(0), \qquad (ii)\ \ \forall i \in [\![1,s]\!], \ f_i|_{B(V,r)} \not\equiv 0. \tag{13.1.2} $$
Recalling Definition 1.2.2 we prove this further lemma.

Lemma 13.1.3. Let $\mathcal{P}$ be the set defined in (1.2.34). The subset $E_J \cap \mathcal{P} \subset H^s(\mathbb{T}^d)$ is connected and $C^\infty$-dense open in $\mathcal{P}$.

Proof. We first prove that $E_J \cap \mathcal{P}$ is $C^\infty$-dense open in $\mathcal{P}$. By (13.1.2),
$$ \bigcap_{i=1}^{s} \big( B(V,r) \setminus f_i^{-1}(0) \big) \subset B(V,r) \cap E_J. \tag{13.1.3} $$
By Lemma 13.1.2 each set $B(V,r) \setminus f_i^{-1}(0)$, $i = 1, \ldots, s$, is $C^\infty$-dense open in $B(V,r)$ (since $f_i|_{B(V,r)} \not\equiv 0$), as well as their intersection, and therefore (13.1.3) implies that $B(V,r) \cap E_J$ is $C^\infty$-dense in $B(V,r)$. Thus $E_J$ is $C^\infty$-dense open in $H^s(\mathbb{T}^d)$. Finally, since $\mathcal{P}$ is an open subset of $H^s(\mathbb{T}^d)$, the set $E_J \cap \mathcal{P}$ is $C^\infty$-dense open in $\mathcal{P}$.

There remains to justify that $E_J \cap \mathcal{P}$ is connected. Let $V_0, V_1 \in E_J \cap \mathcal{P}$. Since $E_J$ is an open connected subset of $H^s(\mathbb{T}^d)$, it is arcwise connected: there is a continuous path $\gamma : [0,1] \to E_J$ such that $\gamma(0) = V_0$, $\gamma(1) = V_1$. Notice that, if $V \in E_J$, then, for all $m \in \mathbb{R}$, the potential $V + m \in E_J$. Let $\lambda_0(\gamma(t))$ denote the smallest eigenvalue of $-\Delta + \gamma(t)$; the map $t \mapsto \lambda_0(\gamma(t))$ is continuous. Since $V_0, V_1 \in \mathcal{P}$, we have $\lambda_0(\gamma(0)) > 0$ and $\lambda_0(\gamma(1)) > 0$. Choose any continuous map $\beta : [0,1] \to (0,+\infty)$ such that $\beta(0) = \lambda_0(\gamma(0))$ and $\beta(1) = \lambda_0(\gamma(1))$, and define $m : [0,1] \to \mathbb{R}$ by $m(t) := \beta(t) - \lambda_0(\gamma(t))$. Then $\gamma + m$ is a continuous path in $E_J \cap \mathcal{P}$ connecting $V_0$ and $V_1$. In conclusion, $E_J \cap \mathcal{P}$ is arcwise connected.

We have the following result, which is Lemma 2.3 in [96], with some new estimates on the eigenfunctions, i.e., items (iii)–(iv).

Lemma 13.1.4. Fix $J \subset \mathbb{N}$, $J$ finite. There is a sequence $(q_n)_{n \in \mathbb{N}}$ of $C^\infty$ positive potentials with the following properties:
(i) $\forall n \in \mathbb{N}$, the potential $q_n$ is in $E_J$. More precisely, for each $j \in J$, the sequence of eigenvalues $(\lambda_{n,j})_n := (\lambda_j(q_n))_n$ converges to some $\lambda_j > 0$, with $\lambda_j \neq \lambda_k$ if $j, k \in J$, $j \neq k$.
(ii) Let $\pm\Psi_{n,j}$ be the eigenfunctions of $-\Delta + q_n(x)$ with $\|\Psi_{n,j}\|_{L^2} = 1$. For each $j \in J$, the sequence $(\Psi_{n,j})_n \to \Psi_j$ weakly in $H^1(\mathbb{T}^d)$, hence strongly in $L^2(\mathbb{T}^d)$, where the functions $\Psi_j \in L^\infty(\mathbb{T}^d)$ have disjoint essential supports.
(iii) For each $j \in J$, the sequence $(\Psi_{n,j})_n \to \Psi_j$ strongly in $L^q(\mathbb{T}^d)$ for any $q \geq 2$.
(iv) For any $\chi \in L^\infty(\mathbb{T}^d)$ and all $j, k \in J$,
$$ \lim_{n \to \infty} \int_{\mathbb{T}^d} \chi(x)\, \Psi_{n,j}^2(x)\, \Psi_{n,k}^2(x)\, dx = \delta_k^j \int_{\mathbb{T}^d} \chi(x)\, \Psi_j^4(x)\, dx, $$
where $\delta_k^j$ is the Kronecker delta with values $\delta_k^j := 0$ if $k \neq j$, and $\delta_j^j := 1$.
Proof. Let M 2 N be such that J ŒŒ0; M . It is enough to prove the lemma for J D ŒŒ0; M . We recall the construction in Lemma 2.3 of [96]. Choose disjoint open balls Bj WD B.xj ; rj /, 0 j M , of decreasing radii r0 > > rM in such a way that, denoting by j the smallest Dirichlet eigenvalue of on Bj , 0 < < M < .2/ 0 ; where .2/ 0 is the second Dirichlet eigenvalue of on B0 . Define a sequence of C 1 positive potentials such that 8 M ˆ 0 with j ¤ k for j ¤ k : (13.1.18) Moreover, by Lemma 13.1.4-(i)-(ii), for all k 2 S, j 2 ŒŒ0; M \ Sc , the matrix elements (recall (1.2.9), (1.2.10), and that j ¤ k) k 3 2 2 B .qn ; 1/ j D 1 ; ‰ 1 (13.1.19) ‰ n;j n;k n;k ! 0 as n ! C1 : L2 2 n;j
In addition, by (1.2.9), (13.1.11), and (13.1.18), we have det A .qn ; 1/ D
Y
!2 1 n;j
n!C1
det G.qn ; 1/ !
j 2S
Y
!2 1 j
DW 1 > 0 :
j 2S
(13.1.20) Therefore .Fj;k; .qn ; 1// defined in (13.1.15) converges to 1 .j C k /, and since .j /j 2ŒŒ0;M are distinct and strictly positive, 1 .j C k / ¤ 0 for .j; k; / 2 IM . This implies (13.1.17). We have the following corollary. Corollary 13.1.14. For each potential V .x/ 2 NS;M (defined in (13.1.16)), the set n o \ a 2 H s Td W Fj;k; .V; a/ ¤ 0 GV;S;M WD .j;k;/2IM
is C 1 -dense in H s .Td /. Proof. By Lemma 13.1.13, for each potential V 2 NS;M , for each .j; k; / 2 IM , we have that Fj;k; .V; 1/ ¤ 0. Since the function a 7! Fj;k; .V; a/ is real analytic on H s .Td /, Lemma 13.1.2, and the fact that IM is finite, imply that GV;S;M is a C 1 -dense open subset of H s .Td /. Proof of Proposition 13.1.12 concluded. The set GM defined in (13.1.13) is clearly open. Moreover GM is C 1 -dense in .ES \ P / H s .Td /, by Corollary 13.1.14, and because NS;M is C 1 -dense in ES \ P . Hence GM is a C 1 -dense open subset of .ES \ P / H s .Td /. Since ES is C 1 -dense open in P , the set GM is a C 1 -dense open subset of P H s .Td /. By Corollary 13.1.10, so is GM \ G .2/ , and by (13.1.14), we deduce the last claim of Proposition 13.1.12. Step 4. Genericity of finitely many first- and second-order Melnikov conditions (1.2.7) and (1.2.16)–(1.2.19). Let L; M 2 N with S ŒŒ0; M . Consider the following Conditions: (C1) N ` C j ¤ 0; 8` 2 ZS ; j`j L; j 2 Sc , S c c (C2) `C N j k ¤ 0; 8.`; j; k/ ¤ .0; j; j / 2 Z .ŒŒ0; M \S /S ; j`j L,
(C3) N ` C j C k ¤ 0; 8.`; j; k/ 2 ZS .ŒŒ0; M \ Sc / Sc ; j`j L . Note that (C1), (C2), (C3) correspond to the Melnikov conditions (1.2.7) and (1.2.16)–(1.2.19). .2/ .3/ We denote by EL.1/ , respectively EL;M , EL;M , the set of potentials V 2 ES \ P satisfying conditions (C1), respectively (C2), (C3).
Lemma 13.1.15. Let L; M 2 N. The set of potentials ˚ .2/ .3/ EL;M WD EL.1/ \ EL;M \ EL;M D V 2 ES \ P W (C1),(C2),(C3) hold (13.1.21) is a C 1 -dense open subset of ES \ P , thus of P . Proof. First note that for any potential q 2 P , the Weyl asymptotic formula about the distribution of the eigenvalues of C q.x/ implies that C1 j 1=d j C2 j 1=d ;
8j 2 N ;
(13.1.22)
for some positive constants C1 ; C2 , which is uniform on some open neighborhood Bq of q. Hence Condition (C1) above may be violated by some V 2 Bq for j d only. Similarly, Conditions (C 2) and (C 3) may be violated by some V 2 C.jjL/ N d N / only. Hence the inequalities in Conditions (C1)-(C 3) Bq for k C.M C .jjL/ above are locally finitely many so that it is enough to check that for any ` 2 ZjSj , j 2 Sc , k 2 Sc , the sets ˚ .1/ E`;j WD V 2 ES \ P W N ` C j ¤ 0 ; ˚ .2/ E`;j;k WD V 2 ES \ P W N ` C j k ¤ 0 with ` ¤ 0 or j ¤ k (13.1.23) ˚ .3/ E`;j;k WD V 2 ES \ P W N ` C j C k ¤ 0 are C 1 -dense open in ES \ P . The sets in (13.1.23) are open, since each j de.2/ pends continuously on V 2 ES \ P . We now prove the C 1 -density of E`;j;k , with .`; j; k/ ¤ .0; j; j /. Notice that the map ‡j;k W ES[fj g[fkg \ P ! RjSjC2 ;
V 7! ‡j;k .V / WD .; N j ; k / ;
is real analytic and, by Remark 13.1.8, the differential d ‡j;k .V / is onto for any V .1/ belonging to some C 1 -dense open subset NS[fj of H s .Td /. Hence the map g[fkg V 7! N ` C j k is real analytic on the connected open set ES[fj g[fkg \ P and does not vanish everywhere for .`; j; k/ ¤ .0; j; j /. Therefore Lemma 13.1.2 .2/ implies that E`;j;k is C 1 -dense open in ES[fj g[fkg \ P , hence also in ES \ P . The .1/ .3/ and E`;j;k . same arguments can be applied to E`;j
Step 5. Genericity of the Diophantine conditions (1.2.6) and (1.2.8) of the firstorder Melnikov conditions (1.2.7) and the second-order Melnikov conditions (1.2.16)–(1.2.19). Consider the set of .V; a/ defined by \ .2/ G .3/ WD NS.1/ \ P H s Td G ; (13.1.24) where NS.1/ is the set of potentials defined in (13.1.8) and G .2/ is the set of .V; a/ defined in (13.1.12) (for which the twist condition (1.2.12) holds).
Lemma 13.1.16. G .3/ is a C 1 -dense open subset of P H s .Td /. Proof. By Lemma 13.1.6 and Corollary 13.1.10.
Recall also that the map W N ES \ P ! RjSj ;
V 7! .V N / WD .j .V //j 2S
defined in (13.1.9) is real analytic, and, by Lemma 13.1.7, for any V 2 NS.1/ , the differential d .V N /jE is an isomorphism, where E is the jSj-dimensional subspace of 1 d C .T / defined in Lemma 13.1.6. We fix some .VN ; a/ N 2 G .3/ and, according to the decomposition H s Td D E ˚ F; where F WD E ?L2 \ H s Td ; we write uniquely VN D vN 1 C vN 2 ;
vN 1 2 E; vN 2 2 F ;
i.e., vN 1 is the projection of VN on E and vN 2 is the projection of VN on F . Lemma 13.1.17. (i) There are open balls B1 E (the subspace E ' RjSj ), B2 F H s .Td /, centered respectively at vN 1 2 E, vN 2 2 F , such that, for all v2 2 B2 , the map uv2 W B1 E ! RjSj ;
v1 7! uv2 .v1 / WD .v N 1 C v2 /
is a C 1 diffeomorphism from B1 onto its image .B N 1 C v2 / DW Ov2 , which is an open-bounded subset of RjSj , the closure of which is included in .0; C1/jSj . (ii) There is a constant KVN > 0 such that the inverse functions jSj !E u1 v2 ./W Ov2 R
are KVN -Lipschitz continuous. (iii) There is an open ball B H s .Td / centered at aN such that (possibly after N has reducing B1 and B2 ) the neighborhood UVN ;aN WD .B1 C B2 / B of .VN ; a/ .3/ closure contained in G , and there is a constant CVN ;aN > 0 such that (13.1.25) kakL1 1 C A 1 CVN ;aN ; 8.V; a/ 2 UVN ;aN ; where A is the Birkhoff matrix in (1.2.9) and (1.2.10). Proof. Since VN D vN 1 C vN 2 is in NS.1/ , the differential d . N vN 1 C v2 /jE is an isomorphism and the local inversion theorem implies item (i) of the lemma. Items (ii)–(iii) are straightforward taking B1 and B2 small enough.
In the sequel we shall always restrict to the neighborhood UVN ;aN of .VN ; a/ N 2 G .3/ provided by Lemma 13.1.17-(iii). For any j 2 N, v2 2 B2 , we consider the C 1 function (13.1.26) „j W Ov2 RjSj ! R; ! 7! „j .!/ WD j u1 v2 .!/ C v2 : Note that for any j 2 S, it results „j .!/ D !j . Lemma 13.1.18. There is a constant KN WD KN VN > 0 such that, for any potential N v2 2 B2 F H s .Td /, each function „j defined in (13.1.26) is K-Lipschitz 0 continuous. Moreover there are constants C; C > 0, such that, for all v2 2 B2 , Cj 1=d „j .!/ C 0 j 1=d ;
8j 2 N; ! 2 Ov2 :
(13.1.27)
Proof. By (13.1.1), j D 2j , and the fact that ˛ WD
inf
inf j .V / > 0
V 2B1 CB2 j 2N
we get that 8V; V 0 2 B1 C B2 ;
ˇ ˇ ˇj .V / j .V 0 /ˇ 1 V V 0 s : H 2˛
(13.1.28)
Since, by Lemma 13.1.17 the functions u1 v2 are KVN -Lipschitz continuous, (13.1.28) KN implies that the function „j defined in (13.1.26) is V -Lipschitz continuous. The 2˛ bound (13.1.27) follows by (13.1.22). Notice that, in view of (8.1.8), the set M WD MV;a NnS associated to .V; a/ in Lemma 8.1.1 has an upper bound C.V; a/ that depends only on kA 1 k and kakL1 . Hence M can be taken constant for all .V; a/ in a neighborhood of .VN ; a/, N namely, by (13.1.25), (13.1.29) 9MN 2 N such that; 8.V; a/ 2 UVN ;aN ; MV;a [ S 0; MN : N where the constant KN is defined in Lemma Lemma 13.1.19. Fix an integer LN 4K, 13.1.18 and let MN 2 N be as in (13.1.29). Given a potential v2 2 B2 F H s .Td /, we define, for any > 0, the subset of potentials G . ; v2 / B1 E C 1 of all the v1 2 B1 such that, for V D v1 C v2 , the following Diophantine conditions hold: ; 8` 2 ZjSj nf0g I 1. jN `j h`i0 ˇ ˇ ˇ ˇ X jSj.jSjC1/ ˇ ˇ 2. ˇn C pij i j ˇ ; 8.n; p/ 2 Z Z 2 n f.0; 0/g I 0 ˇ ˇ hpi i;j 2S;i j
13.1 Genericity of nonresonance and nondegeneracy conditions
3. jN ` C j j 4.
; h`i0
307
N j 2 Sc WD N n S I 8` 2 ZjSj ; j`j L;
; 8.`; j; k/ 2 ZjSj 0; MN \ Sc Sc ; 0 h`i .`; j; k/ ¤ .0; j; j /; j`j LN ; (13.1.30) ; 8.`; j; k/ 2 ZjSj 0; MN \ Sc Sc ; jN ` C j C k j h`i0 j`j LN : (13.1.31) jN ` C j k j
Then the Lebesgue measure (on the finite-dimensional subspace E ' RjSj ) ˇ !ˇ ˇ ˇ [ ˇ ˇ G . ; v2 / ˇ D 0 : (13.1.32) ˇB1 n ˇ ˇ >0
Proof. In this lemma we denote by mLeb the Lebesgue measure in RjSj . i) Let F1; be the set of the Diophantine frequency vectors ! 2 Ov2 such that j! `j
; h`i0
8` 2 ZjSj nf0g :
(13.1.33)
It is well known that, for 0 > jSj 1, mLeb .Ov2 nF1; / D O. /. ii) Let F2; be the set of the frequency vectors ! 2 Ov2 such that ˇ ˇ ˇ ˇ X jSj.jSjC1/ ˇ ˇ pij !i !j ˇ ; 8.n; p/ 2 Z Z 2 n f.0; 0/g : ˇn C ˇ hpi0 ˇ i;j 2S;i j
(13.1.34) Arguing as in Lemma 3.3.1 we deduce that mLeb .Ov2 nF2; / D O. /, see also Lemma 6.3 in [23]. iii) Let F3; be the set of the frequency vectors ! 2 Ov2 such that j! ` C „j .!/j
; h`i0
N j 2 Sc : 8` 2 ZjSj ; j`j L;
(13.1.35)
Let us prove that mLeb .Ov2 nF3; / D O. /. Define on Ov2 the map f`;j .!/ WD ! ` C „j .!/. By (13.1.27), there is a constant C > 0 such that if j C j`jd , N continuthen f`;j 1 on Ov2 . Assume j C j`jd . Since „j is K-Lipschitz N N ous by Lemma 13.1.18, we have, for j`j L 4K, ` 3j`j ` @! .! ` C „j .!// D j`j C @! „j .!/ : j`j j`j 4
N Hence, for j`j L, ( mLeb
! 2 Ov2 W 9j 2 S ; jf`;j .!/j h`i0
)!
c
Hence mLeb .Ov2 nF3; / C
X N j`jL
j`j0 C1d
h`i0 C1 C C1d : 0 h`i
C j`jd
C ;
provided that 0 > d C jSj 1. iv) Let F4; , resp. F5; , be the set of the frequency vectors ! 2 Ov2 such that j! ` C „j .!/ „k .!/j ; h`i0 8.`; j; k/ 2 ZjSj 0; MN \ Sc Sc ; .`; j; k/ ¤ .0; j; j /; j`j LN ;
(13.1.36)
respectively jN ` C „j .!/ C „k .!/j ; h`i0 8.`; j; k/ 2 ZjSj 0; MN \ Sc Sc ; j`j LN :
(13.1.37)
Define on Ov2 the map f`;j;k .!/ WD !`C„j .!/„k .!/. By (13.1.27), there is a constant C such that, if j MN and k C.j`jd C MN /, then f`;j;k 1 on N Ov2 . Moreover, since „j , „k are K-Lipschitz continuous by Lemma 13.1.18, N N we deduce that, for j`j L 4K, ` ` ` @! .! ` C „j .!/ „k .!// D j`j C @! „j .!/ @! „k .!/ j`j j`j j`j j`j : 2 N Therefore, for j`j L,
ˇ ˇ mLeb ! 2 Ov2 W 9.j; k/ 2 0; MN \ Sc Sc ; ˇf`;j;k .!/ˇ h`i0 C MN j`jd C MN h`i0 C1 C 0 MN : h`i0 C1d Hence, as in iii), mLeb .Ov2 nF4; / C , provided that 0 > d C jSj 1. We have the similar estimate mLeb .Ov2 nF5; / C .
We have proved that mLeb .Ov2 nFi; / D O. / for i D 1; : : : ; 5. We conclude that F . / WD \5iD1 Fi; satisfies mLeb .Ov2 nF . //
5 X
mLeb .Ov2 nFi; /
i D1
D O. / hence mLeb Ov2 n
[
!
F . / D 0 :
>0
Finally, since by Lemma 13.1.17-(i) the map uv2 is a diffeomorphism between B1 and Ov2 , the measure mLeb B1 nG . ; v2 ; a/ D mLeb u1 D 0: v2 Ov2 nF . / This completes the proof of the lemma.
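The measure estimates in steps i)–iv) all follow the same pattern: for each fixed $\ell$ (and $j, k$), the bad frequencies lie in a slab of width $O(\gamma/\langle\ell\rangle^{\tau_0})$, and summing over $\ell$ gives a total measure $O(\gamma)$. The sketch below (ours, not part of the proof) illustrates this numerically for the first condition (13.1.33) by Monte Carlo sampling in a sample frequency box; the box, the exponent and the truncation of the lattice are arbitrary choices.

```python
# Rough Monte Carlo illustration (ours) of m_Leb(O \ F_{1,gamma}) = O(gamma):
# frequencies omega in a box violating |omega . ell| >= gamma / <ell>^tau0
# for some 0 < |ell| <= L form a set of measure roughly proportional to gamma.
import itertools
import numpy as np

rng = np.random.default_rng(0)
nS, tau0, L, samples = 2, 3, 10, 20000

ells = [np.array(e) for e in itertools.product(range(-L, L + 1), repeat=nS)
        if any(e)]

def bad_fraction(gamma):
    omega = rng.uniform(1.0, 2.0, size=(samples, nS))   # sample box O = [1,2]^|S|
    bad = np.zeros(samples, dtype=bool)
    for ell in ells:
        thr = gamma / max(np.linalg.norm(ell), 1.0) ** tau0
        bad |= np.abs(omega @ ell) < thr
    return bad.mean()

for gamma in (0.1, 0.05, 0.025):
    print(gamma, bad_fraction(gamma))   # excluded fraction shrinks linearly in gamma
```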
Step 6. Conclusion: Proof of Theorem 1.2.3. For any .VN ; a/ N 2 G .3/ (see (13.1.24)), set introduced in (13.1.24), we define \ d \ s GVN ;aN WD EL; GMN UVN ;aN ; (13.1.38) N M N H T where UVN ;aN G .3/ is the neighborhood of .VN ; a/ N fixed in Lemma 13.1.17, and the N sets EL; N M N , GM N are defined respectively in (13.1.21) and (13.1.13) with integers L, MN , associated to .VN ; a/, N that are fixed in Lemma 13.1.19 and (13.1.29). Lemma 13.1.20. The set GVN ;aN is C 1 -dense open in UVN ;aN . Proof. By Lemma 13.1.15 and Proposition 13.1.12. Finally, we define the set G of Theorem 1.2.3 as [ G WD GVN ;aN ;
(13.1.39)
.VN ;a/2 N G .3/
where G .3/ is defined in (13.1.24). Lemma 13.1.21. G is a C 1 -dense open subset of P H s .Td /. Proof. Since GVN ;aN is open and C 1 -dense in UVN ;aN by Lemma 13.1.20, the set G defined in (13.1.39) is open and C 1 -dense in [ UVN ;aN D G .3/ .VN ;a/2 N G .3/
(recall that UVN ;aN G .3/ by Lemma 13.1.17). Now, by Lemma 13.1.16, G .3/ is a C 1 -dense open subset of P H s .Td / and therefore G is a C 1 -dense open subset of P H s .Td /.
The next lemma completes the proof of Theorem 1.2.3. Lemma 13.1.22. Let az 2 H s .Td /, vz2 2 E ?L2 \ H s .Td /, where E is the finitedimensional linear subspace of C 1 .Td / defined in Lemma 13.1.6. Then ˇn oˇ ˇ ˇ z (13.1.40) ˇ v1 2 EW v1 C vz2 ; az 2 G nG ˇ D 0 ; where Gz is defined in (1.2.35). Proof. We may suppose that
˚ Wvz2 ;za WD v1 2 EW v1 C vz2 ; az 2 G ¤ ; ;
otherwise (13.1.40) is trivial. Since G is open in H s .Td /, the set Wvz2 ;za is an open subset of E and, in order to deduce (13.1.40), it is enough to prove that, for any vz1 2 Wvz2 ;za , there is an open neighborhood Wvz0 1 Wvz2 ;za of vz1 such that ˇ˚ ˇˇ ˇ z … Gz ˇ D 0 : (13.1.41) ˇ v1 2 Wvz0 1 W v1 C vz2 ; a Since .z v1 C vz2 ; az/ 2 G , by the definition of G in (13.1.39), there is .VN ; a/ N 2 G .3/ such that \ d \ (13.1.38) s vz1 C vz2 ; az 2 GVN ;aN D EL; GMN UVN ;aN ; N M N H T where
UVN ;aN D .B1 C B2 / B is defined in Lemma 13.1.17. Define ˚ Wvz0 1 WD v1 2 EW v1 C vz2 ; az 2 GVN ;aN B1 : Since GVN ;aN is open, Wvz0 1 Wvz2 ;za is an open neighborhood of vz1 2 Wvz2 ;za . Now for all .V; a/ 2 GVN ;aN , the twist condition (1.2.12) and the nondegeneracy properties (1.2.21) and (1.2.22) hold and, by the definition of EL; N M N in (13.1.21) and by (13.1.29), there is 0 D 0 .V; a/ > 0 such that (1.2.7) and (1.2.16)–(1.2.19) hold N Thus, recalling the definition of the sets G . ; vz2 / in Lemma 13.1.19 for all j`j L. z and G in (1.2.35), and (13.1.29), we have ! \ [ GN G ; vz2 Gz ; V ;aN
>0
so that n v1 2
Wvz0 1 W
! o [ v1 C vz2 ; az … Gz B1 n G ; vz2 : >0
Hence, by (13.1.32), the measure estimate (13.1.41) holds. This completes the proof of (13.1.40).
Appendix A
Hamiltonian and reversible PDEs

In this appendix we first introduce the concept of Hamiltonian and/or reversible vector field. Then we briefly review the Hamiltonian and/or reversible structure of some classical PDEs.
A.1 Hamiltonian and reversible vector fields

Let $E$ be a real Hilbert space with scalar product $\langle\,,\,\rangle$. Endow $E$ with a constant exact symplectic 2-form
$$ \Omega(z, w) = \langle \bar J z, w \rangle, \quad \forall z, w \in E, $$
where $\bar J : E \to E$ is a nondegenerate, antisymmetric linear operator. Then, given a Hamiltonian function $H : D(H) \subset E \to \mathbb{R}$, we associate the Hamiltonian system
$$ u_t = X_H(u) \quad \text{where} \quad dH(u)[\,\cdot\,] = \Omega(X_H(u), \cdot\,) \tag{A.1.1} $$
formally defines the Hamiltonian vector field $X_H$. The vector field $X_H : E_1 \subset E \to E$ is, in general, well defined and smooth only on a dense subspace $E_1 \subset E$. A continuous curve $[t_0, t_1] \ni t \mapsto u(t) \in E$ is a solution of (A.1.1) if it is $C^1$ as a map from $[t_0, t_1]$ to $E_1$ and
$$ u_t(t) = X_H(u(t)) \ \text{ in } E, \quad \forall t \in [t_0, t_1]. $$
If, for all $u \in E_1$, there is a vector $\nabla_u H(u) \in E$ such that the differential can be written as
$$ dH(u)[h] = \langle \nabla_u H(u), h \rangle, \quad \forall h \in E_1 \tag{A.1.2} $$
(since $E_1$ is not in general a Hilbert space with the scalar product $\langle\,,\,\rangle$, (A.1.2) does not follow from the Riesz theorem), then the Hamiltonian vector field $X_H : E_1 \to E$ takes the form
$$ X_H = J \nabla_u H, \qquad J := \bar J^{-1}. \tag{A.1.3} $$
In the PDE applications that we shall review below, usually $E = L^2$ and the dense subspace $E_1$ belongs to the Hilbert scale formed by the Sobolev spaces of periodic functions
$$ H^s := \Big\{ u(x) = \sum_{j \in \mathbb{Z}^d} u_j e^{\mathrm{i} j \cdot x} : \ \|u\|_s^2 := \sum_{j \in \mathbb{Z}^d} |u_j|^2 \big(1 + |j|^{2s}\big) < +\infty \Big\}
for s 0, or by the spaces of analytic functions ( ) X X ;s ij x 2 2 2jj j 2s uj e W kuk;s WD juj j e H WD u.x/ D 1 C jj j < C1 j 2Zd
j 2Zd
for > 0. For s > d=2 the spaces H s and H ;s are an algebra with respect to the product of functions. We refer to [102] for a general functional setting of Hamiltonian PDEs on scales of Hilbert spaces. Reversible vector field. A vector field X is reversible if there exists an involution S of the phase space, i.e a linear operator of E satisfying S 2 D Id, such that X ı S D S ı X :
(A.1.4)
Such a condition is equivalent to the relation ˆt ı S D S ı ˆt for the flow ˆt associated to the vector field X . For a reversible equation it is natural to look for “reversible” solutions u.t/ of ut D X.u/, namely such that u.t/ D S u.t/ : If S is antisymplectic, i.e., S D , then a Hamiltonian vector field X D XH is reversible if and only if the Hamiltonian H satisfies H ıS DH : Remark A.1.1. The possibility of developing KAM theory for reversible systems was first observed for finite-dimensional systems by Moser in [107], see [121] for a complete presentation. In infinite dimensions, the first KAM results for reversible PDEs were obtained in [130]. We now present some examples of Hamiltonian and/or reversible PDE.
A.2 Nonlinear wave and Klein–Gordon equations

We consider the NLW equation
$$ y_{tt} - \Delta y + V(x)\, y = f(x, y), \quad x \in \mathbb{T}^d := (\mathbb{R}/2\pi\mathbb{Z})^d, \ y \in \mathbb{R}, \tag{A.2.1} $$
with a real-valued multiplicative potential $V(x) \in \mathbb{R}$. If $V(x) = m$ is constant, (A.2.1) is also called a nonlinear Klein–Gordon equation.
The NLW equation (A.2.1) can be written as the first-order Hamiltonian system
$$ \frac{d}{dt}\begin{pmatrix} y \\ p \end{pmatrix} = \begin{pmatrix} p \\ \Delta y - V(x)\,y + f(x,y) \end{pmatrix} = \begin{pmatrix} 0 & \mathrm{Id} \\ -\mathrm{Id} & 0 \end{pmatrix}\begin{pmatrix} \nabla_y H(y,p) \\ \nabla_p H(y,p) \end{pmatrix}, \tag{A.2.2} $$
where $\nabla_y H$, $\nabla_p H$ denote the $L^2(\mathbb{T}^d_x)$-gradients of the Hamiltonian
$$ H(y,p) := \int_{\mathbb{T}^d} \frac{p^2}{2} + \frac{(\nabla_x y)^2}{2} + V(x)\frac{y^2}{2} + F(x,y)\, dx $$
with potential density
$$ F(x,y) := -\int_0^y f(x,z)\, dz \qquad \text{and} \qquad \nabla_x y := (\partial_{x_1} y, \ldots, \partial_{x_d} y). $$
Thus for the NLW equation $E = L^2 \times L^2$ with $L^2 := L^2(\mathbb{T}^d, \mathbb{R})$, the symplectic matrix
$$ J = -\bar J = \begin{pmatrix} 0 & \mathrm{Id} \\ -\mathrm{Id} & 0 \end{pmatrix} $$
and $\langle\,,\,\rangle$ is the $L^2$ real scalar product. The variables $(y,p)$ are "Darboux coordinates".

Remark A.2.1. The transposed operator $\bar J^{\top} = -\bar J$ (with respect to $\langle\,,\,\rangle$) and $\bar J^{-1} = \bar J^{\top}$. The Hamiltonian $H$ is properly defined on the subspaces $E_1 := H^s \times H^s$, $s \geq 2$, so that the Hamiltonian vector field is a map
$$ J \nabla_u H : H^s \times H^s \to H^{s-2} \times H^{s-2} \subset L^2 \times L^2. $$
Note that the loss of two derivatives is due only to the Laplace operator and that the Hamiltonian vector field generated by the nonlinearity is bounded, because the composition operator
$$ y(x) \mapsto f(x, y(x)) \tag{A.2.3} $$
is a map of $H^s$ into itself. For this reason, the PDE (A.2.1) is semilinear. If the nonlinearity $f(x,u)$ is analytic in the variable $u$, and $H^{s_0}$, $s_0 > d/2$, in the space variable $x$, then the composition operator (A.2.3) is analytic from $H^{s_0}$ into itself. It is only finitely many times differentiable on $H^{s_0}$ if the nonlinearity $f(x,u)$ is only (sufficiently many times) differentiable with respect to $u$.

Remark A.2.2. The regularity of the vector field is relevant for KAM theory: for finite-dimensional systems, it has been rigorously proved that if the vector field is not sufficiently smooth, then all the invariant tori may be destroyed and only discontinuous Aubry–Mather invariant sets survive (see, e.g., [87]).
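As an illustration of the first-order formulation (A.2.2), the following sketch (ours) integrates $y_t = p$, $p_t = \Delta y - V(x)y + f(x,y)$ pseudo-spectrally on $\mathbb{T}^1$ with a leapfrog scheme; the potential, the sample cubic nonlinearity and the step sizes are arbitrary choices, and the discrete energy corresponds to $H(y,p)$ with $F(x,y) = y^4/4$.

```python
# Minimal pseudo-spectral sketch (ours) of the first-order system (A.2.2) on T^1.
import numpy as np

n = 128
x = 2 * np.pi * np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)          # integer Fourier modes
V = 1.0 + 0.2 * np.cos(x)                 # sample positive potential
f = lambda x, y: -y**3                    # sample nonlinearity (an assumption)

def laplacian(y):
    return np.fft.ifft(-(k**2) * np.fft.fft(y)).real

def force(y):                             # Delta y - V y + f(x, y)
    return laplacian(y) - V * y + f(x, y)

def leapfrog(y, p, dt, steps):
    p = p + 0.5 * dt * force(y)           # half kick
    for _ in range(steps):
        y = y + dt * p                    # drift
        p = p + dt * force(y)             # kick
    return y, p - 0.5 * dt * force(y)     # undo the extra half kick

def energy(y, p):                         # discrete Hamiltonian H(y, p)
    yx = np.fft.ifft(1j * k * np.fft.fft(y)).real
    return np.mean(0.5 * (p**2 + yx**2 + V * y**2) + 0.25 * y**4) * 2 * np.pi

y0, p0 = 0.1 * np.cos(x), np.zeros(n)
y1, p1 = leapfrog(y0, p0, dt=1e-3, steps=5000)
print(energy(y0, p0), energy(y1, p1))     # approximately conserved
```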
If the potential density F .x; y; rx y/ in (A.2.2) also depends on the first-order derivative rx y, we obtain a quasilinear wave equation. For simplicity we write explicitly only the equation in dimension d D 1. Given the Hamiltonian Z 2 1 p H.y; p/ WD C yx2 C V .x/y 2 C F .x; y; yx / dx 2 T 2 we derive the Hamiltonian wave equation yt t yxx C V .x/y D f .x; y; yx ; yxx / with nonlinearity (denoting by .x; y; / the independent variables of the potential density .x; y; / 7! F .x; y; /) given by d ˚ .@ F /.x; y; yx / dx D .@y F /.x; y; yx / C .@ x F /.x; y; yx / C .@y F /.x; y; yx /yx C .@ F /.x; y; yx /yxx :
f .x; y; yx ; yxx / D .@y F /.x; y; yx / C
Note that f depends on all the derivatives y; yx ; yxx , but it is linear in the secondorder derivatives yxx , i.e., f is quasilinear. The nonlinear composition operator y.x/ 7! f .x; y.x/; yx .x/; yxx .x// maps H s ! H s2 , i.e., loses two derivatives. Derivative wave equations. If the nonlinearity f .x; y; yx / depends only on firstorder derivatives, then the Hamiltonian structure of the wave equation is lost (at least the usual one). However such an equation can admit a reversible structure. Consider the derivative wave, or, better called, derivative Klein–Gordon equation yt t yxx C my D f .x; y; yx ; yt /;
x 2 T;
where the nonlinearity also depends on the first-order space and time derivatives .yx ; yt /, and write it as the first-order system
d y p D : yxx my C f .x; y; yx ; p/ dt p Its vector field X is reversible (see (A.1.4)) with respect to the involution S W .y; p/ ! .y; p/;
resp: S W .y.x/; p.x// ! .y.x/; p.x// ;
assuming the reversibility condition f .x; y; yx ; p/ D f .x; y; yx ; p/;
resp: f .x; y; yx ; p/ D f .x; y; yx ; p/ :
KAM results have been obtained for reversible derivative wave equations in [19].
A.3 Nonlinear Schr¨odinger equation
A.3 Nonlinear Schrödinger equation

Consider the Hamiltonian Schrödinger equation
$$ \mathrm{i} u_t - \Delta u + V(x)\, u = f(x, u), \quad x \in \mathbb{T}^d, \ u \in \mathbb{C}, \tag{A.3.1} $$
where f .x; u/ D @uN F .x; u/ and the potential F .x; u/ 2 R, 8u 2 C, is real-valued. The NLS equation (A.3.1) can be written as the infinite-dimensional complex Hamiltonian equation Z ut D iruN H.u/; H.u/ WD jruj2 C V .x/juj2 F .x; u/ dx : Td
Actually (A.3.1) is a real Hamiltonian PDE. In the variables .a; b/ 2 R2 , real and imaginary part of u D a C ib ; denoting the real-valued potential W .a; b/ WD F .x; a C ib/ so that 1 @uN F .x; a C ib/ WD .@a C i@b /W .a; b/ ; 2 the NLS equation (A.3.1) reads 1 0 1
b V .x/b C W .a; b/ @ b 1 0 Id d a ra H.a; b/ C B 2 D@ AD rb H.a; b/ 1 dt b 2 Id 0 a C V .x/a @a W .a; b/ 2 with real-valued Hamiltonian H.a; b/ WD H.a C ib/ and ra ; rb denote the L2 real gradients. The variables .a; b/ are “Darboux coordinates”. A simpler and often studied Hamiltonian pseudodifferential model equation is iut u C V u D @uN F .x; u/;
x 2 Td ; u 2 C ;
(A.3.2)
where the convolution potential $V * u$ is the Fourier multiplier operator
$$ u(x) = \sum_{j \in \mathbb{Z}^d} u_j e^{\mathrm{i} j \cdot x} \ \mapsto\ V * u := \sum_{j \in \mathbb{Z}^d} V_j\, u_j e^{\mathrm{i} j \cdot x} $$
with real-valued Fourier multipliers Vj 2 R. The Hamiltonian of (A.3.2) is Z jruj2 C .V u/ uN F .x; u/ dx : H.u/ WD Td
Also for the NLS equation (A.3.1), the Hamiltonian vector field loses two derivatives because of the Laplace operator . On the other hand, the nonlinear Hamiltonian
vector field i@uN F .x; u/ is bounded, so that the PDE (A.3.1), as well as (A.3.2), is a semilinear equation. If the nonlinearity depends also on first- and second-order derivatives, we have, respectively, derivative NLS (DNLS) and fully nonlinear (or quasilinear) Schr¨odinger equations. For simplicity we present the model equations only in dimension d D 1. For the DNLS equation iut C uxx D f .x; u; ux / (A.3.3) the Hamiltonian structure is lost (at least the usual one). However (A.3.3) is reversible with respect to the involution S W u 7! uN (see (A.1.4)) if the nonlinearity f satisfies the condition f .x; u; N uN x / D f .x; u; ux / : KAM results for DNLS equations have been proved in [130]. On the other hand, fully nonlinear, or quasilinear, perturbed NLS equations like iut D uxx C "f .!t; x; u; ux ; uxx / may be both reversible or Hamiltonian. They have been considered in [65].
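The convolution potential in (A.3.2) acts diagonally on Fourier modes, which makes the model convenient for numerical experiments. The sketch below (ours) implements $V * u$ as a Fourier multiplier and evolves a sample cubic NLS $\mathrm{i}u_t = -u_{xx} + V*u + |u|^2 u$ on $\mathbb{T}^1$ with a split-step scheme; the multipliers $V_j$ and the nonlinearity are illustrative choices, not the ones of the monograph.

```python
# Sketch (ours): Fourier-multiplier convolution potential and split-step NLS on T^1.
import numpy as np

n = 256
x = 2 * np.pi * np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)
Vj = 1.0 / (1.0 + k**2)                   # sample real Fourier multipliers V_j

def conv_potential(u):                    # (V*u)(x) = sum_j V_j u_j e^{i j x}
    return np.fft.ifft(Vj * np.fft.fft(u))

def step(u, dt):
    # linear part: exact flow of i u_t = -u_xx + V*u, a multiplier in Fourier space
    u = np.fft.ifft(np.exp(-1j * dt * (k**2 + Vj)) * np.fft.fft(u))
    # nonlinear part: exact flow of i u_t = |u|^2 u, a pointwise phase rotation
    return u * np.exp(-1j * dt * np.abs(u)**2)

u = 0.1 * np.exp(1j * x) + 0.05 * np.exp(-2j * x)
mass0 = np.mean(np.abs(u)**2)
for _ in range(2000):
    u = step(u, dt=1e-3)
print(mass0, np.mean(np.abs(u)**2))       # L^2 mass is conserved by both substeps
```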
A.4 Perturbed KdV equations An important class of equations that arises in fluid mechanics concerns nonlinear perturbations ut C uxxx C @x u2 C N .x; u; ux ; uxx ; uxxx / D 0; of the KdV equation
x 2 T;
ut C uxxx C @x u2 D 0 :
(A.4.1) (A.4.2)
Here the unknown u.x/ 2 R is real-valued. If the nonlinearity N .x; u; ux ; uxx ; uxxx / does not depend on uxxx such an equation is semilinear. The most general Hamiltonian (local) nonlinearity is N .x; u; ux ; uxx ; uxxx / WD @x .@u f /.x; u; ux / @x ..@ux f /.x; u; ux // (A.4.3) which is quasilinear. In this case (A.4.1) is the Hamiltonian PDE Z 2 ux u3 ut D @x ru H.u/; H.u/ D C f .x; u; ux / dx ; 3 T 2 where ru H denotes the L2 .Tx / gradient.
(A.4.4)
Z u.x/ dx is a prime integral of (A.4.1)–(A.4.3) and a natural phase
The “mass” T
space is
H01 .Tx /
Z
WD u.x/ 2 H .T; R/W 1
u.x/dx D 0 : T
Thus for the Hamiltonian PDE (A.4.4) we have
Z E D L20 .T; R/ WD u 2 L2 .T; R/W u.x/ dx D 0 ; T
JN D @1 x ;
and h ; i is the L2 real scalar product. Note that J D JN 1 D @x see (A.1.3). The nonlinear Hamiltonian vector field is defined and smooth on the subspaces E1 D H0s .T/, s 3, because
N .x; u; ux ; uxx ; uxxx /W H0s .T/ 7! H0s3 .T/ L20 .T/ : Note that if the Hamiltonian density f D f .x; u/ does not depend on the first-order derivative ux , the nonlinearity in (A.4.3) reduces to N D @x Œ.@u f /.x; u/ and so the PDE (A.4.1)–(A.4.3) is semilinear. Birkhoff coordinates. The KdV equation (A.4.2) has very special algebraic properties: it possesses infinitely many analytic prime integrals in involution, i.e., pairwise commuting, and it is integrable in the strongest possible sense, namely it possesses global analytic action-angle variables, called the Birkhoff coordinates. The whole infinite-dimensional phase space is foliated by quasiperiodic and almost-periodic solutions. The quasiperiodic solutions are called the “finite gap” solutions. Kappeler and collaborators (see for example [97] and references therein) proved that there exists an analytic symplectic diffeomorphism ‰ 1 W H0N .T/ ! `2N C 1 .R/ `2N C 1 .R/ ; 2
2
from the Sobolev spaces H0N .T/ WD H N .T/ \ L20 .T/, N 0, to the spaces of coordinates .x; y/ 2 `2N C 1 .R/ `2N C 1 .R/ equipped with the canonical symplectic 2 2 X dxn ^ dyn , where form n1
n X `2s .R/ WD x WD .xn /n2N ; xn 2 RW
n1
o xn2 n2s < C1 ;
such that, for N D 1, the KdV Hamiltonian Z 2 ux u3 dx ; HKdV .u/ D 3 T 2
expressed in the new coordinates, i.e., K WD HKdV ı ‰, depends only on xn2 C yn2 , n 1 (actions). The KdV equation appears therefore, in these new coordinates, as an infinite chain of anharmonic oscillators, whose frequencies !n .I / WD @In K.I /; I D .I1 ; I2 ; : : :/; In D xn2 C yn2 =2 depend on their amplitudes in a nonlinear and real analytic fashion. This is the basis for studying perturbations of the finite gap solutions of KdV. Remark A.4.1. The existence of Birkhoff coordinates is much more than is needed for local KAM perturbation theory. As noted by Kuksin in [101] and [102], it is sufficient that the unperturbed torus is reducible, namely that the linearized equation at the quasiperiodic solution have, in a suitable set of coordinates, constant coefficients. Other integrable Hamiltonian PDEs that possess Birkhoff coordinates are the mKdV equation, see [98], ut C uxxx C @x u3 D 0;
x 2 T;
(A.4.5)
and the cubic 1-dimensional NLS equation, see [79], iut D uxx C juj2 u;
x 2 T:
(A.4.6)
Appendix B
Multiscale Step In this appendix we provide the proof of the multiscale step proposition B.2.4 proved in [24], which is used to prove Proposition 5.3.4. Notation. In this appendix we use the notation of [24], in particular be aware that the Definitions B.2.1, B.2.2, and B.2.3 below of N -good/bad matrix, regular/singular sites, and .A; N /-good/bad site are different with respect to those introduced in Section 5.3 that are used in the monograph. In order to give a self-contained presentation, we first prove the properties of the decay norms introduced in [24], recalled in Section 4.3.
B.1 Matrices with off-diagonal decay Let ei D ei.`'Cj x/ for i WD .`; j / 2 Zb WD Z Zd . In the vector space Hs D Hs .T Td I Cr / defined in (4.3.2) with D jSj, we consider the basis ek D ei ea ;
k WD .i; a/ 2 Zb I ;
(B.1.1)
1 ; : : : ; 0/ 2 Cr , a D 1; : : : ; r, denote the canonical basis of where ea WD .0; : : : ; „ƒ‚… Cr , and
at h
I WD f1; : : : ; rg : Then we write any u 2 Hs as uD
X
uk ek ;
uk 2 C :
k2Zb I
For B Zb I, we introduce the subspace n o s HB WD u 2 Hs W uk D 0 if k … B : s does not depend on s and will be denoted HB . We When B is finite, the space HB define
…B W Hs ! HB the L2 orthogonal projector onto HB .
320
B Multiscale Step
In what follows B; C; D; E are finite subsets of Zb I. We identify the space LB C of the linear maps LW HB ! HC with the space of matrices n o k0 k0 MB C WD M D .Mk /k 0 2B;k2C ; Mk 2 C according to the following usual definition. B Definition B.1.1. The matrix M 2 MB C represents the linear operator L 2 LC , if
8k 0 D .i 0 ; a0 / 2 B; k D .i; a/ 2 C;
0
…k Lek 0 D Mkk ek ;
0
where ek are defined in (B.1.1) and Mkk 2 C. Example.
The multiplication operator for
p.'; x/ q.'; x/ ; q.'; x/ p.'; x/
acting in Hs .T Td I C2 /, is represented by the matrix pi i 0 i0 i0 T WD .Ti /i;i 02Zb where Ti D .q/i i 0
qi i 0 pi i 0
(B.1.2)
and pi ; qi denote the Fourier coefficients of p.'; x/; q.'; x/. With the above notation, the set I D f1; 2g and .i 0 ;1/
T.i;1/ D pi i 0 ;
.i 0 ;2/
T.i;1/ D qi i 0 ;
.i 0 ;1/
T.i;2/ D .q/i i 0 ;
.i 0 ;2/
T.i;2/ D pi i 0 :
Notation. For any subset B of Zb I, we denote by B WD projZb B
(B.1.3)
the projection of B in Zb . 0 Given B B 0 , C C 0 Zb I and M 2 MB C 0 we can introduce the restricted matrices MCB WD …C MjHB ; MC WD …C M; M B WD MjHB : (B.1.4) If D projZb B 0 , E projZb C 0 , then we define MED
as MCB
where B WD .D I/ \ B 0 ; C WD .E I/ \ C 0 :
(B.1.5)
In the particular case D D fi 0 g, E WD fi g, i; i 0 2 Zb , we use the simpler notation Mi WD Mfi g M
i0
WD M
fi 0 g
.it is either a line or a group of 2; : : : ; r lines of M / ;
(B.1.6)
.it is either a column or a group of 2; : : : ; r columns of M / ; (B.1.7)
B.1 Matrices with off-diagonal decay
and
0
0
Mii WD Mfifig g I
321
(B.1.8)
0
Mii is a m m0 complex matrix, where m 2 f1; : : : ; rg (resp. m0 2 f1; : : : ; rg) is the cardinality of C (resp. of B) defined in (B.1.5) with E WD fi g (resp. D D fi 0 g). We endow the vector space of the m m0 , m; m0 2 f1; : : : ; rg, complex matrices with a norm j j such that jU W j jU kW j ; whenever the dimensions of the matrices make their multiplication possible, and jU j jV j if U is a submatrix of V . Remark B.1.2. The notation in (B.1.5), (B.1.6), (B.1.7), and (B.1.8) may be not very specific, but it is deliberate: it is convenient not to distinguish the index a 2 I, which is irrelevant in the definition of the s-norms, in Definition B.1.3. We also set the L2 operatorial norm B M WD C 0
sup
B M h C 0
h2HB ;h¤0
khk0
;
(B.1.9)
where k k0 WD k kL2 . Definition B.1.3 (s-norm). The s-norm of a matrix M 2 MB C is defined by X ŒM.n/2 hni2s (B.1.10) jM j2s WD n2Zb
where hni WD max.jnj; 1/, 8 ˇ i0ˇ ˇM ˇ < max i ŒM.n/ WD i i 0 Dn;i 2C ;i 0 2B :0
if n 2 C B (B.1.11) if n … C B ;
with B WD projZb B, C WD projZb C (see (B.1.3)). 0 It is easy to check that j js is a norm on MB C . It verifies j js j js 0 , 8s s , and ˇ ˇ ˇ B0 ˇ 0 0 8M 2 MB ; 8B B; C C; ˇM 0 C ˇ jM js : C s
The s-norm is designed to estimate the off-diagonal decay of matrices similar to the Toeplitz matrix, which represents the multiplication operator for a Sobolev function.
322
B Multiscale Step
Lemma B.1.4. The matrix T in (B.1.2) with .p; q/ 2 Hs .T Td I C2 / satisfies jT js . k.q; p/ks :
(B.1.12)
Proof. By (B.1.11) and (B.1.2) we get ˇ
ˇ ˇ p 0 qi i 0 ˇ ˇ . jpn j C jqn j C jqN n j : ŒT .n/ WD max ˇˇ i i qNi i 0 pi i 0 ˇ i i 0 Dn Hence, the definition in (B.1.10) implies X X jT j2s D ŒT .n/2hni2s . .jpn j C jqn j C jqN n j/2 hni2s . k.p; q/k2s n2Zb
n2Zb
and (B.1.12) follows.
In order to prove that the matrices with finite s-norm satisfy the interpolation inequalities (B.1.15), and then the algebra property (B.1.16), the guiding principle is the analogy between these matrices and the Toeplitz matrices that represent the multiplication operator for functions. We introduce the set HC of the trigonometric polynomials with positive Fourier coefficients n X HC WD h D h`;j ei.`'Cj x/ with h`;j ¤ 0 o for a finite number of .`; j / only and h`;j 2 RC : Note that the sum and the product of two functions in HC remain in HC . Definition B.1.5. Given M 2 MB C , h 2 HC , we say that M is dominated by h, and we write M h, if (B.1.13) ŒM.n/ hn ; 8n 2 Zb ; 0
in other words if jMii j hi i 0 , 8i 0 2 projZb B, i 2 projZb C . It is easy to check (B and C being finite) that n o jM js D min khks W h 2 HC ; M h 9h 2 HC ; M h and 8s 0;
and
jM js D khks :
B C Lemma B.1.6. For M1 2 MC D , M2 2 MC , M3 2 MD , we have
M1 h1 ; M2 h2 ; M3 h3 H) M1 C M3 h1 C h3 and M1 M2 h1 h2 :
(B.1.14)
B.1 Matrices with off-diagonal decay
323
Proof. Property M1 C M3 h1 C h3 is straightforward. For i 2 projZb D, i 0 2 projZb B, we have ˇ ˇ ˇ ˇ ˇ Xˇ ˇ X ˇ ˇˇ ˇ ˇ 0 0 ˇ q i ˇ ˇ.M1 /q ˇˇ.M2 /i 0 ˇ ˇ.M1 M2 /i ˇ D ˇ .M / .M / 1 2 i qˇ q i i ˇ ˇ ˇ q2C ˇq2C WDprojZb C X .h1 /i q .h2 /qi 0 q2C
X
.h1 /i q .h2 /qi 0 D .h1 h2 /i i 0
q2Zb
implying M1 M2 h1 h2 by Definition B.1.5.
We deduce from (B.1.14) and Lemma 4.5.1, the following interpolation estimates. Lemma B.1.7 (Interpolation). For all s s0 > .d C /=2, there is C.s/ 1, such B that, for any finite subset B; C; D Zb I, for all matrices M1 2 MC D , M2 2 MC , jM1 M2 js C.s0 / jM1 js0 jM2 js C C.s/ jM1 js jM2 js0 ;
(B.1.15)
in particular, jM1 M2 js C.s/ jM1 js jM2 js :
(B.1.16)
Note that the constant C.s/ in Lemma B.1.7 is independent of B, C , D. Lemma B.1.8. For all s s0 > .d C /=2, there is C.s/ 1, such that, for any B finite subset B; C; D Zb I, for all M1 2 MC D , M2 2 MC , we have jM1 M2 js0 C.s0 / jM1 js0 jM2 js0 ;
(B.1.17)
and, 8M 2 MB B , 8n 1, jM n js0 .C.s0 //n1jM jns0 ; jM n js C.s/.C.s0//n1jM jn1 s0 jM js ;
8s s0 :
(B.1.18)
Proof. The first estimate in (B.1.18) follows by (B.1.16) with s D s0 and the second estimate in (B.1.18) is obtained from (B.1.15), using C.s/ 1. s The s-norm of a matrix M 2 MB C also controls the Sobolev H -norm. Indeed, we identify HB with the space Mf0g B of column matrices and the Sobolev norm k ks is equal to the s-norm j js , i.e.
8w 2 HB ;
kwks D jwjs ;
8s 0 :
Then M w 2 HC and the next lemma is a particular case of Lemma B.1.7.
(B.1.19)
324
B Multiscale Step
Lemma B.1.9 (Sobolev norm). 8s s0 there is C.s/ 1 such that, for any finite subset B; C Zb I, for any M 2 MB C , w 2 HB , kM wks C.s0 /jM js0 kwks C C.s/jM js kwks0 :
(B.1.20)
The following lemma is the analogue of the smoothing properties of the projection operators. 0 Lemma B.1.10 (Smoothing). Let M 2 MB C . Then, 8s s 0, 0
Mii D 0;
8ji i 0 j < N
and, for N N0 , i0
Mi D 0;
0
jM js N .s s/ jM js 0 ;
H)
( 0
8ji i j > N
H)
(B.1.21)
0
jM js 0 N s s jM js
(B.1.22)
jM js N sCb kM k0 :
Proof. Estimate (B.1.21) and the first bound of (B.1.22) follow from the definition of the norms j js . The second bound of (B.1.22) follows by the first bound in (B.1.22), 0 noting that jMii j kM k0 , 8i; i 0 , q s s jM js N jM j0 N .2N C 1/b kM k0 N sCb kM k0
for N N0 .
In the next lemma we bound the s-norm of a matrix in terms of the .s C b/-norms of its lines. Lemma B.1.11 (Decay along lines). Let M 2 MB C . Then, 8s 0, jM js .
max
i 2projZb C
jMfi g jsCb
(B.1.23)
(we could replace the index b with any ˛ > b=2). Proof. For all i 2 C WD projZb C , i 0 2 B WD projZb B, 8s 0, ˇ ˇ ˇMfi g ˇ ˇ i0 ˇ m.s C b/ sCb ˇM ˇ ; i 0 sCb hi i i hi i 0 isCb ˇ ˇ where m.s C b/ WD max ˇMfi g ˇsCb . As a consequence, i 2C
jM js D
X n2C B
implying (B.1.23).
!1=2 2
2s
.M Œn/ hni
m.s C b/
X
!1=2 2b
hni
n2Zb
B.1 Matrices with off-diagonal decay
325
The L2 -norm and s0 -norm of a matrix are related. Lemma B.1.12. Let M 2 MC B . Then, for s0 > .d C /=2, kM k0 .s0 jM js0 :
(B.1.24)
Proof. Let m 2 HC be such that M m and jM js D kmks for all s 0, see (B.1.14). Also for H 2 Mf0g C , there is h 2 HC such that H h and jH js D khks , 8s 0. Lemma B.1.6 implies that MH mh and so jMH j0 kmhk0 jmjL1 khk0 .s0 kmks0 khk0 D jM js0 jH j0 ;
8H 2 Mf0g C :
Then (B.1.24) follows (recall (B.1.19)). In the sequel we use the notion of left-invertible operators.
Definition B.1.13 (Left inverse). A matrix M 2 MB C is left invertible if there exists C N 2 MB such that NM D IdB . Then N is called a left inverse of M . Note that M is left invertible if and only if M (considered as a linear map) is injective (then dim HC dim HB ). The left inverses of M are not unique if dim HC > dim HB : they are uniquely defined only on the range of M . We shall often use the following perturbation lemma for left-invertible operators. Note that the bound (B.1.25) for the perturbation in s0 -norm only allows us to estimate the inverse (B.1.28) also in s s0 norm. Lemma B.1.14 (Perturbation of left invertible matrices). If M 2 MB C has a left , then there exists ı.s / > 0 such that, inverse N 2 MC 0 B 8P 2 MB C
with
jN js0 jP js0 ı.s0 / ;
(B.1.25)
the matrix M C P has a left inverse NP that satisfies jNP js0 2jN js0 ;
(B.1.26)
and, 8s s0 , jNP js 1 C C.s/jN js0 jP js0 jN js C C.s/jN j2s0 jP js .s jN js C jN j2s0 jP js : Moreover,
8P 2 MB C
with
(B.1.27) (B.1.28)
kN k0 kP k0 1=2 ;
(B.1.29)
the matrix M C P has a left inverse NP that satisfies kNP k0 2kN k0 :
(B.1.30)
326
B Multiscale Step
Proof. Proof of (B.1.26). The matrix NP D AN with A 2 MB B is a left inverse of M C P if and only if IB D AN.M C P / D A.IB C NP / ; i.e., if and only if A is the inverse of IB C NP 2 MB B . By (B.1.17) and (B.1.25) we have, taking ı.s0 / > 0 small enough, jNP js0 C.s0 / jN js0 jP js0 C.s0 /ı.s0 / 1=.2C.s0// ;
(B.1.31)
where C.s0 / is the constant of (B.1.18). Hence the matrix IB C NP is invertible and NP D AN D .IB C NP /
1
N D
1 X
.1/p .NP /p N
(B.1.32)
pD0
is a left inverse of M C P . Estimate (B.1.26) follows by (B.1.32) and (B.1.31). Proof of (B.1.27). For all s s0 , 8p 1, (B.1.15)
j.NP /p N js .s jN js0 j.NP /p js C jN js j.NP /p js0 (B.1.18)
.s jN js0 .C.s0 / jNP js0 /p1 jNP js C jN js .C.s0 / jNP js0 /p
(B.1.31);(B.1.15)
.s
2p .jN js0 jP js0 jN js C jN j2s0 jP js / :
(B.1.33)
We derive (B.1.27) by jNP js
(B.1.32)
jN js C
X
j.NP /p N js
p1 (B.1.33)
jN js C C.s/.jN js0 jP js0 jN js C jN j2s0 jP js / :
Finally (B.1.30) follows from (B.1.29) as (B.1.26) because the operatorial L2 -norm (see (B.1.9)) satisfies the algebra property kNP k0 kN k0 kP k0 .
B.2 Multiscale step proposition This section is devoted to proving the multiscale step proposition B.2.4. In the whole section, & 2 .0; 1/ is fixed and 0 > 0, ‚ 1 are real parameters, on which we shall impose conditions in Proposition B.2.4. Given ; 0 E Zb I we define diam.E/ WD sup jk k 0 j; k;k 0 2E
d.; 0 / WD
inf
k2 ;k 0 2 0
jk k 0 j ;
B.2 Multiscale step proposition
327
where, for k D .i; a/, k 0 WD .i 0 ; a0 / we set 8 ˆ if i D i 0 ; a ¤ a0 ; 1. k0 For a matrix A 2 ME E we define Diag.A/ WD .ıkk 0 Ak /k;k 0 2E . Proposition B.2.4 (Multiscale step [24]). Assume & 2 .0; 1=2/;
0 > 2 C b C 1;
and, setting WD 0 C b C s0 , 0 2 b > 3. C .s0 C b/C1 /; s1 > 3 C . C b/ C C1 s0 :
C1 2 ;
& > C1 ;
(B.2.3)
(B.2.4) (B.2.5)
For any given ‡ > 0, there exist ‚ WD ‚.‡; s1/ > 0 large enough (appearing in Definition B.2.2), and N0 .‡; ‚; s1/ 2 N such that:
328
B Multiscale Step
8N N0 .‡; ‚; s1/, 8E Zb I with diam.E/ 4N 0 D 4N (see (B.2.2)), if A 2 ME E satisfies (H1) jA Diag.A/js1 ‡ (H2) kA1 k0 .N 0 / (H3) There is a partition of the .A; N /-bad sites B D [˛ ˛ with diam.˛ / N C1 ;
d.˛ ; ˇ / N 2 ;
8˛ ¤ ˇ ;
then A is N 0 -good. More precisely ˇ ˇ 1 0 8s 2 Œs0 ; s1 ; ˇA1 ˇs .N 0 / .N 0 /&s C jA Diag.A/js ; 4 and, for all s s1 , 0 jA1 js C.s/.N 0/ .N 0 /&s C jA Diag.A/js :
(B.2.6)
(B.2.7)
(B.2.8)
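Hypothesis (H3) asks that the $(A,N)$-bad sites can be grouped into clusters $\Omega_\alpha$ that are small (diameter at most $N^{C_1}$) and well separated (mutual distance at least $N^2$). The sketch below (ours) shows the kind of greedy chaining that produces such a partition for a sample configuration of bad sites; the values of $N$, $C_1$ and the sites themselves are arbitrary.

```python
# Sketch (ours) of the partition of bad sites required by hypothesis (H3).
N, C1 = 5, 2
bad = [(0, 1), (2, 0), (3, 3),        # sample bad sites (here b = 2)
       (40, 40), (41, 38),
       (90, 0), (92, 1)]

def dist(k, kp):                      # sup distance |k - k'|
    return max(abs(a - b) for a, b in zip(k, kp))

clusters = []
for site in bad:                      # greedy chaining at threshold N^2
    touching = [c for c in clusters if any(dist(site, s) < N**2 for s in c)]
    merged = [site] + [s for c in touching for s in c]
    clusters = [c for c in clusters if c not in touching] + [merged]

diam_ok = all(max(dist(a, b) for a in c for b in c) <= N**C1 for c in clusters)
sep_ok = all(min(dist(a, b) for a in ci for b in cj) >= N**2
             for i, ci in enumerate(clusters) for cj in clusters[i + 1:])
print(len(clusters), diam_ok, sep_ok)  # expect: 3 clusters, both checks True
```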
The above proposition says, roughly, the following. If A has a sufficient offdiagonal decay (assumption (H1) and (B.2.5)), and if the sites that can not be inserted in good “small” submatrices (of size O.N /) along the diagonal of A are sufficiently separated (assumption (H3)), then the L2 -bound (H2) for A1 implies that the “large” matrix A (of size N 0 D N with as in (B.2.4)) is good, and A1 satisfies also the bounds (B.2.7) in s-norm for s > s1 . Notice that the bounds for s > s1 follow only by information on the N -good submatrices in s1 -norm (see Definition B.2.1) plus the s-decay of A. The link between the various constants is the following: According to (B.2.4) the exponent , which measures the new scale N 0 N , is large with respect to the size of the bad clusters ˛ , i.e., with respect to C1 . The intuitive meaning is that, for large enough, the “resonance effects” due to the bad clusters are “negligible” at the new larger scale. The constant ‚ 1 that defines the regular sites (see Definition B.2.2) must be large enough with respect to ‡ , i.e., with respect to the off-diagonal part T WD A Diag.A/, see (H1) and Lemma B.2.5. Note that in (B.2.4) can be taken large independently of , choosing, for example, 0 WD 3 C 2b. The Sobolev index s1 has to be large with respect to and , according to (B.2.5). This is also natural: if the decay is sufficiently strong, then the “interaction” between different clusters of N -bad sites is weak enough. In (B.2.6) we have fixed the separation N 2 between the bad clusters just for definiteness: any separation N , > 0, would be sufficient. Of course, the smaller > 0 is, the larger the Sobolev exponent s1 has to be. The proof of Proposition B.2.4 is divided in several lemmas. In each of them we shall assume that the hypotheses of Proposition B.2.4 are satisfied. We set
T WD A Diag.A/;
(H1)
jT js1 ‡ :
(B.2.9)
B.2 Multiscale step proposition
329
Call G (resp. B) the set of the .A; N /-good (resp. bad) sites. The partition E DB[G induces the orthogonal decomposition
HE D HB ˚ HG and we write u D uB C uG
where uB WD …B u; uG WD …G u :
We shall denote by IG , resp. IB , the restriction of the identity matrix to HG , resp. HB , according to (B.1.4). The next Lemmas B.2.5 and B.2.6 say that the system Au D h can be nicely reduced along the good sites G, giving rise to a (nonsquare) system A0 uB D Zh, with a good control of the s-norms of the matrices A0 and Z. Moreover A1 is a left inverse of A0 . Lemma B.2.5 (Semireduction on the good sites). Let ‚1 ‡ c0 .s1 / be small B enough. There exist M 2 ME G , N 2 MG satisfying, if N N1 .‡ / is large enough, (B.2.10) jMjs0 cN ; jN js0 c ‚1 ‡ ; for some c WD c.s1 / > 0, and, 8s s0 , jMjs C.s/N 2 N ss0 C N b jT jsCb ; jN js C.s/N N ss0 C N b jT jsCb ;
(B.2.11)
such that Au D h
H)
uG D N uB C Mh :
Moreover uG D N uB C Mh
H)
8k regular; .Au/k D hk :
(B.2.12)
Proof. This proof is based on “resolvent identity” arguments. Step I. There exist ; L 2 ME G satisfying jjs0 C0 .s1 /‚1 ‡;
jLjs0 N ;
(B.2.13)
jLjs C.s/N Css0 ;
(B.2.14)
and, 8s s0 , jjs C.s/N N ss0 C N b jT jsCb ; such that Au D h
H)
uG C u D Lh :
(B.2.15)
330
B Multiscale Step
Fix any k 2 G (see Definition B.2.3). If k is regular, let F WD fkg, and, if k is not regular but .A; N /-regular, let F E such that d.k; EnF / N , diam.F / 4N , AF F is N -good. We have Au D h
E nF AF uE nF D hF F uF C AF 1 uF C QuE nF D AF hF ; F
H) H)
where
(B.2.16)
1 E nF 1 E nF nF AF D AF TF 2 ME : Q WD AF F F F
(B.2.17)
The matrix Q satisfies jQjs1
(B.1.16)
ˇ ˇ 1 ˇ ˇ C.s1 / ˇ AF ˇ jT js1 F
(B.2.1);(B.2.9)
s1
C.s1 /N
0 C&s 1
‡
(B.2.18)
(the matrix AF F is N -good). Moreover, 8s s0 , using (B.1.15) and diam.F / 4N , ˇ ˇ ˇ ˇ 1 ˇ 1 ˇ ˇ ˇ jQjsCb .s ˇ AF jT js0 C ˇ AF ˇ ˇ jT jsCb F F sCb
(B.1.22)
.s
s0
ˇ ˇ ˇ ˇ 1 ˇ 1 ˇ ˇ ˇ N sCbs0 ˇ AF ˇ jT js0 C ˇ AF ˇ jT jsCb F F s0
(B.2.1);(B.2.9)
.s
s0
0 0 N .&1/s0 N sCbC ‡ C N Cs0 jT jsCb :
(B.2.19)
Applying the projector …fkg in (B.2.16), we obtain Au D h
H)
uk C
X
0
kk uk 0 D
k 0 2E
X
0
Lkk hk 0
(B.2.20)
k 0 2E
that is (B.2.15) with (
k0
0 if k 0 2 F 0 Qkk if k 0 2 E n F ; 8h i 0 < F 1 k AF if k 0 2 F WD k :0 if k 0 2 E n F:
k WD 0 Lkk
If k is regular then F D fkg, and, by Definition B.2.2, ˇ kˇ ˇA ˇ ‚ : k
(B.2.21)
(B.2.22)
Therefore, by (B.2.21) and (B.2.17), the k-line of satisfies ˇ ˇ 1 ˇ ˇ Tk ˇ jk js0 Cb ˇ Akk
(B.2.22);(B.2.9) s0 Cb
.s0
‚1 ‡ :
(B.2.23)
331
B.2 Multiscale step proposition
If k is not regular but .A; N /-regular, since d.k; EnF / N we have, by (B.2.21), 0 that kk D 0 for jk k 0 j N . Hence, by Lemma B.1.10, jk js0 Cb
(B.1.21)
(B.2.21)
N .s1 s0 b/ jk js1
N .s1 s0 b/ jQjs1
(B.2.18)
.s1 ‡ N
0 Cs Cb.1&/s 0 1
.s1 ‚1 ‡
(B.2.24)
for N N0 .‚/ large enough. Indeed the exponent 0 Cs0 Cb.1& /s1 < 0 because s1 is large enough according to (B.2.5) and & 2 .0; 1=2/ (recall WD 0 C s0 C b). In both cases, (B.2.23) and (B.2.24) imply that each line k decays like jk js0 Cb .s1 ‚1 ‡;
8k 2 G :
Hence, by Lemma B.1.11, we get jjs0 C 0 .s1 /‚1 ‡ ; which is the first inequality in (B.2.13). Likewise we prove the second estimate in (B.2.13). Moreover, 8s s0 , still by Lemma B.1.11, jjs . sup jk jsCb
(B.2.21)
.
(B.2.19) jQjsCb .s N N ss0 C N b jT jsCb ;
k2G
where WD 0 C s0 C b and for N N0 .‡ /. The second estimate in (B.2.14) follows by jLjs0 N (see (B.2.13)) and 0 (B.1.22) (note that by (B.2.21), since diam F 4N , we have Lkk D 0 for all jk k 0 j > 4N ). Step II. By (B.2.15) we have Au D h
H)
IG C G uG D Lh B uB :
(B.2.25)
By (B.2.13), ˇ ˇ if ‚ is large enough (depending on ‡ , namely on the potential V0 ), we ˇ ˇ have ˇ G ˇ ı.s0 /. Hence, by Lemma B.1.14, IG C G is invertible and s0
ˇ 1 ˇˇ ˇ ˇ ˇ IG C G and, 8s s0 ˇ 1 ˇˇ ˇ ˇ IG C G ˇ
ˇ ˇ ˇ ˇ .s 1 C ˇ G ˇ
(B.1.26)
s
(B.1.26) s0
2;
.s N N ss0 C N b jT jsCb :
(B.2.14) s
By (B.2.25), we have Au D h
(B.2.26)
H)
uG D Mh C N uB
(B.2.27)
332
B Multiscale Step
with
1 M WD IG C G L ; 1 N WD IG C G B ;
(B.2.28)
and estimates (B.2.10) and (B.2.11) follow by Lemma B.1.7, (B.2.26), (B.2.27), (B.2.13), and (B.2.14). Note that uG C u D Lh
”
uG D Mh C N uB :
(B.2.29)
As a consequence, if uG D Mh C N uB then, by (B.2.21), for k regular, 1 X k 0 1 uk C Akk Ak uk 0 D Akk hk ; k 0 ¤k
hence .Au/k D hk , proving (B.2.12). Lemma B.2.6 (Reduction on the bad sites). We have Au D h where
H)
A0 uB D Zh ;
A0 WD AB C AG N 2 MB E ; Z WD IE AG M 2 ME E ;
satisfy
and
ˇ 0ˇ ˇA ˇ c.‚/ ; s ˇ 0 ˇ0 ˇA ˇ C.s; ‚/N .N ss0 C N b jT jsCb / ; s jZjs0 cN ; jZjs C.s; ‚/N 2 .N ss0 C N b jT jsCb / :
Moreover .A1 /B is a left inverse of A0 . Proof. By Lemma B.2.5, Au D h
H) H)
( AG uG C AB uB D h uG D N uB C Mh .AG N C AB /uB D h AG Mh ;
i.e. A0 uB D Zh. Let us prove estimates (B.2.31) and (B.2.32) for A0 and Z.
(B.2.30)
(B.2.31)
(B.2.32)
B.2 Multiscale step proposition
333
Step I. 8 k regular we have A0k D 0, Zk D 0. By (B.2.12), for all k regular, 8h; 8uB 2 HB ; AG .N uB C Mh/ C AB uB D hk ; k
i.e.; .A0 uB /k D .Zh/k ; which implies A0k D 0 and Zk D 0.
Step II. Proof of (B.2.31) and (B.2.32). Call R E the regular sites in E. For all k 2 EnR, we have jAkk j < ‚ (see Definition B.2.2). Then (B.2.9) implies ˇ ˇ ˇAE nR ˇ ‚ C jT j c.‚/ ; s0 s0 (B.2.33) ˇ ˇ ˇAE nR ˇ ‚ C jT j ; 8s s0 : s
s
By Step I and the definition of A0 in (B.2.30) we get ˇ ˇ ˇ ˇ ˇ ˇ ˇ 0ˇ ˇA ˇ D ˇˇA0 ˇˇ ˇˇAB ˇˇ C ˇˇAG N ˇˇ : E nR E nR E nR s s
s
s
Therefore Lemma B.1.7, (B.2.33), (B.2.10), and (B.2.11), imply ˇ ˇ ˇ 0ˇ ˇA ˇ C.s; ‚/N N ss0 C N b jT jsCb and ˇA0 ˇ c.‚/ ; s s 0
proving (B.2.31). The bound (B.2.32) follows similarly. Step III. .A1 /B is a left inverse of A0 . By A1 A0 D A1 .AB C AG N / D IEB C IEG N we get 1 0 A B A D .A1 A0 /B D IBB 0 D IBB proving that .A1 /B is a left inverse of A0 .
C1 /, Now A0 2 MB E and the set B is partitioned in clusters ˛ of size O.N far enough one from another (see (H3)). Then, up to a remainder of very small s0 ˛ , where 0˛ is some norm (see (B.2.37)), A0 is defined by the submatrices .A0 /
0˛ 0 neighborhood of ˛ (the distance between two distinct ˛ and 0ˇ remains large). ˛ . Since A0 has a left inverse with L2 -norm O.N 0 /, so have the submatrices .A0 /
0
˛
Since these submatrices are of size O.N C1 /, the s-norms of their inverse will be C 1 C1 s /, see (B.2.43). By Lemma B.1.14, estimated as O.N C1 s N 0 / D O.N 0 provided is chosen large enough, A0 has a left inverse V with s-norms satisfying (B.2.34). The details are given in the following lemma.
334
B Multiscale Step
Lemma B.2.7 (Left inverse with decay). The matrix A0 defined in Lemma B.2.6 has a left inverse V that satisfies 8s s0 ; jV js .s N 2 CC2.s0 Cb/C1 N C1 s C jT jsCb : (B.2.34) Proof. Define 2 MB E by kk0 where
(
if k; k0 2 [˛ .˛ 0˛ / if .k; k0 / … [˛ ˛ 0˛
.A0 /kk 0 WD 0
˚ 0˛ WD k 2 EW d.k; ˛ / N 2 =4 :
(B.2.35)
(B.2.36)
0 Step I. has a left inverse W 2 ME B with kW k0 2.N / .
We define R WD A0 . By the definitions (B.2.35) and (B.2.36), if d.k 0 ; k/ < N 2 =4 then Rkk 0 D 0 and so (B.1.21)
4s1 N 2.s1 bs0 / jRjs1 b 4s1 N 2.s1 bs0/ jA0 js1 b (B.2.31);(B.2.9) .s1 N 2.s1 bs0 / N N s1 bs0 C N b ‡
jRjs0
.s1 N 2s1
(B.2.37)
for N N0 .‡ / large enough. Therefore 1 (B.1.24) R A .s jRj A1 s0 0 0 B 0 0
(B.2.37);(H2)
.s1
N 2s1 N 0
(B.2.2)
D C.s1 /N 2s1 C
(B.2.5)
1=2
(B.2.38)
0 for N N.s1 /. Since .A1 /B 2 ME B is a left inverse of A (see Lemma B.2.6), 0 Lemma B.1.14 and (B.2.38) imply that D A R has a left inverse W 2 ME B , and
W 0
(B.1.30)
(H2) 2 A1 B 0 2A1 0 2 N 0 :
Step II. W0 2 ME B defined by ( 0 .W0 /kk
Wkk WD 0
0
if k; k 0 2 [˛ ˛ 0˛ if k; k 0 62 [˛ ˛ 0˛
(B.2.39)
(B.2.40)
is a left inverse of and jW0 js C.s/N .sCb/C1 C , 8s s0 . Since W D IB , we prove that W0 is a left inverse of showing that .W W0 / D 0 :
(B.2.41)
335
B.2 Multiscale step proposition
Let us prove (B.2.41). For k 2 B D [˛ ˛ , there is ˛ such that k 2 ˛ , and X k 0 q 0 8k 0 2 B ; .W W0 / k D .W W0 /k qk (B.2.42) q… 0˛
since .W W0 /qk D 0 if q 2 0˛ (see Definition (B.2.40)). 0
0
Case I: k 0 2 ˛ . Then qk D 0 in (B.2.42) and so ..W W0 //kk D 0. 0
Case II: k 0 2 ˇ for some ˇ ¤ ˛. Then, since qk D 0 if q … 0ˇ , we obtain by (B.2.42) that X X k 0 0 (B.2.40) 0 .W W0 /qk qk D Wkq qk .W W0 / k D q2 0ˇ
(B.2.35)
D
X
q2 0ˇ
q
0
0
0
Wk qk D .W /kk D .IB /kk D 0 :
k2E 0
Since diam.0˛ / 2N C1 , definition (B.2.40) implies .W0 /kk D 0 for all jk k 0 j 2N C1 . Hence, 8s 0, (B.2.39)
(B.1.22)
jW0 js .s N .sCb/C1 kW0 k0 .s N .sCb/C1 C :
(B.2.43)
Step III. A0 has a left inverse V satisfying (B.2.34). Now A0 D C R, W0 is a left inverse of , and jW0 js0 jRjs0
(B.2.43);(B.2.37)
C.s1 /N .s0 Cb/C1 C C2s1
(B.2.5)
ı.s0 /
(we use also that > C1 by (B.2.4)) for N N.s1 / large enough. Hence, by Lemma B.1.14, A0 has a left inverse V with jV js0
(B.1.26)
2 jW0 js0
(B.2.43)
CN .s0 Cb/C1 C
(B.2.44)
and, 8s s0 , ˇ ˇ jW0 js C jW0 j2s0 jRjs .s jW0 js C jW0 j2s0 ˇA0 ˇs (B.2.43);(B.2.31) .s N 2 CC2.s0 Cb/C1 N C1 s C jT jsCb (B.1.26)
jV js
.s
proving (B.2.34). Proof of Proposition B.2.4 completed. Lemmata B.2.5, B.2.6, and B.2.7 imply ( uG D Mh C N uB Au D h H) uB D V Zh ;
336
B Multiscale Step
1 A D VZ 1 B A G D M C N V Z D M C N A1 B :
whence
(B.2.45)
Therefore, 8s s0 , ˇ 1 ˇ ˇ ˇA B s
(B.2.45);(B.1.15)
.s
jV js jZjs0 C jV js0 jZjs
N 2C2 C2.s0 Cb/C1 N C1 s C jT jsCb ˛ ˛ s .s .N 0 / 1 .N 0 / 2 C jT js
(B.2.34);(B.2.32);(B.2.9);(B.2.44)
.s
using jT jsCb C.s/.N 0/b jT js (by (B.1.22)) and defining (B.2.46) ˛1 WD 2 C b C 21 C C1 .s0 C b/ ; ˛2 WD 1 C1 : ˇ 1 ˇ We obtain the same bound for ˇ.A /G ˇs . Notice that by (B.2.4) and (B.2.3), the exponents ˛1 ; ˛2 in (B.2.46) satisfy ˛1 < 0 ;
˛2 < & :
(B.2.47)
Hence, for all s s0 , ˇ ˇ ˇ 1 ˇ ˇ ˇ ˇA ˇ ˇ A1 ˇ C ˇ A1 ˇ C.s/.N 0 /˛1 .N 0 /˛2 s C jT j (B.2.48) s s B s G s (B.2.47);(B.2.9) 0 &s C.s/.N 0 / .N 0 / C jA Diag.A/js ; which is (B.2.8). Moreover, for N N.s1 / large enough, we have 8s 2 Œs0 ; s1 ; and by (B.2.48) we deduce (B.2.7).
˛ 1 0 C.s/ N 0 1 N 0 ; 4
Appendix C
Normal form close to an isotropic torus

In this appendix we report the results of [25], which are used in Chapter 7. Theorem C.1.4 provides, in a neighborhood of an isotropic invariant torus of a Hamiltonian vector field $X_K$, symplectic variables in which the Hamiltonian $K$ assumes the normal form (C.1.22). It is a classical result of Herman [67, 86] that an invariant torus densely filled by a quasiperiodic solution is isotropic; see Lemma C.1.2, and Lemma C.2.4 for a more quantitative version. In view of the Nash–Moser iteration we need to perform an analogous construction for a torus that is only approximately invariant. The key step is Lemma C.2.5, which constructs an isotropic torus near an approximately invariant one. This appendix is written in a self-contained way.
C.1 Symplectic coordinates near an invariant torus We consider the toroidal phase space
P WD T R E
where T WD R =.2Z/
is the standard flat torus and E is a real Hilbert space with scalar product h ; i. We denote by u WD .; y; z/ the variables of P . We call .; y/ the “action-angle” variables and z the “normal” variables. We assume that E is endowed with a constant exact symplectic 2-form ˝ ˛ E .z; w/ D JN z; w ; 8z; w 2 E ; (C.1.1) where JN W E ! E is an antisymmetric bounded linear operator with trivial kernel. Thus P is endowed with the symplectic 2-form
which is exact, namely
W WD .dy ^ d / ˚ E ;
(C.1.2)
W D d~ ;
(C.1.3)
where d denotes the exterior derivative and ~ is the Liouville 1-form on P defined by ~.;y;z/W R R E ! R ; ˛ 1˝ O y; O zO WD y O C JN z; zO ; 8.; O z/ O 2 R R E ; ~.;y;z/ O ; y; 2 and the dot “” denotes the usual scalar product of R .
(C.1.4)
Given a Hamiltonian function KW D P ! R, we consider the Hamiltonian system ut D XK .u/ where dK.u/Œ D W .XK .u/; / (C.1.5) formally defines the Hamiltonian vector field XK . For infinite-dimensional systems (i.e., PDEs) the Hamiltonian K is, in general, well defined and smooth only on a dense subset D D T R E1 P , where E1 E is a dense subspace of E. We require that, for all .; y/ 2 T R , 8z 2 E1 , the Hamiltonian K admits a gradient rz K, defined by dz K.; y; z/Œh D hrz K.; y; z/; hi; 8h 2 E1 ; (C.1.6) and that rz K.; y; z/ 2 E is in the space of definition of the (possibly unbounded) operator J WD JN 1 . Then by (C.1.5), (C.1.1), (C.1.2), and (C.1.6) the Hamiltonian vector field XK W T R E1 7! R R E writes XK D .@y K; @ K; J rz K/;
J WD JN 1 :
A continuous curve $[t_0, t_1]\ni t\mapsto u(t)\in\mathbb{T}^\nu\times\mathbb{R}^\nu\times E$ is a solution of the Hamiltonian system (C.1.5) if it is $C^1$ as a map from $[t_0, t_1]$ to $\mathbb{T}^\nu\times\mathbb{R}^\nu\times E_1$ and $u_t(t) = X_K(u(t))$ for all $t\in[t_0, t_1]$. For PDEs, the flow map $\Phi_K^t$ may not be well defined everywhere. The next arguments, however, will not require us to solve the initial value problem, but only a functional equation in order to find quasiperiodic solutions (see (C.1.11)).

We suppose that (C.1.5) possesses an embedded invariant torus
$$\varphi \mapsto i(\varphi) := \big(\theta(\varphi), y(\varphi), z(\varphi)\big), \tag{C.1.7}$$
$$i \in C^1\big(\mathbb{T}^\nu, P\big) \cap C^0\big(\mathbb{T}^\nu, P\cap(\mathbb{T}^\nu\times\mathbb{R}^\nu\times E_1)\big), \tag{C.1.8}$$
which supports a quasiperiodic solution with nonresonant frequency vector $\omega\in\mathbb{R}^\nu$. More precisely, we have
$$i\circ\Psi_\omega^t = \Phi_K^t\circ i, \qquad \forall t\in\mathbb{R}, \tag{C.1.9}$$
where $\Phi_K^t$ denotes the flow generated by $X_K$ and
$$\Psi_\omega^t : \mathbb{T}^\nu\to\mathbb{T}^\nu, \qquad \Psi_\omega^t(\varphi) := \varphi + \omega t, \tag{C.1.10}$$
is the translation flow of vector $\omega$ on $\mathbb{T}^\nu$. Since $\omega\in\mathbb{R}^\nu$ is nonresonant, namely
$$\omega\cdot\ell \ne 0, \qquad \forall\ell\in\mathbb{Z}^\nu\setminus\{0\},$$
each orbit of $(\Psi_\omega^t)$ is dense in $\mathbb{T}^\nu$. Note that (C.1.9) only requires that the flow $\Phi_K^t$ is well defined and smooth on the compact manifold $T := i(\mathbb{T}^\nu)\subset P$ and $(\Phi_K^t)_{|T} = i\circ\Psi_\omega^t\circ i^{-1}$. This remark is important because, for PDEs, the flow could be ill-posed in a neighborhood of $T$. From a functional point of view, (C.1.9) is equivalent to the equation
$$\omega\cdot\partial_\varphi i(\varphi) - X_K\big(i(\varphi)\big) = 0. \tag{C.1.11}$$
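For later reference we sketch the equivalence of (C.1.9) and (C.1.11). Differentiating (C.1.9) at $t = 0$ gives
$$\frac{d}{dt}\Big|_{t=0} i(\varphi + \omega t) = \omega\cdot\partial_\varphi i(\varphi), \qquad \frac{d}{dt}\Big|_{t=0}\Phi_K^t\big(i(\varphi)\big) = X_K\big(i(\varphi)\big),$$
hence $\omega\cdot\partial_\varphi i(\varphi) - X_K(i(\varphi)) = 0$ for every $\varphi\in\mathbb{T}^\nu$. Conversely, (C.1.11) says precisely that each curve $t\mapsto i(\varphi+\omega t)$ solves (C.1.5), which gives back (C.1.9) on $i(\mathbb{T}^\nu)$.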
Remark C.1.1. In the sequel we will formally differentiate the torus embedding $i$ several times, so we assume more regularity than (C.1.8). In the framework of a Nash–Moser scheme, the approximate torus embeddings $i$ are indeed regularized at each step.

We require that $\theta : \mathbb{T}^\nu\to\mathbb{T}^\nu$ is a diffeomorphism of $\mathbb{T}^\nu$ isotopic to the identity. Then the embedded torus $T := i(\mathbb{T}^\nu)$ is a smooth graph over $\mathbb{T}^\nu$. Moreover, the lift of $\theta$ to $\mathbb{R}^\nu$ is a map
$$\theta : \mathbb{R}^\nu\to\mathbb{R}^\nu, \qquad \theta(\varphi) = \varphi + \vartheta(\varphi), \tag{C.1.12}$$
where $\vartheta(\varphi)$ is $2\pi$-periodic in each component $\varphi_i$, $i = 1, \ldots, \nu$, with invertible Jacobian matrix
$$D\theta(\varphi) = \mathrm{Id} + D\vartheta(\varphi), \qquad \forall\varphi\in\mathbb{T}^\nu.$$
In the usual applications $D\vartheta$ is small and $\omega$ is a Diophantine vector, namely
$$|\omega\cdot\ell| \ge \frac{\gamma}{|\ell|^{\tau}}, \qquad \forall\ell\in\mathbb{Z}^\nu\setminus\{0\}.$$
The torus $T$ is the graph of the function (see (C.1.7) and (C.1.12))
$$j := i\circ\theta^{-1}, \qquad j : \mathbb{T}^\nu\to\mathbb{T}^\nu\times\mathbb{R}^\nu\times E, \qquad j(\theta) := \big(\theta, \tilde y(\theta), \tilde z(\theta)\big), \tag{C.1.13}$$
namely
$$T = \big\{\big(\theta, \tilde y(\theta), \tilde z(\theta)\big)\ ;\ \theta\in\mathbb{T}^\nu\big\},$$
where
$$\tilde y := y\circ\theta^{-1}, \qquad \tilde z := z\circ\theta^{-1}. \tag{C.1.14}$$
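Concretely (a one-line check), the embeddings $i$ and $j$ parametrize the same torus: for every $\varphi\in\mathbb{T}^\nu$,
$$j\big(\theta(\varphi)\big) = \big(\theta(\varphi), \tilde y(\theta(\varphi)), \tilde z(\theta(\varphi))\big) = \big(\theta(\varphi), y(\varphi), z(\varphi)\big) = i(\varphi),$$
so that $j = i\circ\theta^{-1}$ describes $T$ as a graph over the angle variable, while $i$ describes it in the original parametrization.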
We first prove the isotropy of an invariant torus as in [67, 86], i.e., that the 2-form $W$ vanishes on the tangent space to $i(\mathbb{T}^\nu)\subset P$,
$$0 = i^*W = i^*d\lambda = d(i^*\lambda), \tag{C.1.15}$$
or, equivalently, that the 1-form $i^*\lambda$ on $\mathbb{T}^\nu$ is closed.

Lemma C.1.2. The invariant torus $i(\mathbb{T}^\nu)$ is isotropic.

Proof. By (C.1.9), the pullback
$$\big(i\circ\Psi_\omega^t\big)^*W = \big(\Phi_K^t\circ i\big)^*W = i^*W. \tag{C.1.16}$$
For smooth Hamiltonian systems in finite dimension, (C.1.16) holds because the 2-form $W$ is invariant under the flow map $\Phi_K^t$ (i.e., $(\Phi_K^t)^*W = W$). In our setting the flow $\Phi_K^t$ may not be defined everywhere, but it is well defined on $i(\mathbb{T}^\nu)$ by the assumption (C.1.9), and it still preserves $W$ on the manifold $i(\mathbb{T}^\nu)$; see the proof of Lemma C.2.4 for details.
Next, denoting the 2-form
$$i^*W(\varphi) = \sum_{i<j} A_{ij}(\varphi)\, d\varphi_i\wedge d\varphi_j,$$
identity (C.1.16) implies that each coefficient $A_{ij}$ is invariant under the translations $\varphi\mapsto\varphi+\omega t$; since the orbits of $(\Psi_\omega^t)$ are dense in $\mathbb{T}^\nu$ and the $A_{ij}$ are continuous, they are constant. On the other hand, $i^*W = d(i^*\lambda)$, so that each $A_{ij} = \partial_{\varphi_i}(i^*\lambda)_j - \partial_{\varphi_j}(i^*\lambda)_i$ has zero average on $\mathbb{T}^\nu$. A constant function with zero average vanishes, hence $A_{ij} = 0$, i.e., $i^*W = 0$, which is (C.1.15). This proves the lemma.

In a neighborhood of the torus $T = i(\mathbb{T}^\nu)$ we now introduce the map
$$(\theta, y, z) = G(\psi, \eta, w) := \Big(\theta(\psi),\ y(\psi) + [D\theta(\psi)]^{-T}\eta - \big[D\tilde z(\theta(\psi))\big]^{T}\bar J w,\ z(\psi) + w\Big), \tag{C.1.17}$$
where $\tilde z(\theta) := (z\circ\theta^{-1})(\theta)$ (see (C.1.14)). The transposed operator $[D\tilde z(\theta)]^{T} : E\to\mathbb{R}^\nu$ is defined by the duality relation
$$\big([D\tilde z(\theta)]^{T}w\big)\cdot\hat\theta = \big\langle w, D\tilde z(\theta)[\hat\theta]\big\rangle, \qquad \forall w\in E,\ \hat\theta\in\mathbb{R}^\nu.$$

Lemma C.1.3. Let $i$ be an isotropic torus embedding. Then $G$ is symplectic.

Proof. We may see $G$ as the composition $G := G_2\circ G_1$ of the diffeomorphisms
$$\begin{pmatrix}\theta\\ y\\ z\end{pmatrix} = G_1\begin{pmatrix}\psi\\ \eta\\ w\end{pmatrix} := \begin{pmatrix}\theta(\psi)\\ [D\theta(\psi)]^{-T}\eta\\ w\end{pmatrix}$$
and
$$\begin{pmatrix}\theta\\ y\\ z\end{pmatrix} \mapsto G_2\begin{pmatrix}\theta\\ y\\ z\end{pmatrix} := \begin{pmatrix}\theta\\ \tilde y(\theta) + y - [D\tilde z(\theta)]^{T}\bar J z\\ \tilde z(\theta) + z\end{pmatrix}, \tag{C.1.18}$$
where $\tilde y := y\circ\theta^{-1}$, $\tilde z := z\circ\theta^{-1}$ (see (C.1.14)). We claim that both $G_1$ and $G_2$ are symplectic, whence the lemma follows.

$G_1$ is symplectic. Since $G_1$ is the identity in the third component, it is sufficient to check that the map $(\psi, \eta)\mapsto\big(\theta(\psi), [D\theta(\psi)]^{-T}\eta\big)$ is a symplectic diffeomorphism of $\mathbb{T}^\nu\times\mathbb{R}^\nu$, which is a direct calculus.

$G_2$ is symplectic. We prove that $G_2^*\lambda - \lambda$ is closed, and so (see (C.1.3))
$$G_2^*W = G_2^*d\lambda = dG_2^*\lambda = d\lambda = W.$$
By (C.1.18) and the definition of pullback we have
$$(G_2^*\lambda)_{(\theta,y,z)}[\hat\theta, \hat y, \hat z] = \big(\tilde y(\theta) + y - [D\tilde z(\theta)]^{T}\bar J z\big)\cdot\hat\theta + \tfrac12\big\langle\bar J\big(\tilde z(\theta) + z\big),\ \hat z + D\tilde z(\theta)[\hat\theta]\big\rangle.$$
Therefore (recall (C.1.4))
$$\begin{aligned}
(G_2^*\lambda)_{(\theta,y,z)}[\hat\theta, \hat y, \hat z] - \lambda_{(\theta,y,z)}[\hat\theta, \hat y, \hat z]
&= \big(\tilde y(\theta) - [D\tilde z(\theta)]^{T}\bar J z\big)\cdot\hat\theta + \tfrac12\langle\bar J\tilde z(\theta), \hat z\rangle
 + \tfrac12\langle\bar J\tilde z(\theta), D\tilde z(\theta)[\hat\theta]\rangle + \tfrac12\langle\bar J z, D\tilde z(\theta)[\hat\theta]\rangle\\
&= \tilde y(\theta)\cdot\hat\theta + \tfrac12\langle\bar J\tilde z(\theta), D\tilde z(\theta)[\hat\theta]\rangle
 + \tfrac12\langle\bar J\tilde z(\theta), \hat z\rangle + \tfrac12\big\langle\bar J\big[D\tilde z(\theta)[\hat\theta]\big], z\big\rangle,
\end{aligned} \tag{C.1.19}$$
having used that $\bar J^{T} = -\bar J$. We first note that the 1-form
$$[\hat\theta, \hat y, \hat z]\mapsto \langle\bar J\tilde z(\theta), \hat z\rangle + \big\langle\bar J\big[D\tilde z(\theta)[\hat\theta]\big], z\big\rangle = d\langle\bar J\tilde z(\theta), z\rangle[\hat\theta, \hat y, \hat z] \tag{C.1.20}$$
is exact. Moreover,
$$\tilde y(\theta)\cdot\hat\theta + \tfrac12\langle\bar J\tilde z(\theta), D\tilde z(\theta)[\hat\theta]\rangle = (j^*\lambda)_\theta[\hat\theta] \tag{C.1.21}$$
(recall (C.1.4)), where $j := i\circ\theta^{-1}$, see (C.1.13). Hence (C.1.19), (C.1.20), and (C.1.21) imply
$$(G_2^*\lambda)_{(\theta,y,z)} - \lambda_{(\theta,y,z)} = (\pi^*j^*\lambda)_{(\theta,y,z)} + \tfrac12\, d\langle\bar J\tilde z(\theta), z\rangle,$$
where $\pi : \mathbb{T}^\nu\times\mathbb{R}^\nu\times E\to\mathbb{T}^\nu$ is the canonical projection. Since the torus $j(\mathbb{T}^\nu) = i(\mathbb{T}^\nu)$ is isotropic, the 1-form $j^*\lambda$ on $\mathbb{T}^\nu$ is closed (as is $i^*\lambda$, see (C.1.15)). This concludes the proof.
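For completeness, the "direct calculus" for $G_1$ used in the proof above can be sketched as follows. Writing $\Phi(\psi, \eta) := \big(\theta(\psi), [D\theta(\psi)]^{-T}\eta\big)$ (the notation $\Phi$ is used only here),
$$\Phi^*(y\cdot d\theta)_{(\psi,\eta)}[\hat\psi, \hat\eta] = \big([D\theta(\psi)]^{-T}\eta\big)\cdot D\theta(\psi)[\hat\psi] = \eta\cdot\big([D\theta(\psi)]^{-1}D\theta(\psi)\hat\psi\big) = \eta\cdot\hat\psi,$$
so $\Phi^*(y\cdot d\theta) = \eta\cdot d\psi$ and, taking exterior derivatives, $\Phi^*(dy\wedge d\theta) = d\eta\wedge d\psi$; thus $\Phi$ preserves the 2-form and $G_1$ is symplectic.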
Since $G$ is symplectic, the transformed Hamiltonian vector field
$$G^*X_K := (DG)^{-1}\circ X_K\circ G = X_{\mathcal K}, \qquad \mathcal K := K\circ G,$$
is still Hamiltonian. By construction (see (C.1.17)) the torus $\{\eta = 0, w = 0\}$ is invariant, and (C.1.11) implies $X_{\mathcal K}(\psi, 0, 0) = (\omega, 0, 0)$ (see also Lemma C.2.6). As a consequence, the Taylor expansion of the transformed Hamiltonian $\mathcal K$ in these new coordinates assumes the normal form
$$\mathcal K = \mathrm{const} + \omega\cdot\eta + \tfrac12 A(\psi)\eta\cdot\eta + \langle C(\psi)\eta, w\rangle + \tfrac12\langle B(\psi)w, w\rangle + O_3(\eta, w), \tag{C.1.22}$$
where $A(\psi)\in\mathrm{Mat}(\nu\times\nu)$ is a real symmetric matrix, $B(\psi)$ is a self-adjoint operator of $E$, $C(\psi)\in\mathcal L(\mathbb{R}^\nu, E)$, and $O_3(\eta, w)$ collects all the terms at least cubic in $(\eta, w)$. We have proved the following theorem:

Theorem C.1.4 (Normal form close to an invariant isotropic torus [25]). Let $T = i(\mathbb{T}^\nu)$ be an embedded torus, see (C.1.7) and (C.1.8), which is a smooth graph over $\mathbb{T}^\nu$, see (C.1.13) and (C.1.14), invariant for the Hamiltonian vector field $X_K$, and on which the flow is conjugate to the translation flow of vector $\omega$, see (C.1.9) and (C.1.10). Assume moreover that $T$ is isotropic, a property that is automatically verified if $\omega$ is nonresonant. Then there exist symplectic coordinates $(\psi, \eta, w)$ in which $T$ is described by $\mathbb{T}^\nu\times\{0\}\times\{0\}$ and the Hamiltonian assumes the normal form (C.1.22); i.e., the torus $T = G\big(\mathbb{T}^\nu\times\{0\}\times\{0\}\big)$, where $G$ is the symplectic diffeomorphism defined in (C.1.17), and the Hamiltonian $K\circ G$ has the Taylor expansion (C.1.22) in a neighborhood of the invariant torus.

The normal form (C.1.22) is relevant in view of a Nash–Moser approach, because it provides a control of the linearized equations in the normal bundle of the torus. The linearized Hamiltonian system associated to $\mathcal K$ at the trivial solution $(\psi, \eta, w)(t) = (\omega t, 0, 0)$ is
$$\begin{cases}
\dot{\hat\psi} = A(\omega t)\hat\eta + [C(\omega t)]^{T}\hat w,\\
\dot{\hat\eta} = 0,\\
\dot{\hat w} = J\big(C(\omega t)\hat\eta + B(\omega t)\hat w\big).
\end{cases}$$
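This control can be made explicit (a sketch): the linearized system above is triangular in the variables $(\hat\eta, \hat w, \hat\psi)$ and is solved in sequence,
$$\hat\eta(t) = \hat\eta(0), \qquad \partial_t\hat w - JB(\omega t)\hat w = JC(\omega t)\hat\eta(0), \qquad \hat\psi(t) = \hat\psi(0) + \int_0^t\Big(A(\omega s)\hat\eta(s) + [C(\omega s)]^{T}\hat w(s)\Big)\,ds,$$
so that, apart from the quasi-periodic linear equation for $\hat w$ in the normal directions (the only genuinely infinite-dimensional step), the system is integrated explicitly.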
$$K_{11}(\varphi) = \partial_y\nabla_zK\big(i_\delta(\varphi)\big)\big[D\theta_0(\varphi)\big]^{-T} + \bar J\,(D\tilde z_0)\big(\theta_0(\varphi)\big)\,\partial_y^2K\big(i_\delta(\varphi)\big)\big[D\theta_0(\varphi)\big]^{-T}, \tag{C.2.18}$$
$$K_{20}(\varphi) = \big[D\theta_0(\varphi)\big]^{-1}\big(\partial_y^2K\big)\big(i_\delta(\varphi)\big)\big[D\theta_0(\varphi)\big]^{-T}. \tag{C.2.19}$$
Proof. Formulas (C.2.18) and (C.2.19) follow by differentiating $K = K\circ G_\delta$.
Bibliography

[1] T. Alazard and P. Baldi, Gravity capillary standing water waves. Arch. Ration. Mech. Anal. 217(3) (2015), 741–830.
[2] V.I. Arnold, Proof of a theorem of A.N. Kolmogorov on the persistence of quasi-periodic motions under small perturbations of the Hamiltonian. Russ. Math. Surv. 18 (1963), 9–36.
[3] V.I. Arnold, Instability of dynamical systems with many degrees of freedom. Dokl. Akad. Nauk SSSR 156 (1964), 9–12.
[4] V.I. Arnold, Mathematical methods of classical mechanics, Graduate Texts in Mathematics, 60. Springer, 1989, xvi+508.
[5] P. Baldi, M. Berti, E. Haus, and R. Montalto, Time quasi-periodic gravity water waves in finite depth. Invent. Math. 214(2) (2018), 739–911.
[6] P. Baldi, M. Berti, and R. Montalto, KAM for quasi-linear and fully nonlinear forced perturbations of Airy equation. Math. Ann. 359(1–2) (2014), 471–536.
[7] P. Baldi, M. Berti, and R. Montalto, KAM for autonomous quasi-linear perturbations of mKdV, Bollettino Unione Matematica Italiana 9 (2016), 143–188.
[8] P. Baldi, M. Berti, and R. Montalto, KAM for autonomous quasi-linear perturbations of KdV, Ann. I. H. Poincaré (C) Anal. Non Linéaire AN 33 (2016), 1589–1638.
[9] P. Baldi and E. Haus, A Nash-Moser-Hörmander implicit function theorem with applications to control and Cauchy problems for PDEs, J. Funct. Anal. 273(12) (2017).
[10] D. Bambusi, Reducibility of 1-d Schrödinger equation with time quasiperiodic unbounded perturbations, I, Trans. Am. Math. Soc. 370(3) (2018), 1823–1865.
[11] D. Bambusi, Reducibility of 1-d Schrödinger equation with time quasiperiodic unbounded perturbations, II. Comm. Math. Phys. 353(1) (2017), 353–378.
[12] D. Bambusi, M. Berti, and E. Magistrelli, Degenerate KAM theory for partial differential equations, J. Diff. Equ. 250(8) (2011), 3379–3397.
[13] D. Bambusi, J.M. Delort, B. Grébert, and J. Szeftel, Almost global existence for Hamiltonian semilinear Klein-Gordon equations with small Cauchy data on Zoll manifolds, Comm. Pure Appl. Math. 60(11) (2007), 1665–1690.
[14] D. Bambusi and R. Montalto, Reducibility of 1-d Schrödinger equation with unbounded time quasi-periodic perturbations, III, J. Math. Phys. 59(12) (2018).
[15] M. Berti, Nonlinear Oscillations of Hamiltonian PDEs, Progr. Nonlinear Differ. Equ. Appl. 74. Birkhäuser, Boston, 2008, 1–181.
[16] M. Berti, KAM Theory for Partial Differential Equations. Anal. Theory Appl. 35(3) (2019), 235–267.
[17] M. Berti and L. Biasco, Branching of Cantor manifolds of elliptic tori and applications to PDEs. Comm. Math. Phys. 305(3) (2011), 741–796.
[18] M. Berti, L. Biasco, and M. Procesi, KAM theory for the Hamiltonian derivative wave equation, Ann. Sci. Éc. Norm. Supér. (4) 46(2) (2013), 301–373.
[19] M. Berti, L. Biasco, and M. Procesi, KAM for Reversible Derivative Wave Equations, Arch. Ration. Mech. Anal. 212(3) (2014), 905–955.
[20] M. Berti and P. Bolle, Cantor families of periodic solutions for completely resonant nonlinear wave equations. Duke Math. J. 134(2) (2006), 359–419.
[21] M. Berti and P. Bolle, Cantor families of periodic solutions of wave equations with C^k nonlinearities. NoDEA Nonlinear Differ. Equ. Appl. 15 (2008), 247–276.
[22] M. Berti and P. Bolle, Sobolev Periodic Solutions of Nonlinear Wave Equations in Higher Spatial Dimensions. Arch. Ration. Mech. Anal. 195 (2010), 609–642.
[23] M. Berti and P. Bolle, Sobolev quasi periodic solutions of multidimensional wave equations with a multiplicative potential. Nonlinearity 25 (2012), 2579–2613.
[24] M. Berti and P. Bolle, Quasi-periodic solutions with Sobolev regularity of NLS on T^d with a multiplicative potential. J. Eur. Math. Soc. 15 (2013), 229–286.
[25] M. Berti and P. Bolle, A Nash-Moser approach to KAM theory. Fields Institute Communications, special volume "Hamiltonian PDEs and Applications" (2015), 255–284.
[26] M. Berti, P. Bolle, and M. Procesi, An abstract Nash-Moser theorem with parameters and applications to PDEs. Annales de l'Institut H. Poincaré, Analyse non linéaire 27 (2010), 377–399.
[27] M. Berti, L. Corsi, and M. Procesi, An abstract Nash-Moser Theorem and quasi-periodic solutions for NLW and NLS on compact Lie groups and homogeneous manifolds. Comm. Math. Phys. 334(3) (2015), 1413–1454.
[28] M. Berti and J.-M. Delort, Almost global existence for capillarity-gravity water waves equations on the circle. UMI Lecture Notes, 24, Springer, 2018. ISBN 978-3-319-99485-7.
[29] M. Berti, T. Kappeler, and R. Montalto, Large KAM tori for perturbations of the dNLS equation. Astérisque 403 (2018), viii+148.
[30] M. Berti, T. Kappeler, and R. Montalto, Large KAM tori for quasi-linear perturbations of KdV, (2019), arxiv.org/abs/1908.08768.
[31] M. Berti and A. Maspero, Long time dynamics of Schrödinger and wave equations on flat tori. J. Differ. Equ. 267(2, 5) (2019), 1167–1200.
[32] M. Berti and R. Montalto, Quasi-periodic Standing Wave Solutions of Gravity-Capillary Water Waves. Memoires AMS 263(MEMO 1273) (2020), https://doi.org/10.1090/memo/1273, ISBN: 978-1-4704-4069-5.
[33] M. Berti and M. Procesi, Nonlinear wave and Schrödinger equations on compact Lie groups and homogeneous spaces. Duke Math. J. 159(3) (2011), 479–538.
[34] A. Bobenko and S. Kuksin, The nonlinear Klein-Gordon equation on an interval as a perturbed sine-Gordon equation. Comment. Math. Helv. 70(1) (1995), 63–112.
[35] P. Bolle and C. Khayamian, Quasi-periodic solutions for wave equations on Zoll manifolds, preprint 2018.
[36] J. Bourgain, Construction of quasi-periodic solutions for Hamiltonian perturbations of linear equations and applications to nonlinear PDE. Int. Math. Res. Not. 11 (1994).
[37] J. Bourgain, Construction of periodic solutions of nonlinear wave equations in higher dimension. GAFA 5(4) (1995), 629–639.
[38] J. Bourgain, On Melnikov's persistency problem. Int. Math. Res. Letters 4 (1997), 445–458.
[39] J. Bourgain, Quasi-periodic solutions of Hamiltonian perturbations of 2D linear Schrödinger equations. Annals of Math. 148 (1998), 363–439.
[40] J. Bourgain, Periodic solutions of nonlinear wave equations, Harmonic analysis and partial differential equations, Chicago Lectures in Math., Univ. Chicago Press, 1999, 69–97.
[41] J. Bourgain, Estimates on Green's functions, localization and the quantum kicked rotor model. Annals of Math. 156(1) (2002), 249–294.
[42] J. Bourgain, Green's function estimates for lattice Schrödinger operators and applications. Annals of Mathematics Studies 158, Princeton University Press, Princeton, 2005.
[43] J. Bourgain, On invariant tori of full dimension for 1D periodic NLS. J. Funct. Anal. 229 (2005), 62–94.
[44] J. Bourgain, M. Goldstein, and W. Schlag, Anderson localization for Schrödinger operators on Z^2 with quasi-periodic potential. Acta Math. 188 (2002), 41–86.
[45] J. Bourgain and W.M. Wang, Anderson localization for time quasi-periodic random Schrödinger and wave equations. Comm. Math. Phys. 248 (2004), 429–466.
[46] J. Bourgain and W.M. Wang, Quasi-periodic solutions for nonlinear random Schrödinger. J. European Math. Society 10 (2008), 1–45.
[47] H. Broer, G. Huitema, and M. Sevryuk, Quasi-periodic motions in families of dynamical systems. Order amidst chaos. Lecture Notes in Mathematics, 1645. Springer, Berlin, 1996, xii+196.
[48] L. Chierchia and J. You, KAM tori for 1D nonlinear wave equations with periodic boundary conditions. Comm. Math. Phys. 211 (2000), 497–525.
[49] J. Colliander, M. Keel, G. Staffilani, H. Takaoka, and T. Tao, Transfer of energy to high frequencies in the cubic defocusing nonlinear Schrödinger equation. Invent. Math. 181(1) (2010), 39–113.
[50] L. Corsi, R. Feola, and M. Procesi, Finite dimensional invariant KAM tori for tame vector fields. Trans. Amer. Math. Soc. 372 (2019), 1913–1983.
[51] L. Corsi and R. Montalto, Quasi-periodic solutions for the forced Kirchoff equation on T^d. Nonlinearity 31(11) (2018), 5075–5109.
[52] W. Craig, Problèmes de petits diviseurs dans les équations aux dérivées partielles. Panoramas et Synthèses 9. Société Mathématique de France, Paris, 2000.
[53] W. Craig and C. Sulem, Numerical simulation of gravity waves, J. Comput. Phys. 108(1) (1993), 73–83.
[54] W. Craig and C.E. Wayne, Newton's method and periodic solutions of nonlinear wave equation. Comm. Pure Appl. Math. 46 (1993), 1409–1498.
[55] J.M. Delort, Periodic solutions of nonlinear Schrödinger equations: a para-differential approach. Analysis and PDEs 4(5) (2011), 639–676.
[56] H. Eliasson, Perturbations of stable invariant tori for hamiltonian systems. Annali della Scuola Normale Superiore di Pisa 15(1) (1988), 115–147.
[57] H. Eliasson, Almost Reducibility for the Quasi-Periodic Linear Wave Equation. The conference "In Memory of Jean-Christophe Yoccoz", 2017. https://www.college-de-france.fr/site/en-pierre-louis-lions/symposium-2017-05-29-10h00.htm
[58] H. Eliasson, B. Grébert, and S. Kuksin, Almost reducibility of the linear wave equation, in preparation.
[59] H. Eliasson, B. Grébert, and S. Kuksin, KAM for the nonlinear beam equation. Geom. Funct. Anal. 26 (2016), 1588–1715.
[60] L.H. Eliasson and S. Kuksin, On reducibility of Schrödinger equations with quasiperiodic in time potentials. Comm. Math. Phys. 286 (2009), 125–135.
[61] H. Eliasson and S. Kuksin, KAM for non-linear Schrödinger equation. Annals of Math. 172 (2010), 371–435.
[62] I. Ekeland, Convexity Methods in Hamiltonian Mechanics, Ergebnisse der Mathematik und ihrer Grenzgebiete (3) 19. Springer, Berlin, 1990, x+247.
[63] I. Ekeland and E. Séré, A surjection theorem for singular perturbations with loss of derivatives. (2019), to appear in J. Eur. Math. Soc. (JEMS), arXiv:1811.07568v2.
[64] J. Feldman, H. Knörrer, and E. Trubowitz, Perturbatively unstable eigenvalues of a periodic Schrödinger operator, Comment. Math. Helv. 4 (1991), 557–579.
[65] R. Feola and M. Procesi, Quasi-periodic solutions for fully nonlinear forced reversible Schrödinger equations. J. Diff. Eq. 259(7) (2015), 3389–3447.
[66] R. Feola and M. Procesi, KAM for quasi-linear autonomous NLS, preprint 2018, arXiv:1705.07287.
[67] J. Féjoz, Démonstration du théorème d'Arnold sur la stabilité du système planétaire (d'après M. Herman). Ergod. Theory Dyn. Syst. 24 (2004), 162.
[68] E. Fontich, R. De la Llave, and Y. Sire, Construction of invariant whiskered tori by a parameterization method. Part II: Quasi-periodic and almost periodic breathers in coupled map lattices. J. Differ. Equ. 259(6) (2015), 2180–2279.
[69] J. Fröhlich and T. Spencer, Absence of diffusion in the Anderson tight binding model for large disorder or low energy. Comm. Math. Phys. 88(2) (1983), 151–184.
[70] J. Geng and W. Hong, Invariant tori of full dimension for second KdV equations with the external parameters. J. Dyn. Differ. Equ. 29(4) (2017), 1325–1354.
[71] J. Geng and J. You, A KAM theorem for one dimensional Schrödinger equation with periodic boundary conditions. J. Differ. Equ. 209(1) (2005), 1–56.
[72] J. Geng and J. You, KAM tori for higher dimensional beam equations with constant potentials. Nonlinearity 19(10) (2006), 2405–2423.
[73] J. Geng and J. You, A KAM theorem for Hamiltonian partial differential equations in higher dimensional spaces. Comm. Math. Phys. 262 (2006), 343–372.
[74] J. Geng and X. Xu, Almost periodic solutions of one dimensional Schrödinger equation with the external parameters. J. Dyn. Differ. Equ. 25(2) (2013), 435–450.
[75] J. Geng, X. Xu, and J. You, An infinite dimensional KAM theorem and its application to the two dimensional cubic Schrödinger equation. Adv. Math. 226 (2011), 5361–5402.
[76] G. Gentile and M. Procesi, Periodic solutions for a class of nonlinear partial differential equations in higher dimension. Comm. Math. Phys. 3 (2009), 863–906.
[77] F. Giuliani, Quasi-periodic solutions for quasi-linear generalized KdV equations. J. Differ. Equ. 262(10) (2017), 5052–5132.
[78] S. Graff, On the conservation of hyperbolic invariant tori for Hamiltonian systems. J. Differ. Equ. 15 (1974), 1–69.
[79] B. Grébert and T. Kappeler, The defocusing NLS equation and its normal form. EMS Series of Lectures in Mathematics, 2014, x+166. ISBN: 978-3-03719-131-6.
[80] B. Grébert and E. Paturel, KAM for the Klein-Gordon equation on S^d, Bollettino dell'Unione Matematica Italiana (BUMI) 9(2) (2016), 237–288.
[81] B. Grébert and E. Paturel, On reducibility of quantum harmonic oscillators on R^d with a quasi-periodic in time potential, Annales de la Faculté des sciences de Toulouse: Mathématiques, Ser. 6, 28(5) (2019), 977–1014.
[82] B. Grébert and L. Thomann, KAM for the quantum harmonic oscillator. Comm. Math. Phys. 307(2) (2011), 383–427.
[83] M. Guardia, E. Haus, and M. Procesi, Growth of Sobolev norms for the analytic NLS on T^2. Adv. Math. 301 (2016), 615–692.
[84] M. Guardia and V. Kaloshin, Growth of Sobolev norms in the cubic defocusing nonlinear Schrödinger equation, J. Eur. Math. Soc. (JEMS) 17(1) (2015), 71–149.
[85] R.S. Hamilton, The inverse function theorem of Nash and Moser. Bull. A.M.S. 7 (1982), 65–222.
[86] M. Herman, Inégalités "a priori" pour des tores lagrangiens invariants par des difféomorphismes symplectiques. Publ. Math. Inst. Hautes Étud. Sci. 70 (1989), 47–101.
[87] M. Herman, Non existence of Lagrangian graphs, available online in Archive Michel Herman: www.college-de-france.fr.
[88] L. Hörmander, The boundary problems of physical geodesy. Arch. Ration. Mech. Anal. 62(1) (1976), 1–52.
[89] L. Hörmander, On the Nash-Moser implicit function theorem. Ann. Acad. Sci. Fenn. Ser. A I Math. 10 (1985), 255–259.
[90] H. Ito, Convergence of Birkhoff normal forms for integrable systems. Comment. Math. Helv. 64(3) (1989), 412–46.
[91] G. Iooss, P. Plotnikov, and J. Toland, Standing waves on an infinitely deep perfect fluid under gravity. Arch. Ration. Mech. Anal. 177(3) (2005), 367–478.
[92] G. Iooss and P. Plotnikov, Small divisor problem in the theory of three-dimensional water gravity waves. Mem. Am. Math. Soc. 200(940) (2009), viii+128.
[93] J. Liu and X. Yuan, A KAM theorem for Hamiltonian partial differential equations with unbounded perturbations. Comm. Math. Phys. 307(3) (2011), 629–673.
[94] S. Lojasiewicz and E. Zehnder, An inverse function theorem in Fréchet-spaces. J. Funct. Anal. 33 (1979), 165–174.
[95] T. Kappeler and Z. Liang, A KAM theorem for the defocusing NLS equation. J. Diff. Equ. 252(6) (2012), 4068–4113.
[96] T. Kappeler and S. Kuksin, Strong non-resonance of Schrödinger operators and an averaging theorem. Physica D 86 (1995), 349–362.
[97] T. Kappeler and J. Pöschel, KAM and KdV, Springer-Verlag, 2003, xiv+279. ISBN: 3-540-02234-1.
[98] T. Kappeler and P. Topalov, Global well-posedness of mKdV in L^2(T, R). Comm. Partial Differ. Equ. 30(1–3) (2005), 435–449.
[99] A.N. Kolmogorov, Preservation of conditionally periodic motions for a small change in Hamilton's function (Russian). Dokl. Akad. Nauk SSSR 98(4) (1954), 527–530.
[100] S. Kuksin, Hamiltonian perturbations of infinite-dimensional linear systems with imaginary spectrum. Funktsional. Anal. i Prilozhen. 21(3) (1987), 22–37.
[101] S. Kuksin, A KAM theorem for equations of the Korteweg-de Vries type. Rev. Math. Phys. 10(3) (1998), 1–64.
[102] S. Kuksin, Analysis of Hamiltonian PDEs, Oxford University Press, Oxford, 2000.
[103] S. Kuksin and J. Pöschel, Invariant Cantor manifolds of quasi-periodic oscillations for a nonlinear Schrödinger equation, Ann. of Math. 143 (1996), 149–179.
[104] J. Moser, A new technique for the construction of solutions of nonlinear differential equations, Proc. Nat. Acad. Sci. 47 (1961), 1824–1831.
[105] J. Moser, On invariant curves of area-preserving mappings of an annulus, Nachr. Akad. Wiss. Gött. Math.-Phys. Kl. II (1962), 1–20.
[106] J. Moser, A rapidly convergent iteration method and non-linear partial differential equations I-II, Ann. Scuola Norm. Sup. Pisa (3) 20 (1966), 265–315, 499–535.
[107] J. Moser, Convergent series expansions for quasi-periodic motions, Math. Ann. 169 (1967), 136–176.
[108] J. Nash, The embedding problem for Riemannian manifolds, Ann. of Math. 63 (1956), 20–63.
[109] L. Nirenberg, Topics in Nonlinear Functional Analysis, Courant Lecture Notes in Mathematics 6. New York University, Courant Institute of Mathematical Sciences, New York; American Mathematical Society, 2001, xii+145. Revised reprint of the 1974 original. ISBN: 0-8218-2819-3.
[110] P. Plotnikov and J. Toland, Nash-Moser theory for standing water waves, Arch. Ration. Mech. Anal. 159(1) (2001), 1–83.
[111] J. Pöschel, On Elliptic Lower Dimensional Tori in Hamiltonian Systems. Math. Z. 202 (1989), 559–608.
[112] J. Pöschel, A KAM-Theorem for some nonlinear PDEs. Ann. Sc. Norm. Pisa 23 (1996), 119–148.
[113] J. Pöschel, Quasi-periodic solutions for a nonlinear wave equation. Comment. Math. Helv. 71(2) (1996), 269–296.
[114] J. Pöschel, On the construction of almost periodic solutions for a nonlinear Schrödinger equation. Ergod. Theory Dyn. Syst. 22(5) (2002), 1537–1549.
[115] M. Procesi, A normal form for beam and non-local nonlinear Schrödinger equations. J. Phys. A 43(43) (2010).
[116] M. Procesi and C. Procesi, A normal form for the Schrödinger equation with analytic nonlinearities. Comm. Math. Phys. 312 (2012), 501–557.
[117] C. Procesi and M. Procesi, A KAM algorithm for the completely resonant nonlinear Schrödinger equation. Adv. Math. 272 (2015), 399–470.
[118] C. Procesi and M. Procesi, Reducible quasi-periodic solutions for the Non Linear Schrödinger equation. Boll. Unione Mat. Ital. 9(2) (2016), 189–236.
[119] M. Procesi and X. Xu, Quasi-Töplitz Functions in KAM Theorem. SIAM Math. Anal. 45(4) (2013), 2148–2181.
[120] H. Rüssmann, Über das Verhalten analytischer Hamiltonscher Differentialgleichungen in der Nähe einer Gleichgewichtslösung. Math. Ann. 154 (1964), 285–300.
[121] M.B. Sevryuk, Reversible Systems, Lecture Notes in Math. 1211, Springer, 1986.
[122] J. Vey, Sur certains systèmes dynamiques séparables, Am. J. Math. 100(3) (1978), 591–614.
[123] W.M. Wang, Eigenfunction Localization for the 2D Periodic Schrödinger Operator. Int. Math. Res. Not. 8 (2011), 1804–1838.
[124] W.M. Wang, Energy supercritical nonlinear Schrödinger equations: quasi-periodic solutions. Duke Math. J. 165(6) (2016), 1129–1192.
[125] W.M. Wang, Quasi-periodic solutions for nonlinear Klein–Gordon equations, 2017, arxiv.org/abs/1609.00309.
[126] E. Wayne, Periodic and quasi-periodic solutions of nonlinear wave equations via KAM theory. Comm. Math. Phys. 127 (1990), 479–528.
[127] V. Zakharov, Stability of periodic waves of finite amplitude on the surface of a deep fluid. J. Appl. Mech. Tech. Phys. 9(2) (1968), 190–194.
[128] E. Zehnder, Generalized implicit function theorems with applications to some small divisors problems I. Comm. Pure Appl. Math. 28 (1975), 91–140.
[129] E. Zehnder, Generalized implicit function theorems with applications to some small divisors problems II. Comm. Pure Appl. Math. 29 (1976), 49–111.
[130] J. Zhang, M. Gao, and X. Yuan, KAM tori for reversible partial differential equations, Nonlinearity 24 (2011), 1189–1228.
Index

Action-angle variables, 58, 71
Admissible frequencies, 12, 52, 73
Airy equation, 35
Approximate right inverse in normal directions, 201, 271
Approximate right inverse of Dn, 280
Approximately invariant torus, 175, 344
Beam equation, 33
Birkhoff matrices, 9, 12, 65, 190
Cartan formula, 345
Chain, 125
Commutator, 18
Composition operator, 108
Decay norm, 86, 88, 321
Diophantine vector, 8, 13, 41, 45, 47, 50, 58, 59, 73, 133, 339
Eigenfunctions of Sturm–Liouville operator, 4, 5, 58
Eigenvalues of Schrödinger operator, 5, 296
Eigenvalues of self-adjoint matrices, 51, 153
Eigenvalues of Sturm–Liouville operator, 28, 30
Error function, 175, 179, 344
First Melnikov nonresonance conditions, 9
First approximate solution, 60, 173
Genericity result, 16, 295
Hamiltonian operator, 84
Hamiltonian PDE, 311
Homological equations, 192, 218
Implicit function theorem, 19
Integer part, 18
Interpolation inequalities, 87, 105, 108
Involution, 70, 73, 312
Isotropic torus, 179, 277, 339
Klein–Gordon, 7, 29, 33
Left inverse, 325, 332
Length of a Γ-chain, 125, 132
Lie derivative, 345
Lie expansion, 24, 196, 240
Linear Schrödinger equation, 28
Linear wave equation, 1, 28
Liouville 1-form, 73, 179, 337
Main result, 13, 167
Matrix representation of linear operators, 79
Moser estimates for composition operator, 108, 181, 184, 186, 277, 278
Multiplicative potential, 1, 7, 56, 96
Multiscale proposition, 52, 115, 229, 252
Multiscale step, 122, 149
N-bad matrix, 121
N-bad parameter, 124
N-bad site, 124
N-good matrix, 121
N-good parameter, 124
N-good site, 124
N-regular site, 123
N-singular site, 123
Nash–Moser theorem, 166, 275, 290
Nonlinear Schrödinger equation, 6, 7, 28, 33, 57, 315, 318
Nonlinear wave equation, 1, 5–7, 28
Normal variables, 58, 71
Partial quotient, 17
Perturbed KdV equation, 34, 35
Phase space, 77
Quadratic Diophantine condition, 9, 51, 115, 128
Quasiperiodic solution, 1–3, 6, 14, 28, 40, 58, 338
Reducibility, 22, 26, 27, 30, 32, 37, 41
Regular site, 121
Reversible PDE, 8, 35, 39, 60, 312
Reversible solution, 165, 312
Reversible vector field, 41, 70, 312
Right inverse, 43, 95
Schrödinger operator, 2, 5, 296
Second Melnikov nonresonance conditions, 11, 221
Separation properties of bad sites, 123
Shifted normal frequencies, 62, 65, 197
Shifted tangential frequencies, 171
Singular site, 121
Smoothing operators, 115, 223, 234, 254, 259, 276
Sobolev spaces, 16, 85
Split admissible operator, 62, 63, 199, 248, 249
Sturm–Liouville operator, 28, 30
Symplectic 2-form, 70, 71
Symplectic map, 84
Symplectic matrix, 69, 79
Tame estimates, 88, 105, 108
Tangential variables, 58, 71
Torus embedding, 165, 338
Twist condition, 10
Twist matrix, 9, 65, 172
Water-waves equations, 37