AN INTRODUCTION TO STOCHASTIC PARTIAL DIFFERENTIAL EQUATIONS
John B. WALSH
The general problem is this. One is given a physical system governed by a partial differential equation. Suppose that the system is perturbed randomly, perhaps by some sort of a white noise. How does it evolve in time?

Think for example of a guitar carelessly left outdoors. If a sandstorm should blow up, the strings would be bombarded by a succession of sand grains. In calm air u(x,t), the position of one of the strings at the point x and time t, would satisfy the wave equation u_tt = u_xx. Let Ẇ(x,t) represent the intensity of the bombardment at the point x and time t. The number of grains hitting the string at a given point and time will be largely independent of the number hitting at another point and time, so that, after subtracting a mean intensity, Ẇ may be approximated by a white noise in both time and space, and the final equation is

u_tt(x,t) = u_xx(x,t) + Ẇ(x,t),

where Ẇ is a two-parameter white noise.

One peculiarity of this equation is that none of the ordinary derivatives in it exist: white noise is nondifferentiable. However, one may rewrite the equation as an integral equation, or, in other words, in a weak form, and then show that in this form there is a solution, which is a continuous, though nondifferentiable, function. However, if we deal with a drumhead, say, rather than a string, the solution turns out to be a distribution, not a function. This is one of the technical barriers in the subject: in higher dimensions most solutions are distribution-valued, and this is not surprising in view of the behavior of the equations involved; it has generated a number of approaches and a fairly extensive use of functional analysis.
Our aim is to study a certain number of such stochastic partial differential equations, to see how they arise, to see how their solutions behave, and to examine some techniques of solution.
We shall concentrate more on parabolic equations than on hyperbolic or elliptic, and on equations in which the perturbation comes from something akin to white noise. In particular, one class we shall study in detail arises from systems of branching diffusions. These lead to linear parabolic equations whose solutions are generalized Ornstein-Uhlenbeck processes, and include those studied by Ito, Holley and Stroock, Dawson, and others. Another related class of equations comes from certain neurophysiological models. Our point of view is more real-variable oriented than the usual theory, and, we hope, slightly more intuitive.
We regard white noise as a measure W(dx, dt) on Euclidean space, and construct stochastic integrals of the form ∫ f(x,t) dW directly, following Ito's original construction. This is a two-parameter integral, but it is a particularly simple one, known in two-parameter theory as a "weakly-adapted integral". We generalize it to include integrals with respect to martingale measures, and solve the equations in terms of these integrals. We will need a certain amount of machinery: nuclear spaces, some elementary Sobolev space theory, and weak convergence of stochastic processes with values in Schwartz space. We develop this as we need it.
For instance, we treat SPDE's in one space dimension in Chapter 3, as soon as we have developed the integral, but solutions in higher dimensions are generally Schwartz distributions, so we develop some elementary distribution theory in Chapter 4 before treating higher dimensional equations in Chapter 5. In the same way, we treat weak convergence of 𝒮′-valued processes in Chapter 6 before treating the limits of infinite particle systems and the Brownian density process in Chapter 8. After comparing the small part of the subject we can cover with the much larger mass we can't, we had a momentary desire to retitle our notes:
"An
Introduction to an Introduction to Stochastic Partial Differential Equations";
which means that the introduction to the notes, which you are now reading, would be the introduction to "An Introduction to an Introduction ...", but no. It is not good to begin with an infinite regression. Let's just keep in mind that this is an introduction, not a survey. While we will forego much of the recent work on the subject, what we do cover is mathematically useful and, who knows? perhaps even physically interesting.
CHAPTER ONE

WHITE NOISE AND THE BROWNIAN SHEET

Let (E, ℰ, ν) be a σ-finite measure space. A white noise based on ν is a random set function W on the sets A ∈ ℰ of finite ν-measure such that

(i) W(A) is a N(0, ν(A)) random variable;

(ii) if A ∩ B = ∅, then W(A) and W(B) are independent and W(A ∪ B) = W(A) + W(B).

In most cases, E will be a Euclidean space and ν will be Lebesgue measure. To see that such a process exists, think of it as a Gaussian process indexed by the sets of ℰ of finite measure: {W(A), A ∈ ℰ, ν(A) < ∞}. From (i) and (ii) this must be a mean-zero Gaussian process with covariance function C given by

C(A,B) = E{W(A) W(B)} = ν(A ∩ B).

By a general theorem on Gaussian processes, if C is positive definite, there exists a Gaussian process with mean zero and covariance function C. Now let A_1,...,A_n be in ℰ and let a_1,...,a_n be real numbers. Then

Σ_{i,j} a_i a_j C(A_i, A_j) = Σ_{i,j} a_i a_j ∫ I_{A_i}(x) I_{A_j}(x) dx = ∫ (Σ_i a_i I_{A_i}(x))² dx ≥ 0.

Thus C is positive definite, so that there exists a probability space (Ω, 𝓕, P) and a mean-zero Gaussian process {W(A)} on (Ω, 𝓕, P) such that W satisfies (i) and (ii) above.
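A minimal numerical sketch (my own illustration, not from the text): a white noise based on Lebesgue measure on the unit square can be approximated by assigning an independent N(0, area) variable to each cell of a fine grid. For sets that are unions of cells, properties (i) and (ii) then hold exactly at the grid level; the variance check below uses ν(A) = 1/2 for the left half of the square.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                  # the grid has N x N cells
cell_area = (1.0 / N) ** 2
cells = rng.normal(0.0, np.sqrt(cell_area), size=(N, N))

def W(mask):
    """White noise of a grid set; mask is an N x N boolean array of cells."""
    return cells[mask].sum()

cols = np.broadcast_to(np.arange(N), (N, N))
A = cols < N // 2                        # left half,  nu(A) = 1/2
B = ~A                                   # right half, disjoint from A

# Additivity (ii) holds exactly by construction; (i) holds cell-wise.
assert abs(W(A | B) - (W(A) + W(B))) < 1e-12
```

Resampling the grid many times and computing the empirical variance of W(A) recovers ν(A), which is how one can sanity-check property (i) numerically.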
There are other ways of defining white noise. In case E = R and ν = Lebesgue measure, it is often informally described as the "derivative of Brownian motion". Such a description is possible in higher dimensions too, but it involves the Brownian sheet rather than Brownian motion.

Let us specialize to the case E = R^n_+ = {(t_1,...,t_n): t_i ≥ 0, i = 1,...,n}, ν = Lebesgue measure. If t = (t_1,...,t_n) ∈ R^n_+, let (0,t] = (0,t_1] × ... × (0,t_n]. The Brownian sheet on R^n_+ is the process {W_t, t ∈ R^n_+} defined by W_t = W{(0,t]}. This is a mean-zero Gaussian process. If s = (s_1,...,s_n) and t = (t_1,...,t_n), its covariance can be computed by the usual formula
(1.1)  E{W_s W_t} = (s_1 ∧ t_1) ··· (s_n ∧ t_n).

If we regard W as a measure, W_t is its distribution function. Notice that we can recover the white noise W on R^n_+ from W_t, for if R is a rectangle, W(R) is given by additivity (if n = 2 and 0 < u < s, 0 < v < t,

W((u,v),(s,t)] = W_st − W_sv − W_ut + W_uv ).

If A is a finite union of rectangles, W(A) can be computed by additivity, and a general Borel set A of finite measure can be approximated by finite unions of rectangles A_n in such a way that

E{(W(A) − W(A_n))²} = ν(A − A_n) + ν(A_n − A) → 0.

Interestingly, the Brownian sheet was first introduced by a statistician, J. Kitagawa, in 1951, in order to do analysis in continuous time.
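A small sanity check (my own illustration, not from the text): using only the covariance formula (1.1), one can verify that the rectangle increment W_st − W_sv − W_ut + W_uv has variance (s−u)(t−v), the Lebesgue measure of the rectangle, exactly as it must, since the increment equals W of that rectangle.

```python
from itertools import product

def cov(p, q):
    # Formula (1.1) with n = 2: E{W_p W_q} = (p1 ^ q1)(p2 ^ q2).
    return min(p[0], q[0]) * min(p[1], q[1])

def increment_variance(u, v, s, t):
    # Corners of the rectangle with their inclusion-exclusion signs.
    corners = [((s, t), +1), ((s, v), -1), ((u, t), -1), ((u, v), +1)]
    return sum(a * b * cov(p, q) for (p, a), (q, b) in product(corners, corners))

u, v, s, t = 0.3, 0.5, 1.2, 2.0
assert abs(increment_variance(u, v, s, t) - (s - u) * (t - v)) < 1e-12
```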
To get some idea of what the Brownian sheet looks like, let's first consider the case n = 2, ν = Lebesgue measure, and look at its behavior along some curves in R²_+. Notice that W vanishes on the axes.

1) If s = s_0 > 0 is fixed, {W_{s_0,t}, t ≥ 0} is a mean-zero Gaussian process with covariance function C(t,t') = s_0 (t ∧ t'): it is a Brownian motion of variance s_0.

2) Along the hyperbola st = 1, let X_t = W_{e^t, e^{−t}}. Then {X_t, −∞ < t < ∞} is an Ornstein-Uhlenbeck process, i.e. a stationary Gaussian process with mean zero and covariance function

C(s,t) = E{W_{e^s, e^{−s}} W_{e^t, e^{−t}}} = e^{−|s−t|}.

3) Consider the process W along the diagonal: M_t = W_{tt} is a martingale, and even a process of independent increments, but it is not a Brownian motion, for its increments are not stationary.

4) Just as in one parameter, there are transformations which take one Brownian sheet into another. For a, b > 0 and (s_0,t_0) ∈ R²_+, let

Scaling:      A_st = (1/(ab)) W_{a²s, b²t};
Inversion:    C_st = st W_{1/s, 1/t};   D_st = s W_{1/s, t};
Translation:  E_st = W_{s_0+s, t_0+t} − W_{s_0+s, t_0} − W_{s_0, t_0+t} + W_{s_0, t_0}.
Then A, C, D, and E are Brownian sheets, and moreover, E is independent of 𝓕_{s_0 t_0} = σ{W_uv : u ≤ s_0, v ≤ t_0}.
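The scaling and inversion claims in item 4 can be checked directly from (1.1) (my own illustration): if W has covariance (s ∧ s')(t ∧ t'), then A_st and D_st as defined above have exactly the same covariance, so they are again Brownian sheets.

```python
def sheet_cov(s, t, s2, t2):
    # Covariance (1.1) of the Brownian sheet, n = 2.
    return min(s, s2) * min(t, t2)

def scaled_cov(s, t, s2, t2, a, b):
    # Covariance of A_st = (1/(ab)) W_{a^2 s, b^2 t}.
    return (1.0 / (a * b)) ** 2 * sheet_cov(a * a * s, b * b * t, a * a * s2, b * b * t2)

def inverted_cov(s, t, s2, t2):
    # Covariance of D_st = s W_{1/s, t}.
    return s * s2 * sheet_cov(1.0 / s, t, 1.0 / s2, t2)

(s, t), (s2, t2) = (0.4, 1.1), (2.5, 0.7)
assert abs(scaled_cov(s, t, s2, t2, a=2.0, b=0.5) - sheet_cov(s, t, s2, t2)) < 1e-12
assert abs(inverted_cov(s, t, s2, t2) - sheet_cov(s, t, s2, t2)) < 1e-12
```

Since a mean-zero Gaussian process is determined by its covariance, matching covariances is enough to identify the transformed processes as Brownian sheets.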
⟨M(A), M(B ∪ C)⟩_t = ⟨M(A), M(B)⟩_t + ⟨M(A), M(C)⟩_t = Q_t(A,B) + Q_t(A,C).

Moreover, by the general theory,

|Q_t(A,B)| ≤ Q_t(A,A)^{1/2} Q_t(B,B)^{1/2}.

A set A × B × (s,t] ⊂ E × E × R_+ will be called a rectangle. Define a set function Q on rectangles by

Q(A × B × (s,t]) = Q_t(A,B) − Q_s(A,B),

and extend Q by additivity to finite disjoint unions of rectangles, i.e. if A_i × B_i × (s_i,t_i] are disjoint, i = 1,...,n, set

(2.2)  Q(∪_{i=1}^n A_i × B_i × (s_i,t_i]) = Σ_{i=1}^n (Q_{t_i}(A_i,B_i) − Q_{s_i}(A_i,B_i)).
Exercise 2.3. Verify that Q is well-defined, i.e. if

A = ∪_{i=1}^n A_i × B_i × (s_i,t_i] = ∪_{j=1}^m A'_j × B'_j × (s'_j,t'_j],

each representation gives the same value for Q(A) in (2.2). (Hint: use biadditivity.)

If a_1,...,a_n ∈ R and if A_1,...,A_n ∈ ℰ are disjoint, then for any s < t

(2.3)  Σ_{i=1}^n Σ_{j=1}^n a_i a_j Q(A_i × A_j × (s,t]) ≥ 0,

for the sum is

Σ_{i,j} a_i a_j (⟨M(A_i), M(A_j)⟩_t − ⟨M(A_i), M(A_j)⟩_s) = ⟨Σ_i a_i M(A_i)⟩_t − ⟨Σ_i a_i M(A_i)⟩_s ≥ 0.
A signed measure K(dx dy ds) on E × E × 𝓑 is positive definite if for each bounded measurable function f for which the integral makes sense,

(2.4)  ∫_{E×E×R_+} f(x,s) f(y,s) K(dx dy ds) ≥ 0.

For such a positive definite signed measure K, define

(f,g)_K = ∫_{E×E×R_+} f(x,s) g(y,s) K(dx dy ds).

Note that (f,f)_K ≥ 0 by (2.4).

Exercise 2.4. Suppose K is symmetric in x and y. Prove Schwarz's and Minkowski's inequalities:

(f,g)_K ≤ (f,f)_K^{1/2} (g,g)_K^{1/2}   and   (f+g, f+g)_K^{1/2} ≤ (f,f)_K^{1/2} + (g,g)_K^{1/2}.
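A finite-dimensional sketch of Exercise 2.4 (my own illustration, not from the text): replace the positive definite measure K by a symmetric positive semi-definite matrix and (f,g)_K by the quadratic form fᵀKg; the Schwarz and Minkowski inequalities can then be checked directly.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
K = A @ A.T                      # symmetric positive semi-definite "kernel"

def inner(f, g):
    """Discrete analogue of (f, g)_K."""
    return float(f @ K @ g)

f, g = rng.normal(size=6), rng.normal(size=6)
schwarz_ok = inner(f, g) <= np.sqrt(inner(f, f) * inner(g, g)) + 1e-9
minkowski_ok = (np.sqrt(inner(f + g, f + g))
                <= np.sqrt(inner(f, f)) + np.sqrt(inner(g, g)) + 1e-9)
assert schwarz_ok and minkowski_ok
```

The proofs in the exercise are exactly the classical arguments for a semi-inner product, which is what the positive semi-definiteness of K provides.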
It is not always possible to extend Q to a measure on E × E × 𝓑, where 𝓑 = Borel sets on R_+, as the example at the end of the chapter shows. We are led to the following definition.
DEFINITION. A martingale measure M is worthy if there exists a random σ-finite measure K(Λ, ω), Λ ∈ ℰ × ℰ × 𝓑, ω ∈ Ω, such that

(i) K is positive definite and symmetric in x and y;

(ii) for fixed A, B, {K(A × B × (0,t]), t > 0} is predictable;

(iii) for all n, E{K(E_n × E_n × [0,T])} < ∞;

(iv) for any rectangle Λ, |Q(Λ)| ≤ K(Λ).

We call K the dominating measure of M.

The requirement that K be symmetric in x and y is no restriction. If not, simply replace it by K(dx dy ds) + K(dy dx ds). Apart from this, however, it is a strong condition on M. In fact, we can state with confidence that we will have no dealings with unworthy martingale measures in these notes: we will show below that worthiness holds for the two important special cases mentioned above, that is, both orthogonal martingale measures and those with nuclear covariance are worthy.
notes. If M is w o r t h y is a p o s i t i v e
set function.
restrict o u r s e l v e s is f i n i t e l y set f u n c t i o n measure. measure
with c o v a r i a t i o n
for a.e. ~.
dominated
In particular,
on E x E x ~, and the total
Orthogonal = {(x,x):
PROPOSITION supported
PROOF.
measures x ~ E},
2.1.
by A(E)
Q(A x B ×
If M is o r t h o g o n a l IQI[ E x E  A(E)) vanishes
2K,
(2.3),
and white
of E x E x B upon w h i c h
and hence
A worthy martingale
are easily
Q(,~) additive to a
to a signed
of Q s a t i s f i e s
Q w i l l be p o s i t i v e
be the d i a g o n a l
finitely
can be e x t e n d e d
can be e x t e n d e d
variation
noises
K, then K + Q
so that we can first
T h e n K + Q is a p o s i t i v e
for a.e. ~ Q(o,~)
By
measure
E is separable,
subalgebra
by the m e a s u r e
for all A E E x E x B.
d(E)
The ofield
to a c o u n t a b l e
additive
Q and d o m i n a t i n g
IQI(A) ! K(A)
definite. characterized.
Let
of E.
measure
is o r t h o g o n a l
iff Q is
x R+
[0,t]) = t
and A ~ B = ~, this v a n i s h e s x R+] = 0, i.e.
supp Q c d(E)
hence x R+
for all d i s j o i n t A and B, M is e v i d e n t l y
.
Conversely,
orthogonal.
if this Q.E.D.
STOCHASTIC INTEGRALS

We are only going to do the L²-theory here, the bare bones, so to speak. It is possible to extend our integrals further, but since we won't need the extensions in this course, we will leave them to our readers.

Let M be a worthy martingale measure on the Lusin space (E, ℰ), and let Q_M and K_M be its covariation and dominating measures respectively. Our definition of the stochastic integral may look unfamiliar at first, but we are merely following Ito's construction in a different setting.

In the classical case, one constructs the stochastic integral as a process rather than as a random variable. That is, one constructs {∫_0^t f dB, t ≥ 0} simultaneously for all t; one can then say that the integral is a martingale, for instance. The analogue of "martingale" in our setting is "martingale measure". Accordingly, we will define our stochastic integral as a martingale measure.

Recall that we are restricting ourselves to a finite time interval [0,T] and to one of the E_n, so that M is finite. As usual, we will first define the integral for elementary functions, then for simple functions, and then for all functions in a certain class by a functional completion argument.
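As a preview of the construction, here is a toy grid version (my own illustration, not from the text) of the integral of an elementary function f(x,s) = X 1_{(a,b]}(s) 1_A(x) against a space-time white noise: the integral is simply X · W(A × (a,b]), and for deterministic X its variance is X² |A| (b − a).

```python
import numpy as np

def integrate_elementary(X, A_mask, a, b, cells, dx, dt):
    """Integral of f(x,s) = X * 1_{(a,b]}(s) * 1_A(x) against grid white noise.

    cells[i, j] ~ N(0, dx*dt) is the noise mass of the (i, j) space-time cell."""
    n_t = cells.shape[1]
    times = (np.arange(n_t) + 0.5) * dt
    t_mask = (times > a) & (times <= b)
    return X * cells[np.ix_(A_mask, t_mask)].sum()

rng = np.random.default_rng(2)
nx, nt = 100, 100
dx, dt = 1.0 / nx, 1.0 / nt
A_mask = np.arange(nx) < nx // 4          # A = [0, 1/4)

vals = [integrate_elementary(2.0, A_mask, 0.25, 0.75,
                             rng.normal(0.0, np.sqrt(dx * dt), size=(nx, nt)),
                             dx, dt)
        for _ in range(400)]
# The empirical variance should be near X^2 * |A| * (b - a) = 4 * 0.25 * 0.5 = 0.5.
empirical_var = float(np.var(vals))
```

This is the discrete shadow of the isometry that the L²-theory below establishes in general.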
DEFINITION. A function f(x,s,ω) is elementary if it is of the form

(2.5)  f(x,s,ω) = X(ω) 1_{(a,b]}(s) 1_A(x),

where 0 ≤ a < b, X is bounded and 𝓕_a-measurable, and A ∈ ℰ. f is simple if it is a finite sum of elementary functions; we denote the class of simple functions by 𝒮.

DEFINITION. The predictable σ-field 𝒫 on Ω × E × R_+ is the σ-field generated by 𝒮. A function is predictable if it is 𝒫-measurable.

We define a norm ‖·‖_M on the predictable functions by

‖f‖_M = E{(|f|, |f|)_K}^{1/2}.
Note that we have used the absolute value of f to define ‖f‖_M. Let 𝒫_M be the class of all predictable f for which ‖f‖_M < ∞.

PROPOSITION 2.2. Let f ∈ 𝒫_M and let A = {(x,s): |f(x,s)| > ε}. Then

E{K(A × E × [0,T])} ≤ (1/ε) ‖f‖_M E{K(E × E × [0,T])}^{1/2}.

PROOF.
1° If A ∩ B = ∅, then ⟨M(A ∪ B)⟩_t = ⟨M(A)⟩_t + ⟨M(B)⟩_t a.s. Indeed,

⟨M(A ∪ B)⟩_t = ⟨M(A)⟩_t + ⟨M(B)⟩_t + 2⟨M(A), M(B)⟩_t,

and the last term vanishes since M is orthogonal.

2° A ⊂ B => ⟨M(A)⟩_t ≤ ⟨M(B)⟩_t.

3° s < t => ⟨M(A)⟩_s ≤ ⟨M(A)⟩_t.

Moreover, E{⟨M(·)⟩_T} is a σ-finite measure: it must be σ-finite since M_T is, and additivity follows by taking expectations in 1°.

The increasing process ⟨M(A)⟩_t is finitely additive in A for each t by 1°, but it is better than that. It is possible to construct a version which is a measure in A for each t.
THEOREM 2.7. Let {M_t(A), 𝓕_t, 0 ≤ t ≤ T, A ∈ ℰ} be an orthogonal martingale measure. Then there exists a family {ν_t(·), 0 ≤ t ≤ T} of random σ-finite measures on (E, ℰ) such that

(i) {ν_t, 0 ≤ t ≤ T} is predictable;

(ii) for all A ∈ ℰ, t → ν_t(A) is right-continuous and increasing;

(iii) P{ν_t(A) = ⟨M(A)⟩_t} = 1 for all t ≥ 0, A ∈ ℰ.

PROOF. We can reduce this to the case E ⊂ R, for E is homeomorphic to a Borel set F ⊂ R. Let h: E → F be the homeomorphism, and define M̄_t(A) = M_t(h^{−1}(A)), μ̄(A) = μ(h^{−1}(A)). If we find a ν̄_t satisfying the conclusions of the theorem and if ν̄_t(R − F) = 0, then ν_t = ν̄_t ∘ h satisfies the theorem. Thus we may assume E is a Borel subset of R.

Since M is σ-finite, there exist E_n ↑ E for which μ(E_n) < ∞. We may also assume that there are compact K_n ⊂ E_n such that μ(E_n − K_n) < 2^{−n} and K_n ⊂ K_{n+1} for all n. It is then enough to prove the theorem for each K_n. Thus we may assume E is compact in R and μ(E) < ∞.

Define F_t(x) = ⟨M(E ∩ (−∞, x])⟩_t, −∞ < x < ∞. Then there is a set of probability one on which F_t(x) ≤ F_{t'}(x') whenever x ≤ x' and t ≤ t', x, x', t, t' ∈ Q. We claim that F_t is the distribution function of a measure; this will be the desired ν_t.
CHAPTER THREE

EQUATIONS IN ONE SPACE DIMENSION

We are going to look at stochastic partial differential equations driven by white noise and similar processes. The solutions will be functions of the variables x and t, where t is the time variable and x is the space variable. There turns out to be a big difference between the case where x is one-dimensional and the case where x ∈ R^d, d ≥ 2. In the former case the solutions are typically, though not invariably, real-valued functions. They will be nondifferentiable, but are usually continuous. On the other hand, in R^d, d ≥ 2, the solutions are no longer functions, but are only generalized functions. We will need some knowledge of Schwartz distributions to handle the case d ≥ 2, but we can treat some examples in one dimension by hand, so to speak. We will do that in this chapter, and give a somewhat more general treatment later, when we treat the case d ≥ 2.
THE WAVE EQUATION

Let us return to the wave equation of Chapter One:

(3.1)  ∂²V/∂t² = ∂²V/∂x² + Ẇ,   t > 0, x ∈ R;
       V(x,0) = 0,   x ∈ R;
       ∂V/∂t(x,0) = 0,   x ∈ R.

White noise is so rough that (3.1) has no solution: any candidate for a solution will not be differentiable. However, we can rewrite it as an integral equation which will be solvable. This is called a weak form of the equation.

Assume for the sake of argument that V ∈ C^(2). We first multiply by a C^∞ function φ(x,t) of compact support and integrate over R × [0,T], where T > 0 is fixed:

∫_0^T ∫_R [V_tt(x,t) − V_xx(x,t)] φ(x,t) dx dt = ∫_0^T ∫_R φ(x,t) Ẇ(x,t) dx dt.
Integrate by parts twice on the left-hand side. Now φ is of compact support in x, but it may not vanish at t = 0 and t = T, so we will get some boundary terms:

∫_0^T ∫_R V(x,t)[φ_tt(x,t) − φ_xx(x,t)] dx dt + ∫_R [φ(x,·) V_t(x,·) − φ_t(x,·) V(x,·)] |_0^T dx = ∫_0^T ∫_R φ(x,t) Ẇ(x,t) dx dt.
If φ(x,T) = φ_t(x,T) = 0, the boundary terms will drop out because of the initial conditions. This leads us to the following.

DEFINITION. We say that V is a weak solution of (3.1) providing that V(x,t) is locally integrable and that for all T > 0 and all C^∞ functions φ(x,t) of compact support for which φ(x,T) = φ_t(x,T) = 0, we have

(3.2)  ∫_0^T ∫_R V(x,t)[φ_tt(x,t) − φ_xx(x,t)] dx dt = ∫_0^T ∫_R φ dW.
The above argument is a little unsatisfying; it indicates that if V satisfies (3.1) in some sense, it should satisfy (3.2), while it is really the converse we want. We leave it as an exercise to verify that if we replace Ẇ by a smooth function f in (3.1) and (3.2), and if V satisfies (3.2) and is in C^(2), then it does in fact satisfy (3.1).

THEOREM 3.1. There exists a unique continuous solution to (3.2), namely V(x,t) = ½ W̃(t−x, t+x), where W̃ is the modified Brownian sheet of Chapter One.
PROOF. Uniqueness: if V_1 and V_2 are both continuous and satisfy (3.2), then their difference U = V_2 − V_1 satisfies

∫∫ U(x,t)[φ_tt(x,t) − φ_xx(x,t)] dx dt = 0.

Let f(x,t) be a C^∞ function of compact support in R × (0,T). Notice that there exists a φ ∈ C^∞ with φ(x,T) = φ_t(x,T) = 0 such that φ_tt − φ_xx = f; it can be written down in terms of C(x,t; x_0,t_0), the indicator function of the backward cone with vertex (x_0,t_0). Thus ∫∫ U f dx dt = 0 for all such f, and U ≡ 0.

For the existence, pass to the rotated coordinates u = t − x, v = t + x. In these coordinates V satisfies (3.2) iff the following vanishes:

∫∫_{u+v>0} ½ W̃(u,v) φ_{uv}(u,v) du dv − ∫∫_{u+v>0} φ(u,v) W̃(du dv).

Writing W̃(u,v) as an integral of W̃(du' dv') and interchanging the order of integration by the stochastic Fubini theorem of Chapter Two, this becomes

∫∫_{u+v>0} [∫∫_{u'≥u, v'≥v} φ_{u'v'}(u',v') du' dv' − φ(u,v)] W̃(du dv).

But the term in brackets vanishes identically, for φ has compact support. QED
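The exercise mentioned before Theorem 3.1 (replace Ẇ by a smooth f) can be checked on a concrete example; this is my own illustration, not from the text. With forcing f(y,s) = sin y and zero initial data, the classical d'Alembert formula

V(x,t) = ½ ∫_0^t ∫_{x−(t−s)}^{x+(t−s)} f(y,s) dy ds

gives V(x,t) = sin(x)(1 − cos t), which indeed satisfies V_tt = V_xx + f with V(x,0) = V_t(x,0) = 0.

```python
import numpy as np

def dalembert(f, x, t, n_s=1000, n_y=1000):
    """Midpoint-rule evaluation of (1/2) * the double integral of f over the
    backward light cone from (x, t)."""
    ds = t / n_s
    total = 0.0
    for s in (np.arange(n_s) + 0.5) * ds:
        half_width = t - s
        dy = 2.0 * half_width / n_y
        y = x - half_width + (np.arange(n_y) + 0.5) * dy
        total += f(y, s).sum() * dy * ds
    return 0.5 * total

x0, t0 = 0.7, 1.3
numeric = dalembert(lambda y, s: np.sin(y), x0, t0)
exact = np.sin(x0) * (1.0 - np.cos(t0))
assert abs(numeric - exact) < 1e-3
```

The stochastic solution ½ W̃(t−x, t+x) is the same cone integral with f dy ds replaced by the white noise measure.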
The literature of two-parameter processes contains studies of stochastic differential equations of the form

(3.4)  dV(u,v) = f(V) dW(u,v) + g(V) du dv,

where V and W are two-parameter processes, and dV and dW represent two-dimensional increments, which we would write V(du dv) and W(du dv). These equations rotate into the non-linear wave equation

V_tt(x,t) = V_xx(x,t) + f(V) Ẇ(x,t) + g(V)

in the region {(x,t): t > 0, t > |x|}.
Then, by Hölder's inequality,

F_n(x,t) ≤ C (∫_0^t ∫_0^L G_{t−s}(x,y)^{2εq} dy ds)^{p/2q} ∫_0^t ∫_0^L E{|v^n(y,s) − v^{n−1}(y,s)|^p} G_{t−s}(x,y)^{2(1−ε)} dy ds.

In this case 2εq < 3, so the first factor is bounded; by (3.13) the expression is at most

C ∫_0^t H_{n−1}(s) (t−s)^a ds,

where a > −1 and C is a constant. Thus

(3.14)  H_n(t) ≤ C ∫_0^t H_{n−1}(s) (t−s)^a ds,   t ≥ 0,

for some a > −1 and C > 0. Notice that if H_{n−1} is bounded on an interval [0,T], so is H_n.
LEMMA 3.3. Let (h_n) be positive functions on [0,T] such that h_0(t) ≤ 1 and, for some a > −1 and constant C_1,

h_n(t) ≤ C_1 ∫_0^t h_{n−1}(s) (t−s)^a ds,   n = 1, 2, ....

Then there is a constant C and an integer k ≥ 1 such that for each n and t ∈ [0,T],

(3.15)  h_{n+mk}(t) ≤ C^m ∫_0^t h_n(s) (t−s)^{m−1}/(m−1)! ds,   m = 1, 2, ....
Let us accept the lemma for the moment. It applies to the H_n, and implies that for each n, Σ_{m=0}^∞ (H_{n+mk}(t))^{1/p} converges uniformly on compacts, and therefore so does Σ_{n=0}^∞ (H_n(t))^{1/p}. Thus v^n(x,t) converges in L^p, and the convergence is uniform in [0,L] × [0,T] for any T > 0. In particular, v^n converges in L². Let V(x,t) = lim v^n(x,t).

It remains to show that V satisfies (3.9). (Note that it is easy to show that V satisfies (3.11); this follows from (3.12). However, we would still have to show that (3.11) implies (3.9), so we may as well show (3.9) directly.)
Consider

(3.16)  ∫_0^L (v^n(x,t) − V_0(x)) φ(x) dx − ∫_0^t ∫_0^L v^n(x,s)[φ''(x) − φ(x)] dx ds − ∫_0^t ∫_0^L f(v^{n−1}(y,s)) φ(y) W(dy ds).

By (3.12) this is

= ∫_0^L ∫_0^t ∫_0^L f(v^{n−1}(y,s)) G_{t−s}(x,y) W(dy ds) φ(x) dx
+ ∫_0^L [∫_0^L G_t(x,y) V_0(y) dy − V_0(x)] φ(x) dx
− ∫_0^t ∫_0^L [∫_0^L G_u(x,y) V_0(y) dy + ∫_0^u ∫_0^L f(v^{n−1}(y,s)) G_{u−s}(x,y) W(dy ds)] (φ''(x) − φ(x)) dx du
− ∫_0^t ∫_0^L f(v^{n−1}(y,s)) φ(y) W(dy ds).

Integrate first over x and collect terms:

= ∫_0^t ∫_0^L f(v^{n−1}(y,s)) [G_{t−s}(φ,y) − ∫_s^t G_{u−s}(φ'' − φ, y) du − φ(y)] W(dy ds)
+ ∫_0^L [G_t(φ,y) − φ(y) − ∫_0^t G_u(φ'' − φ, y) du] V_0(y) dy.

But this equals zero, since both terms in square brackets vanish by (3.8). Thus (3.16) vanishes for each n.
Let n → ∞ in (3.16). We claim it vanishes in the limit too. Indeed, v^n(x,s) → V(x,s) in L², uniformly in [0,L] × [0,T] for each T > 0, and, thanks to the Lipschitz conditions, f(v^{n−1}(y,s)) also converges in L² to f(V(y,s)). It follows that the first two integrals in (3.16) converge. So does the stochastic integral, for

E{(∫_0^t ∫_0^L (f(V(y,s)) − f(v^{n−1}(y,s))) φ(y) W(dy ds))²} ≤ K ∫_0^t ∫_0^L E{(V(y,s) − v^{n−1}(y,s))²} φ²(y) dy ds,

which tends to zero. It follows that (3.16) still vanishes if we replace v^n and v^{n−1} by V. This gives us (3.9). Q.E.D.
We must now prove the lemma.

PROOF (of Lemma 3.3). If a ≥ 0, take k = 1 and C = C_1. If −1 < a < 0,

h_n(t) ≤ C_1² ∫_0^t h_{n−2}(u) (∫_u^t (t−s)^a (s−u)^a ds) du.

If a = −1 + ε, the inner integral is bounded above by

2 ((t−u)/2)^{−1+ε} ∫_0^{(t−u)/2} v^{−1+ε} dv ≤ (4/ε) (t−u)^{−1+2ε},

so

h_n(t) ≤ (4/ε) C_1² ∫_0^t h_{n−2}(s) (t−s)^{−1+2ε} ds.

If 2ε ≥ 1 we stop and take k = 2 and C = (4/ε) C_1². Otherwise we continue:

h_n(t) ≤ (16/ε²) C_1^4 ∫_0^t h_{n−4}(s) (t−s)^{−1+4ε} ds,

and so on, until we get (t−s) to a positive power. When this happens, we have

h_n(t) ≤ C ∫_0^t h_{n−k}(s) ds.

But now (3.15) follows from this by induction. Q.E.D.
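The mechanism of Lemma 3.3 can be watched numerically (my own sketch, with made-up constants C_1 = 1/2 and a = −1/2): iterating the convolution inequality on a grid, the supremum norms of the h_n fall off rapidly, which is what makes Σ h_n^{1/p} converge.

```python
import numpy as np

T, N = 1.0, 1000
dt = T / N
t = (np.arange(N) + 0.5) * dt
C1, a = 0.5, -0.5                    # illustrative constants, a in (-1, 0)

h = np.ones(N)                       # h_0 <= 1
sups = []
for n in range(8):
    new = np.empty(N)
    for i in range(N):
        s = t[: i + 1]
        kernel = np.maximum(t[i] - s, dt / 2.0) ** a   # clamp the singularity
        new[i] = C1 * np.sum(h[: i + 1] * kernel) * dt
    h = new
    sups.append(float(h.max()))

# The sup norms decrease rapidly: roughly 1, 0.79, 0.52, 0.31, ...
assert sups[7] < sups[4] < sups[1]
```

With larger C_1 the decrease only sets in after more iterations, which is exactly why the lemma needs the integer k.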
In many cases the initial value V_0(x) will be bounded in L^p for all p, in which case V(x,t) is L^p-bounded for all p > 0. We can then show that V is actually a continuous process, and, even better, estimate its modulus of continuity.

COROLLARY 3.4. Suppose that V_0(x) is deterministic. Then for a.e. ω, (x,t) → V(x,t) is a Hölder continuous function with exponent ¼ − ε, for any ε > 0.

PROOF.
A glance at the series expansion of G_t shows that it can be written

G_t(x,y) = g_t(x,y) + H_t(x,y),

where

g_t(x,y) = (4πt)^{−1/2} e^{−(y−x)²/4t − t},

and H_t(x,y) is a smooth function of (t,x,y) on (0,L) × (0,L) × (−∞, ∞), and H_t vanishes if t < 0. By (3.11)
V(x,t) = ∫_0^L V_0(y) G_t(x,y) dy + ∫_0^t ∫_0^L f(V(y,s)) H_{t−s}(x,y) W(dy ds) + ∫_0^t ∫_0^L f(V(y,s)) g_{t−s}(x,y) W(dy ds).

The first term on the right hand side is easily seen to be a smooth function of (x,t) on (0,L) × (0,∞). The second term is basically a convolution of W with a smooth function H; it can also be shown to be smooth, and we leave the details to the reader. Denote the third term by U(x,t). We will show that U is Hölder continuous by estimating the moments of its increments and using Corollary 1.4. Now

E{|U(x+h, t+k) − U(x,t)|^n}^{1/n} ≤ E{|U(x+h, t+k) − U(x, t+k)|^n}^{1/n} + E{|U(x, t+k) − U(x,t)|^n}^{1/n}.

The basic idea is to use Burkholder's inequality to bound the moments of each of the stochastic integrals. We will estimate the two terms separately. Replacing t + k by t, we first estimate E{|U(x+h, t) − U(x,t)|^n}.
Let us give E the topology determined by the norms ‖·‖_n. (We identify H_0 with its dual H'_0, but we do not identify H_n and H_{−n} for n ≥ 1.) Then H_{−n} is the dual of H_n, and H_{−n} ⊂ H_{−m} if m < n, for since ‖f‖_m ≤ ‖f‖_n, the norm ‖·‖_{−m} is larger than ‖·‖_{−n}. Let E' = ∪_n H_{−n}. In fact we have: any linear functional f on E which is continuous in the topology of E lies in E'. For δ > 0, there is a neighborhood G of zero such that |f(x)| < δ on G, and G contains a set of the form {x: ‖x‖_n < εδ}; thus if ‖x‖_n < εδ, then |f(x)| < δ. This implies that ‖f‖_{−n} ≤ 1/ε, i.e. f ∈ H_{−n}. Conversely, if f ∈ H_{−n}, it is a linear functional on E, and it is continuous relative to ‖·‖_n, and hence continuous in the topology of E.

Note: The argument above also proves the following more general statement. Let F be a linear map of E into a metric space. Then F is continuous iff it is continuous in one of the norms ‖·‖_n.

We give E' the strong topology: a set A ⊂ E is bounded if it is bounded in each norm ‖·‖_n, i.e. if {‖x‖_n, x ∈ A} is a bounded set for each n. Define a semi-norm

p_A(f) = sup{|f(x)| : x ∈ A}.

The strong topology is generated by the semi-norms {p_A : A ⊂ E is bounded}.

Now E is not in general normable, but its topology is compatible with the metric

d(x,y) = Σ_n 2^{−n} ‖y − x‖_n / (1 + ‖y − x‖_n),

and we can speak of the completeness of E. If E is complete, then E = ∩_n H_n.
(Clearly E ⊂ ∩_n H_n. Conversely, if x ∈ ∩_n H_n, then for each n there exists x_n ∈ E such that ‖x − x_n‖_n < 2^{−n}; then ‖x − x_n‖_m ≤ ‖x − x_n‖_n < 2^{−n} for m ≤ n, which implies d(x, x_n) < 2^{−n+1}.)

Thus we have the chain of spaces

E ⊂ ... ⊂ H_2 ⊂ H_1 ⊂ H_0 ⊂ H_{−1} ⊂ H_{−2} ⊂ ... ⊂ E',

where each H_n is a Hilbert space, H_{−n} is dual to H_n, and E is dense in each H_n. If E is complete, then E = ∩_n H_n and E' = ∪_n H_{−n}.

Suppose that for all m there exists n > m such that the injection of H_n into H_m is Hilbert-Schmidt; we write ‖·‖_m <_{HS} ‖·‖_n. Then E is called a nuclear space. We will not often use the nuclearity explicitly in the sequel, but it is one of the fundamental properties of the spaces we consider.

Exercise 4.1. Suppose ‖·‖_m <_{HS} ‖·‖_n, m < n. Show that the closed unit ball in H_n is compact in H_m. (Hint: show that it is totally bounded.)
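The metric d(x,y) defined above can be handled concretely; here is a toy sketch (my own illustration) with the made-up norms ‖x‖_n = Σ_k (1+k)^n |x_k| on finitely supported sequences, truncating the sum over n. The standard properties of u ↦ u/(1+u) make d a bounded metric.

```python
def norm(x, n):
    # Toy norms on finitely supported sequences: ||x||_n = sum_k (1+k)^n |x_k|.
    return sum((1 + k) ** n * abs(v) for k, v in enumerate(x))

def d(x, y, nmax=20):
    diff = [a - b for a, b in zip(x, y)]
    return sum(2.0 ** (-n) * norm(diff, n) / (1.0 + norm(diff, n))
               for n in range(nmax))

x, y, z = [1.0, 0.0, 0.5], [0.5, 0.2, 0.0], [0.0, 0.0, 0.0]
assert d(x, y) <= d(x, z) + d(z, y) + 1e-12   # triangle inequality
assert d(x, y) == d(y, x) and d(x, x) == 0.0
assert d(x, y) < 2.0                           # each summand is below 2^{-n}
```

Each summand is a metric because u/(1+u) is increasing and subadditive, and the weights 2^{−n} make the series converge even though the individual norms blow up with n.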
REGULARIZATION

Let E be a nuclear space as above. A random linear functional is a stochastic process {X(x), x ∈ E} with values in R such that, for each x, y ∈ E and real a, b,

X(ax + by) = aX(x) + bX(y)   a.s.

THEOREM 4.1. Let X be a random linear functional on E which is continuous in probability in ‖·‖_m for some m. If ‖·‖_m <_{HS} ‖·‖_n, then X has a version with values in H_{−n}. In particular, X has a version with values in E'.

Convergence in probability is metrizable, being compatible with the metric

|||X(x)||| =def E{|X(x)| ∧ 1}.

If X is continuous in probability on E, it is continuous in ‖·‖_m for some m by our note, and there exists n such that ‖·‖_m <_{HS} ‖·‖_n. Thus we have

COROLLARY 4.2. Let X be a random linear functional which is continuous in probability on E. Then X has a version with values in E'.
PROOF (of Theorem 4.1). Let (e_k) be a CONS in (E, ‖·‖_n). We will first show that Σ X(e_k)² < ∞ a.s.

For ε > 0 there exists δ > 0 such that |||X(x)||| < ε whenever ‖x‖_m < δ. We claim that

Re E{e^{iX(x)}} ≥ 1 − 2ε − 2εδ^{−2}‖x‖_m².

Indeed, the left-hand side is greater than 1 − ½ E{X²(x) ∧ 4}, and if ‖x‖_m ≤ δ,

E{X²(x) ∧ 4} ≤ 4 E{|X(x)| ∧ 1} ≤ 4ε,

while if ‖x‖_m > δ,

E{X²(x) ∧ 4} ≤ ‖x‖_m² δ^{−2} E{X²(δx/‖x‖_m) ∧ 4} ≤ 4ε δ^{−2} ‖x‖_m².

Let us continue the trickery by letting Y_1, Y_2, ... be i.i.d. N(0, σ²) random variables independent of X, and set x = Σ_{k=1}^N Y_k e_k. Then

Re E{e^{iX(x)}} = E{Re E{exp[i Σ_1^N Y_k X(e_k)] | X}}.

But if X is given, Σ Y_k X(e_k) is conditionally a mean-zero Gaussian r.v. with variance σ² Σ X²(e_k), and the above conditional expectation is its characteristic function:

= E{e^{−(σ²/2) Σ_{k=1}^N X²(e_k)}}.

On the other hand, it also equals

E{Re E{e^{i Σ Y_k X(e_k)} | Y}} ≥ 1 − 2ε − 2δ^{−2} ε E{‖x‖_m²}
= 1 − 2ε − 2δ^{−2} ε Σ_{j,k=1}^N E{Y_j Y_k} ⟨e_j, e_k⟩_m
= 1 − 2ε − 2δ^{−2} ε σ² Σ_{k=1}^N ‖e_k‖_m².

Thus

E{e^{−(σ²/2) Σ_{k=1}^N X²(e_k)}} ≥ 1 − 2ε − 2δ^{−2} ε σ² Σ_{k=1}^N ‖e_k‖_m².

Let N → ∞ and note that the last sum is bounded, since ‖·‖_m <_{HS} ‖·‖_n. Then let σ² → 0 to see that

P{Σ_{k=1}^∞ X²(e_k) < ∞} ≥ 1 − 2ε.

Let Ω_1 = {ω: Σ_k X²(e_k, ω) < ∞}. Then P{Ω_1} = 1. Define

Y(x, ω) = Σ_k ⟨x, e_k⟩_n X(e_k, ω)  if ω ∈ Ω_1,   Y(x, ω) = 0  if ω ∈ Ω − Ω_1.

The sum is finite by the Schwarz inequality, so Y is well-defined. Moreover, Y ∈ H_{−n}, with norm

‖Y‖²_{−n} = Σ_k Y²(e_k) = Σ_k X²(e_k) < ∞.

Finally, P{Y(x) = X(x)} = 1 for each x ∈ E. Indeed, let x_N = Σ_{k=1}^N ⟨x, e_k⟩_n e_k. Clearly X(x_N) = Y(x_N) on Ω_1, and ‖x − x_N‖_m ≤ ‖x − x_N‖_n → 0. Thus

Y(x) = lim Y(x_N) = lim X(x_N) = X(x).

Note: We have followed some notes of Ito in this proof. The tricks are due to Sazonov and Yamazaki.
EXAMPLES

Let us see what the spaces E and H_n are in some special cases.

EXAMPLE 1. Let G ⊂ R^d be a bounded domain and let E_0 = 𝒟(G) be the set of C^∞ functions of compact support in G. Let ‖·‖_0 be the usual L²-norm on G and set

‖f‖²_n = Σ_{|α| ≤ n} ‖D^α f‖²_0,

where α is a multi-index of length |α| and D^α is the corresponding differential operator. If n > m + d/2, then ‖·‖_m <_{HS} ‖·‖_n by Maurin's theorem. To see why, note that if n > d/2, H_n embeds in C_b(G) by the Sobolev embedding theorem, and ‖·‖_n <_{HS} ‖·‖_{2n}. By Theorem 4.1, M_t has a version with values in H_{−2n}. In particular, M_t ∈ H_{−d−2}, and if d is odd, we have M_t ∈ H_{−d−1}. (A more delicate analysis here would show that, locally at least, M_t ∈ H_{−n} for any n > d/2.)
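A quick sanity check of the Hilbert-Schmidt condition in Example 1 (my own illustration, taking G = (0, π) and d = 1): for the basis e_k(x) = sin(kx) one has ‖e_k‖²_j proportional to k^{2j} up to lower-order terms, so the injection H_n → H_m is Hilbert-Schmidt iff Σ_k k^{2m−2n} < ∞, i.e. iff n > m + 1/2, matching Maurin's condition n > m + d/2.

```python
def hs_sum(m, n, kmax):
    # Partial sum of ||e_k / ||e_k||_n ||_m^2, with ||e_k||_j^2 ~ k^(2j):
    # each term is k^(2m - 2n).
    return sum(float(k) ** (2 * m - 2 * n) for k in range(1, kmax))

partial_small = hs_sum(0, 1, 1000)     # n = m + 1: sum of k^-2, converges
partial_big = hs_sum(0, 1, 2000)
assert partial_big - partial_small < 0.001   # the tail is tiny: convergence

diverging = hs_sum(0, 0, 2000)         # n = m: sum of 1's, diverges linearly
assert diverging >= 1999
```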
Exercise 4.2. Show that under the usual hypotheses (i.e. right continuous filtration, etc.) the process M_t, considered as a process with values in the appropriate Sobolev space, has a right continuous version. Show that it is also right continuous with values in 𝒟(G)'.
Even pathology has its degrees. The martingale measure M_t will certainly not be a differentiable or even continuous function, but it is not infinitely bad. According to the above, it is at worst a derivative of order d + 2 of an L² function or, using the embedding theorem, a derivative of order d + 3 of a continuous function. Thus a distribution in H_{−n} is "more differentiable" than a distribution in H_{−n−1}, and the statement that M takes values in H_{−n} can be regarded as a regularity property of M.

In the future we shall usually regard a distribution-valued process as having values in an appropriate nuclear space, and put off the task of deciding which H_n it lives in until we discuss the regularity of the process. As a practical matter, it is usually easier to do it this way, for it is often much simpler to verify that a process is distribution-valued, or indeed measure-valued, than to verify that it takes values in a given Sobolev space... and we shall usually leave even that to the reader.
CHAPTER FIVE

PARABOLIC EQUATIONS IN R^d

Let {M_t, F_t, t ≥ 0} be a worthy martingale measure on R^d with covariation measure Q(dx dy ds) and dominating measure K. Let K̄(A) = E{K(A)}. Assume that for some p > 0 and all T > 0

∫_{R^{2d}×[0,T]} (1+|x|^p)⁻¹ (1+|y|^p)⁻¹ K̄(dx dy ds) < ∞.

Then M_t(φ) = ∫_{R^d×[0,t]} φ(x) M(dx ds) exists for each φ ∈ S(R^d).

Let L be a uniformly elliptic self-adjoint second order differential operator with bounded smooth coefficients, and let T be a differential operator on R^d of finite order with bounded smooth coefficients. (Note that T and L operate on x, not on t.) Consider the SPDE

(5.1)  ∂V/∂t = LV + TṀ;  V(x,0) = 0.

We will clearly need to let V and M have distribution values, if only to make sense of the term TṀ. We will suppose they have values in the Schwartz space S'(R^d).

We want to cover two situations. The first is the case in which (5.1) holds in R^d. Although there are no boundary conditions as such, the fact that V_t ∈ S'(R^d) implies a boundedness condition at infinity. The second is the case in which D is a bounded domain in R^d, and homogeneous boundary conditions are imposed on ∂D.

(There is a third situation which is covered, formally at least, by (5.1), and that is the case where T is an integral operator rather than a differential operator. Suppose, for instance, that Tf(x) = g(x)∫f(y)h(y)dy for suitable functions g and h. In that case, TM_t(x) = g(x)M_t(h). Now M_t(h) is a real-valued martingale, so that (5.1) can be rewritten

dV_t = LV_t dt + g dM_t(h);  V(x,0) = 0.

This differs from (5.1) in that the driving term is a one-parameter martingale rather than a martingale measure. Its solutions have a radically different behavior from those of (5.1), and it deserves to be treated separately.)

Suppose (5.1) holds on R^d. Integrate it against φ ∈ S(R^d), and then integrate by parts. Let T* be the formal adjoint of T. The weak form of (5.1) is then

(5.2)  V_t(φ) = ∫₀ᵗ V_s(Lφ) ds + ∫₀ᵗ ∫_{R^d} T*φ(x) M(dx ds),  φ ∈ S(R^d).

Notice that when we integrate by parts, (5.2) follows easily for φ of compact support, but in order to pass to rapidly decreasing φ, we must use the fact that V and TM do not grow too quickly at infinity.

In case D is a bounded region with a smooth boundary, let B be the operator B = d(x)D_N + e(x), where D_N is the normal derivative on ∂D, and d and e are in C^∞(∂D). Consider the initial-boundary-value problem

(5.3)  ∂V/∂t = LV + TṀ on D × [0,∞);  BV = 0 on ∂D × [0,∞);  V(x,0) = 0 on D.

Let C^∞(D) and C₀^∞(D) be respectively the set of smooth functions on D and the set of smooth functions with compact support in D. Let C^∞(D̄) be the set of functions in C^∞(D) whose derivatives all extend to continuous functions on D̄. Finally, let

S_B = {φ ∈ C^∞(D̄): Bφ = 0 on ∂D}.

The weak form of (5.3) is

(5.4)  V_t(φ) = ∫₀ᵗ V_s(Lφ) ds + ∫₀ᵗ ∫_D T*φ(x) M(dx ds),  φ ∈ S_B.

This needs a word of explanation. To derive (5.4) from (5.3), multiply by φ and integrate formally over D × [0,t], i.e. treat TṀ as if it were a differentiable function, and then use a form of Green's theorem to throw the derivatives over on φ. This works on the first integral if both V and φ satisfy the boundary condition. Nevertheless, it may not work for the second, for unless T is of zeroth order, M may not satisfy the boundary conditions. (It does work if φ has compact support in D, however.) The equation we wish to solve is (5.4), not (5.3). The requirement that (5.4) hold for all φ satisfying the boundary conditions is essentially a boundary condition on V.

The above situation, in which we regard the integral, rather than the differential equation, as fundamental, is analogous to many situations in which physical reasoning leads one directly to an integral equation, and then mathematics takes over to extract the partial differential equation. See the physicists' derivations of the heat equation, the Navier-Stokes equation, and Maxwell's equations, for instance.

As in the one-variable case, it is possible to treat test functions φ(x,t) of two variables.
Exercise 5.1. Show that if V satisfies (5.4) and if φ(x,t) is a smooth function such that for each t, φ(·,t) ∈ S_B, then

(5.5)  V_t(φ(t)) = ∫₀ᵗ V_s(Lφ(s) + ∂φ/∂s(s)) ds + ∫₀ᵗ ∫_D T*φ(x,s) M(dx ds).
Let G_t(x,y) be the Green's function for the homogeneous differential equation. If L = ½Δ and D = R^d, then

G_t(x,y) = (2πt)^{-d/2} e^{-|y-x|²/2t}.

For a general L, G_t(x,y) will still be smooth except at t = 0, x = y, and its smoothness even extends to the boundary: if t > 0, G_t(x,·) ∈ C^∞(D̄). It is positive, and for τ > 0,

(5.6)  G_t(x,y) ≤ C t^{-d/2} e^{-|y-x|²/δt},  x, y ∈ D, 0 < t < τ,

where C > 0 and δ > 0 (C may depend on τ). This holds both for D = R^d and for bounded D. If D = R^d, G_t(x,·) is rapidly decreasing at infinity by (5.6), so it is in S(R^d). Moreover, for fixed y, (x,t) → G_t(x,y) satisfies the homogeneous differential equation plus boundary conditions. Define G_t(φ,y) = ∫_D G_t(x,y)φ(x) dx. Then if φ is smooth, G₀φ = φ. This can be summarized in the integral equation:

(5.7)  G_{t-s}(φ,y) = φ(y) + ∫ₛᵗ G_{u-s}(Lφ,y) du,  φ ∈ S_B.

The smoothness of G then implies that if φ ∈ C^∞(D̄), then G_t(φ,·) ∈ S_B. In case D = R^d, then φ ∈ S(R^d) implies that G_t(φ,·) ∈ S(R^d).
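The integral equation (5.7) rests on the semigroup property of the Green's function, G_t G_s = G_{t+s}. For the explicit kernel above (L = ½Δ, D = R, d = 1) this Chapman-Kolmogorov identity can be checked numerically; the grid and quadrature below are arbitrary choices and NumPy is assumed, so this is only an illustrative sketch:

```python
import numpy as np

def G(t, x, y):
    """Gaussian Green's function for L = (1/2) d^2/dx^2 on R (d = 1)."""
    return (2 * np.pi * t) ** -0.5 * np.exp(-(y - x) ** 2 / (2 * t))

# Semigroup (Chapman-Kolmogorov) property: integrating G_t(x,z) G_s(z,y)
# over z should reproduce G_{t+s}(x,y).  We check it by a Riemann sum on
# a wide grid, since the integrand decays rapidly.
z = np.linspace(-30.0, 30.0, 20001)
t, s, x, y = 0.3, 0.7, 0.5, -1.2
lhs = np.sum(G(t, x, z) * G(s, z, y)) * (z[1] - z[0])
rhs = G(t + s, x, y)
print(abs(lhs - rhs))  # should be tiny
```

A kernel with the wrong normalization or variance fails this check, which makes it a quick sanity test when working with explicit Green's functions.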
THEOREM 5.1. There exists a unique process {V_t, t ≥ 0} with values in S'(R^d) which satisfies (5.2). It is given by

(5.8)  V_t(φ) = ∫₀ᵗ ∫_{R^d} T*G_{t-s}(φ,y) M(dy ds).
The result for a bounded region is similar except for the uniqueness statement.

THEOREM 5.2. There exists a process {V_t, t ≥ 0} with values in D(D)' which satisfies (5.4). V can be extended to a stochastic process {V_t(φ), t ≥ 0, φ ∈ S_B}; this extended process is unique. It is given by

(5.9)  V_t(φ) = ∫₀ᵗ ∫_D T*G_{t-s}(φ,y) M(dy ds),  φ ∈ S_B.
PROOF. Let us first show uniqueness, which we do by deriving (5.9). Choose ψ(x,s) = G_{t-s}(φ,x), and suppose that U is a solution of (5.4). Consider U_s(ψ(s)). Note that U₀(ψ(0)) = 0 and U_t(ψ(t)) = U_t(φ). Now G_{t-s}(φ,·) ∈ S_B, so we can apply (5.5) to see that

U_t(φ) = U_t(ψ(t)) = ∫₀ᵗ U_s(Lψ(s) + ∂ψ/∂s(s)) ds + ∫₀ᵗ ∫_D T*ψ(x,s) M(dx ds).

But Lψ + ∂ψ/∂s = 0 by (5.7), so this is

= ∫₀ᵗ ∫_D T*ψ(x,s) M(dx ds) = ∫₀ᵗ ∫_D T*G_{t-s}(φ,x) M(dx ds) = V_t(φ).
Existence: Let φ ∈ S_B and plug (5.9) into the right hand side of (5.4):

∫₀ᵗ [∫₀ˢ ∫_D T*G_{s-u}(Lφ,y) M(dy du)] ds + ∫₀ᵗ ∫_D T*φ(y) M(dy du)

= ∫₀ᵗ ∫_D [∫ᵤᵗ T*G_{s-u}(Lφ,y) ds + T*φ(y)] M(dy du).

Note that T*G_{s-u}(Lφ,y) and T*φ(y) are bounded, so the integrals exist. By (5.7) this is

= ∫₀ᵗ ∫_D T*G_{t-u}(φ,y) M(dy du) = V_t(φ)

by (5.9). This holds for any φ ∈ S_B, but (5.9) also makes sense for φ which are not in S_B. In particular, it makes sense for φ ∈ S(R^d), and one can show using Corollary 4.2 that V_t has a version which is a random tempered distribution. This proves Theorem 5.2. The proof of Theorem 5.1 is nearly identical; just replace D by R^d and S_B by S(R^d). Q.E.D.
AN EIGENFUNCTION EXPANSION

We can learn a lot from an examination of the case T ≡ I. Suppose D is a bounded domain with a smooth boundary. The operator L (plus boundary conditions) admits a CONS {φ_j} of smooth eigenfunctions with eigenvalues λ_j. These satisfy

(5.10)  Σ_j (1+λ_j)^{-p} < ∞  if p > d/2;

(5.11)  sup_j ‖φ_j‖_∞ (1+λ_j)^{-p} < ∞  if p > d/2.

Let us proceed formally for the moment. We can expand the Green's function:

G_t(x,y) = Σ_j φ_j(x) φ_j(y) e^{-λ_j t}.

If φ is a test function,

G_t(φ,y) = Σ_j φ̂_j φ_j(y) e^{-λ_j t},

where φ̂_j = ∫_D φ(x) φ_j(x) dx, so by (5.9)

V_t(φ) = ∫₀ᵗ ∫_D Σ_j φ̂_j φ_j(y) e^{-λ_j(t-s)} M(dy ds).

Let

A_j(t) = ∫₀ᵗ ∫_D φ_j(y) e^{-λ_j(t-s)} M(dy ds).

Then

(5.12)  V_t(φ) = Σ_j φ̂_j A_j(t).

This will converge for φ ∈ S_B, but we will show more. Let us recall the spaces H_n introduced in Ch. 4, Example 3. H_n is isomorphic to the set of formal eigenfunction series f = Σ_j a_j φ_j for which Σ_j a_j²(1+λ_j)ⁿ < ∞. We see from (5.12) that V_t ~ Σ_j A_j(t) φ_j.
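When M is a white noise based on Lebesgue measure, the X_j are independent Brownian motions and each coefficient A_j in (5.12) is an Ornstein-Uhlenbeck process driven by X_j, started at 0, which makes the expansion easy to simulate mode by mode. The sketch below uses the Dirichlet eigenfunctions φ_j(x) = √(2/π) sin(jx), λ_j = j² on (0,π) as a concrete stand-in for the abstract boundary operator B (an illustrative assumption; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

# Dirichlet eigenfunctions of L = d^2/dx^2 on (0, pi):
#   phi_j(x) = sqrt(2/pi) sin(jx),  lambda_j = j^2.
J, dt, nsteps = 200, 1e-3, 1000
lam = np.arange(1, J + 1) ** 2.0

# Exact one-step OU update of the coefficients A_j:
#   A_j(t+dt) = e^{-lam_j dt} A_j(t) + N(0, (1 - e^{-2 lam_j dt})/(2 lam_j)).
A = np.zeros(J)
decay = np.exp(-lam * dt)
std = np.sqrt((1.0 - np.exp(-2.0 * lam * dt)) / (2.0 * lam))
for _ in range(nsteps):
    A = decay * A + std * rng.standard_normal(J)

# Truncated evaluation of V_t(phi) = sum_j phihat_j A_j(t)
# for the test function phi(x) = x(pi - x).
x = np.linspace(0.0, np.pi, 2001)
phi = x * (np.pi - x)
phihat = np.array(
    [np.sum(phi * np.sqrt(2 / np.pi) * np.sin(j * x)) * (x[1] - x[0])
     for j in range(1, J + 1)]
)
V_t_phi = float(phihat @ A)
print(V_t_phi)
```

The exact OU update avoids any stiffness from the large eigenvalues, which is why the expansion is a convenient simulation device even when J is large.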
PROPOSITION 5.3. Let V be defined by (5.12). Then V is the solution of (5.4) with T ≡ I. For any n > d/2 + 1, V_t is a right continuous process in H_{-n}; it is continuous if t → M_t is. Moreover, if M is a white noise based on Lebesgue measure, then V is a continuous process in H_{-n} for any n > d/2.

PROOF. We first bound E{sup_t A_j²(t)}. Let X_j(t) = ∫₀ᵗ ∫_D φ_j(x) M(dx ds). Integrating by parts in the stochastic integral,

A_j(t) = ∫₀ᵗ e^{-λ_j(t-s)} dX_j(s) = X_j(t) − ∫₀ᵗ λ_j e^{-λ_j(t-s)} X_j(s) ds,

so that

sup_t |A_j(t)| ≤ 2 sup_t |X_j(t)|.

Now

‖V_{t+s} − V_t‖²_{-n} = Σ_j (A_j(t+s) − A_j(t))² (1+λ_j)^{-n}.

The summands are right continuous, and they are continuous if M is. The sum is dominated by 4 Σ_j sup_t A_j²(t) (1+λ_j)^{-n}, and the estimate of E{sup_t A_j²(t)} above shows this converges, uniformly in t, if n > d/2 + 1; hence V is right continuous (resp. continuous) in H_{-n}. If M is a white noise based on Lebesgue measure, the bracket of X_j reduces to ∫₀ᵗ ∫_D φ_j²(x) dx ds, and one improves the estimate of E{sup_t A_j²(t)} enough to get convergence for any n > d/2, not just for n > d/2 + 1.
Exercise 5.2. Verify that V (defined by (5.12)) satisfies (5.4).

Exercise 5.3. Treat the case D = R^d using the Hermite expansion of Example 2, Ch. 4.

The spaces H_{-n} above are analogous to the classical Sobolev spaces, but they don't explicitly involve derivatives. Here is a result which relates the regularity of the solution directly to differentiability.

THEOREM 5.4. Suppose M is a white noise based on Lebesgue measure. Then there exists a real-valued process U = {U(x,t): x ∈ D, t ≥ 0} which is Hölder continuous with exponent 1/4 − ε for any ε > 0, such that if D^{d-1} = ∂^{d-1}/∂x₂ ⋯ ∂x_d, then

V_t = D^{d-1} U_t.
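In d = 1 the theorem says V itself is a function, Hölder continuous in t of every exponent below 1/4. That exponent can be watched emerging from the eigenfunction expansion above; the parameters, the single sample path, and the log-log regression below are ad hoc choices, so this is a sanity check rather than evidence (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate u_t = u_xx + white noise on (0, pi) via the OU coefficients
# of the Dirichlet eigenfunction expansion, and record u(pi/2, t).
J, dt, nsteps = 400, 1e-4, 5000
lam = np.arange(1, J + 1) ** 2.0
decay = np.exp(-lam * dt)
std = np.sqrt((1.0 - np.exp(-2.0 * lam * dt)) / (2.0 * lam))
phi_mid = np.sqrt(2 / np.pi) * np.sin(np.arange(1, J + 1) * np.pi / 2)

A = np.zeros(J)
path = np.empty(nsteps)
for i in range(nsteps):
    A = decay * A + std * rng.standard_normal(J)
    path[i] = phi_mid @ A

# Estimate the time-Holder exponent from E|u(t+h) - u(t)| ~ h^alpha.
lags = np.array([1, 2, 4, 8, 16, 32, 64])
incs = np.array([np.mean(np.abs(path[l:] - path[:-l])) for l in lags])
alpha = np.polyfit(np.log(lags * dt), np.log(incs), 1)[0]
print(alpha)  # theory predicts an exponent of 1/4 in time
```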
Note. This is of course a derivative in the weak sense. A distribution Q is the weak α-th derivative of a function f if for each test function φ,

Q(φ) = (−1)^{|α|} ∫ f(x) D^α φ(x) dx.

Note. One must be careful in comparing the classical Sobolev spaces H_n of Example 1, Ch. 4, with the spaces H_n of Example 3 in Chapter 4; they are related but not identical. Call the latter H_n³ for the moment, and let H_n^o denote the classical Sobolev space. Theorem 5.4 implies that V_t can be regarded as a continuous process in H^o_{-d-1}. This might lead one to guess that V is a continuous process in H³_{-d-1}; but in fact it is a continuous process in H³_{-n} for any n > d/2 by Proposition 5.3, which is a much sharper result if d > 3.
This gives us an idea of the behavior of the solution of the SPDE

∂V/∂t = LV + Ṁ.

Suppose now that T is a differential operator, and suppose that both T and L have constant coefficients, so that TL = LT. Apply T to both sides of the equation:

∂(TV)/∂t = L(TV) + TṀ,

i.e. U = TV satisfies ∂U/∂t = LU + TṀ. Of course, this argument is purely formal, but the following exercise makes it rigorous.
Exercise 5.4. Suppose T and L commute. Let U be the solution of (5.4) for a general T with bounded smooth coefficients and let V be the solution for T ≡ I. Verify that if we restrict U and V to the space D(D) of C^∞ functions of compact support in D, then U = TV.
Exercise 5.5. Let V solve

∂V/∂t = ∂²V/∂x² − V + ∂Ẇ/∂x,  0 < x < π, t > 0;

∂V/∂x(0,t) = ∂V/∂x(π,t) = 0,  t > 0;

V(x,0) = 0,  0 < x < π.

Describe V(·,t) for fixed t. (Hint: use Exercises 5.4 and 3.5.)
REMARKS. 1. Theorem 5.2 lacks symmetry compared to Theorem 5.1. V_t exists as a process in D(D)' but must be extended slightly to get uniqueness, and this extension doesn't take values in D(D)'. It would be nicer to have a more symmetric statement, on the order of "There exists a unique process with values in such and such a space such that ...". One can get such a statement, though it requires a little more Sobolev space theory and a little more analysis to do it. Let ‖·‖_n be the norm of Example 1, Chapter 4, and let H_n^B be the completion of S_B in this norm; set (H_n^B)' = H_{-n}^B. If n is large enough, one can show that V_t is an element of H_{-n}^B. Theorem 5.2 can then be stated in the form: there exists a unique process V with values in H_{-n}^B which satisfies (5.4) for all φ ∈ H_n^B.

2. Suppose that T is the identity and consider (5.4). Here is how. Extend V to be a distribution on D × R₊ as follows. If φ = φ(x,t) is in C₀^∞(D × (0,∞)), let

V(φ) = ∫₀^∞ V_s(φ(s)) ds  and  TM(φ) = ∫₀^∞ ∫_D T*φ(x,s) M(dx ds).

Then Corollary 4.2 implies that for a.e. ω, V and TM define distributions on D × (0,∞). Now consider (5.5) with such a φ of compact support. For large t, the left-hand side vanishes. The right-hand side then tells us that

V(Lφ + ∂φ/∂t) + TM(φ) = 0 a.s.

In other words, for a.e. ω, the distribution V(·,ω) is a distribution solution of the (non-stochastic) PDE

∂v/∂t = Lv + TṀ(ω).

Thus Theorem 5.1 follows from known non-stochastic theorems on PDE's; if T is the identity, the same holds for Theorem 5.2. In general, the translation of (5.4) or (5.5) into a PDE will introduce boundary terms. Still, we should keep in mind that the theory of distribution solutions of deterministic PDE's has something to say about SPDE's.
CHAPTER SIX

WEAK CONVERGENCE

Suppose E is a metric space with metric ρ. Let E be the class of Borel sets on E, and let (P_n) be a sequence of probability measures on E. What do we really mean by "P_n → P₀"? This is a non-mathematical question, of course; it has no unique answer. It is asking us to make an intuitive idea precise, and our intuition will depend on the context. Still, we might begin with a reasonable first approximation, see how it might be improved, and hope that our intuition agrees with our mathematics at the end.

Suppose we say: "P_n → P₀ if P_n(A) → P₀(A), all A ∈ E." This looks promising, but it is too strong. Some sequences which should converge, don't. For instance, consider

PROBLEM 1. Let P_n = δ_{1/n}, the unit mass at 1/n, and let P₀ = δ₀. Certainly P_n ought to converge to P₀, but it doesn't. Indeed 0 = lim P_n{0} ≠ P₀{0} = 1. Similar things happen with sets like (−∞,0] and (0,1).

CURE. The trouble occurs at the boundary of the sets, so let us smooth them out. Identify a set A with its indicator function I_A. Then P(A) = ∫I_A dP. We "smooth out the boundary of A" by replacing I_A by a continuous function f which approximates it, and ask that ∫f dP_n → ∫f dP. We may as well require this for all f, not just those which approximate indicator functions.

This leads us to the following. Let C(E) be the set of bounded real-valued continuous functions on E.

DEFINITION. We say P_n converges weakly to P, and write P_n => P, if, for all f ∈ C(E), ∫f dP_n → ∫f dP.
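Problem 1 and its cure can be replayed concretely: against the point masses P_n = δ_{1/n}, a bounded continuous f integrates to f(1/n) → f(0), while the indicator of {0} sees no convergence at all. A minimal sketch (plain Python):

```python
import math

def integrate_dirac(f, a):
    """Integral of f against the unit point mass at a."""
    return f(a)

f_cont = math.cos                          # bounded continuous test function
f_ind = lambda x: 1.0 if x == 0 else 0.0   # indicator of {0}: discontinuous at 0

vals_cont = [integrate_dirac(f_cont, 1.0 / n) for n in (1, 10, 100, 1000)]
vals_ind = [integrate_dirac(f_ind, 1.0 / n) for n in (1, 10, 100, 1000)]

print(vals_cont[-1] - f_cont(0.0))  # tends to 0: weak convergence holds
print(vals_ind)                     # stays [0.0, 0.0, 0.0, 0.0], yet P_0{0} = 1
```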
PROBLEM 2. Our notion of convergence seems unconnected with a topology.

CURE. We prescribe one. Let

℘(E) = {P: P is a probability measure on E}.

A fundamental system of neighborhoods of P₀ is given by sets of the form

{P ∈ ℘(E): |∫f_i dP − ∫f_i dP₀| < ε, i = 1,...,n},  f_i ∈ C(E), i = 1,...,n.

This notion of convergence may not appear to fill our needs, for we shall be discussing convergence of processes rather than of random variables. The reason why it is sufficient, in fact exactly what we need, is interesting, and we shall go into it shortly; but let us first establish some facts.

The first, sometimes called the Portmanteau Theorem, gives a number of equivalent characterizations of weak convergence.

THEOREM 6.1. The following are equivalent:

(i) P_n => P;
(ii) ∫f dP_n → ∫f dP, all bounded uniformly continuous f;
(iii) ∫f dP_n → ∫f dP, all bounded f which are continuous P-a.e.;
(iv) lim sup P_n(F) ≤ P(F), all closed F;
(v) lim inf P_n(G) ≥ P(G), all open G;
(vi) lim P_n(A) = P(A), all A ⊂ E such that P(∂A) = 0.

Let E and F be metric spaces and h : E → F a measurable map. If P is a probability measure on E, then Ph⁻¹ is a probability measure on F, where Ph⁻¹(A) = P(h⁻¹(A)).

THEOREM 6.2. If h : E → F is continuous (or just continuous P-a.e.) and if P_n => P on E, then P_n h⁻¹ => P h⁻¹ on F.

Let P₁, P₂, ... be a sequence in ℘(E). When does such a sequence converge? Here is one answer. Say that a set K ⊂ ℘(E) is relatively compact if every sequence in K has a weakly convergent subsequence. (This should be "relatively sequentially compact," but we follow the common usage.)
Then (P_n) converges weakly if

(i) there exists a relatively compact set K ⊂ ℘(E) such that P_n ∈ K for all n;
(ii) the sequence has at most one limit point in ℘(E).

Since (i) guarantees at least one limit point, (i) and (ii) together imply convergence.

If this condition is to be useful, and it is, we will need an effective criterion for relative compactness. This is supplied by Prohorov's Theorem.

DEFINITION. A set A ⊂ ℘(E) is tight if for each ε > 0 there exists a compact set K ⊂ E such that for each P ∈ A, P{K} > 1 − ε.

THEOREM 6.3. If A is tight, it is relatively compact. Conversely, if E is separable and complete, then if A is relatively compact, it is tight.

Let us return to the question of the suitability of our definition of weak convergence.
PROBLEM 3. We are interested in the convergence of processes, not random variables, so this all seems irrelevant.

CURE. We just have to stand back far enough to recognize it. We are interested in the behavior of a process as a whole, and we already know the solution to this: a process can be defined canonically on a function space. If Ω is a space of, say, right continuous functions on [0,∞), then a process {X_t: t ≥ 0} can be defined canonically on Ω by X_t(ω) = ω(t), ω ∈ Ω. X is then determined by its distribution P, which is a measure on Ω. But ω, being an element of Ω, is a function, so this means that we are regarding the whole process as a single random variable which simply takes its values in a space of functions.

With this remark, the outline of the theory becomes clear. We must first put a metric on the function space Ω in some convenient way. The above definitions will then apply to measures on Ω.

The Skorokhod space D = D([0,1],E) is a convenient function space to use. It is the space of all functions f : [0,1] → E which are right-continuous and have left limits at each t ∈ (0,1]. We will metrize D. The metric is a bit tricky. It is much like a sup-norm, but the presence of jump discontinuities forces a modification.
let A be the class
onto itself.
If k e A,
llkll =
(We may have
of s t r i c t l y
then k(0)
sup 08o } and by one
so this is
< e
KI ~ k=0
(e
< e
KI ~ k=0
(e
K60_ 1/K +P{Sk+ISk I  e/2 j+1. Choose 6 k ~ 0 such that sup E{W(6k,Xn)}
< e  k2k+l
n 1 sup P{W(6k,X n) > ~ } ~ £/2 k+1. n
Thus
Let A C ~ be
I A = {~ e ~ : ~(t k) E Kk, w(~,6 k) ! ~ , Thus A has a compact closure
Now lim sup w(6,~) = 0. 6+0 ~EA
k=I,2 .... }.
in ~ by Theorem 6.5.
Moreover
P{X £A} > I  [ P{Xn(t k) £ ~ } n k  ~ P{W(6k,X n) > I/k} k > I  e/2  c/2 = I  ~, hence
(X) n
is tight.
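The point of the time changes λ ∈ Λ can be seen on two step functions with nearly coincident jumps: in the sup norm they stay at distance 1, but after composing one of them with a λ that matches the jump times the sup distance drops to 0, at the small cost ‖λ‖ of the slope distortion. The jump times and the piecewise linear λ below are illustrative choices (plain Python):

```python
import math

f = lambda t: 1.0 if t >= 0.50 else 0.0   # jump at t = 0.50
g = lambda t: 1.0 if t >= 0.51 else 0.0   # jump at t = 0.51

grid = [i / 10000 for i in range(10001)]

# Sup distance: the functions disagree on [0.50, 0.51), so it equals 1.
sup_dist = max(abs(f(t) - g(t)) for t in grid)

# A time change lam in Lambda, piecewise linear with lam(0.50) = 0.51:
def lam(t):
    return t * 0.51 / 0.50 if t <= 0.50 else 0.51 + (t - 0.50) * 0.49 / 0.50

# f(t) = g(lam(t)) for every t, so matching the jumps costs only ||lam||,
# the largest |log slope| of lam.
match_dist = max(abs(f(t) - g(lam(t))) for t in grid)
slope_cost = max(abs(math.log(0.51 / 0.50)), abs(math.log(0.49 / 0.50)))

print(sup_dist, match_dist, round(slope_cost, 4))
```

As the jump of g moves toward 0.50, the slope cost tends to 0, so the two functions are close in the Skorokhod topology even though they never get close uniformly.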
MITOMA'S THEOREM

The subject of SPDE's involves distributions in a fundamental way, so we need to know about the weak convergence of distribution-valued processes. Since S' is not metrizable, the preceding theory does not apply directly. However, according to a theorem of Mitoma, the weak convergence of S'-valued processes is almost as simple as that of real-valued processes: in order to show that a sequence (Xⁿ) of processes with values in S' is tight, one merely needs to verify that for each φ, the real-valued processes (Xⁿ(φ)) are tight.

Rather than restrict ourselves to S', we will use the somewhat more general setting of Chapter Four. Let

E = ∩_n H_n ⊂ ... ⊂ H₁ ⊂ H₀ ⊂ H₋₁ ⊂ ... ⊂ E',

where H_n is a separable Hilbert space with norm ‖·‖_n, E is dense in each H_n, ‖·‖_n ≤ ‖·‖_{n+1}, and for each n there is a p > n such that the injection of H_p into H_n is Hilbert-Schmidt. E has the topology determined by the norms ‖·‖_n, and E' has the strong topology, which is determined by the seminorms

p_A(f) = sup{|f(φ)|: φ ∈ A},

where A is a bounded set in E.

Let D([0,1],E') be the space of E'-valued right continuous functions which have left limits in E', and let C([0,1],E') be the space of continuous E'-valued functions. C([0,1],H_n) and D([0,1],H_n) are the corresponding spaces of H_n-valued functions.

If f,g ∈ D([0,1],E'), let

d_A(f,g) = inf{‖λ‖ + sup_t p_A(f(t) − g(λ(t))): λ ∈ Λ},

and

d°_A(f,g) = sup_t p_A(f(t) − g(t)).

Give D([0,1],E') (resp. C([0,1],E')) the topology determined by the d_A (resp. d°_A) for bounded A ⊂ E. They both become complete, separable, completely regular spaces. The metrics for D([0,1],H_n) have already been defined, for H_n is a metric space.
We will need two "moduli of continuity". The first is

w(δ,ω;φ) = inf_{(t_i)} max_i sup{|ω(s)(φ) − ω(t)(φ)|: t_{i-1} ≤ s < t < t_i},

the infimum being over partitions 0 = t₀ < t₁ < ... < t_k = 1 with t_i − t_{i-1} > δ.

LEMMA 6.14. Suppose that for each φ ∈ E, the sequence (Xⁿ(φ)) is tight. Then for each ε > 0 there exist M > 0 and p such that

sup_n P{sup_t ‖Xⁿ_t‖_{-p} > M} < ε,

and there exist m and δ > 0 such that

(6.5)  ‖φ‖_m < δ => sup_n E{sup_t |Xⁿ_t(φ)| ∧ 1} < ε.

To see this, consider the function

F(φ) = sup_n E{sup_t |Xⁿ_t(φ)| ∧ 1},  φ ∈ E.

Then

(i) F(0) = 0;
(ii) F(φ) ≥ 0 and F(φ) = F(−φ);
(iii) |a| ≤ |b| => F(aφ) ≤ F(bφ);
(iv) F is lower-semi-continuous on E;
(v) lim_n F(φ/n) = 0.

Indeed (i)-(iii) are clear. If φ_j → φ in E, then Xⁿ_t(φ_j) → Xⁿ_t(φ) in probability, hence in L⁰, and

lim inf_j [sup_t |Xⁿ_t(φ_j)| ∧ 1] ≥ sup_t |Xⁿ_t(φ)| ∧ 1 a.s.

Thus

F(φ) ≤ sup_n lim inf_j E{sup_t |Xⁿ_t(φ_j)| ∧ 1} ≤ lim inf_j F(φ_j),

proving (iv). For (v), fix φ. (Xⁿ_t(φ)) is tight, so, given ε > 0, there exists M such that sup_n P{sup_t |Xⁿ_t(φ)| > M} < ε/2. Choose k large enough so that M/k < ε/2. Then

F(φ/k) = sup_n E{sup_t |Xⁿ_t(φ/k)| ∧ 1} ≤ sup_n [P{sup_t |Xⁿ_t(φ/k)| > M/k} + M/k] < ε.

Let V = {φ: F(φ) ≤ ε}. V is a closed (by (iv)), symmetric (by (ii)), absorbing (by (v)) set. We claim it is a neighborhood of 0. Indeed, E = ∪_n nV, so by the Baire category theorem one, hence all, of the nV must have a non-empty interior. In particular, ½V does. Then ½V − ½V ⊆ V must contain a neighborhood of zero. This in turn must contain an element of the basis, say {φ: ‖φ‖_m < δ}. This proves (6.5).
With probability at least 1 − ε, Xⁿ_t lies in B = {x: ‖x‖_{-p} ≤ M} for all t, where M and p are chosen as in Lemma 6.14. There exists q > p such that the injection of H_p into H_q is Hilbert-Schmidt; then B is compact in H_{-q}. Let K ⊂ D([0,1],E') be the set {ω: ω(t) ∈ B, 0 ≤ t ≤ 1}, so that P{Xⁿ ∈ K} > 1 − ε for all n. For each j, (Xⁿ(e_j)) is tight, so there are compacts K_j ⊂ D([0,1],R) such that P{Xⁿ(e_j) ∈ K_j} > 1 − ε/2^j for all n. Let K'_j be the inverse image of K_j in D([0,1],E') under the map ω → {ω(t)(e_j): 0 ≤ t ≤ 1}. By the Arzela-Ascoli theorem,

lim_{δ→0} sup_{ω ∈ K'_j} w(δ,ω;e_j) = 0.

Set K' = K ∩ ∩_j K'_j. Then P{Xⁿ ∈ K'} > 1 − ε − Σ_j ε/2^j = 1 − 2ε. Now

lim_{δ→0} sup_{ω ∈ K'} w(δ,ω;H_{-q}) = lim_{δ→0} sup_{ω ∈ K'} inf_{(t_i)} max_i sup_{t_{i-1}≤s<t<t_i} (Σ_j |ω(s)(e_j) − ω(t)(e_j)|²)^{1/2} = 0,

and it follows, for p < q as above, that (Xⁿ) converges weakly in D([0,1],H_{-q}) once its finite-dimensional distributions converge.
Fix p₀ > 0 and define h₀(x) = (1 + |x|^{p₀})⁻¹, x ∈ R^d. If M is a worthy martingale measure with dominating measure K, define an increasing process k by

(7.1)  k(t) = ∫_{R^{2d}×[0,t]} h₀(x) h₀(y) K(dx dy ds),

and

(7.2)  γ(δ) = sup_t (k(t+δ) − k(t)).

We claim the second term tends to zero. Choose ε > 0 and let η > 0 be such that if |x| < η, then |f(x)| < ε; the second integral is then bounded by ε γ(δ).

Suppose r ≥ 2 and K > 0, let g be of Hölder class, and suppose further that the jumps of Mⁿ are bounded. Then

(i) {Uⁿ_t, 0 ≤ t ≤ 1} has a version which is right continuous and has left limits;

(ii) there exists a constant C_r with E{sup_t |Uⁿ_t|^r} ≤ C_r K(1 + 2r).

If r > 1, Corollary 1.2 implies that Vⁿ has a continuous version. More exactly, there exists a random variable Z_n and a constant A', which does not depend on n, such that for 0 < γ < 1 − 1/r,

sup_{0≤s<t≤1} |Vⁿ_t − Vⁿ_s| ≤ Z_n (t−s)^γ.

Exercise. Show that W and Z are in L^p for all p. (Hint: By Prop. 8.1 and Doob's inequality, W and Z are in L^{2p}. Use induction on p = 2ⁿ, then use (8.8) and Doob's L^p inequality as above.)
THEOREM 8.6. Let (μ_n, λ_n) be a sequence of parameter values, let (Wⁿ, Zⁿ) be the corresponding processes, and let Vⁿ(dx) = λ_n^{-1/2}(η_n(dx) − λ_n dx) be the normalized initial measure. If the sequence ((μ_n+1)/λ_n) is bounded, then (Vⁿ, Wⁿ, Zⁿ) is tight on D{[0,1], S'(R^{d²+2d})}.

PROOF. We regard Vⁿ as a constant process, Vⁿ_t ≡ Vⁿ, in order to define it on D{[0,1], S'(R^{d²+2d})}. It is enough to prove the three are individually tight, and by Mitoma's theorem it is enough to show that the real-valued processes (Vⁿ(φ)), (Wⁿ_t(φ)) and (Zⁿ_t(φ)) are tight for each φ. The first is a constant process, and we use the criterion of Theorem 6.8b for the other two: uniform moment bounds show that for each t, (Wⁿ_t(φ)) and (Zⁿ_t(φ)) are tight on R^d and R respectively, so by Theorem 6.8 the processes (Wⁿ(φ)) and (Zⁿ(φ)) are each tight. Q.E.D.
THEOREM 8.7. If λ_n → ∞, μ_n λ_n → ∞, and μ_n/λ_n → 0, then

(Vⁿ, Zⁿ, Wⁿ) => (V⁰, Z⁰, W⁰),

where V⁰, Z⁰ and W⁰ are white noises based on Lebesgue measure on R^d, R^d × R₊, and R^d × R₊ respectively; V⁰ and Z⁰ are real-valued and W⁰ has values in R^d. If λ_n → ∞ and μ_n/λ_n → 0, then (Vⁿ, Wⁿ) => (V⁰, W⁰).

PROOF. Suppose λ_n is an integer. (Modifications for non-integral λ are trivial.) To show weak convergence, we merely need to show convergence of the finite dimensional distributions and invoke Theorem 6.15. The initial distribution is Poisson of parameter λ_n and can thus be written as a sum of λ_n independent Poisson (1) point processes. Let η̂¹, η̂², ... be a sequence of iid copies with λ = 1, μ = μ_n. (We have changed notation: these are not the η̂ⁿ used in constructing the branching Brownian motion.) Then the branching Brownian motion corresponding to λ_n, μ_n has the same distribution as η̂¹ + η̂² + ... + η̂^{λ_n}. Define Ŵ¹, Ŵ², ... and Ẑ¹, Ẑ², ... in the obvious way. Then

Vⁿ = λ_n^{-1/2}(η̂¹ + ... + η̂^{λ_n} − λ_n dx),  Wⁿ = λ_n^{-1/2}(Ŵ¹ + ... + Ŵ^{λ_n}),  Zⁿ = λ_n^{-1/2}(Ẑ¹ + ... + Ẑ^{λ_n}).

We have written everything as sums of independent random variables. To finish the proof, we will call on the classical Lindeberg theorem.

Let φ₁,...,φ_p, ψ₁,...,ψ_p, χ₁,...,χ_p ∈ S(R^d) and t₁ < t₂ < ... < t_p. We must show weak convergence of the vector

(Vⁿ(φ₁),...,Vⁿ(φ_p), Wⁿ_{t₁}(ψ₁),...,Wⁿ_{t_p}(ψ_p), Zⁿ_{t₁}(χ₁),...,Zⁿ_{t_p}(χ_p)).

This can be written as a sum of iid vectors, and the mean and covariance of the vectors are independent of n (Prop. 8.1). It is enough to check the Lindeberg condition for each coordinate. The distribution of Vⁿ does not depend on μ_n, so we leave this to the reader.

Fix i and look at Wⁿ_{t_i}(ψ_i) = λ_n^{-1/2} Σ_{k=1}^{λ_n} Ŵ^k_{t_i}(ψ_i). Now (Ŵ^k_t(ψ_i)) is an R^d-valued continuous martingale, so by Burkholder's inequality

E{|Ŵ^k_t(ψ_i)|⁴} ≤ C₄ E{⟨Ŵ^k(ψ_i)⟩_t²}.

Now t ≤ 1, so by Proposition 8.5 with k = 1, there is a C, independent of k and n, such that this is at most C(μ_n + 1). For ε > 0,

E{|Ŵ^k_t(ψ_i)|²; |Ŵ^k_t(ψ_i)| > λ_n^{1/2}ε} ≤ E{|Ŵ^k_t(ψ_i)|⁴}^{1/2} P{|Ŵ^k_t(ψ_i)| > λ_n^{1/2}ε}^{1/2}

by Schwarz. Use Chebyshev with the above bound:

≤ [C(1 + μ_n)]^{1/2} [C(1 + μ_n)/(λ_n² ε⁴)]^{1/2} ≤ C(1 + μ_n)/(λ_n ε²).

Thus

Σ_{k=1}^{λ_n} E{|λ_n^{-1/2} Ŵ^k_{t_i}(ψ_i)|²; |λ_n^{-1/2} Ŵ^k_{t_i}(ψ_i)| > ε} = E{|Ŵ¹_{t_i}(ψ_i)|²; |Ŵ¹_{t_i}(ψ_i)| > λ_n^{1/2}ε} ≤ C(1 + μ_n)/(λ_n ε²) → 0.

Thus the Lindeberg condition holds for each of the Wⁿ_{t_i}(ψ_i). The same argument holds for the Zⁿ_{t_i}(χ_i). In this case, while (Ẑ^k_t(χ)) is not a continuous martingale, its jumps are uniformly bounded by (λ_n μ_n)^{-1/2}, which goes to zero, and we can apply Burkholder's inequality in the form of Theorem 7.11(i). Thus the finite-dimensional distributions converge by Lindeberg's theorem, implying weak convergence.

The only place we used the hypothesis that λ_n μ_n → ∞ was in this last statement, so that if we only have λ_n → ∞ and μ_n/λ_n → 0, we still have (Vⁿ, Wⁿ) => (V⁰, W⁰). Q.E.D.
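The convergence Vⁿ => V⁰ of the initial measures can be watched directly: for a Poisson point process η_n of intensity λ_n on [0,1], the normalized variable λ_n^{-1/2}(η_n(φ) − λ_n∫φ) has mean 0 and variance ∫φ², the covariance of a white noise based on Lebesgue measure. A Monte Carlo sketch (the particular λ, test function, and sample sizes are arbitrary choices; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(7)

lam, reps = 400.0, 4000
int_phi = 2.0 / np.pi    # integral of phi(x) = sin(pi x) over [0,1]

vals = np.empty(reps)
for i in range(reps):
    n = rng.poisson(lam)               # number of initial particles
    x = rng.random(n)                  # particle positions, uniform on [0,1]
    eta_phi = np.sin(np.pi * x).sum()  # eta_n(phi)
    vals[i] = (eta_phi - lam * int_phi) / np.sqrt(lam)

print(vals.mean(), vals.var())  # mean near 0, variance near the white-noise value 1/2
```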
We have done the hard work and have arrived where we wanted to be, out of the woods and in the cherry orchard. We can now reach out and pick our results from the nearby boughs.

Define, for n = 0, 1, ...

Rⁿ_t(φ) = ∫₀ᵗ ∫_{R^d} G_{t-s}(∇φ,y) · Wⁿ(dy ds),

Uⁿ_t(φ) = ∫₀ᵗ ∫_{R^d} G_{t-s}(φ,y) Zⁿ(dy ds).

Recall from Proposition 7.8 that convergence of the martingale measures implies convergence of the integrals. It thus follows immediately from Theorem 8.7 that:

COROLLARY 8.8. Suppose λ_n → ∞ and μ_n/λ_n → 0. Then

(i) (Vⁿ, Wⁿ, Rⁿ) => (V⁰, W⁰, R⁰);

(ii) if, in addition, λ_n μ_n → ∞, (Vⁿ, Wⁿ, Zⁿ, Rⁿ, Uⁿ) => (V⁰, W⁰, Z⁰, R⁰, U⁰).
Rewrite (8.10) as

(8.13)  λ_n^{-1/2}(ηⁿ_t(φ) − λ_n⟨φ⟩) = Vⁿ(G_t(φ,·)) + √μ_n Uⁿ_t(φ) + Rⁿ_t(φ),

where ⟨φ⟩ = ∫_{R^d} φ(x) dx. In view of Corollary 8.8 we can read off all the weak limits for which μ → 0.

THEOREM 8.9. (i) If λ_n → ∞ and μ_n → 0, then λ_n^{-1/2}(ηⁿ_t − λ_n dx) converges in D{[0,1], S'(R^d)} to a solution of the SPDE

∂v/∂t = ½Δv + ∇·Ẇ,  v₀ = V⁰.

(ii) If λ_n → ∞, μ_n → ∞ and μ_n/λ_n → 0, then (λ_n μ_n)^{-1/2}(ηⁿ_t − λ_n dx) converges in D{[0,1], S'(R^d)} to a solution of the SPDE

∂v/∂t = ½Δv + Ż,  v₀ = 0.

(iii) If λ_n → ∞, λ_n μ_n → ∞ and μ_n → c² ≠ 0, then λ_n^{-1/2}(ηⁿ_t − λ_n dx) converges in D{[0,1], S'(R^d)} to a solution of the SPDE

∂v/∂t = ½Δv + cŻ + ∇·Ẇ,  v₀ = V⁰.

Theorem 8.9 covers the interesting limits in which λ → ∞ and μ/λ → 0. These are all Gaussian. The remaining limits are in general non-Gaussian. Those in which μ and λ both tend to finite limits are trivial enough to pass over here, which leaves us two cases:

(iv) λ → ∞ and μ/λ → c² > 0;

(v) λ → ∞ and μ/λ → ∞.

The limits in case (v) turn out to be zero, as we will show below. Thus the only non-trivial, non-Gaussian limit is case (iv), which leads to measure-valued processes.

A MEASURE DIFFUSION

THEOREM 8.10. Suppose λ_n → ∞ and μ_n/λ_n → c² > 0. Then (1/λ_n) ηⁿ_t converges weakly in D{[0,1], S'(R^d)} to a process {η_t, t ∈ [0,1]} which is continuous and has measure values.
There are a number of proofs of this theorem in the literature (see the Notes), but all those we know of use specific properties of branching processes which we don't want to develop here, so we refer the reader to the references for the proof, and limit ourselves to some formal remarks.

We can get some idea of the behavior of the limiting process by rewriting (8.13) in the form

(8.14)  (1/λ_n) ηⁿ_t(φ) = ⟨φ⟩ + c Uⁿ_t(φ) + λ_n^{-1/2}(Vⁿ(G_t(φ,·)) + Rⁿ_t(φ)).

If (λ_n, μ_n) is any sequence satisfying (iv), then {(Vⁿ, Wⁿ, Zⁿ, Rⁿ, Uⁿ, (1/λ_n)ηⁿ)} is tight by Theorem 8.6 and Proposition 7.8, hence we may choose a subsequence along which it converges weakly to a limit (V, W, Z, R, U, η). From (8.14),

(8.15)  η_t(φ) = ⟨φ⟩ + c U_t(φ) = ⟨φ⟩ + c ∫₀ᵗ ∫_{R^d} G_{t-s}(φ,y) Z(dy ds).
In SPDE form this is 5t
(8.16)
~o(dX)
= dx
We can see several things from this. For one thing, D^n is positive, hence so is η. Consequently η_t, being a positive distribution, is a measure. It must be non-Gaussian (Gaussian processes aren't positive), so Z itself must be non-Gaussian. In particular, it is not a white noise.

Now η_0 is Lebesgue measure, but if d > 1, Dawson and Hochberg have shown that η_t is purely singular with respect to Lebesgue measure for t > 0. If d = 1, Roelly-Coppoletta has shown that η_t is absolutely continuous.

To get some idea of what the orthogonal martingale measure Z is like, note from Proposition 8.1 that

   ⟨Z^n(A)⟩_t = ∫₀ᵗ (1/λ_n) D^n_s(A) ds,

which suggests that in the limit

   ⟨Z(A)⟩_t = ∫₀ᵗ η_s(A) ds,

or, in terms of the measure ν of Corollary 2.8,

(8.17)   ν(dx, ds) = η_s(dx) ds.

This indicates why the SPDE (8.16) is not very useful for studying η: the statistics of Z are simply too closely connected with those of η, for Z vanishes wherever η does, and η vanishes on large sets - in fact on a set of full Lebesgue measure if d ≥ 2.
In fact, it seems easier to study η, which is a continuous state branching process, than Z, so (8.16) effectively expresses η in terms of a process which is even less understood. This contrasts with cases (i)-(iii), which we understand rather well.

Nevertheless, there is a heuristic transformation of (8.16) into an SPDE driven by a white noise which gives some intuitive understanding, and which is worthwhile giving. This has been used by Dawson, but it has never, to our knowledge, been made rigorous - and it will certainly not be made rigorous here, we hasten to add.

Let W be a real-valued white noise on R^d × R₊. Then (8.17) indicates that Z has the same mean and covariance as Z', where

   Z'_t(φ) = ∫₀ᵗ ∫_{R^d} φ(y) √(η_s(y)) W(dy ds).

(If d = 1, η_s(dy) = η_s(y)dy, so √(η_s(y)) makes sense. If d > 1, η_s is a singular measure, so it is hard to see what √η_s means, but let's not worry about it.)

In derivative form, Ż' = √η Ẇ, which makes it tempting to rewrite the SPDE (8.16) as

(8.18)   ∂η/∂t = ½Δη + c √η Ẇ.

It is not clear that this equation has any meaning if d ≥ 2, and even if d = 1, it is not clear what its connection is with the process η which is the weak limit of the infinite particle system, so it remains one of the curiosities of the subject.
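Though the limit theorem itself is beyond a quick check, the particle system behind it is easy to simulate. Below is a toy sketch (ours, not the text's; all names are ours) of a critical binary branching Brownian motion in d = 2: each particle branches at rate μ, dying or splitting in two with probability ½ each, and we record the total mass of the normalized empirical measure (1/λ)D_t. The two features used above are visible: (1/λ)D_t is a positive measure, and criticality keeps its expected total mass constant.

```python
import math, random

def poisson(lam, rng):
    """Knuth's Poisson sampler (fine for moderate lam)."""
    l, p, n = math.exp(-lam), 1.0, 0
    while True:
        p *= rng.random()
        if p < l:
            return n
        n += 1

def branching_bm(lam=100, mu=4.0, T=1.0, dt=0.02, rng=None):
    """Critical binary branching Brownian motion in d = 2.

    Start with Poisson(lam) particles uniform on the unit square; each
    particle diffuses and, at rate mu, either dies or splits in two with
    probability 1/2 each.  Returns the particles alive at time T.
    """
    rng = rng or random.Random(0)
    parts = [(rng.random(), rng.random()) for _ in range(poisson(lam, rng))]
    sd = math.sqrt(dt)
    for _ in range(int(round(T / dt))):
        new = []
        for (x, y) in parts:
            x += sd * rng.gauss(0.0, 1.0)
            y += sd * rng.gauss(0.0, 1.0)
            if rng.random() < mu * dt:       # branching event
                if rng.random() < 0.5:
                    new += [(x, y), (x, y)]  # split in two
                # else: the particle dies, leaving nothing
            else:
                new.append((x, y))
        parts = new
    return parts

rng = random.Random(42)
masses = [len(branching_bm(rng=rng)) / 100.0 for _ in range(30)]
mean_mass = sum(masses) / len(masses)
print(mean_mass)  # criticality: the mean total mass of (1/lambda)D_T stays near 1
```

With λ fixed this is only a caricature of the λ → ∞ limit, of course, but it is the picture to keep in mind for the measure diffusion.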
THE CASE μ/λ → ∞
REMARKS. One of the features of Theorem 8.9 is that it allows us to see which of the three sources of randomness - the initial distribution, the branching, and the diffusion - drives the limit process. In case (i), the branching is negligible and the noise comes entirely from the initial distribution and the diffusion. In case (ii), the initial distribution washes out completely; the diffusion becomes deterministic and only contributes to the drift term ½Δ, while the noise comes entirely from the branching. In case (iii), all three effects contribute. In case (iv), the measure-valued diffusion, we see from (8.16) that the initial distribution and diffusion both become deterministic, while the randomness comes entirely from the branching. In case (v), which we will analyze now, it turns out that all the sources
wash out. Notice that Theorem 8.6 doesn't apply when μ/λ → ∞, and in fact we can't affirm that the family is tight. Nevertheless, (1/λ)D tends to zero in a rather strong way. In fact the unnormalized process tends to zero.
THEOREM 8.11. Let λ → ∞ and μ/λ → ∞. Then for any compact set K ⊂ R^d and ε > 0,

(i)   P_{λ,μ}{D_t(K) = 0, all t ∈ [ε, 1/ε]} → 1

and, if d = 1,

(ii)  P_{λ,μ}{D_t(K) = 0, all t ≥ ε} → 1.
Before p r o v i n g this we need to look at f i r s t  h i t t i n g times for b r a n c h i n g B r o w n i a n motions.
This d i s c u s s i o n is c o m p l i c a t e d by the p r o f u s i o n of particles: m a n y
of them may hit a given set.
To which belongs the honor of first entry?
The type of first h i t t i n g time we have in mind uses the implicit p a r t i a l o r d e r i n g of the b r a n c h i n g p r o c e s s  its p a t h s form a tree, a f t e r all  and those familiar w i t h two p a r a m e t e r m a r t i n g a l e s might be i n t e r e s t e d to compare these w i t h s t o p p i n g lines.
Suppose that {X^α, α ∈ I} is the family of processes we constructed at the beginning of the chapter, and let A ⊂ R^d be a Borel set. For each α, let

   τ^α_A = inf{t > 0: X^α_t ∈ A},

and define T^α_A by

   T^α_A = τ^α_A   if τ^β_A = ∞ for all β < α, β ≠ α;
   T^α_A = ∞       otherwise.

The time T^α_A is our analogue of a first hitting time. Notice that T^α_A may be finite for many different α, but if α < β, T^α_A and T^β_A can't both be finite. Consider, for example, the first entrance T_E of the British citizenry to an earldom. If an individual - call him α - is created the first Earl of Emsworth, some of his descendants may inherit the title, but his elevation is the vital one, so only T^α_E is finite. On the other hand, a first cousin - call him β - may be created the first Earl of Ickenham; then T^β_E will also be finite.
In general, if α ≠ β and if T^α_A and T^β_A are both finite, then the descendants of X^α and of X^β form disjoint families. (Why?) By the strong Markov property and the independence of the different particles, the post-T^α_A and post-T^β_A processes are conditionally independent given X^α(T^α_A) and X^β(T^β_A).

Let P^x_μ be the distribution of the branching Brownian motion with branching rate μ which starts with a single particle X¹ at x. Under P^x_0, then, X¹_t is an ordinary (non-branching) Brownian motion.

The following result is a fancy version of (8.4). It is true for the same reason (symmetry), but it is more complicated and rates a detailed proof.

PROPOSITION 8.12. Let ψ(x,t) be a bounded Borel function on R^d × R₊, with ψ(x,∞) = 0. For any Borel set A ⊂ R^d,

   E^x_μ{ Σ_α ψ(X^α(T^α_A), T^α_A) } = E^x_0{ ψ(X¹(τ_A), τ_A) }.

PROOF. By standard capacity arguments it is enough to prove this for the case where A is compact and ψ has compact support in R^d × [0,∞). We will drop the subscript A and write T^α and τ^α instead of T^α_A and τ^α_A.

Define u(x,t) = E^x_0{ψ(X¹(τ), t + τ)}. Note that {u(X¹(t ∧ τ), t ∧ τ), t ≥ 0} is a martingale, so that we can conclude that u ∈ C⁽²⁾ on the open set A^c × R₊ and

   ∂u/∂t + ½Δu = 0.

Thus by Ito's formula

   Y_t ≝ Σ_α u(X^α(t ∧ T^α), t ∧ T^α) = u(x,0) + Σ_α ∫ 1{s < t ∧ T^α} ∇u(X^α_s, s)·dX^α_s.

The α-th summand vanishes on {t < β(α)}, where β(α) is the birth time of α, and the integrand vanishes on the set {s ≥ ζ(α), T^α ≥ ζ(α)}; given the birth times and places, the summands are conditionally independent. Thus in all cases E{Y_t} = u(x,0). (This even holds if T^α = ∞, since both sides vanish then.) Thus

   u(x,0) = E^x_μ{ lim_t Y_t } = E^x_μ{ Σ_α ψ(X^α(T^α), T^α) }.   Q.E.D.
REMARKS. This implies that the hitting probabilities of the branching Brownian motion are dominated by those of Brownian motion - just take ψ ≡ 1 and note that the left-hand side of the formula in Proposition 8.12 dominates

   E^x{ sup_α ψ(X^α(T^α), T^α) } = P^x{T^α < ∞, some α}.

It also implies that this left-hand side is independent of μ.
We need several results before we can prove Theorem 8.11. Let us first treat the case d = 1. Let D be the unit interval in R¹ and put

   H(x) = P^x{T^α_D < ∞, some α}.

PROPOSITION 8.14. H(x) = (6/μ)(x − 1 + √(6/μ))⁻² if x > 1.

PROOF. This will follow once we show that H is the unique solution of

(8.19)   u'' = μu² on (1,∞),   u(1) = 1,   0 < u < 1 on (1,∞),

since it is easily verified that the given expression satisfies (8.19). Let T = inf_α T^α_D. If x > 1, Proposition 8.12 implies
(8.20)   P^x{τ¹ < h} = P^x_0{τ < h} = o(h)   as h → 0.

Let σ be the first branching time of the process. Then H(x) is the sum of P^x{T < σ ∧ h}, which is o(h) by (8.20), and the two probabilities P^x{σ ≥ h, σ ∧ h ≤ T < ∞} and P^x{σ < h, σ ∧ h < T < ∞}. Apply the strong Markov property at σ ∧ h to the latter two. If σ > h, there is still only one particle, X¹, alive, so T = τ¹, and the probability equals E^x{σ > h; H(X¹_{σ∧h})} + o(h), where the o(h) comes from ignoring the possibility that T < σ ∧ h. If σ ≤ h, there are two independent particles, X¹¹ and X¹².
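Proposition 8.14 is easy to check numerically. The sketch below (ours, not the text's) verifies by centered finite differences that H(x) = (6/μ)(x − 1 + √(6/μ))⁻² satisfies H'' = μH² with H(1) = 1 and 0 < H < 1 on (1,∞), for several values of μ.

```python
import math

def H(x, mu):
    """Candidate hitting probability of Proposition 8.14."""
    return (6.0 / mu) * (x - 1.0 + math.sqrt(6.0 / mu)) ** (-2)

def ode_residual(x, mu, h=1e-4):
    """Centered-difference check of the ODE H'' = mu * H^2 at x."""
    d2 = (H(x + h, mu) - 2.0 * H(x, mu) + H(x - h, mu)) / h**2
    return d2 - mu * H(x, mu) ** 2

for mu in (0.5, 1.0, 4.0):
    assert abs(H(1.0, mu) - 1.0) < 1e-12          # boundary condition u(1) = 1
    for x in (1.1, 2.0, 5.0, 10.0):
        assert 0.0 < H(x, mu) < 1.0               # 0 < u < 1 on (1, infinity)
        assert abs(ode_residual(x, mu)) < 1e-4    # u'' = mu u^2
print("Proposition 8.14 checks out numerically")
```

The exponent −2 is forced by the ODE: a trial solution A(x + B)⁻² gives 6A(x+B)⁻⁴ on the left and μA²(x+B)⁻⁴ on the right, so A = 6/μ, and u(1) = 1 then pins down B.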
The idea of the proof is the same: the martingale measure converges, hence so does the stochastic integral. However, we can't use Propositions 7.6 and 7.8, for the integrand is not in P_S; we will use 7.12 and 7.13 instead. Define (V⁰, W⁰) and (V^n, W^n) canonically on D = D([0,1], S'(R^2d)), and denote their probability distributions by P⁰ and P^n respectively. By (8.23) we can define D_t^n on D simultaneously for n = 0,1,2,..., so we can also define the stochastic integrals, for each s, t and x, independent of n.

Hint. Show g(·,·,t₀)1{s < t₀} ... For q > d/2 this is
≤ C ‖T*K(φ,·)‖²_q.

T is a differential operator of order k, hence it is bounded from H_{q+k} to H_q, while K maps H_{q+k−2} to H_{q+k} boundedly. Thus the above is

≤ C ‖φ‖²_{q+k−2}.

It follows that U is continuous in probability on H_{q+k−2} and, by Theorem 4.1, it is a random linear functional on H_{−p} for any p > q + k − 2 + d/2. Fix a p > d + k − 2 and let n = p + 2. Then U ∈ H_{−n}. (It is much easier to see that U ∈ S'(R^d): just note that T*K(φ,·) is bounded if φ ∈ S(R^d) and apply Corollary 4.2.)
If φ ∈ C₀,

   U(Δφ) = ∫_D T*K(Δφ,y) M(dy) = ∫_D T*φ(y) M(dy).

On the other hand, C₀ ⊂ H₀, and C₀ is dense in all the H_t, so the map φ → Δφ is continuous from H_n to H_p, while U is continuous on H_p; thus φ → U(Δφ) is continuous from H_n to R. Meanwhile, on the right-hand side of (9.5),

   E{ |∫_{R^d} T*φ dM|² } ≤ C ‖T*φ‖²_q ≤ C ‖φ‖²_{k+q},

which tells us the right-hand side is continuous in probability on H_{k+q}; hence, by Theorem 4.1, it extends to a linear functional on H_n. Thus (9.5) holds for φ ∈ H_n.
LIMITS OF THE BROWNIAN DENSITY PROCESS
The Brownian density process D_t satisfies the equation

(9.7)   ∂η/∂t = ½Δη + a ∇·Ẇ + bŻ,

where W is a d-dimensional white noise and Z is an independent one-dimensional white noise, both on R^d × R₊, and the coefficients a and b are constants. (They depend on the limiting behavior of μ and λ.)

Let us ask if the process has a weak limit as t → ∞. It is not too hard to see that the process blows up in dimensions d = 1 and 2, so suppose d ≥ 3. The Green's function G_t for the heat equation on R^d is related to the Green's function K for Laplace's equation by

(9.8)   K(x,y) = −∫₀^∞ G_t(x,y) dt,

and K itself is given by

   K(x,y) = −C_d / |y − x|^{d−2},

where C_d is a constant. The solution of (9.7) is

   η_t(φ) = η₀G_t(φ) + a ∫₀ᵗ ∫_{R^d} ∇G_{t−s}(φ,y)·W(dy ds) + b ∫₀ᵗ ∫_{R^d} G_{t−s}(φ,y) Z(dy ds)

          ≝ η₀G_t(φ) + a R_t(φ) + b U_t(φ).

R_t and U_t are mean-zero Gaussian processes. The covariance of R_t is

   E{R_t(φ)R_t(ψ)} = ∫₀ᵗ ∫_{R^d} (∇_x G_{t−s})(φ,y)·(∇_x G_{t−s})(ψ,y) dy ds.

Now ∇_x G = −∇_y G; if we then integrate by parts, this is

   = −∫₀ᵗ ∫_{R^d} Δ_y G_{t−s}(φ,y) G_{t−s}(ψ,y) dy ds

   = −∫₀ᵗ ∫_{R^d} G_{t−s}(Δφ,y) G_{t−s}(ψ,y) dy ds

   = −∫₀ᵗ G_{2t−2s}(Δφ,ψ) ds     by (5.7)

   = −½ ∫₀^{2t} G_u(Δφ,ψ) du

   = −½ ∫_{R^d} ψ(y)[G_{2t}(y,φ) − φ(y)] dy.

Since d ≥ 3, G_t → 0 as t → ∞, so

(9.9)   E{R_t(φ)R_t(ψ)} → ½ ⟨φ,ψ⟩.

The calculation for U is easier since we don't need to integrate by parts:

   E{U_t(φ)U_t(ψ)} = ∫₀ᵗ ∫_{R^d} G_{t−s}(φ,y) G_{t−s}(ψ,y) dy ds = ½ ∫₀^{2t} G_u(φ,ψ) du → −½ K(φ,ψ)

as t → ∞. Taking this and (9.9) into account, we see:

PROPOSITION 9.3. Suppose d ≥ 3. As t → ∞, √2 R_t converges weakly to a white noise and √2 U_t converges weakly to a random Gaussian tempered distribution with covariance function

(9.10)   E{U(φ)U(ψ)} = −K(φ,ψ).

In particular, D_t converges weakly as t → ∞. The convergence is weak convergence of S'(R^d)-valued random variables in all cases.
Exercise 9.1. Fill in the details of the convergence argument.

DEFINITION. The mean zero Gaussian process {U(φ): φ ∈ S(R^d)} with covariance (9.10) is called the Euclidean free field.
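Relation (9.8) pins K down as a multiple of the Newtonian kernel. For the heat kernel normalized so that ∂G/∂t = ΔG, i.e. G_t(x,y) = (4πt)^{−3/2} exp(−|x−y|²/4t) in d = 3, one has ∫₀^∞ G_t(x,y) dt = 1/(4π|x−y|); a different normalization of G (such as the ½Δ in (9.7)) rescales the constant C_d but not the power law |x−y|^{2−d}. A quick numerical confirmation of this, ours and not the text's:

```python
import math

def heat_kernel_time_integral(r, t_max=1e6, n=4000):
    """Integrate G_t = (4*pi*t)**-1.5 * exp(-r^2/(4t)) over t in (0, t_max],
    using the substitution t = e^s (trapezoid rule in s), and add the
    analytic tail  int_{t_max}^inf (4*pi*t)**-1.5 dt = (4*pi)**-1.5 * 2/sqrt(t_max)."""
    s0, s1 = math.log(1e-8), math.log(t_max)
    ds = (s1 - s0) / n
    total = 0.0
    for i in range(n + 1):
        t = math.exp(s0 + i * ds)
        f = (4 * math.pi * t) ** -1.5 * math.exp(-r * r / (4 * t)) * t  # * t: Jacobian dt = t ds
        total += f if 0 < i < n else f / 2
    total *= ds
    tail = (4 * math.pi) ** -1.5 * 2.0 / math.sqrt(t_max)
    return total + tail

for r in (0.5, 1.0, 2.0):
    exact = 1.0 / (4 * math.pi * r)      # Newtonian potential in d = 3
    approx = heat_kernel_time_integral(r)
    assert abs(approx - exact) / exact < 1e-3
print("(9.8): int_0^infinity G_t dt recovers the 1/(4*pi*r) kernel in d = 3")
```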
CONNECTION WITH SPDE's

We can get an integral representation of the free field U from Proposition 9.3, for the weak limit of √2 U_t has the same distribution as

   ∫₀^∞ ∫_{R^d} G_s(φ,y) Z(dy ds).

This is not enlightening; we would prefer a representation independent of time. This is not hard to find. Let W be a d-dimensional white noise on R^d (not on R^d × R₊ as before) and, for φ ∈ S(R^d), define

(9.11)   U(φ) = ∫_{R^d} ∇K(φ,y)·W(dy).

If φ, ψ ∈ S(R^d),

   E{U(φ)U(ψ)} = ∫_{R^d} ∇K(φ,y)·∇K(ψ,y) dy

               = −∫_{R^d} K(φ,y) ΔK(ψ,y) dy

               = −∫_{R^d} K(φ,y) ψ(y) dy

               = −K(φ,ψ).

(This shows a posteriori that U(φ) is defined!) Thus, as U(φ) is a mean zero Gaussian process with covariance (9.10), it is a free field.

PROPOSITION 9.4. U satisfies the SPDE

(9.12)   ΔU = ∇·Ẇ.

PROOF.

   U(Δφ) = ∫_{R^d} ∇K(Δφ,y)·W(dy) = ∫_{R^d} ∇φ(y)·W(dy),

since for φ ∈ S(R^d), K(Δφ,y) = φ(y). But this is the weak form of (9.12). Q.E.D.
Exercise 9.1. Convince yourself that for a.e. ω, (9.12) is an equation in distributions.
SMOOTHNESS

Since we are working on R^d, we can use the Fourier transform. Let H_t be the Sobolev space defined in Example 1a, Chapter 4. If u is any distribution, we say u ∈ H_t^loc if for any φ ∈ C₀^∞, φu ∈ H_t.

PROPOSITION 9.5. Let ε > 0. Then with probability one, W ∈ H^loc_{−d/2−ε} and U ∈ H^loc_{1−d/2−ε}, where U is the free field.

PROOF. The Fourier transform of φW is a function:

   (φW)^(λ) = ∫_{R^d} e^{2πiλ·x} φ(x) W(dx),

and

   E{ ∫ (1+|λ|²)^t |(φW)^(λ)|² dλ } = ∫ (1+|λ|²)^t ∫∫ φ(x)φ(y) e^{2πiλ·(y−x)} E{W(dx)W(dy)} dλ ≤ C ∫ (1+|λ|²)^t dλ,

which is finite if 2t < −d, in which case ‖φW‖_t is evidently finite a.s. Now ∇·W ∈ H^loc_{−d/2−1−ε} so, since U satisfies ΔU = ∇·Ẇ, the elliptic regularity theorem of PDE's tells us U ∈ H^loc_{1−d/2−ε}. Q.E.D.
THE MARKOV PROPERTY OF THE FREE FIELD
We discussed Lévy's Markov and sharp Markov properties in Chapter One, in connection with the Brownian sheet.
They make sense for general
distribution-valued processes, but one must first define the σ-fields. This involves extending the distribution: since U takes values in a Sobolev space, the Sobolev embedding theorem tells us that it has a trace on certain lower-dimensional manifolds. But since we want to talk about its values on rather irregular sets, we will use a more direct method.

If μ is a measure on R^d, let us define U(μ) by (9.11). This certainly works if μ is of the form μ(dx) = φ(x)dx, and it will continue to work if μ is sufficiently nice. By the calculation following (9.11), "μ is sufficiently nice" means that

(9.13)   ‖μ‖²_E ≝ −∫∫ μ(dx) K(x,y) μ(dy) < ∞;

‖μ‖_E is the energy of μ. Let E be the class of measures on R^d of finite energy, and for B ⊂ R^d define the σ-fields

   G_B = σ{U(μ): μ ∈ E, μ(A) = 0 for all A ⊂ R^d − B},

   G*_B = ∩ {G_A: A ⊃ B, A open}.

PROPOSITION 9.6. The free field U satisfies Lévy's sharp Markov property relative to bounded open sets in R^d.

This follows easily from the balayage property of K: if D ⊂ R^d is an open set and if μ is supported by D^c, there exists a measure ν on ∂D such that K(ν,y) = K(μ,y) for all y ∈ D and all but a set of capacity zero in ∂D, while |K(ν,y)| ≤ |K(μ,y)| for all y. We call ν the balayage of μ on ∂D.
Suppose μ ∈ E and supp μ ⊂ D^c. If ν is the balayage of μ on ∂D, we claim that

(9.14)   E{U(μ) | G_D} = U(ν).

This will do it since, as U(ν) is G_∂D-measurable, the left-hand side of (9.14) must be E{U(μ) | G_∂D}. Note that ν ∈ E (for |K(ν,·)| ≤ |K(μ,·)|), so if λ ∈ E and supp(λ) ⊂ D̄,

   E{(U(μ) − U(ν)) U(λ)} = −∫ [K(μ,y) − K(ν,y)] λ(dy) = 0,

since K(μ,x) = K(ν,x) on D̄, except possibly on a set of capacity zero, and λ, being of finite energy, does not charge sets of capacity zero; thus the integrand vanishes λ-a.e. But we are dealing with Gaussian processes, so this implies (9.14). Q.E.D.
NOTES
We omitted most references from the body of the text - a consequence of putting off the bibliography till last - and we will try to remedy that here. Our references will be rather sketchy - you may put that down to a lack of scholarship - and we list the sources from which we personally have learned things, which may not be the sources in which they originally appeared. We apologize in advance to the many whose work we have slighted in this way.
CHAPTER ONE
The Brownian sheet was introduced by Kitagawa in [37], though it is usually credited to others, perhaps because he failed to prove the underlying measure was countably additive. This omission looks less serious now than it did then.
The Garsia-Rodemich-Rumsey Theorem (Theorem 1.1) was proved for one-parameter processes in [23], and was proved in general in the brief and elegant article [22], which is the source of this proof. This commonly gives the right order of magnitude for the modulus of continuity of a process, but doesn't necessarily give the best constant, as, for example, in Proposition 1.4. The exact modulus of continuity there, as well as many other interesting sample-path properties of the Brownian sheet, may be found in Orey and Pruitt [49].
Kolmogorov's Theorem is usually stated more simply than in Corollary 1.2. In particular, the extra log terms there are a bit of an affectation. We just were curious to see how far one can go with non-Gaussian processes. Our version is only valid for real-valued processes, but the theorem holds for metric-space valued processes. See for example [44, p.519].

The Markov property of the Brownian sheet was proved by L. Pitt [52]. The splitting field is identified in [59]; the proof there is due to S. Orey (private communication).
The propagation of singularities in the Brownian sheet is studied in detail in [56].
Orey and Taylor showed the existence of singular points of the Brownian
path and determined their Hausdorff dimension in [50].
Proposition 1.7 is due to G.
Zimmerman [63], with a quite different proof. The connection of the vibrating string and the Brownian sheet is due to E. Cabaña [8], who worked it out in the case of a finite string, which is harder than the infinite string we treat.
He also discusses the energy of the string.
CHAPTER TWO
In terms of the mathematical techniques involved, one can split up much of the study of SPDE's into two parts:
that in which the underlying noise has
nuclear covariance, and that in which it is a white noise.
The former leads
naturally to Hilbert space methods; these don't suffice to handle white noise, which leads to some fairly exotic functional analysis.
This chapter is an attempt to
combine the two in a (nearly) real variable setting.
The integral constructed here
may be technically new, but all the important cases can also be handled by previous integrals. (We should explain that we did not have time or space in these notes to cover SPDE's driven by martingale measures with nuclear covariance, so that we never take advantage of the integral's full generality). Integration with respect to orthogonal martingale measures, which include white noise, goes back at least to Gihman and Skorohod [25].
(They assumed as part
of their definition that the measures are worthy, but this assumption is unnecessary; c.f.
Corollary 2.9.) Integrals with respect to martingale measures having nuclear covariance
have been well-studied, though not in those terms. An excellent account can be found in Métivier and Pellaumail [46]. They handle the case of "cylindrical processes" (which include white noise) separately.

The measure ν of Corollary 2.8 is a Doléans measure at heart, although we haven't put it in the usual form. True Doléans measures for such processes have been
constructed by Huang [31].
Proposition 2.10 is due to J. Watkins
[61].
Bakry's example can be found
in [2].
CHAPTER THREE
The linear wave and cable equations driven by white and colored noise have been treated numerous times.
Dawson
[13] gives an account of these and similar
equations. The existence and uniqueness of the solution of (3.5) were established by Dawson [14].
The L^p-boundedness and Hölder continuity of the paths are new. See [57]
See [57]
for a detailed account of the sample path behavior in the linear case and for more on the barrier problem.
The wave equation has been treated in the literature of two-parameter processes, going back to R. Cairoli's 1972 article
[ 9 ] . The setting there is special
because of the nature of the domain: on these domains, only the initial position need be specified, not the velocity.
As indicated in Exercises 3.4 and 3.5, one can extend Theorem 3.2 and Corollary 3.4, with virtually the same proof, to the equation

   ∂V/∂t = ∂²V/∂x² + g(V,t) + f(V,t)Ẇ,

where both f and g satisfy Lipschitz conditions. Such equations can model physical systems in which g is a potential term.
Faris and Jona-Lasinio [19] have used similar
equations to model the "tunnelling" of a system from one stable state to another. We chose reflecting boundary conditions in (3.5) and (3.5b) for convenience.
They can be replaced by general linear homogeneous boundary conditions;
the important point is that the Green's function satisfies (3.6) and (3.7), which hold in general [27].
CHAPTER FOUR
We follow [24] and some unpublished lecture notes of Ito here. See also [34].
CHAPTER FIVE

The techniques used to solve (5.1) also work when L is a higher order elliptic operator. In fact the Green's function for higher order operators has a lower order pole, so that the solutions are better behaved than in the second-order case.

We suspect that Theorem 5.1 goes back to the mists of antiquity. Ito studies a special case in [33]. Theorem 5.4 and other results on the sample paths of the solution can be found in [58]. See Da Prato [12] for another point of view on these and similar theorems.

CHAPTER SIX

The basic reference here is Billingsley's book [5]. Aldous' theorem is in [1], and Kurtz' criterion is in [42]. Mitoma's theorem is proved in [47], but the article is not self-contained. Fouque [21] has generalized this to a larger class of spaces which includes the familiar spaces D(Ω). We follow Kurtz' treatment; his proof is close to that of Mitoma.
CHAPTER SEVEN

It may not be obvious from the exposition - in fact we took care to hide it - but the first part of the chapter is designed to handle deterministic integrands. That accounts for its relatively elementary character. Theorems general enough to handle the random integrands met in practice
seem to be delicate; we were s u r p r i s e d to find out how little is known, even in the classical case.
Our w o r k in the section "an extension" is just a first attempt in
that direction. P e t e r Kotelenez showed us the p r o o f of P r o p o s i t i o n 7.8. due to K a l l i a n p u r and W o l p e r t [57].
[36]°
An earlier,
T h e o r e m 7.10 is
clumsier v e r s i o n can be found in
The B U r k h o l d e r  D a v i s  G u n d y t h e o r e m is surmnarized in its most h i g h l y d e v e l o p e d
form in
[7].
CHAPTER EIGHT
This chapter completes a cycle of results on weak limits of Poisson systems of b r a n c h i n g B r o w n i a n motion due to a number of authors. strong a word,
"Completes"
is p e r h a p s too
for these point in many directions and we have only f o l l o w e d one: to
find all p o s s i b l e w e a k limits of a certain class of infinite p a r t i c l e systems, and to connect t h e m with SPDE's. T h e s e systems were i n v e s t i g a t e d by M a r t i n  L o f nonbranching particles
(~ = 0 in our terminology)
w h o c o n s i d e r e d b r a n c h i n g B r o w n i a n motions in results look s u p e r f i c i a l l y d i f f e r e n t since,
Rd
[45] who c o n s i d e r e d
and by Holley and Stroock
[29],
with p a r a m e t e r s k = ~ = I ; their
instead of letting ~ and k tend to
infinity, they rescale the p r o c e s s in both space and time by r e p l a c i n g x by x/~ a n d t by ~2t. d
B e c a u s e of the B r o w n i a n scaling, this has the same effect as r e p l a c i n g k by
2 and ~ by ~ , and leaving x and t unscaled.
~/k = ~
2d
The critical p a r a m e t e r is then
, so their results depend on the dimension d of the space.
If d > 3, they
find a G a u s s i a n limit (case (ii) of T h e o r e m 8.9), if d = 2 they have the m e a s u r e  v a l u e d diffusion 8.11). [33],
(case (iv)) and if d = I, the p r o c e s s tends to zero (Theorem
The case ~ = 0, i n v e s t i g a t e d by MartinLof and, with some differences, [34], also leads to a G a u s s i a n limit
Gorostitza 8.9(iii)
if ~ > 0).
by Ito
(Theorem 8.9 (i)).
[26] t r e a t e d the case w h e r e ~ is fixed and k ÷ ~
(Theorem
H e also gets a d e c o m p o s i t i o n of the noise into two parts, b u t it
is different from ours; he has p o i n t e d out not in fact independent.
[26, Correction]
that the two parts are
The non-Gaussian case (case (iv)) is extremely interesting and has been investigated by numerous authors. S. Watanabe [60] proved the convergence of the system to a measure-valued diffusion. Different proofs have been given by Dawson [13], Kurtz [42], and Roelly-Coppoletta [53]. Dawson and Hochberg [15] have looked at the Hausdorff dimension of the support of the measure and showed it is singular with respect to Lebesgue measure if d ≥ 2. It is absolutely continuous if d = 1 (Roelly-Coppoletta [53]). A related equation, which can be written suggestively as

   ∂η/∂t = ½Δη + √(η(1−η)) Ẇ,

has been studied by Fleming and Viot [20].
The case in which μ/λ → ∞ comes up in Holley and Stroock's paper if d = 1. The results presented here, which are stronger, are joint work with E. Perkins and J. Watkins, and appear here with their permission. The noise W of Proposition 8.1 is due to E. Perkins, who used it to translate Ito's work into the setting of SPDE's relative to martingale measures (private communication).

A more general and more sophisticated construction of branching diffusions can be found in Ikeda, Nagasawa, and Watanabe [32]. Holley and Stroock also give a construction.
The square process Q is connected with U-statistics. Dynkin and Mandelbaum [17] showed that certain central limit theorems involving U-statistics lead to multiple Wiener integrals, and we wish to thank Dynkin for suggesting that our methods might handle the case when the particles were diffusing in time. In fact Theorem 8.18 might be viewed as a central limit theorem for certain U-statistics evolving in time.

We should say a word about generalizations here. We have treated only the simplest settings for the sake of clarity, but there is surprisingly little change if we move to more complex systems. We can replace the Brownian particles by branching diffusions, or even branching Hunt processes, for instance, without changing the character of the limiting process (Roelly-Coppoletta [Thesis, U. of Paris, 1984]). One can treat more general branching schemes. If the family size N
has a finite variance, Gorostiza [26] has shown that one gets limiting equations of the form

   ∂D/∂t = ½ΔD + βD + αŻ + γ∇·Ẇ,

where β = 0 if E{N − 1} = 0, so that the only new effect is to add a growth term. The branching rate μ can tend to zero too, in certain cases when μ/λ has a finite limit. If E{N²} = ∞, however, things do change, and this case needs further study.

CHAPTER NINE

The term "random field" is a portmanteau word. At one time or another, it has been used to cover almost any process having more than one parameter. It seems to be used particularly for elliptic systems, which is what this chapter is about, though why it should be used more often for elliptic than parabolic systems (or hyperbolic, for that matter) is something of a mystery.

We have used some heavy technical machinery here. Frankly, we were under deadline pressure and didn't have time to work out an easier approach. For Sobolev spaces, see Adams [64]; for the PDE theorems, see Folland [67] and Hörmander [30]. The classical potential theory and the energy of measures can be found in Doob [66]. The exponent n of the Sobolev space in Proposition 7.1 can doubtless be improved; for example, one can bypass certain of the machinery and get n > k + d/2 rather than n > k + d.

The free field was introduced by Nelson [48], who used it to construct the quantum field which describes non-interacting particles. He proved the sharp Markov property, and also showed that it can be modified to describe interacting systems.

Rozanov's book [54] is a good reference for Lévy's Markov property. See Evstigneev and Kusuoka [43] for results on the strong Markov property which also apply to parabolic systems. If M is a white noise, one finds, contrary to the claim in [57], that Lévy's Markov property commonly holds but the sharp Markov property does not.
CHAPTER TEN
There is no Chapter Ten in these notes. For some reason that hasn't stopped us from having notes on Chapter Ten. We will use this space to collect some remarks which didn't fit in elsewhere. Since the chapter under discussion doesn't exist, no one can accuse us of digressing.

We did not have a chance to discuss equations relative to martingale measures with a nuclear covariance. These can arise when the underlying noise is smoother than a white noise or, as often happens, when it is a white noise which one has approximated by a smoothed-out version. If one thinks of a white noise, as we did in the introduction, as coming from storm-driven grains of sand bombarding a guitar string, one might think of nuclear covariance noise as coming from a storm of ping-pong balls. The solutions of such systems tend to be better-behaved; in particular, they often give function solutions rather than distributions. This makes it possible to treat non-linear equations, something rather awkward to do otherwise (how does one take a non-linear function of a distribution?). Mathematically, these equations are usually treated in a Hilbert-space setting. See for instance Curtain and Falb [11], Da Prato [12], and Ichikawa [68].
There have been a variety of approaches devised to cope with SPDE's driven by white noise and related processes. See Kuo [41] and Dawson [13] for a treatment based on the theory of abstract Wiener spaces. The latter paper reviews the subject of SPDE's up to 1975 and has extensive references. Balakrishnan [3] and Kallianpur and Karandikar [35] have used cylindrical Brownian motions and finitely additive measures. See also Métivier and Pellaumail [46], which gives an account of the integration theory of cylindrical processes. Gihman and Skorohod [25] introduced orthogonal martingale measures. See also Watkins [61]. Ustunel [55] has studied nuclear space valued semi-martingales with applications to SPDE's and stochastic flows.
The martingale problem method can be adapted to SPDE's as well as ordinary SDE's. It has had success in handling non-linear equations intractable to other methods. See Dawson [65] and Fleming and Viot [20], and Holley and Stroock [29] for the linear case.
Another type of equation which has generated considerable research is the SPDE driven by a single one-parameter Brownian motion. (One could get such an equation from (5.1) by letting T be an integral operator rather than a differential operator.) An example of this is the Zakai equation, which arises in filtering theory. See Pardoux [51] and Krylov and Rosovski [39].
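For orientation, the Zakai equation can be sketched as follows (this display is our addition, in generic symbols L, h, p not fixed by the text):

```latex
% Signal: a diffusion X_t with generator L; observation:
%   dY_t = h(X_t)\,dt + dB_t, with B a Brownian motion independent of X.
% The unnormalized conditional density p(t,x) of X_t given
% \{Y_s : s \le t\} satisfies the Zakai equation
dp(t,x) = L^{*}p(t,x)\,dt + h(x)\,p(t,x)\,dY_t,
% where L^{*} is the formal adjoint of L. Note that the equation is
% linear in p and is driven by the single one-parameter process Y.
```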
Let us finish by mentioning a few more subjects which might interest the reader: fluid flow and the stochastic Navier-Stokes equation (e.g. Bensoussan and Temam [4]); measure-valued diffusions and their application to population growth (Dawson [65], Fleming and Viot [20]); reaction-diffusion equations in chemistry (Kotelenz [38]); and quantum fields (Wolpert [70] and Dynkin [16]).
REFERENCES

[1] Aldous, D., Stopping times and tightness, Ann. Prob. 6 (1978), 335-340.

[2] Bakry, D., Semi-martingales à deux indices, Sém. de Prob. XV, Lecture Notes in Math. 850, 671-672.

[3] Balakrishnan, A. V., Stochastic bilinear partial differential equations, in Variable Structure Systems, Lecture Notes in Economics and Mathematical Systems 3, Springer-Verlag, 1975.

[4] Bensoussan, A. and Temam, R., Equations stochastiques du type Navier-Stokes, J. Fcl. Anal. 13 (1973), 195-222.

[5] Billingsley, P., Convergence of Probability Measures, Wiley, New York, 1968.

[6] Brennan, M. D., Planar semimartingales, J. Mult. Anal. 9 (1979), 465-486.
[7] Burkholder, D. L., Distribution function inequalities for martingales, Ann. Prob. 1 (1973), 19-42.

[8] Cabaña, E., On barrier problems for the vibrating string, ZW 22 (1972), 13-24.

[9] Cairoli, R., Sur une équation différentielle stochastique, C.R. 274 (1972), 1738-1742.

[10] Cairoli, R. and Walsh, J. B., Stochastic integrals in the plane, Acta Math. 134 (1975), 111-183.

[11] Curtain, R. F. and Falb, P. L., Stochastic differential equations in Hilbert spaces, J. Diff. Eq. 10 (1971), 434-448.

[12] Da Prato, G., Regularity results of a convolution stochastic integral and applications to parabolic stochastic equations in a Hilbert space (Preprint).

[13] Dawson, D., Stochastic evolution equations and related measure processes, J. Mult. Anal. 5 (1975), 1-52.
[14] Dawson, D., Stochastic evolution equations, Math. Biosciences 15, 287-316.

[15] Dawson, D. and Hochberg, K. J., The carrying dimension of a stochastic measure diffusion, Ann. Prob. 7 (1979).

[16] Dynkin, E. B., Gaussian and non-Gaussian random fields associated with Markov processes, J. Fcl. Anal. 55 (1984), 344-376.
[17] Dynkin, E. B. and Mandelbaum, A., Symmetric statistics, Poisson point processes, and multiple Wiener integrals, Ann. Math. Stat. 11 (1983), 739-745.

[18] Evstigneev, I. V., Markov times for random fields, Theor. Prob. Appl. 22 (1978), 563-569.

[19] Faris, W. G. and Jona-Lasinio, G., Large fluctuations for a nonlinear heat equation with white noise, J. Phys. A: Math. Gen. 15 (1982), 3025-3055.

[20] Fleming, W. and Viot, M., Some measure-valued Markov processes in population genetics theory, Indiana Univ. Math. J. 28 (1979), 817-843.

[21] Fouque, J.-P., La convergence en loi pour les processus à valeurs dans un espace nucléaire, Ann. IHP 20 (1984), 225-245.
[22] Garsia, A., Continuity properties of Gaussian processes with multidimensional time parameter, Proc. 6th Berkeley Symposium, V. II, 369-374.

[23] Garsia, A., Rodemich, E., and Rumsey, H. Jr., A real variable lemma and the continuity of paths of some Gaussian processes, Indiana U. Math. J. 20 (1970), 565-578.

[24] Gelfand, I. M. and Vilenkin, N. Ya., Generalized Functions, V. 4, Academic Press, New York-London, 1964.

[25] Gihman, I. I. and Skorohod, A. V., The Theory of Stochastic Processes, III, Springer-Verlag, Berlin, 1979.

[26] Gorostiza, L., High density limit theorems for infinite systems of unscaled branching Brownian motions, Ann. Prob. 11 (1983), 374-392; Correction, Ann. Prob. 12 (1984), 926-927.

[27] Greiner, P., An asymptotic expansion for the heat equation, Arch. Ratl. Mech. Anal. 41 (1971), 163-218.

[28] Harris, T. E., The Theory of Branching Processes, Prentice-Hall, Englewood Cliffs, N.J., 1963.

[29] Holley, R. and Stroock, D., Generalized Ornstein-Uhlenbeck processes and infinite particle branching Brownian motions, Publ. RIMS Kyoto Univ. 14 (1978), 741-788.

[30] Hörmander, L., Linear Partial Differential Operators, Springer-Verlag, Berlin-Heidelberg-New York, 1963.
[31] Huang, Zhiyuan, Stochastic integrals on general topological measurable spaces, Z.W. 66 (1984), 25-40.

[32] Ikeda, N., Nagasawa, M., and Watanabe, S., Branching Markov processes, I, II and III, J. Math. Kyoto Univ. 8 (1968), 233-278, 365-410; 9 (1969), 95-160.

[33] Ito, K., Stochastic analysis in infinite dimensions, in Stochastic Analysis (A. Friedman and M. Pinsky, eds.), Academic Press, New York, 1980.

[34] Kallianpur, G. and Karandikar, R., ... to nonlinear filtering.

[35] Kallianpur, G. and Wolpert, R., ... processes arising from independent ..., Math. Z. 182 (1983).

[36] It