An Invitation to Sample Paths of Brownian Motion

Yuval Peres

Lecture notes edited by Bálint Virág, Elchanan Mossel and Yimin Xiao
Version of November 1, 2001

Preface. These notes record lectures I gave at the Statistics Department, University of California, Berkeley, in Spring 1998. I am grateful to the students who attended the course and wrote the first draft of the notes: Diego Garcia, Yoram Gat, Diogo A. Gomes, Charles Holton, Frédéric Latrémolière, Wei Li, Ben Morris, Jason Schweinsberg, Bálint Virág, Ye Xia and Xiaowen Zhou. The draft was edited by Bálint Virág, Elchanan Mossel, Serban Nacu and Yimin Xiao. I thank Pertti Mattila for the invitation to lecture on this material at the joint summer school in Jyväskylä, August 1999.

Contents

Chapter 1. Brownian Motion
1. Motivation: Intersection of Brownian paths
2. Gaussian random variables
3. Lévy's construction of Brownian motion
4. Basic properties of Brownian motion
5. Hausdorff dimension and Minkowski dimension
6. Hausdorff dimension of the Brownian path and the Brownian graph
7. On nowhere differentiability
8. Strong Markov property and the reflection principle
9. Local extrema of Brownian motion
10. Area of planar Brownian motion paths
11. Zeros of the Brownian motion
12. Harris' Inequality and its consequences
13. Points of increase for random walks and Brownian motion
14. Frostman's Lemma, energy, and dimension doubling
15. Skorokhod's representation
16. Donsker's Invariance Principle
17. Harmonic functions and Brownian motion in R^d
18. Maximum principle for harmonic functions
19. The Dirichlet problem
20. Polar points and recurrence
21. Capacity and harmonic functions
22. Kaufman's theorem on uniform dimension doubling
23. Packing dimension
24. Hausdorff dimension and random sets
25. An extension of Lévy's modulus of continuity
References

CHAPTER 1. Brownian Motion

1. Motivation: Intersection of Brownian paths

Consider a number of Brownian motion paths started at different points. Say that they intersect if there is a point which lies on all of the paths. Do the paths intersect? The answer to this question depends on the dimension:

- In R^2, any finite number of paths intersect with positive probability (a theorem of Dvoretzky, Erdős and Kakutani from the 1950s).
- In R^3, two paths intersect with positive probability, but not three (a theorem of Dvoretzky, Erdős, Kakutani and Taylor, 1957).
- In R^d for d >= 4, no pair of paths intersects with positive probability.

The principle we will use to establish these results is intersection equivalence between Brownian motion and certain random Cantor-type sets. Here we will introduce the concept for R^3 only. Partition the cube [0,1]^3 into eight congruent sub-cubes, and keep each of the sub-cubes with probability 1/2. For each cube that remained at the end of this stage, partition it into eight sub-cubes, keeping each of them with probability 1/2, and so on. Let Q(3, 1/2) denote the limiting set, that is, the intersection of the cubes remaining at all steps. This set is not empty with positive probability: if we regard the remaining sub-cubes of a given cube as its "children" in a branching process, then the expected number of offspring is 8 * (1/2) = 4 > 1, so this process has positive probability not to die out.
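The survival claim for this branching process can be checked numerically. In a minimal sketch (the function names are illustrative, not from the notes), each retained cube has a Binomial(8, 1/2) number of retained children, so the extinction probability q is the smallest fixed point of the offspring generating function f(s) = ((1+s)/2)^8, reached by iterating f from 0:

```python
# Extinction probability of the branching process behind Q(3, 1/2).
# Offspring law: Binomial(8, 1/2), with generating function
#   f(s) = E[s^N] = ((1 + s) / 2) ** 8.
# The extinction probability q is the smallest fixed point of f in [0, 1],
# obtained by iterating f starting from 0.
def offspring_gf(s):
    return ((1.0 + s) / 2.0) ** 8

q = 0.0
for _ in range(1000):
    q = offspring_gf(q)

survival = 1.0 - q
# Mean offspring is 8 * (1/2) = 4 > 1, so the process is supercritical:
# q < 1, and the survival probability is strictly positive.
print(q, survival)
```

The iteration settles near q ≈ 0.004, so Q(3, 1/2) is in fact nonempty with probability close to 1, consistent with the supercriticality argument above.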
One can prove that there exist two positive constants C_1, C_2 such that, if Lambda is a closed subset of [0,1]^3 and {B_t} is a Brownian motion started at a point chosen uniformly in [0,1]^3, then

  C_1 P( Q(3, 1/2) \cap Lambda \ne \emptyset ) <= P( \exists t >= 0 : B_t \in Lambda ) <= C_2 P( Q(3, 1/2) \cap Lambda \ne \emptyset ).

The motivation is that, though the intersection of two independent Brownian paths is a complicated object, the intersection of two independent sets of the form Q(3, 1/2) is a set of the same kind, namely Q(3, 1/4). The previously described branching process dies out as soon as we intersect more than two of these Cantor-type sets; hence the result about intersection of paths in R^3.

2. Gaussian random variables

Brownian motion is at the meeting point of the most important categories of stochastic processes: it is a martingale, a strong Markov process, a process with independent and stationary increments, and a Gaussian process. We will construct Brownian motion as a specific Gaussian process. We start with the definitions of Gaussian random variables.

Definition 2.1. A real-valued random variable X on a probability space (Omega, F, P) has a standard Gaussian (or standard normal) distribution if

  P(X > x) = (1/\sqrt{2\pi}) \int_x^{\infty} e^{-u^2/2} du.

A vector-valued random variable X has an n-dimensional standard Gaussian distribution if its n coordinates are standard Gaussian and independent. A vector-valued random variable Y : Omega -> R^p is Gaussian if there exist a vector-valued random variable X having an n-dimensional standard Gaussian distribution, a p x n matrix A and a p-dimensional vector b such that

  Y = AX + b.    (2.1)

We are now ready to define Gaussian processes.

Definition 2.2. A stochastic process (X_t)_{t in I} is said to be a Gaussian process if for all k and t_1, ..., t_k in I the vector (X_{t_1}, ..., X_{t_k})^t is Gaussian.
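The linear form (2.1) pins down the second moments: the covariance matrix of Y = AX + b is A A^t, as computed just below. A short Monte Carlo check of this identity (an illustrative sketch; the matrix A and vector b are arbitrary choices, not from the notes):

```python
import random

random.seed(0)

# Empirical check that Cov(AX + b) = A A^t for the Gaussian vector Y
# of (2.1), with an arbitrary illustrative A (2x2) and b.
A = [[1.0, 2.0], [0.0, 3.0]]
b = [5.0, -1.0]
N = 100_000

samples = []
for _ in range(N):
    x = [random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)]  # standard Gaussian X
    y = [A[i][0] * x[0] + A[i][1] * x[1] + b[i] for i in range(2)]
    samples.append(y)

mean = [sum(s[i] for s in samples) / N for i in range(2)]
cov = [[sum((s[i] - mean[i]) * (s[j] - mean[j]) for s in samples) / N
        for j in range(2)] for i in range(2)]

# Exact target: A A^t, whose (i, j) entry is the dot product of rows i and j.
AAt = [[sum(A[i][k] * A[j][k] for k in range(2)) for j in range(2)]
       for i in range(2)]
print(cov, AAt)  # the empirical cov should be close to [[5, 6], [6, 9]]
```

Note that b shifts the mean but drops out of the covariance, exactly as linearity of expectation predicts.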
Recall that the covariance matrix of a random vector is defined as

  Cov(Y) = E[ (Y - EY)(Y - EY)^t ].

Then, by the linearity of expectation, the Gaussian vector Y in (2.1) has

  Cov(Y) = A A^t.

Recall that an n x n matrix A is said to be orthogonal if A A^t = I_n. The following lemma shows that the distribution of a Gaussian vector is determined by its mean and covariance.

Lemma 2.3.
(i) If Theta is an orthogonal n x n matrix and X is an n-dimensional standard Gaussian vector, then Theta X is also an n-dimensional standard Gaussian vector.
(ii) If Y and Z are Gaussian vectors in R^n such that EY = EZ and Cov(Y) = Cov(Z), then Y and Z have the same distribution.

Proof. (i) As the coordinates of X are independent standard Gaussian, X has density

  f(x) = (2\pi)^{-n/2} e^{-\|x\|^2/2},

where \|.\| denotes the Euclidean norm. Since Theta preserves this norm, the density of X is invariant under Theta.

(ii) It is sufficient to consider the case EY = EZ = 0. Then, using Definition 2.1, there exist standard Gaussian vectors X_1, X_2 and matrices A, C so that

  Y = A X_1 and Z = C X_2.

By adding some columns of zeroes to A or C if necessary, we can assume that X_1, X_2 are both k-vectors for some k, and that A, C are both n x k matrices. Let 𝒜 and 𝒞 denote the vector spaces generated by the row vectors of A and C, respectively. To simplify notation, assume without loss of generality that the first ℓ row vectors of A form a basis for the space 𝒜. For any matrix M let M_i denote the i-th row vector of M, and define the linear map Theta from 𝒜 to 𝒞 by A_i Theta = C_i for i = 1, ..., ℓ. We want to check that Theta is an isomorphism. Assume that there is a vector v_1 A_1 + ... + v_ℓ A_ℓ whose image is 0. Then the k-vector v = (v_1, v_2, ..., v_ℓ, 0, ..., 0)^t satisfies v^t C = 0, and so \|v^t A\|^2 = v^t A A^t v = v^t C C^t v = 0, giving v^t A = 0. This shows that Theta is one-to-one; in particular dim 𝒜 <= dim 𝒞.
By symmetry 𝒜 and 𝒞 must have the same dimension, so Theta is an isomorphism. As the (i, j) coefficient of the matrix A A^t is the scalar product of A_i and A_j, the identity A A^t = C C^t implies that Theta is an orthogonal transformation from 𝒜 to 𝒞. We can extend it to map the orthocomplement of 𝒜 to the orthocomplement of 𝒞 orthogonally, getting an orthogonal map Theta : R^k -> R^k. Then

  Y = A X_1,   Z = C X_2 = A Theta X_2,

and (ii) follows from (i).

Thus the first two moments of a Gaussian vector are sufficient to characterize its distribution; hence the notation N(mu, Sigma) for the normal distribution with expectation mu and covariance matrix Sigma. A useful corollary of this lemma is:

Corollary 2.4. Let Z_1, Z_2 be independent N(0, sigma^2) random variables. Then Z_1 + Z_2 and Z_1 - Z_2 are two independent random variables having the same distribution N(0, 2 sigma^2).

Proof. sigma^{-1}(Z_1, Z_2) is a standard Gaussian vector, and so, if

  Theta = (1/\sqrt{2}) [ 1  1 ; 1  -1 ],

then Theta is an orthogonal matrix such that

  (\sqrt{2} sigma)^{-1} (Z_1 + Z_2, Z_1 - Z_2)^t = Theta sigma^{-1} (Z_1, Z_2)^t,

and our claim follows from part (i) of the lemma.

To conclude this section, we state the following tail estimate for the standard Gaussian distribution.

Lemma 2.5. Let Z be distributed as N(0, 1). Then for all x >= 0,

  (x/(x^2+1)) (1/\sqrt{2\pi}) e^{-x^2/2} <= P(Z > x) <= (1/x) (1/\sqrt{2\pi}) e^{-x^2/2}.

Proof. The right inequality is obtained from the estimate

  P(Z > x) <= \int_x^{\infty} (u/x) (1/\sqrt{2\pi}) e^{-u^2/2} du,

since, in the integral, u >= x. The left inequality is proved as follows. Define

  f(x) := x e^{-x^2/2} - (x^2+1) \int_x^{\infty} e^{-u^2/2} du.

We remark that f(0) < 0 and lim_{x -> \infty} f(x) = 0. Moreover,

  f'(x) = (1 - x^2 + x^2 + 1) e^{-x^2/2} - 2x \int_x^{\infty} e^{-u^2/2} du
        = -2x ( \int_x^{\infty} e^{-u^2/2} du - (1/x) e^{-x^2/2} ),

so the right inequality implies f'(x) >= 0 for all x >= 0. This implies f(x) <= 0, proving the lemma.
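The two-sided bound of Lemma 2.5 can be verified numerically against the exact Gaussian tail, which the standard library exposes through the complementary error function (a sanity check only; the bound itself is proved above):

```python
import math

SQRT_2PI = math.sqrt(2.0 * math.pi)

def std_normal_tail(x):
    # Exact P(Z > x) for Z ~ N(0, 1), via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def lower_bound(x):
    # Left-hand side of Lemma 2.5: x/(x^2+1) * phi(x).
    return x / (x * x + 1.0) * math.exp(-x * x / 2.0) / SQRT_2PI

def upper_bound(x):
    # Right-hand side of Lemma 2.5: (1/x) * phi(x).
    return math.exp(-x * x / 2.0) / (x * SQRT_2PI)

for x in [0.5, 1.0, 2.0, 4.0, 8.0]:
    assert lower_bound(x) <= std_normal_tail(x) <= upper_bound(x)
    print(x, lower_bound(x), std_normal_tail(x), upper_bound(x))
```

Note how the two bounds pinch together as x grows: their ratio x^2/(x^2+1) tends to 1, so the estimate is asymptotically sharp.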
3. Lévy's construction of Brownian motion

3.1. Definition. Standard Brownian motion on an interval I = [0, a] or I = [0, \infty) is defined by the following properties:

Definition 3.1. A real-valued stochastic process {B_t}_{t in I} is a standard Brownian motion if it is a Gaussian process such that:
(i) B_0 = 0;
(ii) for all natural k and all t_1 < ... < t_k in I, the increments B_{t_k} - B_{t_{k-1}}, ..., B_{t_2} - B_{t_1} are independent;
(iii) for all t, s in I with t < s, B_s - B_t has the N(0, s-t) distribution;
(iv) almost surely, t -> B_t is continuous on I.

As a corollary of this definition, one can already remark that for all t, s in I,

  Cov(B_t, B_s) = s ^ t  (the minimum of s and t).

Indeed, assume that t >= s. Then Cov(B_t, B_s) = Cov(B_t - B_s, B_s) + Cov(B_s, B_s) by bilinearity of the covariance. The first term vanishes by the independence of increments, and the second term equals s by properties (iii) and (i). Thus by Lemma 2.3 we may replace properties (ii) and (iii) in the definition by:

- For all t, s in I, Cov(B_t, B_s) = t ^ s.
- For all t in I, B_t has the N(0, t) distribution.

or by:

- For all t, s in I with t < s, B_s - B_t and B_t are independent.
- For all t in I, B_t has the N(0, t) distribution.

Kolmogorov's extension theorem implies the existence of any stochastic process {X_t} on a countable time set if we know its finite-dimensional distributions and they are consistent. Thus, standard Brownian motion could easily be constructed on any countable time set. However, knowing finite-dimensional distributions is not sufficient to get continuous paths, as the following example shows.

Example 3.2. Suppose that standard Brownian motion {B_t} on [0, 1] has been constructed, and consider an independent random variable U uniformly distributed on [0, 1]. Define

  B~_t = B_t if t != U, and B~_t = 0 otherwise.

The finite-dimensional distributions of {B~_t} are the same as those of {B_t}. However, the process {B~_t} almost surely has discontinuous paths.
In measure theory, one often identifies functions with their equivalence class for almost-everywhere equality. As the above example shows, it is important not to make this identification in the study of continuous-time stochastic processes. Here we want to define a probability measure on the set of continuous functions.

3.2. Construction. The following construction, due to Paul Lévy, consists of choosing the "right" values for the Brownian motion at each dyadic point of [0, 1] and then interpolating linearly between these values. The construction is inductive, and at each step a process is constructed that has continuous paths. Brownian motion is then the uniform limit of these processes; hence its continuity.

We will use the following basic lemma. The proof can be found, for instance, in Durrett (1995).

Lemma 3.3 (Borel-Cantelli). Let {A_i}_{i >= 0} be a sequence of events, and let

  {A_i i.o.} = limsup_{i -> \infty} A_i = \bigcap_{i=0}^{\infty} \bigcup_{j=i}^{\infty} A_j,

where "i.o." abbreviates "infinitely often".
(i) If \sum_{i=0}^{\infty} P(A_i) < \infty, then P(A_i i.o.) = 0.
(ii) If {A_i} are pairwise independent and \sum_{i=0}^{\infty} P(A_i) = \infty, then P(A_i i.o.) = 1.

Theorem 3.4 (Wiener 1923). Standard Brownian motion on [0, \infty) exists.

Proof. (Lévy 1948) We first construct standard Brownian motion on [0, 1]. For n >= 0, let D_n = {k/2^n : 0 <= k <= 2^n}, and let D = \bigcup_n D_n. Let {Z_d}_{d in D} be a collection of independent N(0, 1) random variables. We will first construct the values of B on D. Set B_0 = 0 and B_1 = Z_1. In an inductive construction, for each n we will construct B_d for all d in D_n so that

(i) for all r < s < t in D_n, the increment B_t - B_s has the N(0, t-s) distribution and is independent of B_s - B_r;
(ii) B_d for d in D_n are globally independent of the Z_d for d in D \ D_n.

These assertions hold for n = 0. Suppose that they hold for n - 1.
Define, for all d in D_n \ D_{n-1}, a random variable B_d by

  B_d = (B_{d-} + B_{d+}) / 2 + Z_d / 2^{(n+1)/2},    (3.1)

where d+ = d + 2^{-n} and d- = d - 2^{-n}; both are in D_{n-1}. Since (1/2)[B_{d+} - B_{d-}] is N(0, 1/2^{n+1}) by induction, and Z_d / 2^{(n+1)/2} is an independent N(0, 1/2^{n+1}), their sum and their difference, B_d - B_{d-} and B_{d+} - B_d, are both N(0, 1/2^n) and independent by Corollary 2.4. Assertion (i) follows from this and the inductive hypothesis, and (ii) is clear.

Having thus chosen the values of the process on D, we now "interpolate" between them. Formally, let F_0(x) = x Z_1, and for n >= 1 let us introduce the function

  F_n(x) = 2^{-(n+1)/2} Z_x  for x in D_n \ D_{n-1},
  F_n(x) = 0                 for x in D_{n-1},
  F_n linear between consecutive points in D_n.    (3.2)

These functions are continuous on [0, 1], and for all n and d in D_n,

  B_d = \sum_{i=0}^{n} F_i(d) = \sum_{i=0}^{\infty} F_i(d).    (3.3)

This can be seen by induction. Suppose that it holds for n-1, and let d in D_n \ D_{n-1}. Since for 0 <= i <= n-1 the function F_i is linear on [d-, d+], we get

  \sum_{i=0}^{n-1} F_i(d) = \sum_{i=0}^{n-1} (F_i(d-) + F_i(d+)) / 2 = (B_{d-} + B_{d+}) / 2.    (3.4)

Since F_n(d) = 2^{-(n+1)/2} Z_d, comparing (3.1) and (3.4) gives (3.3).

On the other hand, by the definition of Z_d and by Lemma 2.5,

  P( |Z_d| >= c \sqrt{n} ) <= exp( -c^2 n / 2 )

for n large enough, so the series \sum_{n=0}^{\infty} \sum_{d in D_n} P( |Z_d| >= c \sqrt{n} ) converges as soon as c > \sqrt{2 \log 2}. Fix such a c. By the Borel-Cantelli Lemma 3.3 we conclude that there exists a random but finite N so that for all n > N and d in D_n we have |Z_d| < c \sqrt{n}, and so

  \|F_n\|_{\infty} < c \sqrt{n} 2^{-n/2}.    (3.5)

This upper bound implies that the series \sum_{n=0}^{\infty} F_n(t) is uniformly convergent on [0, 1], and so it has a continuous limit, which we call {B_t}. All we have to check is that the increments of this process have the right finite-dimensional joint distributions. This is a direct consequence of the density of the set D in [0, 1] and the continuity of paths.
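Before completing the argument, note that the refinement step (3.1) translates directly into a simulation of the dyadic values B_{k/2^n}. The sketch below is illustrative only (the function name and the resolution parameter are ours); the interpolation and limit argument above is what makes the construction rigorous.

```python
import random

random.seed(1)

# Lévy's refinement (3.1): start from B_0 = 0, B_1 = Z_1 and repeatedly
# fill in dyadic midpoints,
#     B_d = (B_{d-} + B_{d+}) / 2 + Z_d / 2**((n+1)/2),
# so that after `levels` rounds we hold B at every k / 2**levels.
def levy_brownian(levels):
    n_points = 2 ** levels + 1
    B = [0.0] * n_points
    B[-1] = random.gauss(0.0, 1.0)        # B_1 = Z_1
    for n in range(1, levels + 1):
        step = 2 ** (levels - n)          # index offset of d- and d+
        scale = 2.0 ** (-(n + 1) / 2.0)   # std dev of the midpoint kick
        for i in range(step, n_points - 1, 2 * step):
            B[i] = (B[i - step] + B[i + step]) / 2.0 \
                 + scale * random.gauss(0.0, 1.0)
    return B

path = levy_brownian(10)  # B at k / 1024 for k = 0, ..., 1024
```

For example, after one round the midpoint value is B_{1/2} = (B_0 + B_1)/2 + Z/2, whose variance is 1/4 + 1/4 = 1/2, exactly the N(0, 1/2) distribution that Definition 3.1 requires at time 1/2.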
Indeed, let t_1 > t_2 > t_3 be in [0, 1]; then they are limits of sequences t_{1,n}, t_{2,n} and t_{3,n} in D, respectively. Now

  B_{t_3} - B_{t_2} = lim_{k -> \infty} ( B_{t_{3,k}} - B_{t_{2,k}} )

is a limit of Gaussian random variables, so it is itself Gaussian, with mean 0 and variance lim_{n -> \infty} (t_{2,n} - t_{3,n}) = t_2 - t_3. The same holds for B_{t_2} - B_{t_1}; moreover, these two random variables are limits of independent random variables, since for n large enough t_{1,n} > t_{2,n} > t_{3,n}. Applying this argument to any number of increments, we get that {B_t} has independent increments and that for all s < t in [0, 1] the increment B_t - B_s has the N(0, t-s) distribution. We have thus constructed Brownian motion on [0, 1].

To conclude, if {B_t^n}_t for n >= 0 are independent Brownian motions on [0, 1], then

  B_t = B^{\lfloor t \rfloor}_{t - \lfloor t \rfloor} + \sum_{0 <= i < \lfloor t \rfloor} B_1^i

meets our definition of Brownian motion on [0, \infty).

4. Basic properties of Brownian motion

Let {B(t)}_{t >= 0} be a standard Brownian motion, and let a != 0. The following scaling relation is a simple consequence of the definitions:

  { (1/a) B(a^2 t) }_{t >= 0}  =^d  { B(t) }_{t >= 0}.

Also, define the time inversion of {B_t} as

  W(t) = 0 for t = 0;   W(t) = t B(1/t) for t > 0.

We claim that W is a standard Brownian motion. Indeed,

  Cov(W(t), W(s)) = t s Cov( B(1/t), B(1/s) ) = t s ( (1/t) ^ (1/s) ) = t ^ s,

so W and B have the same finite-dimensional distributions, and they have the same distributions as processes on the rational numbers. Since the paths of W(t) are continuous except maybe at 0, we have

  lim_{t \downarrow 0} W(t) = lim_{t \downarrow 0, t in Q} W(t) = 0 a.s.,

so the paths of W(t) are continuous on [0, \infty) a.s. As a corollary, we get

Corollary 4.1 (Law of Large Numbers for Brownian motion).

  lim_{t -> \infty} B(t)/t = 0 a.s.

Proof. lim_{t -> \infty} B(t)/t = lim_{t -> \infty} W(1/t) = 0 a.s.

Exercise 4.2. Prove this result directly. Use the usual Law of Large Numbers to show that lim_{n -> \infty} B(n)/n = 0. Then show that B(t) does not oscillate too much between n and n+1.
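The covariance computation behind the time inversion claim is pure arithmetic, and since a centered Gaussian process is determined by its covariance (Lemma 2.3), checking Cov(W(t), W(s)) = t ^ s over a grid of times is a faithful sanity check. A minimal sketch (no simulation needed):

```python
# Check the algebra behind time inversion: with Cov(B_t, B_s) = min(t, s),
# the process W(t) = t * B(1/t) satisfies
#     Cov(W(t), W(s)) = t * s * min(1/t, 1/s) = min(t, s).
def cov_B(t, s):
    return min(t, s)                        # Cov(B_t, B_s) = t ^ s

def cov_W(t, s):
    return t * s * cov_B(1.0 / t, 1.0 / s)  # bilinearity of covariance

times = [0.25, 0.5, 1.0, 2.0, 7.5]
for t in times:
    for s in times:
        assert abs(cov_W(t, s) - min(t, s)) < 1e-9
print("Cov(W(t), W(s)) = min(t, s) on the whole grid")
```

The same one-line pattern verifies the scaling relation: Cov((1/a) B(a^2 t), (1/a) B(a^2 s)) = (1/a^2) min(a^2 t, a^2 s) = min(t, s).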
Remark. The symmetry inherent in the time inversion property becomes more apparent if one considers the Ornstein-Uhlenbeck diffusion, which is given by

  X(t) = e^{-t} B(e^{2t}).

This is a stationary Markov process with X(t) ~ N(0, 1) for all t. It is a diffusion with a drift toward the origin proportional to the distance from the origin. Unlike Brownian motion, the Ornstein-Uhlenbeck diffusion is time reversible. The time inversion formula gives {X(t)}_{t >= 0} =^d {X(-t)}_{t >= 0}. For t near -\infty, X(t) relates to the Brownian motion near 0, and for t near \infty, X(t) relates to the Brownian motion near \infty.

One of the advantages of Lévy's construction of Brownian motion is that it easily yields a modulus of continuity result. Following Lévy, we defined Brownian motion as an infinite sum \sum_{n=0}^{\infty} F_n, where each F_n is a piecewise linear function given in (3.2). The derivative of F_n exists almost everywhere, and by definition and (3.5),

  \|F_n'\|_{\infty} <= \|F_n\|_{\infty} / 2^{-n} <= C_1(\omega) \sqrt{n} 2^{n/2}.    (4.1)

The random constant C_1(\omega) is introduced to deal with the finitely many exceptions to (3.5). Now for t, t+h in [0, 1], we have

  |B(t+h) - B(t)| <= \sum_n |F_n(t+h) - F_n(t)| <= \sum_{n <= \ell} h \|F_n'\|_{\infty} + \sum_{n > \ell} 2 \|F_n\|_{\infty}.    (4.2)
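The stationarity asserted for the Ornstein-Uhlenbeck diffusion is visible in its covariance: Cov(X(t), X(s)) = e^{-t-s} Cov(B(e^{2t}), B(e^{2s})) = e^{-t-s} min(e^{2t}, e^{2s}) = e^{-|t-s|}, a function of t - s alone. A quick numeric check of that identity (illustrative only):

```python
import math

# Covariance of X(t) = e^{-t} B(e^{2t}), using Cov(B_u, B_v) = min(u, v):
#     Cov(X(t), X(s)) = e^{-t-s} * min(e^{2t}, e^{2s}) = e^{-|t-s|},
# which depends only on t - s: the computational content of stationarity.
def cov_X(t, s):
    return math.exp(-t - s) * min(math.exp(2 * t), math.exp(2 * s))

for t, s in [(0.0, 0.0), (1.0, 3.0), (-2.0, 0.5), (4.0, 4.0)]:
    assert abs(cov_X(t, s) - math.exp(-abs(t - s))) < 1e-9
print("Cov(X(t), X(s)) = exp(-|t - s|) on all test pairs")
```

Since cov_X(t, s) = cov_X(s, t) = cov_X(-s, -t), the same computation also exhibits the time reversibility {X(t)} =^d {X(-t)} mentioned in the remark.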
