DISCRETE MARTINGALES AND APPLICATIONS TO ANALYSIS
40 Pages · 2003 · 0.51 MB · English
DISCRETE MARTINGALES AND APPLICATIONS TO ANALYSIS

JOSÉ G. LLORENTE

The use of probabilistic techniques in Analysis has experienced a remarkable success in the last fifteen years. It turns out, for instance, that the boundary behaviour of harmonic functions and conformal mappings is better understood when rephrased in discrete terms. In a series of influential papers, N. G. Makarov ([Mak89a], [Mak89b], [Mak85], [Mak87]) proved a number of deep results on the boundary behaviour of conformal maps, many of them being direct consequences of properties of the asymptotic behaviour of discrete martingales in the unit interval. Still in the eighties, [CWW85], [BKM88], [BM89] and [BKM90] used dyadic martingales, in more or less direct ways, to prove results on the boundary behaviour of harmonic functions in the upper half-space or in more general domains. Since then, and especially in the last ten years, dyadic martingales have shown themselves to be an illuminating tool not only in boundary behaviour (see [BFL00], [Can98], [Don01b], [DP99], [GN01] and [Llo98]) but also in the study of Zygmund measures and Zygmund functions ([CD96], [Don01a]), regularity of measures ([GN01], [Llo98], [Llo02]) and, surprisingly, hyperbolic manifolds and Kleinian groups ([BJ95], [BJ97]).

Roughly speaking, it can be said that the main reason why the "continuous-discrete" interplay is fruitful relies on the mean value property. It is well known that harmonic functions are those continuous functions that satisfy a spherical mean value property. Now, if we consider a dyadic directed tree, then harmonic functions on the tree, that is, functions satisfying a one-sided mean value property, are canonically identified with dyadic martingales in $[0,1]$ (see Chapter 1 for details). It is then natural to expect some sort of parallelism between the boundary behaviour of harmonic functions and the asymptotic behaviour of dyadic martingales.
Through Chapters 1 and 2 we will see that this is indeed the case, and sometimes the parallelism is quite direct and satisfactory.

The author was supported in part by funds of the European Erasmus program and a grant of the Ministerio de Educación, Spain.

The structure of the notes is the following:

Contents
1. Dyadic martingales
1.1. Conditional expectation
1.2. Martingales and dyadic martingales
1.3. Quadratic characteristic
1.4. Stopping times
1.5. The Basic Convergence Theorem
1.6. Limit Theorems vs. Quadratic Characteristic
1.7. The Law of the Iterated Logarithm (LIL)
2. Dyadic martingales, Bloch functions and conformal mappings
2.1. Bloch functions and conformal mappings
2.2. Applications to the boundary behaviour of Bloch functions and conformal mappings
3. Exceptional sets, martingales and measures
3.1. Hausdorff measures
3.2. Exceptional sets
3.3. Comparison of measures
3.4. On the size of harmonic measure
3.5. Regularity of measures in terms of their doubling properties
References

In Chapter 1, I have intended to give a brief introduction to dyadic martingales, with special emphasis on their asymptotic properties. Since some of the applications in Chapters 2 and 3 depend on the upper bound of the Law of the Iterated Logarithm, I have included a detailed proof in the simplest case. Throughout Chapter 1, I have tried to point out the parallelisms (most of the time only heuristic, but illuminating) between the discrete and the continuous setting. The material is basically self-contained, though sometimes I have preferred to skip a proof and give a reference instead. The general approach of the notes owes much to [Mak89a]. [Shi84] is an exceptional general reference for the probabilistic part, and [Sto84] is also a good reference for martingales and limit theorems. Part of the material included in Chapters 2 and 3 can also be found in the forthcoming [GM].
[BM99] is a monograph containing advanced topics on martingales and boundary behaviour.

These notes originated in a series of lectures that I gave at the University of Jyväskylä in June 2001, as part of the Erasmus exchange program between the University of Jyväskylä and the Universitat Autònoma de Barcelona. It is a pleasure for me to thank Professor Pekka Koskela for his invitation and his encouragement to prepare these notes, during and after my visit. My thanks go also to the Department of Mathematics of the University of Jyväskylä for its hospitality, to Professor Stefan Geiss, who took care of the organization of the course, and also to all who participated in the lectures.

The notes were written while I was visiting the University of Michigan during the academic year 2001-2002, funded by a grant of the Ministerio de Educación of Spain. The author thanks both institutions for their support.

Some comments about notation: we refer to [Shi84] for the basic facts and notation about probability spaces. It will be understood that any random variable $X : \Omega \to \mathbb{R}$ on a probability space $(\Omega, \mathcal{G}, \mu)$ belongs to $L^1(\mu)$. $|\cdot|$ stands for Lebesgue measure in $[0,1)$ or $\partial\mathbb{D}$.

1. Dyadic martingales

1.1. Conditional expectation. Let $(\Omega, \mathcal{G}, \mu)$ be a probability space, $X : \Omega \to \mathbb{R}$ a random variable and $\mathcal{F} \subset \mathcal{G}$ a $\sigma$-algebra. We say that the random variable $Y$ is the conditional expectation of $X$ with respect to $\mathcal{F}$ (denoted $Y = E[X/\mathcal{F}]$) if:

(i) $Y$ is $\mathcal{F}$-measurable,
(ii) $\int_F X\,d\mu = \int_F Y\,d\mu$ for any $F \in \mathcal{F}$.

From the point of view of gambling, the whole $\sigma$-algebra $\mathcal{G}$ can be seen as the total potential information of the gambler, whereas $\mathcal{F}$ can be seen as the current real information. Then $E[X/\mathcal{F}]$ intuitively represents the "correction" of $X$ given the accessible information $\mathcal{F}$.

The extreme cases where $X$ is $\mathcal{F}$-measurable and where $X$ is independent of $\mathcal{F}$ (that is, $X$ and $1_F$ are independent for any $F \in \mathcal{F}$) correspond, respectively, to the cases where the gambler has all information about $X$ or has no a priori information about $X$. We list below the basic properties of conditional expectation:

(1) If $\mu_1(F) = \int_F X\,d\mu$ for $F \in \mathcal{F}$ and $\mu_2 = \mu|_{\mathcal{F}}$, then $E[X/\mathcal{F}] = \frac{d\mu_1}{d\mu_2}$ (Radon-Nikodym derivative). In particular, the conditional expectation is well defined and unique.

(2) If $X$ is $\mathcal{F}$-measurable (perfect information), then $E[X/\mathcal{F}] = X$.

(3) If $X$ is independent of $\mathcal{F}$ (no a priori information), then $E[X/\mathcal{F}] = \int_\Omega X\,d\mu$.

(4) If $\{\Omega_k\}_{k=1}^N$ is a partition of $\Omega$ and $\mathcal{F}$ is the $\sigma$-algebra generated by $\{\Omega_1, \dots, \Omega_N\}$, then
$$E[X/\mathcal{F}] = \sum_{k=1}^N \Big( \fint_{\Omega_k} X\,d\mu \Big)\, 1_{\Omega_k},$$
where $\fint_A X\,d\mu$ denotes, hereafter, $\frac{1}{\mu(A)} \int_A X\,d\mu$.

(5) If $X$ and $Z$ are random variables and $Z$ is $\mathcal{F}$-measurable, then $E[XZ/\mathcal{F}] = Z \cdot E[X/\mathcal{F}]$ (start with $Z = 1_F$, where $F \in \mathcal{F}$).

A basic example. Let $\Omega = [0,1)$, $\mu$ a Borel probability measure in $[0,1)$, and denote by
$$\mathcal{D}_n = \Big\{ \big[\tfrac{m-1}{2^n}, \tfrac{m}{2^n}\big) : m = 1, 2, \dots, 2^n \Big\}$$
the family of dyadic intervals of generation $n$. Define $\mathcal{F}_n$ to be the $\sigma$-algebra generated by $\mathcal{D}_n$. We say that $\{\mathcal{F}_n\}_{n=0}^{\infty}$ is the dyadic filtration. Then, if $X : [0,1) \to \mathbb{R}$ is a random variable,
$$E[X/\mathcal{F}_n] = \sum_{I_n \in \mathcal{D}_n} \Big( \fint_{I_n} X\,d\mu \Big)\, 1_{I_n}.$$
This shows that, in this particular setting, conditional expectation is just obtained by averaging.

1.2. Martingales and dyadic martingales. An increasing sequence of $\sigma$-algebras $\mathcal{F}_0 \subset \mathcal{F}_1 \subset \mathcal{F}_2 \subset \cdots \subset \mathcal{G}$ in a probability space $(\Omega, \mathcal{G}, \mu)$ is called a filtration.
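The averaging formula of the basic example above can be tried out numerically: with respect to the dyadic filtration, conditional expectation is plain block averaging. A minimal Python sketch (the grid discretization and the function name are illustrative assumptions, not part of the notes):

```python
def dyadic_conditional_expectation(samples, n):
    """E[X / F_n] for X sampled on a uniform grid of [0, 1), with mu =
    Lebesgue measure: replace X on each dyadic interval of generation n
    by its average over that interval, as in property (4) above."""
    m = len(samples)
    assert m % 2 ** n == 0, "grid size must be divisible by 2^n"
    width = m // 2 ** n
    out = []
    for i in range(2 ** n):
        block = samples[i * width:(i + 1) * width]
        avg = sum(block) / width           # the average of X over I_n
        out.extend([avg] * width)          # E[X / F_n] is constant on I_n
    return out

# X(x) = x sampled at 8 points; E[X / F_1] averages over [0,1/2) and [1/2,1)
X = [i / 8 for i in range(8)]
Y = dyadic_conditional_expectation(X, 1)
```

Note that `sum(Y) == sum(X)`: this is the defining property (ii) applied to $F = [0,1)$.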
Given a filtration $\{\mathcal{F}_n\}$ and a sequence of random variables $(S_n)_{n=0}^{\infty}$, we say that $(S_n, \mathcal{F}_n)$ is a martingale if, for all $n$:

(i) $S_n$ is $\mathcal{F}_n$-measurable,
(ii) $E[S_n/\mathcal{F}_{n-1}] = S_{n-1}$.  (1.1)

The definition of martingale not only includes the sequence $(S_n)$, but also the underlying probability space $(\Omega, \mathcal{G}, \mu)$ and the filtration $\{\mathcal{F}_n\}$. Nevertheless, if there is no risk of confusion, we will often omit both $\mathcal{G}$ and $\{\mathcal{F}_n\}$ and will only refer to a martingale $(S_n)$ in $(\Omega, \mu)$.

If we think of $(S_n)$ as the fortune of a gambler at instant $n$ of a game, then condition (i) above expresses the trivial fact that the result of the game totally determines the state of the fortune at any instant, while condition (ii) says that the game is "fair" in the sense that the expected fortune after any trial must be the same as the fortune before the trial. If we replace $=$ by $\geq$ (resp. $\leq$) in (1.1), then we say that $(S_n)$ is a submartingale (resp. a supermartingale). Thus submartingales (resp. supermartingales) are models of favorable (resp. unfavorable) games.

Given a martingale $(S_n)_{n=0}^{\infty}$, it is often useful to introduce the increments $X_k = S_k - S_{k-1}$, so that $S_n = S_0 + \sum_{k=1}^n X_k$. Now, $(S_n)$ is a martingale iff, for all $k$:

(i') $X_k$ is $\mathcal{F}_k$-measurable,
(ii') $E[X_k/\mathcal{F}_{k-1}] = 0$.

Note that (ii') is now expressing that the "expected gain" at any time of the game is $0$.

Basic properties of martingales.

(1) If $(S_n)$ is a martingale (resp. submartingale), then $E[S_k/\mathcal{F}_m] = S_m$ (resp. $E[S_k/\mathcal{F}_m] \geq S_m$) for all $k \geq m$. In particular,
$$\int_\Omega S_n\,d\mu = \int_\Omega S_0\,d\mu \qquad \Big(\text{resp. } \int_\Omega S_n\,d\mu \geq \int_\Omega S_0\,d\mu\Big).$$

(2) If $S_n = S_0 + \sum_{k=1}^n X_k$ is a martingale, then the increments are orthogonal in the sense that
$$\int_\Omega X_k X_m\,d\mu = 0 \qquad (k \neq m).$$
In particular, if $n \geq m$,
$$\int_\Omega (S_n - S_m)^2\,d\mu = \int_\Omega S_n^2\,d\mu - \int_\Omega S_m^2\,d\mu$$
and, if $S_0 \equiv 0$, then
$$\int_\Omega S_n^2\,d\mu = \sum_{k=1}^n \int_\Omega X_k^2\,d\mu.$$

(3) If $(S_n)$ is a martingale and $\varphi : \mathbb{R} \to \mathbb{R}$ is convex, then $\varphi(S_n)$ is a submartingale (use Jensen's inequality).

Examples.

(1) If $(X_k)_{k=1}^{\infty}$ is a sequence of independent random variables with zero mean, and $\mathcal{F}_n$ is the $\sigma$-algebra generated by $X_1, \dots, X_n$ ([Shi84]), then $S_n = \sum_{k=1}^n X_k$ is a martingale.

(2) (Dyadic martingales) As in the basic example in Section 1.1, take $\Omega = [0,1)$, $\mu$ a Borel probability in $[0,1)$, $\mathcal{D}_n$ the family of dyadic intervals of generation $n$ and $\mathcal{F}_n$ the $\sigma$-algebra generated by $\mathcal{D}_n$. Then $(S_n)_{n=0}^{\infty}$ is a dyadic martingale in $([0,1), \mu)$ if $(S_n, \mathcal{F}_n)$ is a martingale, that is, for all $n$:

(i) $S_n$ is constant on any $I_n \in \mathcal{D}_n$,
(ii) $S_{n-1}|_{I_{n-1}} = \fint_{I_{n-1}} S_n\,d\mu$ for any $I_{n-1} \in \mathcal{D}_{n-1}$.

If $I_n^1, I_n^2 \in \mathcal{D}_n$ are the dyadic "children" of $I_{n-1}$ and $S_n^i = S_n|_{I_n^i}$ ($i = 1, 2$), then (ii) above is equivalent to
$$S_{n-1}|_{I_{n-1}} = \frac{\mu(I_n^1)}{\mu(I_{n-1})}\, S_n^1 + \frac{\mu(I_n^2)}{\mu(I_{n-1})}\, S_n^2. \tag{1.2}$$
Dyadic submartingales and supermartingales are defined in the same way. In terms of the increments, $S_n = S_0 + \sum_{k=1}^n X_k$ is a dyadic martingale iff:

(i) $X_k$ is constant on any $I_k \in \mathcal{D}_k$,
(ii) $\fint_{I_{k-1}} X_k\,d\mu = 0$ for any $I_{k-1} \in \mathcal{D}_{k-1}$.  (1.3)

In the special case where $\mu$ is Lebesgue measure in $[0,1)$, (1.2) and (1.3) reduce to
$$S_{n-1}|_{I_{n-1}} = \tfrac{1}{2}\,(S_n^1 + S_n^2) \tag{1.4}$$
and
$$X_k^1 + X_k^2 = 0, \tag{1.5}$$
where $X_k^i = S_k^i - S_{k-1}$.
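Conditions (1.4) and (1.5) also give a recipe for generating dyadic martingales under Lebesgue measure: split every parent value symmetrically between its two children. A minimal Python sketch of this construction (the function name and the choice of uniform increments are illustrative assumptions, not from the notes):

```python
import random

def random_dyadic_martingale(n, seed=7):
    """Values of S_0, ..., S_n on the dyadic intervals of each generation,
    mu = Lebesgue: each parent value splits into parent + d and parent - d,
    which enforces (1.4) and the cancellation (1.5) exactly."""
    random.seed(seed)
    levels = [[0.0]]                       # S_0 = 0 on [0, 1)
    for _ in range(n):
        nxt = []
        for parent in levels[-1]:
            d = random.uniform(-1, 1)      # X_k^1 = d, X_k^2 = -d
            nxt.extend([parent + d, parent - d])
        levels.append(nxt)
    return levels

levels = random_dyadic_martingale(4)
```

Each level has twice as many values as the previous one, and averaging sibling pairs recovers the parent value: this is the discrete mean value property.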
In particular, $|X_k|$ is constant on any $I_{k-1} \in \mathcal{D}_{k-1}$, that is, $\mathcal{F}_{k-1}$-measurable.

(3) (Rademacher martingale) Let $(X_k)_{k=1}^{\infty}$ be the Rademacher system in $[0,1)$, that is, $X_k$ alternates $+1$ and $-1$ on the dyadic intervals of generation $k$. For instance, $X_1 = 1_{[0,1/2)} - 1_{[1/2,1)}$, $X_2 = 1_{[0,1/4)\cup[1/2,3/4)} - 1_{[1/4,1/2)\cup[3/4,1)}$, and so on. Then $(X_k)_{k=1}^{\infty}$ are independent, identically distributed random variables with zero mean and variance one. In particular, $S_n = \sum_{k=1}^n X_k$ is a dyadic martingale in $([0,1), |\cdot|)$. We can interpret $S_n$ as the fortune, after $n$ trials, of a gambler who wins or loses one unit with probability $\frac{1}{2}$ at each trial. Also, $S_n$ can be thought of as the position, at instant $n$, of a random traveller starting from the origin and moving one unit left or right with probability $\frac{1}{2}$ (the usual random walk).

(4) Let $f \in L^1[0,1)$. If $I_n \in \mathcal{D}_n$, set $S_n|_{I_n} = \fint_{I_n} f(x)\,dx$. Then $(S_n)_n$ is a dyadic martingale in $([0,1), |\cdot|)$. It is easy to check that $S_n = E[f/\mathcal{F}_n]$.

(5) The following generalization of the example above will be useful later. Let $(f_k) \subset L^1[0,1)$ and suppose that for any $I_n \in \mathcal{D}_n$ the limit
$$S_{I_n} = \lim_{k \to \infty} \fint_{I_n} f_k(x)\,dx \tag{1.6}$$
exists and is finite. Then, if we define $S_n|_{I_n} = S_{I_n}$, $(S_n)_n$ is also a dyadic martingale in $([0,1), |\cdot|)$.

Remarks.

(1) If $\mu$ is Lebesgue measure, (1.4) says that the value of $S_{n-1}$ on any dyadic interval of generation $n-1$ is the arithmetic mean of $S_n$ at its dyadic children of generation $n$. Thus (1.4) must be seen as a discrete version of the mean value property of harmonic functions, and it is, of course, responsible for many of the analogies between harmonic functions and martingales. Exactly in the same way, there is a natural heuristic identification between subharmonic (resp. superharmonic) functions and submartingales (resp. supermartingales).
(2) There is a graphic interpretation of dyadic martingales which is often helpful. Suppose that $(S_n)$ is a dyadic martingale in $([0,1), |\cdot|)$. Associated to $(S_n)$ we construct a directed tree as follows: the original node is chosen to be the point $(0, S_0)$ in the plane and, once the point $(n-1, S_{n-1})$ has been selected, we add the points $(n, S_n^1)$, $(n, S_n^2)$, where $S_n^1, S_n^2$ are as in (1.4). In this way we produce a directed dyadic tree, the original node being $(0, S_0)$, such that the vertical coordinates of the nodes at each level $n$ inform us about the values of $S_n$ (see Fig. 1).

1.3. Quadratic characteristic. Let $S_n = \sum_{k=1}^n X_k$ be a martingale in $(\Omega, \mu)$. Assume w.l.o.g. that $S_0 = 0$. Then
$$\langle S \rangle_n = \sum_{k=1}^n E[X_k^2/\mathcal{F}_{k-1}]$$
is called the quadratic characteristic of $(S_n)$. Observe that:

(i) $\langle S \rangle_n$ is $\mathcal{F}_{n-1}$-measurable,
(ii)
$$\int_\Omega \langle S \rangle_n\,d\mu = \sum_{k=1}^n \int_\Omega X_k^2\,d\mu = \int_\Omega S_n^2\,d\mu, \tag{1.7}$$
where the definition and the orthogonality property (2) in Section 1.2 have been used in (ii).

[Figure 1: the directed dyadic tree associated to a dyadic martingale.]

Examples.

(1) If $(S_n)$ is a dyadic martingale in $([0,1), \mu)$, then
$$E[X_k^2/\mathcal{F}_{k-1}](x) = \fint_{I_{k-1}(x)} X_k^2\,d\mu \tag{1.8}$$
and
$$\langle S \rangle_n(x) = \sum_{k=1}^n \Big[ \frac{\mu(I_k^1)}{\mu(I_{k-1}(x))}\,(X_k^1)^2 + \frac{\mu(I_k^2)}{\mu(I_{k-1}(x))}\,(X_k^2)^2 \Big], \tag{1.9}$$
where $\{I_j(x)\}$ are the dyadic intervals containing $x$, $I_k^1, I_k^2$ are the dyadic children of $I_{k-1}(x)$ and $X_k^i = X_k|_{I_k^i}$. In particular, if $\mu$ is Lebesgue measure, then
$$\langle S \rangle_n = \sum_{k=1}^n X_k^2. \tag{1.10}$$

(2) If $(X_k)_{k=1}^{\infty}$ are independent, identically distributed random variables with zero mean and variance $1$, $\mathcal{F}_k$ is the $\sigma$-algebra generated by $X_1, \dots, X_k$, and $S_n = \sum_{k=1}^n a_k X_k$ with $a_k \in \mathbb{R}$, then
$$\langle S \rangle_n = \sum_{k=1}^n a_k^2.$$

Remarks.

(1) Sequences $(S_n)$ such that $S_n$ is $\mathcal{F}_{n-1}$-measurable for all $n$ play an important role in Martingale Theory. They are usually called predictable sequences.
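For the Rademacher martingale of Example (3) in Section 1.2 one has $|X_k| = 1$, so formula (1.10) gives $\langle S \rangle_n = n$, and identity (1.7) can be checked on a dyadic grid. A small Python sketch (the grid sampling and function name are illustrative assumptions):

```python
def rademacher(k, x):
    """k-th Rademacher function: +1 / -1 alternating on the dyadic
    intervals of generation k."""
    return 1 if int(x * 2 ** k) % 2 == 0 else -1

# Sample the first three increments on a 16-point dyadic grid of [0, 1).
grid = [(i + 0.5) / 16 for i in range(16)]
X = [[rademacher(k, x) for x in grid] for k in (1, 2, 3)]

# Formula (1.10) for Lebesgue measure: <S>_n = sum_k X_k^2 pointwise,
# which is identically n = 3 here since each |X_k| = 1.
qc = [sum(Xk[i] ** 2 for Xk in X) for i in range(16)]

# Identity (1.7): int <S>_n dmu = int S_n^2 dmu; both sides equal n = 3
# because the increments are orthogonal with variance one.
Sn = [sum(Xk[i] for Xk in X) for i in range(16)]
lhs = sum(qc) / 16
rhs = sum(s * s for s in Sn) / 16
```

The grid has two points per interval of generation 3, so the averages above compute the integrals in (1.7) exactly, not just approximately.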
(2) Sometimes it is instructive to look at the quadratic characteristic as a sort of intrinsic random time associated to the martingale. For instance, if $|X_k| = 1$ for all $k$, then $\langle S \rangle_n = n$.

(3) The quadratic characteristic is a very important quantity related to a martingale, in the sense that it determines most of its asymptotic behavior, as we will see in the next sections. It can be understood as a discrete counterpart of the area function in Harmonic Analysis. If $u$ is defined in $\mathbb{R}^n_+$ and $t > 0$, then the $t$-truncated area function of $u$ at $x_0 \in \mathbb{R}^{n-1}$ is defined (in the simplest formulation) by
$$(S_t u)(x_0) = \int_{\Gamma_t(x_0)} y^{2-n}\, |\nabla u|^2\,dx_1 \dots dx_{n-1}\,dy,$$
where
$$\Gamma_t(x_0) = \{ (x, y) : x \in \mathbb{R}^{n-1},\ |x - x_0| \leq y,\ t \leq y \leq 1 \}.$$
Then, for small $t$,
$$(S_t u)(x_0) \asymp \fint_{\Gamma_t(x_0)} y^2\, |\nabla u|^2\,dx_1 \dots dx_{n-1}\,dy.$$
At this point, it should be mentioned that it is often useful to think of the hyperbolic gradient $y\,|\nabla u|$ in the upper half-space as a continuous analogue of the increments $|X_k|$, where $y$ and $k$ are related by $y \sim 2^{-k}$. These observations explain why it is convenient to see the area function as a continuous counterpart of the quadratic characteristic.

1.4. Stopping times. Let $\{\mathcal{F}_n\}$ be a filtration in the probability space $(\Omega, \mathcal{G}, \mu)$. Then
$$\tau : \Omega \to \{0, 1, 2, \dots\} \cup \{\infty\}$$
is called a stopping time if $\{\tau = n\} \in \mathcal{F}_n$ for all $n$.

Example. Typical examples of stopping times are provided by barriers: let $M \in \mathbb{R}$ and let $(S_n)$ be any sequence of random variables such that $S_n$ is $\mathcal{F}_n$-measurable for all $n$. Define $\tau$ by
$$\tau(x) = n \iff S_0(x) \leq M, \dots, S_{n-1}(x) \leq M,\ S_n(x) > M,$$
$$\tau(x) = \infty \iff \sup_n S_n(x) \leq M.$$
Then $\tau$ is a stopping time.

Suppose now that $S_n = \sum_{k=1}^n X_k$ is any random sequence, with $X_k$ being $\mathcal{F}_k$-measurable for all $k$, and that $\tau$ is a stopping time. Then the stopped sequence $(S_n^\tau)$ is defined by
$$S_n^\tau(x) = S_{\min\{\tau, n\}}(x).$$
Since
$$S_k^\tau - S_{k-1}^\tau = (S_k - S_{k-1})\,1_{\{\tau \geq k\}},$$
we get the representation
$$S_n^\tau = \sum_{k=1}^n 1_{\{\tau \geq k\}}\,X_k. \tag{1.11}$$
The following proposition says that the martingale structure is preserved by stopping times.

Proposition 1.1. If $(S_n)$ is a martingale and $\tau$ is any stopping time, then the stopped sequence $(S_n^\tau)$ is also a martingale.

Proof. Note that $\{\tau \geq k\}$ is $\mathcal{F}_{k-1}$-measurable, since its complement is $\{\tau \leq k-1\} \in \mathcal{F}_{k-1}$. Combine (1.11) with this observation. $\Box$

Remark. It is instructive to compare stopping time techniques with the so-called "localization techniques" in Function Theory. Here is an example: suppose that $u$ is harmonic in the unit disc $\mathbb{D}$, $M \in \mathbb{R}$ and $G$ is a non-empty component of the set $\{u < M\}$. Then $G$ is simply connected by the Maximum Principle and, if $\varphi$ is a conformal map from $\mathbb{D}$ onto $G$, then $v = u \circ \varphi$ is harmonic in $\mathbb{D}$ and $\sup v = M$. Thus, if $(S_n)$ is a martingale, $v$ can be seen as a continuous version of the stopped martingale $(S_n^\tau)$, where $\tau$ is the stopping time corresponding to the upper barrier $M$, as in the example above.

1.5. The Basic Convergence Theorem.

Theorem 1.1 (Doob). Let $1 \leq p < \infty$. If $(S_n)$ is a martingale in $(\Omega, \mu)$ such that
$$\sup_n \int_\Omega |S_n|^p\,d\mu < \infty, \tag{1.12}$$
then $\lim_n S_n(x)$ exists and is finite for $\mu$-a.e. $x \in \Omega$.

Remarks.

(1) By Hölder's inequality, the result follows in the range $p \geq p_0$ provided it is proved for $p = p_0$. This shows that it is enough to take $p = 1$ in Theorem 1.1. Nevertheless, the proof given below exploits the orthogonality and works for $p = 2$ (and, therefore, for $p \geq 2$). In the general case $p = 1$, most of the proofs use the "upcrossing inequalities" technique, a combinatorial trick controlling the oscillation ([Shi84], [Sto84]).
(2) Theorem 1.1 still holds if "martingale" is replaced by "submartingale".

(3) The class of martingales satisfying (1.12) can be seen as the discrete analogue of the harmonic Hardy class $H^p_u$, consisting of all harmonic functions in the unit ball $B$ such that
$$\sup_{0 < r < 1} \int_{\partial B} |u(r\xi)|^p\,d\xi < \infty. \tag{1.13}$$
Therefore, Theorem 1.1 corresponds to the fact that each function in $H^p_u$ has finite radial limits a.e. on $\partial B$.

Lemma 1.1 (Kolmogorov inequality). Let $S_n = \sum_{k=1}^n X_k$ be a martingale in $(\Omega, \mu)$, with $S_0 = 0$. Then, for any $\varepsilon > 0$,
$$\mu\Big\{ \max_{1 \leq k \leq n} |S_k| \geq \varepsilon \Big\} \leq \frac{1}{\varepsilon^2} \int_\Omega S_n^2\,d\mu.$$

Proof. Define the stopping time $\tau$ by
$$\tau(x) = n \iff |S_1(x)| \leq \varepsilon, \dots, |S_{n-1}(x)| \leq \varepsilon,\ |S_n(x)| > \varepsilon,$$
$$\tau(x) = \infty \iff \sup_n |S_n(x)| \leq \varepsilon.$$
Observe that
$$\Big\{ \max_{1 \leq k \leq n} |S_k| \geq \varepsilon \Big\} = \{ \tau \leq n \}.$$
Let $(S_n^\tau)$ be the stopped martingale. Then, by (1.11) and orthogonality (Property (2), Section 1.2),
$$\int_\Omega (S_n^\tau)^2\,d\mu = \sum_{k=1}^n \int_\Omega 1_{\{\tau \geq k\}} X_k^2\,d\mu \leq \sum_{k=1}^n \int_\Omega X_k^2\,d\mu = \int_\Omega S_n^2\,d\mu.$$
Thus,
$$\varepsilon^2\,\mu\{\tau \leq n\} \leq \int_{\{\tau \leq n\}} (S_n^\tau)^2\,d\mu \leq \int_\Omega S_n^2\,d\mu,$$
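The barrier stopping time of Section 1.4, the representation (1.11) and Kolmogorov's inequality can all be illustrated numerically. A Python sketch (the trajectory, the parameters and the function names are illustrative assumptions; the last part is a Monte Carlo sanity check, not a proof):

```python
import random

def barrier_stopping_time(path, M):
    """tau from the barrier example: the first n with S_n > M; we return
    len(path) as a stand-in for tau = infinity on this finite path."""
    for n, s in enumerate(path):
        if s > M:
            return n
    return len(path)

def stopped_path(path, tau):
    """The stopped sequence S_n^tau = S_{min(tau, n)}."""
    return [path[min(tau, n)] for n in range(len(path))]

# Representation (1.11): the stopped path keeps the increment X_k
# exactly while tau >= k, then freezes at the barrier crossing.
S = [0, 1, 2, 1, 4, 3]                    # a sample trajectory S_0, ..., S_5
tau = barrier_stopping_time(S, 2)         # first n with S_n > 2
frozen = stopped_path(S, tau)

# Empirical check of Kolmogorov's inequality for the +-1 random walk:
# mu{ max_{k<=n} |S_k| >= eps } <= (1/eps^2) int S_n^2 dmu = n / eps^2.
def exceedance_frequency(n, eps, trials, seed=1):
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        s, peak = 0, 0
        for _ in range(n):
            s += random.choice((-1, 1))
            peak = max(peak, abs(s))
        hits += peak >= eps
    return hits / trials

freq = exceedance_frequency(n=25, eps=10, trials=2000)  # bound: 25/100 = 0.25
```

For the sample trajectory, `frozen` agrees with summing the increments multiplied by the indicators $1_{\{\tau \geq k\}}$, and the observed exceedance frequency stays below the bound $n/\varepsilon^2$ from Lemma 1.1.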
