
CONDITIONED BROWNIAN TREES

Jean-François Le Gall and Mathilde Weill

D.M.A., École normale supérieure, 45 rue d'Ulm, 75005 Paris, France

February 1, 2008

arXiv:math/0501066v1 [math.PR] 5 Jan 2005

Abstract

We consider a Brownian tree consisting of a collection of one-dimensional Brownian paths started from the origin, whose genealogical structure is given by the Continuum Random Tree (CRT). This Brownian tree may be generated from the Brownian snake driven by a normalized Brownian excursion, and thus yields a convenient representation of the so-called Integrated Super-Brownian Excursion (ISE), which can be viewed as the uniform probability measure on the tree of paths. We discuss different approaches that lead to the definition of the Brownian tree conditioned to stay on the positive half-line. We also establish a Vervaat-like theorem showing that this conditioned Brownian tree can be obtained by re-rooting the unconditioned one at the vertex corresponding to the minimal spatial position. In terms of ISE, this theorem yields the following fact: conditioning ISE to put no mass on $]-\infty,-\varepsilon[$ and letting $\varepsilon$ go to $0$ is equivalent to shifting the unconditioned ISE to the right so that the left-most point of its support becomes the origin. We derive a number of explicit estimates and formulas for our conditioned Brownian trees. In particular, the probability that ISE puts no mass on $]-\infty,-\varepsilon[$ is shown to behave like $2\varepsilon^4/21$ when $\varepsilon$ goes to $0$. Finally, for the conditioned Brownian tree with a fixed height $h$, we obtain a decomposition involving a spine whose distribution is absolutely continuous with respect to that of a nine-dimensional Bessel process on the time interval $[0,h]$, and Poisson processes of subtrees originating from this spine.

1 Introduction

In this work, we define and study a continuous tree of one-dimensional Brownian paths started from the origin, which is conditioned to remain in the positive half-line.
An important motivation for introducing this object comes from its relation with analogous discrete models which are discussed in several recent papers.

In order to present our main results, let us briefly describe a construction of unconditioned Brownian trees. We start from a positive Brownian excursion conditioned to have duration 1 (a normalized Brownian excursion in short), which is denoted by $(e(s), 0 \le s \le 1)$. This random function can be viewed as coding a continuous tree via the following simple prescriptions. For every $s, s' \in [0,1]$, we set
$$m_e(s,s') := \inf_{s \wedge s' \le r \le s \vee s'} e(r).$$
We then define an equivalence relation on $[0,1]$ by setting $s \sim s'$ if and only if $e(s) = e(s') = m_e(s,s')$. Finally we put
$$d_e(s,s') = e(s) + e(s') - 2\,m_e(s,s')$$
and note that $d_e(s,s')$ only depends on the equivalence classes of $s$ and $s'$. Then the quotient space $\mathcal{T}_e := [0,1]/\sim$ equipped with the metric $d_e$ is a compact $\mathbb{R}$-tree (see e.g. Section 2 of [13]). In other words, it is a compact metric space such that for any two points $\sigma$ and $\sigma'$ there is a unique arc with endpoints $\sigma$ and $\sigma'$, and furthermore this arc is isometric to a compact interval of the real line. We view $\mathcal{T}_e$ as a rooted $\mathbb{R}$-tree, whose root $\rho$ is the equivalence class of $0$. For every $\sigma \in \mathcal{T}_e$, the ancestral line of $\sigma$ is the line segment joining $\rho$ to $\sigma$. This line segment is denoted by $[[\rho,\sigma]]$. We write $\dot{s}$ for the equivalence class of $s$, which is a vertex in $\mathcal{T}_e$ at generation $e(s) = d_e(0,s)$.

Up to unimportant scaling constants, $\mathcal{T}_e$ is the Continuum Random Tree (CRT) introduced by Aldous [3]. The preceding presentation is indeed a reformulation of Corollary 22 in [5], which was proved via a discrete approximation (a more direct approach was given in [21]). As Aldous [5] has shown, the CRT is the scaling limit of critical Galton-Watson trees conditioned to have a large fixed progeny (see [12] and [13] for recent generalizations of Aldous' result).
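As a side illustration (ours, not part of the paper), the coding of the tree by the excursion can be sketched numerically: the pseudo-distance $d_e(s,s') = e(s) + e(s') - 2\,m_e(s,s')$ computed on an excursion sampled on a uniform grid. The function name and the toy excursion are our own.

```python
import numpy as np

def tree_distance(e, i, j):
    """Pseudo-distance d_e(s, s') = e(s) + e(s') - 2 m_e(s, s'),
    where m_e(s, s') is the minimum of e between s and s',
    for an excursion e sampled on a uniform grid of [0, 1]."""
    lo, hi = min(i, j), max(i, j)
    m = e[lo:hi + 1].min()  # m_e(s, s')
    return e[i] + e[j] - 2.0 * m

# A toy "excursion" on a grid of 9 points (e(0) = e(1) = 0).
e = np.array([0.0, 1.0, 2.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])

# Grid points 1 and 7 satisfy e(s) = e(s') = m_e(s, s'), so d_e = 0:
# they code the same vertex of the quotient tree [0, 1] / ~.
```

Pairs at distance zero are exactly the pairs identified by the equivalence relation $\sim$, so the quotient metric is well defined.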
The fact that Brownian excursions can be used to model continuous genealogies had been used before, in particular in the Brownian snake approach to superprocesses (see [20]).

We can now combine the branching structure of the CRT with independent spatial motions. We restrict ourselves to spatial displacements given by linear Brownian motions, which is the case of interest in this work. Conditionally given $e$, we introduce a centered Gaussian process $(V_\sigma, \sigma \in \mathcal{T}_e)$ with covariance
$$\mathrm{cov}(V_{\dot{s}}, V_{\dot{s}'}) = m_e(s,s'), \qquad s, s' \in [0,1].$$
This definition should become clear if we observe that $m_e(s,s')$ is the generation of the most recent common ancestor to $\dot{s}$ and $\dot{s}'$ in the tree $\mathcal{T}_e$. It is easy to verify that the process $(V_\sigma, \sigma \in \mathcal{T}_e)$ has a continuous modification. The random measure $\mathcal{Z}$ on $\mathbb{R}$ defined by
$$\langle \mathcal{Z}, \varphi \rangle = \int_0^1 \varphi(V_{\dot{s}})\, ds$$
is then the one-dimensional Integrated Super-Brownian Excursion (ISE). Note that ISE in higher dimensions, and related Brownian trees, have appeared recently in various asymptotic results for statistical mechanics models (see e.g. [11], [15], [16]). The support, or range, of ISE is
$$\mathcal{R} := \{V_\sigma : \sigma \in \mathcal{T}_e\}.$$

For our purposes, it is also convenient to reinterpret the preceding notions in terms of the Brownian snake. The Brownian snake $(W_s, 0 \le s \le 1)$ driven by the normalized excursion $e$ is obtained as follows (see subsection 2.1 for a more detailed presentation). For every $s \in [0,1]$, $W_s = (W_s(t), 0 \le t \le e(s))$ is the finite path which gives the spatial positions along the ancestral line of $\dot{s}$: $W_s(t) = V_\sigma$ if $\sigma$ is the vertex at distance $t$ from the root on the segment $[[\rho,\dot{s}]]$. Note that $W_s$ only depends on the equivalence class $\dot{s}$. We view $W_s$ as a random element of the space $\mathcal{W}$ of finite paths.

Our first goal is to give a precise definition of the Brownian tree $(V_\sigma, \sigma \in \mathcal{T}_e)$ conditioned to remain positive. Equivalently this amounts to conditioning ISE to put no mass on the negative half-line.
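The covariance prescription above can likewise be illustrated on a grid: given a discretized excursion, one realization of the centered Gaussian process $(V_s)$ with $\mathrm{cov}(V_s, V_{s'}) = m_e(s,s')$ can be drawn by factorizing the covariance matrix. This is a minimal numerical sketch under our own discretization; it is not the paper's construction.

```python
import numpy as np

def sample_spatial_positions(e, rng):
    """Draw one realization of the centered Gaussian process (V_s) with
    cov(V_s, V_s') = m_e(s, s'), given a discretized excursion e, via a
    Cholesky factorization of the covariance matrix."""
    n = len(e)
    cov = np.empty((n, n))
    for i in range(n):
        for j in range(i, n):
            cov[i, j] = cov[j, i] = e[i:j + 1].min()  # m_e(s, s')
    # The matrix is only positive semi-definite (equivalent times give
    # identical rows), so a tiny jitter keeps the factorization stable.
    chol = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
    return chol @ rng.standard_normal(n)

rng = np.random.default_rng(0)
e = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
V = sample_spatial_positions(e, rng)  # V[0], V[-1] sit at the root
```

The empirical measure of the values $V_{\dot{s}}$ over the grid is then a crude discrete stand-in for ISE.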
Our first theorem gives a precise meaning to this conditioning in terms of the Brownian snake. We denote by $N_0^{(1)}$ the distribution of $(W_s, 0 \le s \le 1)$ on the canonical space $C([0,1], \mathcal{W})$ of continuous functions from $[0,1]$ into $\mathcal{W}$, and we abuse notation by still writing $(W_s, 0 \le s \le 1)$ for the canonical process on this space. The range $\mathcal{R}$ is then defined under $N_0^{(1)}$ by
$$\mathcal{R} = \{\widehat{W}_s : 0 \le s \le 1\}$$
where $\widehat{W}_s$ denotes the endpoint of the path $W_s$.

Theorem 1.1 We have
$$\lim_{\varepsilon \downarrow 0} \varepsilon^{-4}\, N_0^{(1)}\big(\mathcal{R} \subset\, ]-\varepsilon, \infty[\big) = \frac{2}{21}.$$
There exists a probability measure on $C([0,1], \mathcal{W})$, which is denoted by $\overline{N}_0^{(1)}$, such that
$$\lim_{\varepsilon \downarrow 0} N_0^{(1)}\big(\,\cdot \mid \mathcal{R} \subset\, ]-\varepsilon, \infty[\big) = \overline{N}_0^{(1)},$$
in the sense of weak convergence in the space of probability measures on $C([0,1], \mathcal{W})$.

Our second theorem gives an explicit representation of the conditioned measure $\overline{N}_0^{(1)}$, which is analogous to a famous theorem of Vervaat [28] relating the normalized Brownian excursion to the Brownian bridge. To state this result, we need the notion of re-rooting. For $s \in [0,1]$, we write $\mathcal{T}_e^{[s]}$ for the "same" tree $\mathcal{T}_e$ but with root $\dot{s}$ instead of $\rho = \dot{0}$. We then shift the spatial positions by setting $V_\sigma^{[s]} = V_\sigma - V_{\dot{s}}$ for every $\sigma \in \mathcal{T}_e$, in such a way that the spatial position of the new root is still the origin. (Notice that both $\mathcal{T}_e^{[s]}$ and $V^{[s]}$ only depend on $\dot{s}$, and we could as well define $\mathcal{T}_e^{[\sigma]}$ and $V^{[\sigma]}$ for $\sigma \in \mathcal{T}_e$.) Finally, the re-rooted snake $W^{[s]} = (W_r^{[s]}, 0 \le r \le 1)$ is defined analogously as before: for every $r \in [0,1]$, $W_r^{[s]}$ is the path giving the spatial positions $V^{[s]}_\sigma$ along the ancestral line (in the re-rooted tree) of the vertex $s + r$ mod $1$.

Theorem 1.2 Let $s_*$ be the unique time of the minimum of $\widehat{W}$ on $[0,1]$. The probability measure $\overline{N}_0^{(1)}$ is the law under $N_0^{(1)}$ of the re-rooted snake $W^{[s_*]}$.

If we want to define one-dimensional ISE conditioned to put no mass on the negative half-line, the most natural way is to condition it to put no mass on $]-\infty, -\varepsilon[$ and then to let $\varepsilon$ go to 0.
As a consequence of the previous two theorems, this is equivalent to shifting the unconditioned ISE to the right, so that the left-most point of its support becomes the origin.

Both Theorem 1.1 and Theorem 1.2 could be presented in a different and perhaps more elegant manner by using the formalism of spatial trees as in Section 5 of [13]. In this formalism, a spatial tree is a pair $(\mathcal{T}, U)$ where $\mathcal{T}$ is a compact rooted $\mathbb{R}$-tree (in fact an equivalence class of such objects modulo root-preserving isometries) and $U$ is a continuous mapping from $\mathcal{T}$ into $\mathbb{R}^d$. Then the second assertion of Theorem 1.1 can be rephrased by saying that the conditional distribution of the spatial tree $(\mathcal{T}_e, V)$ knowing that $\mathcal{R} \subset\, ]-\varepsilon, \infty[$ has a limit when $\varepsilon$ goes to 0, and Theorem 1.2 says that this limit is the distribution of $(\mathcal{T}_e^{[\sigma_*]}, V^{[\sigma_*]})$, where $\sigma_*$ is the unique vertex minimizing $V$. We have chosen the above presentation because the Brownian snake plays a fundamental role in our proofs, and also because the resulting statements are stronger than the ones in terms of spatial trees.

Let us discuss the relationship of the above theorems with previous results. The first assertion of Theorem 1.1 is closely related to some estimates of Abraham and Werner [1]. In particular, Abraham and Werner proved that the probability for a Brownian snake driven by a Brownian excursion of height 1 not to hit the set $]-\infty, -\varepsilon[$ behaves like a constant times $\varepsilon^4$ (see Section 4 below). The $d$-dimensional Brownian snake conditioned not to exit a domain $D$ was studied by Abraham and Serlet [2], who observed that this conditioning gives rise to a particular instance of the Brownian snake with drift. The setting in [2] is different from the present work, in that the initial point of the snake lies inside the domain, and not at its boundary as here. We also mention the paper [18] by Jansons and Rogers, who establish a decomposition at the minimum for a Brownian tree where branchings occur only at discrete times.
An important motivation for the present work came from several recent papers that discuss asymptotics for planar maps. A key result due to Schaeffer (see [9]) establishes a bijection between rooted planar quadrangulations and certain discrete trees called well-labelled trees. Roughly, a well-labelled tree consists of a (discrete) plane tree whose vertices are given labels which are positive integers, with the constraints that the label of the root is 1 and the labels of two neighboring vertices can differ by at most 1. Our conditioned Brownian snake should then be viewed as a continuous model for well-labelled trees. This idea was exploited in [9] and especially in Marckert and Mokkadem [26], where the re-rooted snake $W^{[s_*]}$ appears in the description of the Brownian map, which is the continuous object describing scaling limits of planar quadrangulations. In contrast with the present work, the re-rooted snake $W^{[s_*]}$ is not interpreted in [26] as a conditioned object, but rather as a scaling limit of re-rooted discrete snakes. Closely related models of discrete labelled trees are also of interest in theoretical physics: see in particular [6] and [7].

Motivated by [9] and [26], we prove in [24] that our conditioned Brownian tree is the scaling limit of discrete spatial trees conditioned to remain positive. To be specific, we consider a Galton-Watson tree whose offspring distribution is critical and has (small) exponential moments, and we condition this tree to have exactly $n$ vertices (in the special case of the geometric distribution, this gives rise to a tree that is uniformly distributed over the set of plane trees with $n$ vertices). This branching structure is combined with a spatial displacement which is a symmetric random walk with bounded jump size on $\mathbb{Z}$. Assuming that the root is at the origin of $\mathbb{Z}$, the spatial tree is then conditioned to remain on the positive side.
According to the main theorem of [24], the scaling limit of this conditioned discrete tree when $n \to \infty$ leads to the measure $\overline{N}_0^{(1)}$ discussed above. The convergence here, and the precise form of the scaling transformation, are as in Theorem 2 of [17], which discusses scaling limits for unconditioned discrete snakes.

Let us now describe the other contributions of this paper. Although the preceding theorems have been stated for the measure $N_0^{(1)}$, a more fundamental object is the excursion measure $N_0$ of the Brownian snake (see e.g. [23]). Roughly speaking, $N_0$ is obtained by the same construction as above, but instead of considering a normalized Brownian excursion, we now let $e$ be distributed according to the (infinite) Itô measure of Brownian excursions. If $\sigma(e)$ denotes the duration of excursion $e$, we have $N_0^{(1)} = N_0(\,\cdot \mid \sigma = 1)$. It turns out that many calculations are more tractable under the infinite measure $N_0$ than under $N_0^{(1)}$. For this reason, both Theorem 1.1 and Theorem 1.2 are proved in Section 3 as consequences of Theorem 3.1, which deals with $N_0$. Motivated by Theorem 3.1, we introduce another infinite measure, denoted by $\overline{N}_0$, which should be interpreted as $N_0$ conditioned on the event $\{\mathcal{R} \subset [0,\infty[\,\}$, even though the conditioning requires some care as we are dealing with infinite measures. In the same way as for unconditioned measures, we have $\overline{N}_0^{(1)} = \overline{N}_0(\,\cdot \mid \sigma = 1)$. Another motivation for considering the measure $\overline{N}_0$ comes from connections with superprocesses: analogously to Chapter IV of [23] in the unconditioned case, $\overline{N}_0$ could be used to define and to analyse a one-dimensional super-Brownian motion started from the Dirac measure $\delta_0$ and conditioned never to charge the negative half-line.

In Section 4, we present a different approach that leads to the same limiting measures. If $H(e)$ stands for the height of excursion $e$, we consider for every $h > 0$ the measure $N_0^h := N_0(\,\cdot \mid H = h)$.
In the above construction this amounts to replacing the normalized excursion $e$ by a Brownian excursion with height $h$. By using a famous decomposition theorem of Williams, we can then analyse the behavior of the measure $N_0^h$ conditioned on the event that the range does not intersect $]-\infty, -\varepsilon[$, and show that it has a limit denoted by $\overline{N}_0^h$ when $\varepsilon \to 0$. The method also provides information about the Brownian tree under $\overline{N}_0^h$: this Brownian tree consists of a spine whose distribution is absolutely continuous with respect to that of the nine-dimensional Bessel process, and as usual a Poisson collection of subtrees originating from the spine, which are Brownian snake excursions conditioned not to hit the negative half-line. The connection with the measures $\overline{N}_0^{(1)}$ and $\overline{N}_0$ is made by proving that $\overline{N}_0^h = \overline{N}_0(\,\cdot \mid H = h)$. Several arguments in this section have been inspired by Abraham and Werner's paper [1]. It should also be noted that a discrete version of the nine-dimensional Bessel process already appears in the Chassaing-Durhuus paper [8].

At the end of Section 4, we also discuss the limiting behavior of the measures $\overline{N}_0^h$ as $h \to \infty$. This leads to a probability measure $\overline{N}_0^\infty$ that should be viewed as the law of an infinite Brownian snake excursion conditioned to stay positive. We again get a description of the Brownian tree coded by $\overline{N}_0^\infty$ in terms of a spine and conditioned Brownian snake excursions originating from this spine. Moreover, the description is simpler in the sense that the spine is exactly distributed as a nine-dimensional Bessel process started at the origin.

Section 5 gives an explicit formula for the finite-dimensional marginal distributions of the Brownian tree under $\overline{N}_0$, that is for
$$\overline{N}_0\Big( \int_{]0,\sigma[^p} ds_1 \cdots ds_p\, F(W_{s_1}, \ldots, W_{s_p}) \Big)$$
where $p \ge 1$ is an integer and $F$ is a symmetric nonnegative measurable function on $\mathcal{W}^p$.
In a way similar to the corresponding result for the unconditioned Brownian snake (see (1) below), this formula involves combining the branching structure of certain discrete trees with spatial displacements. Here however, because of the conditioning, the spatial displacements turn out to be given by nine-dimensional Bessel processes rather than linear Brownian motions. In the same way as the finite-dimensional marginal distributions of the CRT can be derived from the analogous formula under the Itô measure (see Chapter III of [23]), one might hope to derive the expression of the finite-dimensional marginals under $\overline{N}_0^{(1)}$ from the case of $\overline{N}_0$. This idea apparently leads to intractable calculations, but we still expect Theorem 5.1 to have useful applications in future work about conditioned trees.

Basic facts about the Brownian snake are recalled in Section 2, which also establishes a few important preliminary results, some of which are of independent interest. In particular, we state and prove a general version of the invariance property of $N_0$ under re-rooting (Theorem 2.3). This result is clearly related to the invariance of the CRT under uniform re-rooting, which was observed by Aldous [4] (and generalized to Lévy trees in Proposition 4.8 of [13]). See also [9] for similar ideas in a discrete setting, and especially Proposition 13 of [26], which gives a closely related statement.

2 Preliminaries

In this section, we recall the basic facts about the Brownian snake that we will use later, and we also establish a few important preliminary results. We refer to [23] for a more detailed presentation of the Brownian snake and its connections with partial differential equations. In the first four subsections below, we deal with the $d$-dimensional Brownian snake, since the proofs are not more difficult in that case, and the results may have other applications.
2.1 The Brownian snake

The ($d$-dimensional) Brownian snake is a Markov process taking values in the space $\mathcal{W}$ of finite paths in $\mathbb{R}^d$. Here a finite path is simply a continuous mapping $w : [0,\zeta] \to \mathbb{R}^d$, where $\zeta = \zeta_{(w)}$ is a nonnegative real number called the lifetime of $w$. The set $\mathcal{W}$ is a Polish space when equipped with the distance
$$d(w,w') = |\zeta_{(w)} - \zeta_{(w')}| + \sup_{t \ge 0} |w(t \wedge \zeta_{(w)}) - w'(t \wedge \zeta_{(w')})|.$$
The endpoint (or tip) of the path $w$ is denoted by $\widehat{w}$. The range of $w$ is denoted by $w[0,\zeta_{(w)}]$.

In this work, it will be convenient to use the canonical space $\Omega := C(\mathbb{R}_+, \mathcal{W})$ of continuous functions from $\mathbb{R}_+$ into $\mathcal{W}$, which is equipped with the topology of uniform convergence on every compact subset of $\mathbb{R}_+$. The canonical process on $\Omega$ is then denoted by
$$W_s(\omega) = \omega(s), \qquad \omega \in \Omega,$$
and we write $\zeta_s = \zeta_{(W_s)}$ for the lifetime of $W_s$.

Let $w \in \mathcal{W}$. The law of the Brownian snake started from $w$ is the probability measure $P_w$ on $\Omega$ which can be characterized as follows. First, the process $(\zeta_s)_{s \ge 0}$ is under $P_w$ a reflected Brownian motion in $[0,\infty[$ started from $\zeta_{(w)}$. Secondly, the conditional distribution of $(W_s)_{s \ge 0}$ knowing $(\zeta_s)_{s \ge 0}$, which is denoted by $\Theta_w^\zeta$, is characterized by the following properties:

(i) $W_0 = w$, $\Theta_w^\zeta$ a.s.

(ii) The process $(W_s)_{s \ge 0}$ is time-inhomogeneous Markov under $\Theta_w^\zeta$. Moreover, if $0 \le s \le s'$,

• $W_{s'}(t) = W_s(t)$ for every $t \le m(s,s') := \inf_{[s,s']} \zeta_r$, $\Theta_w^\zeta$ a.s.

• $\big(W_{s'}(m(s,s') + t) - W_{s'}(m(s,s'))\big)_{0 \le t \le \zeta_{s'} - m(s,s')}$ is independent of $W_s$ and distributed as a $d$-dimensional Brownian motion started at $0$ under $\Theta_w^\zeta$.

Informally, the value $W_s$ of the Brownian snake at time $s$ is a random path with a random lifetime $\zeta_s$ evolving like reflecting Brownian motion in $[0,\infty[$. When $\zeta_s$ decreases, the path is erased from its tip, and when $\zeta_s$ increases, the path is extended by adding "little pieces" of Brownian paths at its tip.

Excursion measures play a fundamental role throughout this work.
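As a small aside (our own sketch, not the paper's), the distance $d(w,w')$ on the space $\mathcal{W}$ of finite paths can be approximated numerically by taking the supremum over a finite time grid; the function name and the toy paths are hypothetical.

```python
def path_distance(w, zeta_w, wp, zeta_wp, grid):
    """Distance on the space W of finite paths:
    d(w, w') = |zeta_w - zeta_w'| + sup_t |w(t ^ zeta_w) - w'(t ^ zeta_w')|,
    with the supremum approximated over a finite grid of times."""
    sup = max(abs(w(min(t, zeta_w)) - wp(min(t, zeta_wp))) for t in grid)
    return abs(zeta_w - zeta_wp) + sup

# Two one-dimensional straight-line paths from 0, lifetimes 1 and 2:
# the lifetimes differ by 1 and the tips end up at distance 1.
w = lambda t: t
wp = lambda t: t
grid = [0.01 * k for k in range(201)]  # times covering [0, 2]
```

The two terms of the distance are visible here: the lifetime gap contributes 1, and the frozen-versus-growing tails contribute the supremum.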
We denote by $n(de)$ the Itô measure of positive Brownian excursions. This is a $\sigma$-finite measure on the space $C(\mathbb{R}_+, \mathbb{R}_+)$ of continuous functions from $\mathbb{R}_+$ into $\mathbb{R}_+$. We write
$$\sigma(e) = \inf\{s > 0 : e(s) = 0\}$$
for the duration of excursion $e$. For $s > 0$, $n_{(s)}$ will denote the conditioned measure $n(\,\cdot \mid \sigma = s)$. Our normalization of the excursion measure is fixed by the relation
$$n = \int_0^\infty \frac{ds}{2\sqrt{2\pi s^3}}\, n_{(s)}.$$
If $x \in \mathbb{R}^d$, the excursion measure $N_x$ of the Brownian snake from $x$ is then defined by
$$N_x = \int_{C(\mathbb{R}_+, \mathbb{R}_+)} n(de)\, \Theta^e_{\overline{x}}$$
where $\overline{x}$ denotes the trivial element of $\mathcal{W}$ with lifetime $0$ and initial point $x$. Alternatively, we can view $N_x$ as the excursion measure of the Brownian snake from the regular point $\overline{x}$. With a slight abuse of notation, we will also write $\sigma(\omega) = \inf\{s > 0 : \zeta_s(\omega) = 0\}$ for $\omega \in \Omega$. We can then consider the conditioned measures
$$N^{(s)}_x = N_x(\,\cdot \mid \sigma = s) = \int_{C(\mathbb{R}_+, \mathbb{R}_+)} n_{(s)}(de)\, \Theta^e_{\overline{x}}.$$
Note that in contrast to the introduction we now view $N^{(s)}_x$ as a measure on $\Omega$ rather than on $C([0,s], \mathcal{W})$. The range $\mathcal{R} = \mathcal{R}(\omega)$ is defined by
$$\mathcal{R} = \{\widehat{W}_s : s \ge 0\}.$$

Lemma 2.1 Suppose that $d = 1$ and let $x > 0$.

(i) We have
$$N_x\big(\mathcal{R}\, \cap\, ]-\infty, 0] \ne \emptyset\big) = \frac{3}{2x^2}.$$

(ii) For every $\lambda > 0$,
$$N_x\Big(1 - \mathbf{1}_{\{\mathcal{R}\, \cap\, ]-\infty,0] = \emptyset\}}\, e^{-\lambda\sigma}\Big) = \sqrt{\frac{\lambda}{2}}\,\Big(3\big(\coth(2^{1/4} x \lambda^{1/4})\big)^2 - 2\Big)$$
where $\coth(y) = \cosh(y)/\sinh(y)$.

Proof: (i) According to Section VI.1 of [23], the function $u(x) = N_x(\mathcal{R}\, \cap\, ]-\infty, 0] \ne \emptyset)$ solves $u'' = 4u^2$ in $]0,\infty[$, with boundary condition $u(0+) = +\infty$. The desired result follows.

(ii) See Lemma 7 in [10]. □
Anelement of isthusasequenceu = u1...un ofelements { } { } U of 1,2 , and we set u = n, so that u represents the “generation” ofu. In particular, ∅ = 0. { } | | | | | | The mapping π : ∅ is defined by π(u1...un) = u1...un 1 (π(u) is the “father” of − U\{ } −→ U u). In particular, if k = u , we have πk(u) = ∅. | | A binary (plane) tree is a finite subset of such that: T U (i) ∅ . ∈ T (ii) u ∅ π(u) . ∈ T\{ } ⇒ ∈ T (iii) For every u , either u1 and u2 , or u1 / and u2 / (u is called a leaf in ∈ T ∈ U ∈ U ∈ U ∈ U the second case). We denote by A the set of all binary trees. A marked tree is then a pair ( ,(h ) ) where u u A and h 0 for every u . We denote by T the space of all marTked tree∈sT. In this u T ∈ ≥ ∈ T work it will be convenient to view marked trees as R-trees in the sense of [13] or [14] (see also Section 1 above). This can be achieved through the following explicit construction. Let θ = ( ,(h ) ) be a marked tree and let R be the vector space of all mappings from into u u T R. WTrite (ε ,∈uT ) for the canonical basis of R . Then consider the mapping T u T ∈ T p : u [0,h ] R θ u T { }× −→ [ u ∈T defined by u | | p (u,ℓ) = h ε +ℓε . θ πk(u) πk(u) u X k=1 As a set, the real tree associated with θ is the range θ of p . Note that this is a connected union θ of line segments in R . It is equipped with the distance d such that d (a,b) is the length of T e θ θ the shortest path in θ going from a to b. By definition, the range of this path is the segment between a and b and is denoted by [[a,b]]. Finally, we will write for (one-dimensional) e Lθ Lebesgue measure on θ. By definition, leaves of θ are points of the form p (u,h ) where u is leaf of θ. Points of the e θ u form p (u,h ) when u is not a leaf are called nodes of θ. We write L(θ) for the set of leaves of θ u e θ, and I(θ) for the set of its nodes. The root of θ is just the point 0 = p (∅,0). e θ e We will consider Brownian motion indexed byeθ, with initial point x Rd. 
Formally, we may consider, under the probability measure $Q^\theta_x$, a collection $(\xi^u)_{u \in \mathcal{T}}$ of independent $d$-dimensional Brownian motions all started at $0$, except $\xi^\emptyset$ which starts at $x$, and define a continuous process $(V_a, a \in \widetilde{\theta})$ by setting
$$V_{p_\theta(u,\ell)} = \sum_{k=1}^{|u|} \xi^{\pi^k(u)}(h_{\pi^k(u)}) + \xi^u(\ell),$$
for every $u \in \mathcal{T}$ and $\ell \in [0,h_u]$. Finally, with every leaf $a$ of $\widetilde{\theta}$ we associate a stopped path $w^{(a)}$ with lifetime $d_\theta(0,a)$: for every $t \in [0, d_\theta(0,a)]$, $w^{(a)}(t) = V_{r(a,t)}$ where $r(a,t)$ is the unique element of $[[0,a]]$ such that $d_\theta(0, r(a,t)) = t$.

For every integer $p \ge 1$, denote by $\mathbb{A}_p$ the set of all binary trees with $p$ leaves, and by $\mathbb{T}_p$ the corresponding set of marked trees. The uniform measure $\Lambda_p$ on $\mathbb{T}_p$ is defined by
$$\int_{\mathbb{T}_p} \Lambda_p(d\theta)\, F(\theta) = \sum_{\mathcal{T} \in \mathbb{A}_p} \int \prod_{v \in \mathcal{T}} dh_v\, F(\mathcal{T}, (h_v)_{v \in \mathcal{T}}).$$
With this notation, Proposition IV.2 of [23] states that, for every integer $p \ge 1$ and every symmetric nonnegative measurable function $F$ on $\mathcal{W}^p$,
$$N_x\Big( \int_{]0,\sigma[^p} ds_1 \cdots ds_p\, F(W_{s_1}, \ldots, W_{s_p}) \Big) = 2^{p-1}\, p! \int \Lambda_p(d\theta)\, Q^\theta_x\Big[ F\big((w^{(a)})_{a \in L(\widetilde{\theta})}\big) \Big]. \qquad (1)$$

We will need a stronger result concerning the case where the function $F$ also depends on the range of the Brownian snake. To state this result, denote by $\mathcal{K}$ the space of all compact subsets of $\mathbb{R}^d$, which is equipped with the Hausdorff metric and the associated Borel $\sigma$-field. Suppose that under the probability measure $Q^\theta_x$ (for each choice of $\theta$ in $\mathbb{T}$), in addition to the process $(V_a, a \in \widetilde{\theta})$, we are also given an independent Poisson point measure on $\widetilde{\theta} \times \Omega$, denoted by
$$\sum_{i \in I} \delta_{(a_i, \omega_i)},$$
with intensity $4\, \mathcal{L}_\theta(da) \otimes N_0(d\omega)$.

Theorem 2.2 For every nonnegative measurable function $F$ on $\mathcal{W}^p \times \mathcal{K} \times \mathbb{R}_+$, which is symmetric in the first $p$ variables, we have
$$N_x\Big( \int_{]0,\sigma[^p} ds_1 \cdots ds_p\, F(W_{s_1}, \ldots, W_{s_p}, \mathcal{R}, \sigma) \Big) = 2^{p-1}\, p! \int \Lambda_p(d\theta)\, Q^\theta_x\Big[ F\Big( (w^{(a)})_{a \in L(\widetilde{\theta})},\ \mathrm{cl}\Big(\bigcup_{i \in I} \big(V_{a_i} + \mathcal{R}(\omega_i)\big)\Big),\ \sum_{i \in I} \sigma(\omega_i) \Big) \Big],$$
where $\mathrm{cl}(A)$ denotes the closure of the set $A$.

Remark.
It is immediate to see that
$$\mathrm{cl}\Big(\bigcup_{i \in I} \big(V_{a_i} + \mathcal{R}(\omega_i)\big)\Big) = \Big(\bigcup_{a \in L(\widetilde{\theta})} w^{(a)}[0, \zeta_{(w^{(a)})}]\Big) \cup \Big(\bigcup_{i \in I} \big(V_{a_i} + \mathcal{R}(\omega_i)\big)\Big), \qquad Q^\theta_x \text{ a.e.}$$

Proof: Consider first the case $p = 1$. Let $F_1$ be a nonnegative measurable function on $\mathcal{W}$, and let $F_2$ and $F_3$ be two nonnegative measurable functions on $\Omega$. By applying the Markov property under $N_x$ at time $s$, then using the time-reversal invariance of $N_x$ (which is easy from the analogous property for the Itô measure $n(de)$), and finally using the Markov property at time $s$ once again, we get
$$\begin{aligned}
N_x\Big( \int_0^\sigma ds\, F_1(W_s)\, F_2\big((W_{(s-r)^+})_{r \ge 0}\big)\, F_3\big((W_{s+r})_{r \ge 0}\big) \Big)
&= N_x\Big( \int_0^\sigma ds\, F_1(W_s)\, F_2\big((W_{(s-r)^+})_{r \ge 0}\big)\, E_{W_s}\Big[ F_3\big((W_{r \wedge \sigma})_{r \ge 0}\big) \Big] \Big) \\
&= N_x\Big( \int_0^\sigma ds\, F_1(W_s)\, F_2\big((W_{s+r})_{r \ge 0}\big)\, E_{W_s}\Big[ F_3\big((W_{r \wedge \sigma})_{r \ge 0}\big) \Big] \Big) \\
&= N_x\Big( \int_0^\sigma ds\, F_1(W_s)\, E_{W_s}\Big[ F_2\big((W_{r \wedge \sigma})_{r \ge 0}\big) \Big]\, E_{W_s}\Big[ F_3\big((W_{r \wedge \sigma})_{r \ge 0}\big) \Big] \Big).
\end{aligned}$$
We then use the case $p = 1$ of (1) to see that the last quantity is equal to
$$\int_0^\infty dt \int P^t_x(dw)\, F_1(w)\, E_w\Big[ F_2\big((W_{r \wedge \sigma})_{r \ge 0}\big) \Big]\, E_w\Big[ F_3\big((W_{r \wedge \sigma})_{r \ge 0}\big) \Big],$$
where $P^t_x$ denotes the law of Brownian motion started at $x$ and stopped at time $t$ (this law is viewed as a probability measure on $\mathcal{W}$). Now if we specialize to the case where $F_2$ is a function of the form $F_2(\omega) = G_2(\{\widehat{W}_s(\omega) : s \ge 0\}, \sigma)$, an immediate application of Lemma V.2 in [23] shows that
$$E_w\Big[ F_2\big((W_{r \wedge \sigma})_{r \ge 0}\big) \Big] = E\Big[ G_2\Big( \mathrm{cl}\Big(\bigcup_{j \in J} \big(w(t_j) + \mathcal{R}(\omega_j)\big)\Big),\ \sum_{j \in J} \sigma(\omega_j) \Big) \Big],$$
where $\sum_{j \in J} \delta_{(t_j, \omega_j)}$ is a Poisson point measure on $[0, \zeta_{(w)}] \times \Omega$ with intensity $2\, dt\, N_0(d\omega)$. Applying the same observation to $F_3$, we easily get the case $p = 1$ of the theorem.

The general case can be derived along similar lines by using Theorem 3 in [21].
Roughly speaking, the case $p = 1$ amounts to combining Bismut's decomposition of the Brownian excursion (Lemma 1 in [21]) with the spatial displacements of the Brownian snake. For general $p$, the second assertion of Theorem 3 in [21] provides the analogue of Bismut's decomposition, which when combined with spatial displacements leads to the statement of Theorem 2.2. Details are left to the reader. □

2.3 The re-rooting theorem

In this subsection, we state and prove an important invariance property of the Brownian snake under $N_0$, which plays a major role in Section 3 below. We first need to introduce some notation. For every $s, r \in [0,\sigma]$, we set
$$s \oplus r = \begin{cases} s + r & \text{if } s + r \le \sigma, \\ s + r - \sigma & \text{if } s + r > \sigma. \end{cases}$$
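The cyclic addition $s \oplus r$ is just addition of times modulo the duration $\sigma$. A one-line sketch (ours, with a hypothetical function name):

```python
def cyclic_add(s, r, sigma):
    """s ⊕ r: addition of times modulo the duration sigma of the
    excursion, as used when re-rooting the snake at time s."""
    t = s + r
    return t if t <= sigma else t - sigma
```

Under the normalized measure, sigma = 1 and s ⊕ r reduces to s + r mod 1, the index shift appearing in the definition of the re-rooted snake in the introduction.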
