The slow regime of randomly biased walks on trees

by Yueyun Hu¹ and Zhan Shi²

Université Paris XIII & Université Paris VI

Summary. We are interested in the randomly biased random walk on the supercritical Galton–Watson tree. Our attention is focused on a slow regime when the biased random walk (X_n) is null recurrent, making a maximal displacement of order of magnitude (log n)³ in the first n steps. We study the localization problem of X_n and prove that the quenched law of X_n can be approximated by a certain invariant probability depending on n and the random environment. As a consequence, we establish that upon the survival of the system, |X_n|/(log n)² converges in law to some non-degenerate limit on (0, ∞) whose law is explicitly computed.

Keywords. Biased random walk on the Galton–Watson tree, branching random walk, slow movement, local time, convergence in law.

2010 Mathematics Subject Classification. 60J80, 60G50, 60K37.

Partly supported by ANR project MEMEMO2 (2010-BLAN-0125).
¹ LAGA, Université Paris XIII, 99 avenue J-B Clément, F-93430 Villetaneuse, France, [email protected]
² LPMA, Université Paris VI, 4 place Jussieu, F-75252 Paris Cedex 05, France, [email protected]

1 Introduction

Let T be a supercritical Galton–Watson tree rooted at ∅, so it survives with positive probability. For any pair of vertices x and y of T, we say x ∼ y if x is either a child, or the parent, of y. Let ω := (ω(x), x ∈ T) be a sequence of vectors; for each vertex x ∈ T, ω(x) := (ω(x, y), y ∈ T) is such that ω(x, y) ≥ 0 for all y ∈ T and that Σ_{y∈T} ω(x, y) = 1. We assume that for each pair of vertices x and y, ω(x, y) > 0 if and only if y ∼ x.

For given ω, let (X_n, n ≥ 0) be a random walk on T with transition probabilities ω, i.e., a T-valued Markov chain, started at X_0 = ∅, such that

P_ω{X_{n+1} = y | X_n = x} = ω(x, y).

For any vertex x ∈ T\{∅}, let ←x be its parent, and let (x^{(1)}, ..., x^{(N(x))}) be its children, where N(x) ≥ 0 is the number of children of x. Define A(x) := (A_i(x), 1 ≤ i ≤ N(x)) by

(1.1)    A_i(x) := ω(x, x^{(i)}) / ω(x, ←x),    1 ≤ i ≤ N(x).

A special example is when A_i(x) = λ for all x ∈ T\{∅} and all 1 ≤ i ≤ N(x), where λ is a finite and positive constant: the random walk (X_n) is then the λ-biased random walk on T introduced and studied in depth by Lyons [26]-[27] and Lyons, Pemantle and Peres [31]-[32]. In particular, if A_i(x) = 1 for all x and all i, we get the simple random walk on T.

It is known that when the transition probabilities are random (the resulting random walk (X_n) is then a random walk in random environment), the walk possesses a regime of slow movement. This slow movement is the subject of the present paper.

In the language of Neveu [36], (T, ω) is a marked tree. Note that A(x), x ∈ T\{∅}, depends entirely on the marked tree. We assume, from now on, that A(x), x ∈ T\{∅}, are i.i.d., and write A = (A_1, ..., A_N) for a generic random vector having the law of A(x) (for any x ∈ T\{∅}). We mention that the dimension N ≥ 0 of A is random, and is governed by the reproduction law of T. We use P to denote the probability with respect to the environment, and ℙ := P ⊗ P_ω the annealed probability, i.e., ℙ(·) := ∫ P_ω(·) P(dω).

Throughout the paper, we assume

(1.2)    E( Σ_{i=1}^N A_i ) = 1,    E( Σ_{i=1}^N A_i log A_i ) = 0.

In the language of branching random walks (see Section 2), (1.2) refers to the "boundary case"; in this case, the biased walks produce some unusual phenomena that are still not well understood.
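To fix ideas, here is one concrete environment in the boundary case; it is our own illustrative choice, not an example taken from the paper or the literature. Let every vertex have exactly two children (N ≡ 2) and let A_1 = e^{−V_1}, A_2 = e^{−V_2}, where V_1, V_2 are i.i.d. Gaussian N(μ, s²) with μ = s² = 2 log 2. Standard Gaussian computations give

E( Σ_{i=1}^2 A_i ) = 2 e^{−μ+s²/2} = 2 e^{−log 2} = 1,    E( Σ_{i=1}^2 A_i log A_i ) = −2 (μ − s²) e^{−μ+s²/2} = 0,

so (1.2) holds; the same computations show that this environment also satisfies, for every δ > 0, the integrability condition (1.3) introduced below, with the constant σ² of (1.4) equal to 2 log 2. This toy environment is reused in the illustrative sketches that follow.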
We also assume the following integrability condition: there exists δ > 0 such that

(1.3)    E( Σ_{i=1}^N A_i^{1+δ} ) + E( Σ_{i=1}^N A_i^{−δ} ) + E( N^{1+δ} ) < ∞.

Lyons and Pemantle [29] established a recurrence vs. transience criterion for random walks on general trees; applied to the special setting of Galton–Watson trees, it says that (1.2) ensures that the biased walk (X_n) is P-a.s. recurrent. Menshikov and Petritis [35] gave another proof of the recurrence by means of Mandelbrot's multiplicative cascades, assuming some additional integrability condition. The proofs of the recurrence in both [29] and [35] required an extra exchangeability assumption on (A_1, ..., A_N), which turned out to be superfluous, and was removed by Faraud [15], who furthermore proved that (X_n) is null recurrent under (1.2).

Introduced by Lyons and Pemantle [29] as an extension of the deterministically biased walks studied in Lyons [26]-[27], randomly biased walks on trees have received much research interest. Deep results were obtained by Lyons, Pemantle and Peres [31] and [32], who also raised further open problems. Often motivated by these results and problems, both the transient regimes ([1], [2]) and the recurrent regimes ([6], [7], [15], [16], [18], [19]) have been under intensive study for these walks. For a general account of biased walks on trees, we refer to [33], [37] and [41].

We add a special vertex, denoted by ←∅, which is the parent of ∅, and assume that (ω(∅, y), |y| = 1 or y = ←∅) is independent of the other random vectors (ω(x, y), y ∼ x), x ∈ T\{∅}, and has the same distribution as any of these random vectors; whenever the biased walk (X_i) hits ←∅, it comes back to ∅ in the next step. [However, ←∅ is not considered as a vertex of T; so, for example, Σ_{x∈T} f(x) does not contain the term f(←∅).] This makes the presentation of our model more pleasant, since the family of i.i.d. random vectors A(x) also includes the element A(∅) from now on.

It was proved in [16] that under (1.2) and (1.3), almost surely upon the survival of the system,

lim_{n→∞} (1/(log n)³) max_{0≤i≤n} |X_i| = 8/(3π²σ²),

where

(1.4)    σ² := E( Σ_{i=1}^N A_i (log A_i)² ) ∈ (0, ∞).

We are interested in the typical size of |X_n|; a natural question is to find a deterministic sequence a_n → ∞ such that |X_n|/a_n converges in law to some non-degenerate limit. In dimension 1 (which would be an informal analogue of the case N(x) = 1 for all x), the slow movement was discovered by Sinai [42], who showed that X_n/(log n)² converges weakly to a non-degenerate limit under the annealed measure. More precisely, Sinai [42] developed the seminal "method of valleys" to localize the walk around the bottom of a certain Brownian valley with high probability. This method, however, seems hopeless to adapt directly to the biased walk on trees. Observe that, in terms of the invariant measure, we can interpret Sinai's method of valleys as the approximation of the law of the walk by a certain invariant measure whose mass is concentrated in the neighbourhood of the bottom. Our main result, stated as Theorem 2.1 below, asserts that upon the survival of the system, the (quenched) finite-dimensional distributions of the biased walk can be approximated by the product measure of some invariant probability measures.
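Before giving precise statements, here is a minimal simulation sketch of the walk whose slow movement is being discussed. It is our own illustration, using the toy environment of the example above; it is not used anywhere in the paper, and all function names and numerical choices in it are ours.

```python
import math
import random

# Toy environment (see the example after (1.2)): every vertex has two
# children, and A_i = exp(-V_i) with V_1, V_2 i.i.d. N(2 log 2, 2 log 2).
MU = 2.0 * math.log(2.0)                    # mean of -log A_i
SIGMA = math.sqrt(2.0 * math.log(2.0))      # standard deviation of -log A_i

def weight_vector():
    """Sample the weight vector A(x) = (A_1(x), A_2(x)) of a fresh vertex."""
    return [math.exp(-random.gauss(MU, SIGMA)) for _ in range(2)]

def one_step(x, env):
    """One move of the walk from vertex x, a tuple encoding the path from the
    root ().  By (1.1), the transition probabilities from x are proportional
    to 1 for the parent and to A_i(x) for the i-th child.  The dictionary env
    caches A(x) for every visited vertex, so the environment is built lazily."""
    if x not in env:
        env[x] = weight_vector()
    A = env[x]
    moves = [x[:-1] if x else None] + [x + (i,) for i in range(len(A))]
    return random.choices(moves, weights=[1.0] + A)[0]

def depths(n):
    """|X_i| for i = 0, ..., n.  The parent of the root is encoded as None,
    recorded here as depth 0, and is followed by a forced return to the root."""
    env, x, out = {}, (), [0]
    for _ in range(n):
        x = () if x is None else one_step(x, env)
        out.append(0 if x is None else len(x))
    return out

if __name__ == "__main__":
    random.seed(0)
    n = 100000
    walk = depths(n)
    # the maximal displacement grows like (log n)^3 and the typical one like
    # (log n)^2; at such small n the limiting constants are of course not visible
    print("max |X_i| =", max(walk), "  |X_n| =", walk[-1],
          "  (log n)^3 =", round(math.log(n) ** 3),
          "  (log n)^2 =", round(math.log(n) ** 2))
```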
A consequence of this result is that under (1.2) and (1.3), for all x > 0,

lim_{n→∞} ℙ( σ²|X_n|/(log n)² ≤ x | survival ) = ∫_0^x (2πy)^{−1/2} P( η ≤ y^{−1/2} ) dy,

where σ² is the constant in (1.4), and η := sup_{s∈[0,1]} [ m̄(s) − m(s) ]. Here, (m(s), s ∈ [0, 1]) is a standard Brownian meander, and m̄(s) := sup_{u∈[0,s]} m(u). [Recall that the standard Brownian meander can be realized as m(s) := |B(g + s(1−g))| / (1−g)^{1/2}, s ∈ [0, 1], where (B(t), t ∈ [0, 1]) is a standard Brownian motion and g := sup{t ≤ 1 : B(t) = 0}.]

We mention that ∫_0^∞ (2πy)^{−1/2} P( η ≤ y^{−1/2} ) dy = 1 because E(1/η) = (π/2)^{1/2}; see [21].

In the next section, we give a precise statement of Theorem 2.1, as well as an outline of its proof.

2 Random potential, and statement of results

The movement of the biased random walk (X_n) depends strongly on the random environment ω. It turns out to be more convenient to quantify the influence of the random environment via the random potential, which we define by V(∅) := 0 and

(2.1)    V(x) := − Σ_{y∈]]∅,x]]} log( ω(←y, y) / ω(←y, ⇐y) ),    x ∈ T\{∅},

where ⇐y is the parent of ←y, and ]]∅, x]] := [[∅, x]]\{∅}, with [[∅, x]] denoting the set of vertices (including x and ∅) on the unique shortest path connecting ∅ to x. Throughout the paper, we use x_i (for 0 ≤ i ≤ |x|) to denote the ancestor of x in the i-th generation; in particular, x_0 = ∅ and x_{|x|} = x. As such, the potential V in (2.1) can also be written as

V(x) = − Σ_{i=0}^{|x|−1} log( ω(x_i, x_{i+1}) / ω(x_i, x_{i−1}) ),    x ∈ T\{∅},    (x_{−1} := ←∅).

The random potential process (V(x), x ∈ T) is a branching random walk, in the usual sense of Biggins [9]. There exists an obvious bijection between the random environment ω and the random potential V.

In terms of the random potential, assumptions (1.2) and (1.3) become, respectively,

(2.2)    E( Σ_{x:|x|=1} e^{−V(x)} ) = 1,    E( Σ_{x:|x|=1} V(x) e^{−V(x)} ) = 0,

and

(2.3)    E( Σ_{x:|x|=1} e^{−(1+δ)V(x)} ) + E( Σ_{x:|x|=1} e^{δV(x)} ) + E[ ( Σ_{x:|x|=1} 1 )^{1+δ} ] < ∞.

We refer from now on to (2.2) or (2.3) instead of to (1.2) or (1.3). In the language of branching random walks, (2.2) corresponds to the "boundary case" (Biggins and Kyprianou [12]). The branching random walk in this case is known, under some additional integrability assumptions, to have some highly non-trivial universality properties.

We are often interested in properties upon the system's non-extinction, so let us introduce

P*(·) := P(· | non-extinction),    ℙ*(·) := ℙ(· | non-extinction).

Let us define a symmetrized version of the potential:

(2.4)    U(x) := V(x) − log( 1/ω(x, ←x) ),    x ∈ T.

We call U the symmetrized potential, and frequently use the following relation between U and V:

(2.5)    e^{−U(x)} = (1/ω(x, ←x)) e^{−V(x)} = e^{−V(x)} + Σ_{y∈T: ←y=x} e^{−V(y)},    x ∈ T.

We now introduce a pair of fundamental martingales associated with the potential V. Assumption (2.2) immediately implies that (W_n, n ≥ 0) and (D_n, n ≥ 0) are martingales under P, where

(2.6)    W_n := Σ_{x:|x|=n} e^{−V(x)},

(2.7)    D_n := Σ_{x:|x|=n} V(x) e^{−V(x)},    n ≥ 0.

In the literature, (W_n) is referred to as an additive martingale, and (D_n) as a derivative martingale. Since (W_n) is a non-negative martingale, it converges P-a.s. to a finite limit; under assumption (2.2), this limit is known (Biggins [10], Lyons [28]) to be 0:

(2.8)    W_n → 0,    P*-a.s.

[We will see in (4.2) the rate of convergence.] In view of (2.5), this yields

(2.9)    inf_{x:|x|=n} U(x) → ∞,    P*-a.s.
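For the reader's convenience, here is the one-step computation behind the martingale property of (W_n) and (D_n) used above; we write G_n for the sigma-field generated by the potential of the first n generations (a notation used only in this paragraph). Conditionally on G_n, the displacements (V(y) − V(x), ←y = x) attached to distinct vertices x of generation n are i.i.d. copies of (V(z), |z| = 1), so the first identity in (2.2) gives

E( W_{n+1} | G_n ) = Σ_{x:|x|=n} e^{−V(x)} E( Σ_{y:←y=x} e^{−(V(y)−V(x))} | G_n ) = Σ_{x:|x|=n} e^{−V(x)} = W_n,

while writing V(y) = V(x) + (V(y) − V(x)) and using both identities in (2.2) gives

E( D_{n+1} | G_n ) = Σ_{x:|x|=n} e^{−V(x)} E( Σ_{y:←y=x} [ V(x) + (V(y) − V(x)) ] e^{−(V(y)−V(x))} | G_n ) = Σ_{x:|x|=n} V(x) e^{−V(x)} = D_n.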
For the derivative martingale (D_n), it is known (Biggins and Kyprianou [11], Aïdékon [4]) that (2.3) is "slightly more than" sufficient to ensure that D_n converges P-a.s. to a limit, denoted by D_∞, and that

D_∞ > 0,    P*-a.s.

For an optimal condition (of L log L-type) for the positivity of D_∞, see the recent work of Chen [14]. The two martingales (D_n) and (W_n) are asymptotically related; see Section 4.

The basic idea is to add a reflecting barrier at (notation: ]]∅, x[[ := ]]∅, x]]\{x})

(2.10)    L_r := { x : Σ_{z∈]]∅,x]]} e^{V(z)−V(x)} > r and Σ_{z∈]]∅,y]]} e^{V(z)−V(y)} ≤ r for all y ∈ ]]∅, x[[ },

where r > 1 is a parameter. [That is, each time the biased walk (X_i) hits a vertex x ∈ L_r, it moves back to ←x in the next step.] We mention that L_r does not necessarily separate ∅ from infinity: our assumptions (2.2) and (2.3) do not exclude the existence of r > 1 and a sequence of vertices x_0 := ∅ < x_1 < x_2 < ··· with |x_i| = i, i ≥ 0, such that Σ_{i=1}^n e^{V(x_i)−V(x_n)} ≤ r for all n ≥ 1.

If r = r(n) := n/(log n)^γ with γ < 1, then we will see from Lemma 5.1 that with ℙ*-probability going to 1 (as n → ∞), the biased walk does not hit any vertex of L_r in the first n steps. [Actually γ < 2 will do the job, by Theorem 2.8. However, in Section 6, when we start proving our main results, only Lemma 5.1 is available, which says that γ < 1 suffices; the proof of Theorem 2.8 comes afterwards, in Section 7.] As such, it makes no significant difference if we add a reflecting barrier at L_r. An advantage of the presence of the reflecting barrier at L_r, for any r > 1, is that the biased walk becomes positive recurrent under the quenched probability P_ω, and its invariant probability π_r is as follows: π_r(←∅) := 1/Z_r, and for x ∈ T,

(2.11)    π_r(x) := (1/Z_r) e^{−U(x)} if x < L_r,    π_r(x) := (1/Z_r) e^{−V(x)} if x ∈ L_r,

where Z_r is the normalizing factor [by x < L_r, we mean Σ_{z∈]]∅,y]]} e^{V(z)−V(y)} ≤ r for every vertex y ∈ ]]∅, x]] ]:

(2.12)    Z_r := 1 + Σ_{x∈T: x<L_r} e^{−U(x)} + Σ_{x∈L_r} e^{−V(x)}.

We extend the definition of π_r to the whole tree T by letting π_r(x) := 0 if neither x < L_r nor x ∈ L_r.

Due to the periodicity of the walk (X_i), we divide the tree T into T^(even) and T^(odd), with

T^(even) := {x ∈ T : |x| is even},    T^(odd) := {x ∈ T : |x| is odd}.

Depending on the parity of n, the law of X_n (starting from ∅) is supported either by T^(even) or by T^(odd) ∪ {←∅}. Note that π_r(T^(even)) = π_r(T^(odd) ∪ {←∅}) = 1/2, as π_r(·) is the invariant probability measure of a finite Markov chain of period 2. We define a new probability measure: for any r > 1,

(2.13)    π̃_r(·) := 2 π_r(·) 1_{T^(even)}(·) if ⌊r⌋ is even,    π̃_r(·) := 2 π_r(·) 1_{T^(odd)∪{←∅}}(·) if ⌊r⌋ is odd.

For any pair of probability measures μ and ν on T ∪ {←∅}, we denote by d_tv(μ, ν) the distance in total variation:

d_tv(μ, ν) := (1/2) Σ_{x∈T∪{←∅}} |μ(x) − ν(x)|.

The main result of the paper is as follows.

Theorem 2.1. Assume (2.2) and (2.3). Then

d_tv( P_ω{X_n ∈ •}, π̃_n ) → 0,    in P*-probability.

More generally, for any κ ≥ 1 and 0 < t_1 < t_2 < ··· < t_κ ≤ 1,

d_tv( P_ω{(X_{⌊t_1 n⌋}, ···, X_{⌊t_κ n⌋}) ∈ •}, ⊗_{i=1}^κ π̃_{t_i n} ) → 0,    in P*-probability.

As such, X_{⌊t_i n⌋}, 1 ≤ i ≤ κ, are asymptotically independent under P_ω. In particular, no aging phenomenon is possible in the scale of linear time.

Let us mention that in Theorem 2.1, the dependence of π̃_{t_i n} on t_i is rather weak. As Lemma 2.2 below shows, d_tv(π_{t_i n}, π_n) → 0 in P*-probability, so asymptotically, the influence of t_i on π̃_{t_i n} shows up only via the parity of ⌊t_i n⌋.
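The measure π_r just defined is the object around which Theorem 2.1 is built. As an elementary sanity check (ours, not part of the paper's argument), the sketch below samples the toy environment used earlier, builds the walk reflected at L_r together with π_r from (2.11)-(2.12), and verifies numerically that π_r is invariant. The helper name and the small depth cap are our own choices; the cap is imposed because, as noted below (2.10), L_r need not separate ∅ from infinity, and reflecting at the cap as well does not affect the invariance being checked.

```python
import math
import random

MU = 2.0 * math.log(2.0)                    # toy environment: two children,
SIGMA = math.sqrt(2.0 * math.log(2.0))      # increments V i.i.d. N(MU, SIGMA^2)

def check_invariance(r, max_depth=10):
    """Return (number of vertices kept, total-variation gap between pi_r and
    pi_r P) for one sampled environment, the walk being reflected at L_r (and,
    artificially, at depth max_depth)."""
    # each node is [V, H, depth, parent index, A or None], where
    # H = sum over z in ]]root, x]] of exp(V(z) - V(x)); node 0 is the root
    nodes = [[0.0, 0.0, 0, None, None]]
    i = 0
    while i < len(nodes):
        V, H, depth, parent, _ = nodes[i]
        if H <= r and depth < max_depth:        # strictly below the barrier: expand
            A = [math.exp(-random.gauss(MU, SIGMA)) for _ in range(2)]
            nodes[i][4] = A
            for a in A:                          # child: V' = V - log a, H' = 1 + H a
                nodes.append([V - math.log(a), H * a + 1.0, depth + 1, i, None])
        i += 1
    n = len(nodes)                               # extra state n: the parent of the root
    children = [[] for _ in range(n)]
    for j in range(1, n):
        children[nodes[j][3]].append(j)
    pi = [0.0] * (n + 1)
    trans = [[] for _ in range(n + 1)]           # lists of (target, probability)
    for j, (V, H, depth, parent, A) in enumerate(nodes):
        par = parent if parent is not None else n
        if A is None:                            # vertex of the barrier: reflect
            pi[j] = math.exp(-V)                 # e^{-V(x)}, second case of (2.11)
            trans[j].append((par, 1.0))
        else:
            s = 1.0 + sum(A)                     # s = 1 / omega(x, parent of x)
            pi[j] = math.exp(-V) * s             # e^{-U(x)}, by (2.5)
            trans[j].append((par, 1.0 / s))
            for a, c in zip(A, children[j]):
                trans[j].append((c, a / s))
    pi[n] = 1.0                                  # pi_r(parent of root) = 1/Z_r
    trans[n].append((0, 1.0))                    # forced return to the root
    Z = sum(pi)
    pi = [p / Z for p in pi]
    flow = [0.0] * (n + 1)                       # (pi_r P)(v)
    for u in range(n + 1):
        for v, p in trans[u]:
            flow[v] += pi[u] * p
    return n, 0.5 * sum(abs(flow[v] - pi[v]) for v in range(n + 1))

if __name__ == "__main__":
    random.seed(2024)
    print(check_invariance(5.0))                 # the gap should be of order 1e-16
```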
Lemma 2.2. For any a ≥ 0, as r → ∞,

sup_{u∈[r/(log r)^a, r]} d_tv(π_r, π_u) → 0,    in P*-probability.

Theorem 2.1 has the following interesting consequence concerning the distance between X_n and ∅.

Corollary 2.3. Assume (2.2) and (2.3). Fix κ ≥ 1 and 0 < t_1 < t_2 < ··· < t_κ ≤ 1. Under ℙ*, (σ²/(log n)²) |X_{⌊t_i n⌋}|, 1 ≤ i ≤ κ, are asymptotically independent and converge in law to a common non-degenerate limit on (0, ∞) whose density is given by

(2πx)^{−1/2} P( η ≤ x^{−1/2} ) 1_{{x>0}},

where σ² ∈ (0, ∞) is the constant in (1.4), and η := sup_{s∈[0,1]} [ m̄(s) − m(s) ]. Here, (m(s), s ∈ [0, 1]) is a standard Brownian meander, and m̄(s) := sup_{u∈[0,s]} m(u).

The distribution of η is easily seen to be absolutely continuous (Section 4), and can be characterised using a result of Lehoczky [25]. For more discussion, see [21]. Very recently, Pitman [38] has succeeded in determining the law of η, using a relation between the Brownian meander and the Brownian bridge established by Biane and Yor [8]: η has the Kolmogorov–Smirnov distribution:

P(η ≤ x) = Σ_{k=−∞}^{∞} (−1)^k e^{−2k²x²} = ((2π)^{1/2}/x) Σ_{j=0}^{∞} exp( −(2j+1)²π²/(8x²) ),    x > 0.

Theorem 2.1 is proved by means of two intermediate estimates, stated below as Propositions 2.4 and 2.5. The first proposition estimates the local time at the root ∅, whereas the second concerns the local limit probability of the biased walk. For any vertex x ∈ T, let us define

(2.14)    L_n(x) := Σ_{i=1}^n 1_{{X_i=x}},    n ≥ 1,

which is the (site) local time of the biased walk at x.

Proposition 2.4. Assume (2.2) and (2.3). For any ε > 0,

(2.15)    P_ω{ | L_n(∅)/(n/log n) − (σ²/(4D_∞)) e^{−U(∅)} | > ε } → 0,    in P*-probability.

Moreover,

(2.16)    E_ω( L_n(∅) )/(n/log n) → (σ²/(4D_∞)) e^{−U(∅)},    in P*-probability.

Proposition 2.5. Assume (2.2) and (2.3). As n → ∞ along even numbers,

(log n) P_ω(X_n = ∅) → (σ²/(2D_∞)) e^{−U(∅)},    in P*-probability.

We now say a few words about the proof. It turns out that the partition function Z_r has a simpler expression. Let L_r be as in (2.10). Define

(2.17)    Y_r := Σ_{x∈T: x≤L_r} e^{−V(x)},

with the obvious notation x ≤ L_r meaning x < L_r or x ∈ L_r.

Lemma 2.6. Let Y_r and Z_r be as in (2.17) and (2.12), respectively. Then Z_r = 2Y_r for all r > 1.

Proof. If x ∈ T is such that x < L_r, we have, by (2.5),

Z_r π_r(x) = e^{−U(x)} = e^{−V(x)} + Σ_{y∈T: ←y=x} e^{−V(y)}.

Therefore,

Σ_{x<L_r} Z_r π_r(x) = Σ_{x<L_r} e^{−V(x)} + Σ_{x<L_r} Σ_{y∈T: ←y=x} e^{−V(y)} = Σ_{x<L_r} e^{−V(x)} + Σ_{y∈T: ∅<y≤L_r} e^{−V(y)},

which is Σ_{x<L_r} e^{−V(x)} + Σ_{y≤L_r} e^{−V(y)} − e^{−V(∅)}. Hence

Σ_{x<L_r} Z_r π_r(x) = 2 Σ_{x≤L_r} e^{−V(x)} − Σ_{x∈L_r} e^{−V(x)} − 1.

Since π_r is a probability measure, we have π_r(←∅) + Σ_{x<L_r} π_r(x) + Σ_{x∈L_r} π_r(x) = 1, so

Z_r = Z_r π_r(←∅) + Σ_{x<L_r} Z_r π_r(x) + Σ_{x∈L_r} Z_r π_r(x) = 1 + [ 2 Σ_{x≤L_r} e^{−V(x)} − Σ_{x∈L_r} e^{−V(x)} − 1 ] + Σ_{x∈L_r} e^{−V(x)},

which is 2 Σ_{x≤L_r} e^{−V(x)}. Lemma 2.6 is proved. □

So Y_r is half the partition function under the invariant measure. The following theorem, which plays an important role in the proof of Proposition 2.4 and Theorem 2.1, describes the asymptotics of Y_r.

Theorem 2.7. Assume (2.2) and (2.3). Let Y_r be as in (2.17). We have

lim_{r→∞} Y_r/log r = (2/σ²) D_∞,    in P*-probability,

where σ² ∈ (0, ∞) is the constant in (1.4), and D_∞ is the P*-almost surely positive limit of the derivative martingale (D_n) in (2.7). As a consequence,

lim_{r→∞} Z_r/log r = (4/σ²) D_∞,    lim_{r→∞} (log r) π_r(∅) = (σ²/(4D_∞)) e^{−U(∅)},    in P*-probability.

Finally, the following general estimate allows us to justify the presence of a barrier at L_r.

Theorem 2.8. Assume (2.2) and (2.3).
Let (a_n) be a deterministic sequence of positive real numbers such that lim_{n→∞} a_n/(log n)² = 0. Then

lim_{n→∞} ℙ( ∪_{i=1}^n {X_i ∈ L_{r_n}} ) = 0,

where r_n := n/a_n.

The rest of the paper is organized as follows:

• Section 3, environment: preliminaries on branching random walks.
• Section 4, environment: proof of Theorem 2.7.
• Section 5, biased walk: preliminaries on hitting barriers and local times.
• Section 6, biased walk: proof of Proposition 2.4.
• Section 7, biased walk: proof of Theorem 2.8.
• Section 8, biased walk: proof of Proposition 2.5.
• Section 9, biased walk: proofs of Lemma 2.2, Theorem 2.1 and Corollary 2.3.

Some comments on the organization are in order. In the next two sections, we study the behaviour of the random environment, starting in Section 3 by recalling some known