Component sizes of the random graph outside the scaling window

Asaf Nachmias and Yuval Peres*

February 2, 2008

*Microsoft Research and U.C. Berkeley. Research of both authors supported in part by NSF grants #DMS-0244479 and #DMS-0104073.

Abstract

We provide simple proofs describing the behavior of the largest component of the Erdős–Rényi random graph $G(n,p)$ outside of the scaling window, $p = \frac{1 \pm \epsilon(n)}{n}$ where $\epsilon(n) \to 0$ but $\epsilon(n) n^{1/3} \to \infty$.

1 Introduction

Consider the random graph $G(n,p)$ obtained from the complete graph on $n$ vertices by retaining each edge with probability $p$ and deleting each edge with probability $1-p$. We denote by $\mathcal{C}_j$ the $j$-th largest component. Let $\epsilon(n)$ be a non-negative sequence such that $\epsilon(n) \to 0$ and $\epsilon(n) n^{1/3} \to \infty$. The following theorems, proved by Bollobás [4] and Łuczak [8] using different methods, describe the behavior of the largest components when $p$ is outside the "scaling window".

Theorem 1 [Subcritical phase] If $p(n) = \frac{1-\epsilon(n)}{n}$ then for any $\eta > 0$ and integer $\ell > 0$ we have
\[ \mathbb{P}\left( \left| \frac{|\mathcal{C}_\ell|}{2\epsilon(n)^{-2}\log(n\epsilon(n)^3)} - 1 \right| > \eta \right) \to 0, \]
as $n \to \infty$.

Theorem 2 [Supercritical phase] If $p(n) = \frac{1+\epsilon(n)}{n}$ then for any $\eta > 0$ we have
\[ \mathbb{P}\left( \left| \frac{|\mathcal{C}_1|}{2n\epsilon(n)} - 1 \right| > \eta \right) \to 0, \]
and for any integer $\ell > 1$ we have
\[ \mathbb{P}\left( \left| \frac{|\mathcal{C}_\ell|}{2\epsilon^{-2}(n)\log(n\epsilon^3)} - 1 \right| > \eta \right) \to 0, \]
as $n \to \infty$.

The proofs of these theorems in [4] and [8] are quite involved and use the detailed asymptotics from [14], [4] and [3] for the number of graphs on $k$ vertices with $k + \ell$ edges. The proofs we present here are simple and require no hard theorems. The main advantage, however, of these proofs is their robustness. In a companion paper [12] we use similar methods to analyze critical percolation on random regular graphs.
In this case, the enumerative methods employed in [4] and [8] are not available.

The phase transition in the Erdős–Rényi random graph $G(n,p)$ happens when $p = \frac{c}{n}$. Namely, with high probability, if $c > 1$ then $|\mathcal{C}_1|$ is linear in $n$, and if $c < 1$ then $|\mathcal{C}_1|$ is logarithmic in $n$. When $c \sim 1$ the situation is more delicate. In [9], Łuczak, Pittel and Wierman prove that for $p = \frac{1+\lambda n^{-1/3}}{n}$, the law of $n^{-2/3}|\mathcal{C}_1|$ converges to a positive non-constant distribution, which in [1] is identified as the longest excursion length of some Brownian motion with variable drift. See [11] for a recent account of the case $p = \frac{1+\lambda n^{-1/3}}{n}$ with simple proofs. Thus, $|\mathcal{C}_1|$ is not concentrated and is roughly of size $n^{2/3}$ if $p = \frac{1+\lambda n^{-1/3}}{n}$. However, if $\epsilon(n)$ is a sequence such that $n^{1/3}\epsilon(n) \to \infty$ and $p = \frac{1 \pm \epsilon(n)}{n}$ then, as stated in Theorems 1 and 2, the size $|\mathcal{C}_1|$ of the largest component in $G(n,p)$ is concentrated. In summary, $G(n,p)$ has a scaling window of length $n^{-1/3}$ in which the percolation is "critical" in the sense that $|\mathcal{C}_1|$ is not concentrated.

2 The exploration process

We recall an exploration process, due to Karp and Martin-Löf (see [7] and [10]), in which vertices will be either active, explored or neutral. After the completion of step $t \in \{0,1,\ldots,n\}$ we will have precisely $t$ explored vertices, and the numbers of active and neutral vertices are denoted by $A_t$ and $N_t$ respectively.

Fix an ordering $\{v_1,\ldots,v_n\}$ of the vertices. In step $t = 0$ of the process, we declare vertex $v_1$ active and all other vertices neutral. Thus $A_0 = 1$ and $N_0 = n-1$. In step $t \in \{1,\ldots,n\}$, if $A_{t-1} > 0$ let $w_t$ be the first active vertex; if $A_{t-1} = 0$, let $w_t$ be the first neutral vertex. Denote by $\eta_t$ the number of neutral neighbors of $w_t$ in $G(n,p)$, and change the status of these vertices to active. Then, set $w_t$ itself explored.

Denote by $\mathcal{F}_t$ the $\sigma$-algebra generated by $\{\eta_1,\ldots,\eta_t\}$.
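The exploration just described lends itself to direct simulation. The sketch below (function and variable names are ours, chosen for illustration and not taken from the paper) samples $G(n,p)$ explicitly, runs the process, and reads off the component sizes as the gaps between consecutive times at which $A_t = 0$:

```python
import random

def explore(n, p, seed=None):
    """Run the Karp / Martin-Lof exploration on a sample of G(n,p).

    Returns (A, N, sizes): the sequences A_t, N_t for t = 0..n, and the
    component sizes, read off from the times at which A_t hits 0.
    """
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]          # sample G(n,p) edge by edge
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)

    NEUTRAL, ACTIVE, EXPLORED = 0, 1, 2
    status = [NEUTRAL] * n
    status[0] = ACTIVE                       # step t = 0: v_1 is active
    A, N = [1], [n - 1]
    sizes, last = [], 0
    for t in range(1, n + 1):
        want = ACTIVE if A[-1] > 0 else NEUTRAL
        w = next(v for v in range(n) if status[v] == want)
        newly = [v for v in adj[w] if status[v] == NEUTRAL]
        for v in newly:                      # neutral neighbours turn active
            status[v] = ACTIVE
        status[w] = EXPLORED                 # w itself is now explored
        eta = len(newly)
        A.append(A[-1] + eta - 1 if A[-1] > 0 else eta)
        N.append(n - t - A[-1])              # every vertex has some status
        if A[-1] == 0:                       # finished exploring a component
            sizes.append(t - last)
            last = t
    return A, N, sizes
```

Since exactly one vertex is explored at each step, the recorded sizes always sum to $n$, and after step $n$ both $A_n$ and $N_n$ are zero.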
Observe that given $\mathcal{F}_{t-1}$ the random variable $\eta_t$ is distributed as $\mathrm{Bin}(N_{t-1} - 1_{\{A_{t-1}=0\}},\, p)$, and we have the recursions
\[ N_t = N_{t-1} - \eta_t - 1_{\{A_{t-1}=0\}}, \qquad t \le n, \tag{1} \]
and
\[ A_t = \begin{cases} A_{t-1} + \eta_t - 1, & A_{t-1} > 0 \\ \eta_t, & A_{t-1} = 0, \end{cases} \qquad t \le n. \tag{2} \]
As every vertex is either neutral, active or explored,
\[ N_t = n - t - A_t, \qquad t \le n. \tag{3} \]
At each time $j \le n$ at which $A_j = 0$, we have finished exploring a connected component. Hence the random variable $Z_t$ defined by
\[ Z_t = \sum_{j=1}^{t-1} 1_{\{A_j = 0\}} \]
counts the number of components completely explored by the process before time $t$. Define the process $\{Y_t\}$ by $Y_0 = 1$ and
\[ Y_t = Y_{t-1} + \eta_t - 1. \]
By (2) we have that $Y_t = A_t - Z_t$, i.e. $Y_t$ counts the number of active vertices at step $t$ minus the number of components completely explored before step $t$. At each step we marked as explored precisely one vertex. Hence, the component of $v_1$ has size $\min\{t \ge 1 : A_t = 0\}$. Moreover, let $t_1 < t_2 < \ldots$ be the times at which $A_{t_j} = 0$; then $(t_1,\, t_2 - t_1,\, t_3 - t_2, \ldots)$ are the sizes of the components. Observe that $Z_t = Z_{t_j} + 1$ for all $t \in \{t_j+1, \ldots, t_{j+1}\}$. Thus $Y_{t_{j+1}} = Y_{t_j} - 1$, and if $t \in \{t_j+1, \ldots, t_{j+1}-1\}$ then $A_t > 0$, and thus $Y_{t_{j+1}} < Y_t$. By induction we conclude that $A_t = 0$ if and only if $Y_t < Y_s$ for all $s < t$, i.e. $A_t = 0$ if and only if $\{Y_t\}$ has hit a new record minimum at time $t$. By induction we also observe that $Y_{t_j} = -(j-1)$ and that for $t \in \{t_j+1, \ldots, t_{j+1}\}$ we have $Z_t = j$. Also, by our previous discussion, for $t \in \{t_j+1, \ldots, t_{j+1}\}$ we have $\min_{s \le t-1} Y_s = Y_{t_j} = -(j-1)$, hence by induction we deduce that $Z_t = -\min_{s \le t-1} Y_s + 1$. Consequently,
\[ A_t = Y_t - \min_{s \le t-1} Y_s + 1. \tag{4} \]

Lemma 3 For all $p \le \frac{2}{n}$ there exists a constant $c > 0$ such that for any integer $t > 0$,
\[ \mathbb{P}\big( N_t \le n - 5t \big) \le e^{-ct}. \]

Proof. Let $\{\alpha_i\}_{i=1}^t$ be a sequence of i.i.d. random variables distributed as $\mathrm{Bin}(n,p)$. It is clear that we can couple $\eta_i$ and $\alpha_i$ so that $\eta_i \le \alpha_i$ for all $i$, and thus by (1)
\[ N_t \ge n - 1 - t - \sum_{i=1}^t \alpha_i. \tag{5} \]
The sum $\sum_{i=1}^t \alpha_i$ is distributed as $\mathrm{Bin}(nt, p)$ and $p \le \frac{2}{n}$, so by Large Deviations (see [2], section A.14) we get that for some fixed $c > 0$
\[ \mathbb{P}\Big( \sum_{i=1}^t \alpha_i \ge 3t \Big) \le e^{-ct}, \]
which together with (5) concludes the proof. □

3 The subcritical phase

Before beginning the proof of Theorem 1 we require some facts about processes with i.i.d. increments. Fix some small $\epsilon > 0$ and let $p = \frac{1-\epsilon}{m}$ for some integer $m > 1$. Let $\{\beta_j\}$ be a sequence of i.i.d. random variables distributed as $\mathrm{Bin}(m,p)$. Let $\{W_t\}_{t \ge 0}$ be a process defined by
\[ W_0 = 1, \qquad W_t = W_{t-1} + \beta_t - 1. \]
Let $\tau$ be the hitting time of $0$, that is, $\tau = \min\{t : W_t = 0\}$. By Wald's lemma we have that $\mathbb{E}\tau = \epsilon^{-1}$. Further information on the tail distribution of $\tau$ is given by the following lemma.

Lemma 4 There exist constants $C_1, C_2, c_1, c_2 > 0$ such that for all $T > \epsilon^{-2}$ we have
\[ \mathbb{P}(\tau \ge T) \le C_1\, \epsilon^{-2} T^{-3/2} e^{-(\frac{\epsilon^2}{2} - c_1\epsilon^3)T}, \]
and
\[ \mathbb{P}(\tau \ge T) \ge C_2\, \epsilon^{-2} T^{-3/2} e^{-(\frac{\epsilon^2}{2} + c_2\epsilon^3)T}. \]
Furthermore, $\mathbb{E}\tau^2 = O(\epsilon^{-3})$.

We will use the following proposition due to Spitzer (see [13]).

Proposition 5 Let $a_0, \ldots, a_{k-1} \in \mathbb{Z}$ satisfy $\sum_{i=0}^{k-1} a_i = -1$. Then there is precisely one $j \in \{0, \ldots, k-1\}$ such that for all $r \in \{0, \ldots, k-2\}$
\[ \sum_{i=0}^{r} a_{(j+i) \bmod k} \ge 0. \]

Proof of Lemma 4. By Proposition 5, $\mathbb{P}(\tau = t) = \frac{1}{t}\mathbb{P}(W_t = 0)$. As $\sum_{j=1}^t \beta_j$ is distributed as a $\mathrm{Bin}(mt, p)$ random variable, we have
\[ \mathbb{P}(W_t = 0) = \binom{mt}{t-1} p^{t-1} (1-p)^{mt-(t-1)}. \]
Replacing $t-1$ with $t$ in the above formula only changes it by a multiplicative constant which is always between $1/2$ and $2$. A straightforward computation using Stirling's approximation gives
\[ \mathbb{P}(W_t = 0) = \Theta\Big\{ t^{-1/2} (1-\epsilon)^t \Big[ \Big(1 + \frac{1}{m-1}\Big)\Big(1 - \frac{1-\epsilon}{m}\Big) \Big]^{t(m-1)} \Big\}. \tag{6} \]
Denote $q = (1-\epsilon)\big[ \big(1 + \frac{1}{m-1}\big)\big(1 - \frac{1-\epsilon}{m}\big) \big]^{m-1}$; then
\[ \mathbb{P}(\tau \ge T) = \sum_{t \ge T} \mathbb{P}(\tau = t) = \sum_{t \ge T} \frac{1}{t}\mathbb{P}(W_t = 0) = \Theta\Big( \sum_{t \ge T} t^{-3/2} q^t \Big). \]
This sum can be bounded above by
\[ T^{-3/2} \sum_{t \ge T} q^t = T^{-3/2}\, \frac{q^T}{1-q}, \]
and below by
\[ \sum_{t=T}^{2T} t^{-3/2} q^t \ge (2T)^{-3/2}\, \frac{q^T(1 - q^T)}{1-q}. \]
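The two elementary series bounds above are easy to check numerically. A minimal sketch (the helper name and the particular values of $q$, $T$ and the truncation point are ours, chosen only for illustration):

```python
def series_bounds(q, T, cutoff=100_000):
    """Compare S = sum_{t >= T} t^(-3/2) q^t (truncated at `cutoff`) with
    the upper bound T^(-3/2) q^T / (1-q) and the lower bound
    (2T)^(-3/2) q^T (1 - q^T) / (1-q) used above; 0 < q < 1, T >= 1."""
    S = sum(t ** -1.5 * q ** t for t in range(T, cutoff))
    upper = T ** -1.5 * q ** T / (1 - q)
    lower = (2 * T) ** -1.5 * q ** T * (1 - q ** T) / (1 - q)
    return lower, S, upper
```

For example, with $q = 0.99$ and $T = 100$ the sum is sandwiched between the two bounds, which differ only by the constant factor $2^{3/2}/(1-q^T)$, exactly the $\Theta(1)$ slack the proof tolerates.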
Observe that as $m \to \infty$ we have that $q$ tends to $(1-\epsilon)e^{\epsilon}$. By expanding $e^{\epsilon}$ we find that
\[ q = (1-\epsilon)\Big(1 + \epsilon + \frac{\epsilon^2}{2}\Big) + \Theta(\epsilon^3) = 1 - \frac{\epsilon^2}{2} + \Theta(\epsilon^3). \]
Using this and the previous bounds on $\mathbb{P}(\tau \ge T)$ we get the first assertion of the lemma. The second assertion follows from the following computation. By (6) we have that for some constant $C > 0$
\[ \mathbb{E}\tau^2 = \sum_{t \ge 1} t^2\, \mathbb{P}(\tau = t) = \sum_{t \ge 1} t\, \mathbb{P}(W_t = 0) \le C \sum_{t \ge 1} \sqrt{t}\, q^t. \]
Thus, by direct computation (or by [6], section XIII.5, Theorem 5)
\[ \mathbb{E}\tau^2 \le O\Big( \Big(\frac{1}{1-q}\Big)^{3/2} \Big) = O(\epsilon^{-3}). \]
□

Proof of Theorem 1. We begin with an upper bound. Recall that component sizes are $t_{j+1} - t_j$ for some $j > 0$, where the $t_j$ are record minima of the process $\{Y_t\}$. For a vertex $v$ denote by $C(v)$ the connected component of $G(n,p)$ which contains $v$. We first bound $\mathbb{P}(|C(v_1)| > T)$ where
\[ T = 2(1+\eta)\epsilon^{-2}\log(n\epsilon^3). \]
Recall that $|C(v_1)| = \min\{t : Y_t = 0\}$. Couple $\{Y_t\}$ with a process $\{W_t\}$ as in Lemma 4, which has increments distributed as $\mathrm{Bin}(n,p) - 1$, such that $Y_t \le W_t$ for all $t$. Define $\tau$ as in Lemma 4. As $p = \frac{1-\epsilon}{n}$ and $T > \epsilon^{-2}$, by Lemma 4 we have
\[ \mathbb{P}(\tau > T) \le C \epsilon\, (n\epsilon^3)^{-(1+\eta)(1-C_2\epsilon)} \log^{-3/2}(n\epsilon^3), \]
for some fixed $C, C_2 > 0$. Our coupling implies that $\mathbb{P}(|C(v_1)| > T) \le \mathbb{P}(\tau > T)$. Denote by $X$ the number of vertices $v$ such that $|C(v)| > T$. If $|\mathcal{C}_1| > T$ then $X > T$. Also, for any two vertices $v$ and $u$, by symmetry we have that $|C(v)|$ and $|C(u)|$ are identically distributed. We conclude that
\[ \mathbb{P}(|\mathcal{C}_1| > T) \le \mathbb{P}(X > T) \le \frac{\mathbb{E}X}{T} = \frac{n\,\mathbb{P}(|C(v_1)| > T)}{T} \le \frac{C n \epsilon\, (n\epsilon^3)^{-(1+\eta)(1-C_2\epsilon)} \log^{-3/2}(n\epsilon^3)}{2(1+\eta)\epsilon^{-2}\log(n\epsilon^3)} \le (n\epsilon^3)^{-\eta(1-C_2\epsilon)+C_2\epsilon} \to 0. \]
We now turn to prove a lower bound. Write
\[ T = 2(1-\eta)\epsilon^{-2}\log(n\epsilon^3), \]
and define the stopping time
\[ \gamma = \min\Big\{ t : N_t \le n - \frac{\eta\epsilon n}{8} \Big\}. \]
Recall that the $t_j$ are times at which $A_{t_j} = 0$ and also $Y_{t_j}$ is a record minimum for $\{Y_t\}$. For each integer $j$ let $\{W_t^{(j)}\}$ be a process with increments distributed as $\mathrm{Bin}(n - \frac{\eta\epsilon n}{8},\, p)$ whose starting point is $W_0^{(j)} = Y_{t_j} = -(j-1)$.
Note that if $t_{j+1} < \gamma$ then we can couple $\{Y_t\}$ and $\{W_t^{(j)}\}$ such that $Y_{t_j + t} \ge W_t^{(j)}$ for all $t_j + t \in [t_j, t_{j+1}]$. Define the stopping times $\tau_j$ by
\[ \tau_j = \min\{ t : W_t^{(j)} = -j \}. \]
Take
\[ N = \big\lceil \epsilon^{-1} (n\epsilon^3)^{1-\frac{\eta}{8}} \big\rceil. \]
We will prove that with high probability $t_N < \gamma$ and that there exist $k_1 < k_2 < \ldots < k_\ell < N$ such that $\tau_{k_i} > T$. Note that these two events imply that $|\mathcal{C}_\ell| > T$. Indeed, by Lemma 3 we have
\[ \mathbb{P}\Big( \gamma \le \frac{\eta\epsilon n}{40} \Big) \le e^{-c\epsilon n}. \tag{7} \]
By bounding the increments of $\{Y_t\}$ above by variables distributed as $\mathrm{Bin}(n,p) - 1$ we learn by Wald's Lemma (see [5]) that $\mathbb{E}[t_{j+1} - t_j] \le \epsilon^{-1}$, hence $\mathbb{E} t_N \le \epsilon^{-2}(n\epsilon^3)^{1-\frac{\eta}{8}}$. We conclude that
\[ \mathbb{P}\Big( t_N > \frac{\eta\epsilon n}{40} \Big) \le \frac{40\,\epsilon^{-2}(n\epsilon^3)^{1-\frac{\eta}{8}}}{\eta\epsilon n} = \frac{40}{\eta}(n\epsilon^3)^{-\frac{\eta}{8}}, \tag{8} \]
which goes to $0$ as $\epsilon n^{1/3}$ tends to $\infty$. In Lemma 4 take $m = n - \frac{\eta\epsilon n}{8}$ and note that
\[ p = \frac{(1-\epsilon)(1-\frac{\eta\epsilon}{8})}{m} \ge \frac{1-(1+\frac{\eta}{8})\epsilon}{m}, \]
and so Lemma 4 gives that for any $j$
\[ \mathbb{P}(\tau_j > T) \ge c_1 \epsilon\, (n\epsilon^3)^{-(1+\frac{\eta}{8})^2(1-\eta)(1+c_2\epsilon)} \log^{-3/2}(\epsilon^3 n) \ge \epsilon\,(n\epsilon^3)^{-(1-\frac{\eta}{4})}. \]
Let $X$ be the number of $j \le N$ such that $\tau_j > T$. Then we have
\[ \mathbb{E}X \ge N\epsilon\,(n\epsilon^3)^{-(1-\frac{\eta}{4})} \ge C(n\epsilon^3)^{\frac{\eta}{8}} \to \infty, \]
hence by Large Deviations (see [2], section A.14) for any fixed integer $\ell > 0$ we have
\[ \mathbb{P}(X < \ell) \le e^{-c(n\epsilon^3)^{\frac{\eta}{8}}}, \]
for some fixed $c > 0$. By our previous discussion, this together with (7) and (8) gives
\[ \mathbb{P}(|\mathcal{C}_\ell| < T) \le O\Big( \frac{(n\epsilon^3)^{-\frac{\eta}{8}}}{\eta} \Big). \]
□

4 The supercritical phase

In this section we denote $\xi_t = \eta_t - 1$. We first prove some lemmas.

Lemma 6 If $p = \frac{1+\epsilon}{n}$ then for all $t < 3\epsilon(n)n$,
\[ \mathbb{E}A_t = O(\epsilon t + \sqrt{t}), \tag{9} \]
and
\[ \mathbb{E}Z_t = O(\epsilon t + \sqrt{t}). \tag{10} \]

Proof. Write $T = 3\epsilon n$. We will use (4). First observe that as $\eta_t$ can always be bounded above by a $\mathrm{Bin}(n,p)$ random variable, we can bound $\mathbb{E}\xi_t \le \epsilon$ for all $t$ and hence $\mathbb{E}Y_t \le \epsilon t$. Denote by $\tau$ the stopping time $\tau = \min\{t : N_t \le n - 15\epsilon n\}$. By the definition of $\eta_t$ we have
\[ \mathbb{E}[\xi_t \mid \mathcal{F}_{t-1}] = pN_{t-1} - p1_{\{A_{t-1}=0\}} - 1. \]
As $\{N_t\}$ is a decreasing sequence, we deduce that as long as $t < \tau$ we have $\mathbb{E}[\xi_t \mid \mathcal{F}_{t-1}] > -D\epsilon$ for $D > 0$ large enough. Hence, the process $\{D\epsilon j - Y_j\}_{j=0}^{t \wedge \tau}$ is a submartingale for any $t$.
ByDoob’smaximalL2 inequality { − j}j=0 we have E[max(Dǫj Y )2] 4E[(Dǫ(t τ) Y )2]. (11) j t∧τ j≤t∧τ − ≤ ∧ − For any j < τ the random variable η can be stochastically bounded j from below by a Bin(n 15ǫn,p) random variable and above by a Bin(n,p) − random variable. Hence for any k < j < τ we have E[ξ Dǫ ] = O(ǫ), j k (cid:12) − |F (cid:12) (cid:12) (cid:12) and so (cid:12) (cid:12) E[(ξ Dǫ)(ξ Dǫ)] = O(ǫ2). j k − − 8 We conclude that as long as t < τ t t E[(Dǫt Y )2] 2 E[(ξ Dǫ)(ξ Dǫ)]+ E[(ξ Dǫ)2] = O(ǫ2t2+t). t j k j − ≤ − − − Xk<j Xj=1 Lemma 3 implies that for n large enough, 1 P N n 15ǫn e−3cǫn , (12) T ≤ − ≤ ≤ n2 (cid:16) (cid:17) and as N is a decreasing sequence we deduce that P(τ T) n−2. t { } ≤ ≤ Hence for any t T ≤ E[(Dǫt Y )2] E[(Dǫ(t τ) Y )21 ]+O(n2)P(t τ) t t∧τ {t<τ} − ≤ ∧ − ≥ = O(ǫ2t2+t). We deduce by (11) and Jensen inequality that for any t T ≤ E[min(Y Dǫj)] = O(ǫt+√t), j j≤t − hence E[min Y ] = O(ǫt+√t) and so by (4) we obtain (9). Inequality j≤t j (10) follows immediately from the relation Z = A Y . 2 t t t − Lemma 7 If p = 1+ǫ then for all t < 3ǫ(n)n n EN =n(1 p)t+O(ǫ2n), (13) t − and t Eξ =ǫ +O(ǫ2). (14) t − n Proof. Observe that by (1) we have that E[N ] = (1 p)N (1 p)1 . t |Ft−1 − t−1− − {At−1=0} By iterating this relation we get that EN = n(1 p)t + O(EZ ) which t t − by Lemma 6 yields (13) (observe that for t = 3ǫn we have ǫt > √t by our assumption on ǫ). Since E[ξ ] = pN p1 1, t | Ft−1 t−1− {At−1=0}− by taking expectations and using (13) we get 9 1+ǫ Eξ = (1+ǫ)(1 )t 1+O(ǫ2) t − n − t = (1+ǫ)(1 (1+ǫ)t/n) 1+O(ǫ2)= ǫ +O(ǫ2), − − − n where we used the fact that (1 x)t = 1 tx+O(t2x2). 2 − − Proof of Theorem 2. Write T = 3ǫn and ξ∗ = E[ξ ]. The process j j |Fj−1 t M = Y ξ∗, t t− j Xj=1 is a martingale. By Doob’s maximal L2 inequality we have that E(maxM2)) 4EM2. t≤T t ≤ T AsM hasorthogonalincrementswithboundedsecondmomentweconclude t that EM2 = O(T), hence, by Jensen’s inequality we have T t E max Y ξ∗ O(√T)= O(√ǫn). 
\[ \mathbb{E}\Big[ \max_{t \le T} \Big| Y_t - \sum_{j=1}^t \xi_j^* \Big| \Big] \le O(\sqrt{T}) = O(\sqrt{\epsilon n}). \tag{15} \]
As $\xi_j^* = pN_{j-1} - p1_{\{A_{j-1}=0\}} - 1$, by (3) we have
\[ \mathbb{E}\big| \xi_j^* - \mathbb{E}\xi_j \big| = p\,\mathbb{E}\big| A_{j-1} + 1_{\{A_{j-1}=0\}} - \mathbb{E}A_{j-1} - \mathbb{E}1_{\{A_{j-1}=0\}} \big|. \]
By the triangle inequality and Lemma 6 we conclude that for all $j \le T$
\[ \mathbb{E}\big| \xi_j^* - \mathbb{E}\xi_j \big| \le p \cdot O(\epsilon j + \sqrt{j}), \]
and hence for any $t \le T$
\[ \sum_{j \le t} \mathbb{E}\big| \xi_j^* - \mathbb{E}\xi_j \big| \le p \cdot O(\epsilon t^2 + t^{3/2}) \le O(\epsilon^3 n). \]
By the triangle inequality we get
\[ \mathbb{E}\Big[ \max_{t \le T} \Big| \sum_{j=1}^t (\xi_j^* - \mathbb{E}\xi_j) \Big| \Big] \le O(\epsilon^3 n). \tag{16} \]
Using the triangle inequality, (15), (16) and Markov's inequality gives
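Both theorems lend themselves to a quick numerical illustration. The sketch below (function names and parameter choices are ours, not from the paper) samples $G(n,p)$ exactly, by first drawing the edge count from $\mathrm{Bin}(\binom{n}{2}, p)$ and then choosing that many distinct pairs uniformly, and measures the largest component with a union-find:

```python
import numpy as np
from collections import Counter

def largest_component(n, p, seed=0):
    """Size of the largest component of a sample of G(n,p)."""
    rng = np.random.default_rng(seed)
    m = rng.binomial(n * (n - 1) // 2, p)    # number of edges of G(n,p)
    edges = set()
    while len(edges) < m:                    # m uniform distinct pairs
        u, v = rng.integers(0, n, size=2)
        if u != v:
            edges.add((min(u, v), max(u, v)))

    parent = list(range(n))
    def find(x):                             # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return max(Counter(find(v) for v in range(n)).values())
```

With $n = 50000$, the subcritical call $p = \frac{1-\epsilon}{n}$ at $\epsilon = 0.2$ typically returns a few hundred vertices (Theorem 1 predicts about $2\epsilon^{-2}\log(n\epsilon^3) \approx 300$), while the supercritical call at $\epsilon = 0.1$ returns a giant component of order $2\epsilon n = 10^4$ (the exact survival fraction at $c = 1.1$ is $\approx 0.176$, so the first-order approximation $2\epsilon$ overshoots slightly at this fixed $\epsilon$).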
