DIFFUSIVITY OF RESCALED RANDOM POLYMER IN RANDOM ENVIRONMENT IN DIMENSIONS 1 AND 2

ZI SHENG FENG

arXiv:1201.6215v1 [math.PR] 30 Jan 2012

Abstract. We show that the random polymer is diffusive in dimensions 1 and 2, in probability, in an intermediate scaling regime. The scale is $\beta = o(N^{-1/4})$ in $d = 1$ and $\beta = o((\log N)^{-1/2})$ in $d = 2$ as $N \to \infty$.

1. Introduction

Consider walks $\omega : [0,N] \cap \mathbb{Z} \to \mathbb{Z}^d$ such that $\omega(0) = 0$ and $|\omega(n)-\omega(n-1)| = 1$. Let $P_0^N$ be the uniform measure on the space of these walks, each with weight $(2d)^{-N}$; then
\[
p_0(N,x) := P_0^N(\omega(N) = x) = \int 1_{[\omega(N)=x]}\, dP_0^N(\omega) = \frac{1}{(2d)^N} \sum_{\omega:\, \omega(N)=x} 1
\]
is the probability that the nearest-neighbor simple random walk started at 0 is at site $x$ at time $N$.

Let the random environment be given by $h = \{h(n,x) : n \in \mathbb{N},\, x \in \mathbb{Z}^d\}$, a sequence of independent identically distributed random variables with $h(n,x) = \pm 1$ with equal probability on some probability space $(H,\mathcal{G},Q)$, which are also independent of the simple random walk. We denote expectation over the environment space by $E_Q$.

We define the (unnormalized) polymer density by
\[
p(N,x) = \int 1_{[\omega(N)=x]} \prod_{1 \le n \le N} [1 + c_{N,d}\, h(n,\omega(n))]\, dP_0^N(\omega)
\]
where $c_{N,d}$ is such that
\[
(1.1) \qquad \lim_{N\to\infty} c_{N,1}^2\, N^{1/2} = 0 \ \text{ for } d = 1; \qquad \lim_{N\to\infty} c_{N,2}^2\, \log N = 0 \ \text{ for } d = 2.
\]
(For example, we may take $c_{N,1} = N^{-(1/4+\epsilon)}$ and $c_{N,2} = (\log N)^{-(1/2+\epsilon)}$ for any $\epsilon > 0$. The scale $\beta = o(N^{-1/4})$ for $d = 1$ was first identified in [1].)

Since the polymer density is not normalized, to obtain the probability that the polymer at time $N$ is at site $x$ we define
\[
p_N(N,x) = p(N,x)/Z(N)
\]
where $Z(N)$ is the partition function
\[
Z(N) = \sum_x p(N,x) = \int \prod_{1 \le n \le N} [1 + c_{N,d}\, h(n,\omega(n))]\, dP_0^N(\omega).
\]

In this paper, we show that the mean square displacement of the polymer, when scaled by $N$, converges to 1 in probability in both $d = 1, 2$. Precisely, let $\langle \omega(N)^2 \rangle_{N,h} = \sum_x x^2\, p_N(N,x)$.

Theorem 1.1. With rescaling of the polymer density by $c_{N,d}$, for $d = 1, 2$,
\[
\frac{\langle \omega(N)^2 \rangle_{N,h}}{N} \to 1
\]
in probability as $N \to \infty$.

We note that $\langle \omega(N)^2 \rangle_{N,h} = \frac{K(N)}{Z(N)}$, where $K(N) = \int \prod_{1 \le n \le N} [1 + c_{N,d}\, h(n,\omega(n))]\, \omega(N)^2\, dP_0^N(\omega)$. To show the result, we estimate the second moments of the top and bottom quantities and find the following.

Proposition 1.2. For $d = 1$,
\[
\text{i) } E_Q(Z(N)^2) \le \sum_{n=0}^N \left(c_1 c_{N,1}^2 N^{1/2}\right)^n; \qquad \text{ii) } E_Q(K(N)^2) \le N^2 \sum_{n=0}^N \left(c_1 c_{N,1}^2 N^{1/2}\right)^n
\]
for some constant $c_1$ that depends only on the dimension.

Proposition 1.3. For $d = 2$,
\[
\text{i) } E_Q(Z(N)^2) \le \sum_{n=0}^N \left(c_2 c_{N,2}^2 \log N\right)^n; \qquad \text{ii) } E_Q(K(N)^2) \le N^2 \sum_{n=0}^N \left(c_2 c_{N,2}^2 \log N\right)^n
\]
for some constant $c_2$ that depends only on the dimension.

The paper is organized as follows. In Section 2, we write out the second moments of the top and bottom quantities in the mean square displacement of the polymer. In Section 3, we show Proposition 1.2 and Theorem 1.1 for dimension $d = 1$. In Section 4, we show Proposition 1.3 and Theorem 1.1 for dimension $d = 2$. In Section 5, we show some other results.
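To make the objects just defined concrete, here is a minimal computational sketch (an illustration, not part of the paper) for $d = 1$: it evaluates $Z(N)$, $K(N)$ and $\langle\omega(N)^2\rangle_{N,h}/N$ exactly for one sampled environment, using the recursion $p(n,x) = (1 + c_{N,1}h(n,x))\cdot\tfrac{1}{2}\big(p(n-1,x-1)+p(n-1,x+1)\big)$ that follows from the definition of $p(N,x)$, together with the example scale $c_{N,1} = N^{-(1/4+\epsilon)}$ mentioned after (1.1). The value $\epsilon = 0.25$, the function name, and the sample values of $N$ are arbitrary choices for illustration.

```python
import numpy as np

def polymer_observables(N, eps=0.25, rng=None):
    """Compute Z(N), K(N) and <omega(N)^2>_{N,h} / N exactly for one sampled
    environment h in d = 1, via the recursion
        p(n, x) = (1 + c*h(n, x)) * (p(n-1, x-1) + p(n-1, x+1)) / 2,
    with the example scale c_{N,1} = N^{-(1/4 + eps)}.  The choices of eps and
    the bookkeeping below are illustrative, not taken from the paper."""
    rng = np.random.default_rng(rng)
    c = N ** -(0.25 + eps)
    x = np.arange(-N, N + 1)                 # all sites the walk can reach
    p = np.zeros(2 * N + 1)
    p[N] = 1.0                               # p(0, x) = 1_{x = 0}
    for n in range(1, N + 1):
        h = rng.choice([-1.0, 1.0], size=2 * N + 1)   # h(n, x) = +/-1, iid
        left, right = np.roll(p, 1), np.roll(p, -1)   # p(n-1, x-1), p(n-1, x+1)
        left[0] = right[-1] = 0.0                     # the walk stays inside [-N, N]
        p = (1.0 + c * h) * 0.5 * (left + right)
    Z = p.sum()                              # partition function Z(N)
    K = (x ** 2 * p).sum()                   # K(N) = sum_x x^2 p(N, x)
    return Z, K, K / (Z * N)                 # last entry is <omega(N)^2>_{N,h} / N

if __name__ == "__main__":
    for N in (100, 1000, 10000):
        Z, K, msd = polymer_observables(N, rng=0)
        print(f"N={N:6d}  Z(N)={Z:.4f}  K(N)/N={K/N:.4f}  <omega(N)^2>/N={msd:.4f}")
```

For these choices the printed values of $Z(N)$, $K(N)/N$ and the rescaled mean square displacement should all fluctuate around 1, with the fluctuations shrinking (slowly) as $N$ grows, in line with Theorem 1.1 and with Lemmas 3.11 and 3.13 below for the $d = 1$ case.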
2. Second Moment Expansions

In this section, we write out the second moments of the top and bottom quantities in the mean square displacement of the polymer.

Lemma 2.1.
\[
E_Q(Z^2(N)) = \sum_{n=0}^N c_{N,d}^{2n} \sum_{1 \le i_1 < \cdots < i_n \le N}\; \sum_{x_1,\dots,x_n}\; \prod_{k=1}^n p_0^2(i_k - i_{k-1},\, x_k - x_{k-1})
\]

Proof. By definition, $Z_N = \int \prod_{1 \le n \le N} [1 + c_{N,d}\, h(n,\omega(n))]\, dP_0^N(\omega)$. Upon expanding,
\[
Z_N = \sum_{n=0}^N c_{N,d}^n \sum_{1 \le i_1 < \cdots < i_n \le N} \int \prod_{k=1}^n h(i_k, \omega(i_k))\, dP_0^N(\omega).
\]
Let $f_n(\omega) = c_{N,d}^n \sum_{1 \le i_1 < \cdots < i_n \le N} \prod_{k=1}^n h(i_k, \omega(i_k))$ and $g_n = \int f_n(\omega)\, dP_0^N(\omega)$; we see
\[
Z_N^2 = (g_0 + g_1 + \cdots + g_N)^2 = \sum_{0 \le n, m \le N} g_n g_m.
\]
For $n \ne m$, we have
\[
E_Q(g_n g_m) = E_Q \left[ \int c_{N,d}^n \sum_{1 \le i_1 < \cdots < i_n \le N} \prod_{k=1}^n h(i_k,\omega(i_k))\, dP_0^N(\omega) \int c_{N,d}^m \sum_{1 \le i'_1 < \cdots < i'_m \le N} \prod_{l=1}^m h(i'_l,\omega'(i'_l))\, dP_0^N(\omega') \right].
\]
Note that if there is some $i_k$ that is different from all the $i'_l$'s (or vice versa), then by independence of the $h(n,x)$'s and the fact that they have mean 0, we have
\[
E_Q \prod_{k=1}^n \prod_{l=1}^m h(i_k,\omega(i_k))\, h(i'_l,\omega'(i'_l)) = 0.
\]
By Fubini, $E_Q(g_n g_m) = 0$. But since $n \ne m$, the $i_k$'s and $i'_l$'s cannot all be matched in pairs, so there must be some $i_k$ different from all the $i'_l$'s (or vice versa).

On the other hand, for $n = m$, we have
\begin{align*}
E_Q(g_n^2) &= E_Q \left[ \int c_{N,d}^n \sum_{1 \le i_1 < \cdots < i_n \le N} \prod_{k=1}^n h(i_k,\omega(i_k))\, dP_0^N(\omega) \int c_{N,d}^n \sum_{1 \le i'_1 < \cdots < i'_n \le N} \prod_{k=1}^n h(i'_k,\omega'(i'_k))\, dP_0^N(\omega') \right] \\
&= E_Q \int\!\!\int c_{N,d}^{2n} \sum_{1 \le i_1 < \cdots < i_n \le N} \prod_{k=1}^n h(i_k,\omega(i_k))\, h(i_k,\omega'(i_k))\, dP_0^N(\omega)\, dP_0^N(\omega') \\
&\qquad + E_Q \int\!\!\int c_{N,d}^{2n} \sum_{i_l \ne i'_l \text{ for some } l \in \{1,\dots,n\}} \prod_{k=1}^n h(i_k,\omega(i_k))\, h(i'_k,\omega'(i'_k))\, dP_0^N(\omega)\, dP_0^N(\omega') \\
&= E_Q \int\!\!\int c_{N,d}^{2n} \sum_{1 \le i_1 < \cdots < i_n \le N} \prod_{k=1}^n 1_{\omega(i_k)=\omega'(i_k)}\, dP_0^N(\omega)\, dP_0^N(\omega') \\
&\qquad + E_Q \int\!\!\int c_{N,d}^{2n} \sum_{i_l \ne i'_l \text{ for some } l \in \{1,\dots,n\}} \prod_{k=1}^n h(i_k,\omega(i_k))\, h(i'_k,\omega'(i'_k))\, dP_0^N(\omega)\, dP_0^N(\omega') \\
&= \int\!\!\int c_{N,d}^{2n} \sum_{1 \le i_1 < \cdots < i_n \le N} \prod_{k=1}^n 1_{\omega(i_k)=\omega'(i_k)}\, dP_0^N(\omega)\, dP_0^N(\omega').
\end{align*}
The third equality follows because $h^2(n,x) = 1$ and a nonzero contribution only comes from terms in which all the sites $\omega(i_k)$ and $\omega'(i_k)$ are matched in pairs. The fourth equality follows because if the $i_\alpha$'s were to match perfectly with the $i'_\beta$'s for $\alpha \ne \beta$, we would get a contradiction in the order of the times. For example, take $n = 3$ and the perfect cross matching $i_1 = i'_2$, $i_2 = i'_3$, $i_3 = i'_1$; then by $i_1 < i_2 < i_3$ we would have $i'_2 < i'_3 < i'_1$, which is a contradiction.

Now we write out the integrals as sums in terms of the transition probabilities of the two independent walks. By the above, we have
\begin{align*}
E_Q(Z^2(N)) &= \sum_{n=0}^N c_{N,d}^{2n} \sum_{1 \le i_1 < \cdots < i_n \le N} \int\!\!\int 1_{[\omega(i_1)=\tilde\omega(i_1),\dots,\omega(i_n)=\tilde\omega(i_n)]}\, dP_0^N(\omega)\, dP_0^N(\tilde\omega) \\
&= \sum_{n=0}^N c_{N,d}^{2n} \sum_{1 \le i_1 < \cdots < i_n \le N} \int \sum_{x_1,\dots,x_n,x} 1_{[\omega(i_1)=x_1,\dots,\omega(i_n)=x_n]}\; P_0^N\big(\tilde\omega(i_1)=x_1,\dots,\tilde\omega(i_n)=x_n,\,\tilde\omega(N)=x\big)\, dP_0^N(\omega) \\
&= \sum_{n=0}^N c_{N,d}^{2n} \sum_{1 \le i_1 < \cdots < i_n \le N} \int \sum_{x_1,\dots,x_n} 1_{[\omega(i_1)=x_1,\dots,\omega(i_n)=x_n]} \prod_{k=1}^n p_0(i_k-i_{k-1},\, x_k-x_{k-1}) \sum_x p_0(N-i_n,\, x-x_n)\, dP_0^N(\omega)
\end{align*}
where in the second equality we also sum over the sites at time $N$ because $P_0^N$ is a measure on walks of length $N$, and in the last equality we use the fact that the increments of the walk are independent and the walk is spatially homogeneous, i.e. the probability that the walk starts at $y$ and ends at $x$ is the same as the probability that the walk starts at 0 and ends at $x - y$. Next we note that $\sum_x p_0(N-i_n,\, x-x_n) = 1$ because $p_0(n,x)$ is a transition probability. Combining the above and expanding similarly for the second walk, we have thus shown Lemma 2.1. $\Box$
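The identity in the middle of this proof says, equivalently, that $E_Q(Z^2(N))$ equals the two-replica average $\int\!\!\int \prod_{1 \le m \le N}\big(1 + c_{N,d}^2\, 1_{\omega(m)=\tilde\omega(m)}\big)\, dP_0^N(\omega)\, dP_0^N(\tilde\omega)$. The brute-force script below (a sanity check of Lemma 2.1 for tiny $N$ in $d = 1$, not part of the paper) compares this replica average, computed by enumerating all walk pairs, with the right-hand side of Lemma 2.1 computed from exact transition probabilities; the function names and the test values $N = 4$, $c = 0.3$ are arbitrary.

```python
import numpy as np
from itertools import product, combinations
from math import comb

def p0(n, x):
    """Exact d=1 simple random walk transition probability p_0(n, x)."""
    if n == 0:
        return 1.0 if x == 0 else 0.0
    if abs(x) > n or (n + x) % 2:
        return 0.0
    return comb(n, (n + x) // 2) / 2 ** n

def EQ_Z2_replica(N, c):
    """E_Q(Z^2(N)) via the two-replica form from the proof of Lemma 2.1:
    average of prod_{m=1}^N (1 + c^2 * 1{omega(m) = omega~(m)}) over all walk pairs."""
    walks = [np.cumsum(s) for s in product([-1, 1], repeat=N)]
    total = 0.0
    for w in walks:
        for wt in walks:
            total += np.prod(1.0 + c ** 2 * (w == wt))
    return total / len(walks) ** 2

def EQ_Z2_lemma(N, c):
    """E_Q(Z^2(N)) via the formula stated in Lemma 2.1 (i_0 = 0, x_0 = 0)."""
    total = 1.0                                   # the n = 0 term
    sites = range(-N, N + 1)
    for n in range(1, N + 1):
        for times in combinations(range(1, N + 1), n):
            i = (0,) + times
            for xs in product(sites, repeat=n):
                x = (0,) + xs
                total += c ** (2 * n) * np.prod(
                    [p0(i[k] - i[k - 1], x[k] - x[k - 1]) ** 2 for k in range(1, n + 1)])
    return total

if __name__ == "__main__":
    N, c = 4, 0.3
    print(EQ_Z2_replica(N, c), EQ_Z2_lemma(N, c))   # the two values should agree
```

The two printed numbers should agree up to floating-point rounding.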
Lemma 2.2.
\[
E_Q(K^2(N)) = \sum_{n=0}^N c_{N,d}^{2n} \sum_{1 \le i_1 < \cdots < i_n \le N} \sum_{x_1,\dots,x_n} \prod_{k=1}^n p_0^2(i_k-i_{k-1},\, x_k-x_{k-1}) \times \left( \sum_x x^2\, p_0(N-i_n,\, x-x_n) \right)^2
\]

Proof. To estimate the second moment of $K(N)$, we have
\[
E_Q(K^2(N)) = E_Q \int\!\!\int \prod_{1 \le n \le N} [1 + c_{N,d}\, h(n,\omega(n))][1 + c_{N,d}\, h(n,\tilde\omega(n))]\; \omega(N)^2\, \tilde\omega(N)^2\, dP_0^N(\omega)\, dP_0^N(\tilde\omega).
\]
As we see, the only difference between $E_Q(Z^2(N))$ and $E_Q(K^2(N))$ is the extra term $\omega(N)^2\tilde\omega(N)^2$, and we proceed as before to expand the second moment to get
\begin{align*}
&\sum_{n=0}^N c_{N,d}^{2n} \sum_{1 \le i_1 < \cdots < i_n \le N} \int\!\!\int 1_{[\omega(i_1)=\tilde\omega(i_1),\dots,\omega(i_n)=\tilde\omega(i_n)]}\, \omega(N)^2\, \tilde\omega(N)^2\, dP_0^N(\omega)\, dP_0^N(\tilde\omega) \\
&= \sum_{n=0}^N c_{N,d}^{2n} \sum_{1 \le i_1 < \cdots < i_n \le N} \int \sum_{x_1,\dots,x_n,x} 1_{[\omega(i_1)=x_1,\dots,\omega(i_n)=x_n]}\; x^2\, P_0^N\big(\tilde\omega(i_1)=x_1,\dots,\tilde\omega(i_n)=x_n,\,\tilde\omega(N)=x\big)\, \omega(N)^2\, dP_0^N(\omega) \\
&= \sum_{n=0}^N c_{N,d}^{2n} \sum_{1 \le i_1 < \cdots < i_n \le N} \int \sum_{x_1,\dots,x_n} 1_{[\omega(i_1)=x_1,\dots,\omega(i_n)=x_n]} \prod_{k=1}^n p_0(i_k-i_{k-1},\, x_k-x_{k-1}) \sum_x x^2\, p_0(N-i_n,\, x-x_n)\; \omega(N)^2\, dP_0^N(\omega) \\
&= \sum_{n=0}^N c_{N,d}^{2n} \sum_{1 \le i_1 < \cdots < i_n \le N} \sum_{x_1,\dots,x_n} \prod_{k=1}^n p_0^2(i_k-i_{k-1},\, x_k-x_{k-1}) \times \sum_x x^2\, p_0(N-i_n,\, x-x_n) \sum_y y^2\, p_0(N-i_n,\, y-x_n). \qquad \Box
\end{align*}

3. Diffusivity of Rescaled Random Polymer in d = 1

In this section, we are going to show Proposition 1.2 and Theorem 1.1 for dimension $d = 1$. The key ingredient we need is that the transition probability $p_0(n,x)$ has the following estimate by the Gaussian density; more precisely, for $d \ge 1$ and $x \in \mathbb{Z}^d$ such that $x_1 + \cdots + x_d + n \equiv 0 \bmod 2$,
\[
(3.2) \qquad p_0(n,x) = 2 \left( \frac{d}{2\pi n} \right)^{d/2} \exp\!\left( -\frac{d|x|^2}{2n} \right) + r_n(x)
\]
where $|r_n(x)| \le \min\left( c_d\, n^{-(d+2)/2},\, c'_d\, |x|^{-2} n^{-d/2} \right)$ for some constants $c_d, c'_d$ that depend only on the dimension. (See Theorem 1.2.1 in [2].)

3.1. In this subsection, we are going to show Proposition 1.2 i) in a series of lemmas.

Lemma 3.1.
\[
\sum_{x_1,\dots,x_n} \prod_{k=1}^n p_0^2(i_k-i_{k-1},\, x_k-x_{k-1}) \le c_1^n \prod_{k=1}^n (i_k-i_{k-1})^{-1/2}
\]
for some constant $c_1$ that depends only on the dimension $d = 1$.

Proof. For $d = 1$, since $e^{-x^2} \le 1$ for all $x$, we see that (3.2) is at most $c_1 n^{-1/2}$ for some constant $c_1$. Using this uniform estimate for each of the $p_0(i_k-i_{k-1},\, x_k-x_{k-1})$'s, we have
\begin{align*}
\sum_{x_1,\dots,x_n} \prod_{k=1}^n p_0^2(i_k-i_{k-1},\, x_k-x_{k-1}) &= \sum_{x_1} p_0^2(i_1,x_1) \cdots \sum_{x_n} p_0^2(i_n-i_{n-1},\, x_n-x_{n-1}) \\
&\le c_1^n\, i_1^{-1/2} \cdots (i_n-i_{n-1})^{-1/2} \sum_{x_1} p_0(i_1,x_1) \cdots \sum_{x_n} p_0(i_n-i_{n-1},\, x_n-x_{n-1}) = c_1^n \prod_{k=1}^n (i_k-i_{k-1})^{-1/2}
\end{align*}
for some constant $c_1$ that depends only on the dimension $d = 1$ (the last equality follows from the fact that $p_0(n,x)$ is a transition probability). $\Box$

Lemma 3.2.
\[
\sum_{1 \le i_1 < \cdots < i_n \le N} c_1^n c_{N,1}^{2n} \prod_{k=1}^n (i_k-i_{k-1})^{-1/2} \le \left( c_1 c_{N,1}^2 N^{1/2} \right)^n
\]

Proof.
\begin{align*}
\sum_{1 \le i_1 < \cdots < i_n \le N} c_1^n c_{N,1}^{2n} \prod_{k=1}^n (i_k-i_{k-1})^{-1/2} &= c_1^n c_{N,1}^{2n} \sum_{i_1=1}^{N-(n-1)} \cdots \sum_{i_{n-1}=i_{n-2}+1}^{N-1} \sum_{i_n=i_{n-1}+1}^{N} i_1^{-1/2} \cdots (i_{n-1}-i_{n-2})^{-1/2} (i_n-i_{n-1})^{-1/2} \\
&\le c_1^n c_{N,1}^{2n} \sum_{i_1=1}^{N-(n-1)} \cdots \sum_{i_{n-1}=i_{n-2}+1}^{N-1} i_1^{-1/2} \cdots (i_{n-1}-i_{n-2})^{-1/2} \cdot 2N^{1/2}.
\end{align*}
The last inequality holds because $\sum_{k=1}^N k^{-1/2} \le 1 + \int_1^N x^{-1/2}\, dx = 1 + 2(N^{1/2}-1) \le 2N^{1/2}$. Continuing from the above and arguing similarly to estimate each sum in the expression, we have
\[
\le c_1^n c_{N,1}^{2n} N^{1/2} \sum_{i_1=1}^{N-(n-1)} \cdots \sum_{i_{n-1}=i_{n-2}+1}^{N-1} i_1^{-1/2} \cdots (i_{n-1}-i_{n-2})^{-1/2} \le \left( c_1 c_{N,1}^2 N^{1/2} \right)^n
\]
where the constant $c_1$ will change from line to line (again it depends only on the dimension $d = 1$). $\Box$

We conclude by Lemma 2.1 that Proposition 1.2 i) holds.

3.2. In this subsection, we are going to show Proposition 1.2 ii) in a series of lemmas. By standard computations of the moments of the simple random walk of length $n$ in dimension $d = 1$ using the characteristic function, we have
\[
\sum_x x^2\, p_0(n,x) = n; \qquad \sum_x x^4\, p_0(n,x) = 3n^2 - 2n.
\]
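These two moment identities can be checked directly against the exact binomial law of the $d = 1$ walk, $p_0(n,x) = \binom{n}{(n+x)/2} 2^{-n}$ for $x \equiv n \bmod 2$; the short script below (an illustration, not from the paper) prints the computed second and fourth moments next to the predicted values $n$ and $3n^2 - 2n$.

```python
from math import comb

def p0(n, x):
    """Exact d=1 simple random walk transition probability p_0(n, x)."""
    if abs(x) > n or (n + x) % 2:
        return 0.0
    return comb(n, (n + x) // 2) / 2 ** n

for n in (1, 2, 5, 10, 25):
    m2 = sum(x ** 2 * p0(n, x) for x in range(-n, n + 1))
    m4 = sum(x ** 4 * p0(n, x) for x in range(-n, n + 1))
    print(f"n={n}: sum x^2 p0 = {m2:.6f} (expect {n}), "
          f"sum x^4 p0 = {m4:.6f} (expect {3 * n * n - 2 * n})")
```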
Lemma 3.3.
\[
\sum_x x^2\, p_0(N-i_n,\, x-x_n) = (N-i_n) + x_n^2
\]

Proof.
\[
\sum_x x^2\, p_0(N-i_n,\, x-x_n) = \sum_{x-x_n} x^2\, p_0(N-i_n,\, x-x_n) = \sum_x (x+x_n)^2\, p_0(N-i_n,\, x) = \sum_x x^2\, p_0(N-i_n,\, x) + \sum_x x_n^2\, p_0(N-i_n,\, x) = (N-i_n) + x_n^2
\]
where the first equality holds because summation over all $x$'s is the same as summation over all $x-x_n$'s, and the third equality holds because any odd moment of the simple random walk vanishes. $\Box$

Lemma 3.4.
\[
\sum_{x_k} x_k^4\, p_0(i_k-i_{k-1},\, x_k-x_{k-1}) = 3(i_k-i_{k-1})^2 - 2(i_k-i_{k-1}) + 6x_{k-1}^2(i_k-i_{k-1}) + x_{k-1}^4
\]

Proof.
\begin{align*}
\sum_{x_k} x_k^4\, p_0(i_k-i_{k-1},\, x_k-x_{k-1}) &= \sum_{x_k-x_{k-1}} x_k^4\, p_0(i_k-i_{k-1},\, x_k-x_{k-1}) = \sum_{x_k} (x_k+x_{k-1})^4\, p_0(i_k-i_{k-1},\, x_k) \\
&= \sum_{x_k} x_k^4\, p_0(i_k-i_{k-1},\, x_k) + \sum_{x_k} 6x_k^2 x_{k-1}^2\, p_0(i_k-i_{k-1},\, x_k) + \sum_{x_k} x_{k-1}^4\, p_0(i_k-i_{k-1},\, x_k) \\
&= 3(i_k-i_{k-1})^2 - 2(i_k-i_{k-1}) + 6x_{k-1}^2(i_k-i_{k-1}) + x_{k-1}^4
\end{align*}
(the odd-moment terms in the binomial expansion vanish as in Lemma 3.3). $\Box$

Lemma 3.5.
\[
\sum_{x_1,\dots,x_n} \prod_{k=1}^n p_0(i_k-i_{k-1},\, x_k-x_{k-1}) \left( (N-i_n)^2 + 2(N-i_n)x_n^2 + x_n^4 \right) = (N-i_n)^2 + 2(N-i_n)i_n + 3\sum_{k=1}^n (i_k-i_{k-1})^2 - 2i_n + 6\sum_{k=1}^n (i_k-i_{k-1})i_{k-1}
\]

Proof. We do this by induction on $n$. For $n = 1$, we have
\begin{align*}
\sum_{x_1} p_0(i_1,x_1)\left( (N-i_1)^2 + 2(N-i_1)x_1^2 + x_1^4 \right) &= (N-i_1)^2 \sum_{x_1} p_0(i_1,x_1) + 2(N-i_1) \sum_{x_1} x_1^2\, p_0(i_1,x_1) + \sum_{x_1} x_1^4\, p_0(i_1,x_1) \\
&= (N-i_1)^2 + 2(N-i_1)i_1 + 3i_1^2 - 2i_1.
\end{align*}
(Note we do not have a term of the form $6\sum_{k=1}^n (i_k-i_{k-1})i_{k-1}$ because $i_0 = 0$.)

Suppose the equality holds for $n-1$. Then
\begin{align*}
&\sum_{x_1,\dots,x_n} \prod_{k=1}^n p_0(i_k-i_{k-1},\, x_k-x_{k-1}) \left( (N-i_n)^2 + 2(N-i_n)x_n^2 + x_n^4 \right) \\
&= \sum_{x_1,\dots,x_{n-1}} \prod_{k=1}^{n-1} p_0(i_k-i_{k-1},\, x_k-x_{k-1}) \sum_{x_n} p_0(i_n-i_{n-1},\, x_n-x_{n-1}) \left( (N-i_n)^2 + 2(N-i_n)x_n^2 + x_n^4 \right) \\
&= \sum_{x_1,\dots,x_{n-1}} \prod_{k=1}^{n-1} p_0(i_k-i_{k-1},\, x_k-x_{k-1}) \big[ (N-i_n)^2 + 2(N-i_n)(i_n-i_{n-1}+x_{n-1}^2) + 3(i_n-i_{n-1})^2 - 2(i_n-i_{n-1}) + 6x_{n-1}^2(i_n-i_{n-1}) + x_{n-1}^4 \big] \\
&= \sum_{x_1,\dots,x_{n-1}} \prod_{k=1}^{n-1} p_0(i_k-i_{k-1},\, x_k-x_{k-1}) \Big( \big[ (N-i_n)^2 + 2(N-i_n)(i_n-i_{n-1}) + 3(i_n-i_{n-1})^2 - 2(i_n-i_{n-1}) \big] + \big[ 2(N-i_n) + 6(i_n-i_{n-1}) \big] x_{n-1}^2 + x_{n-1}^4 \Big) \\
&= \big[ (N-i_n)^2 + 2(N-i_n)(i_n-i_{n-1}) + 3(i_n-i_{n-1})^2 - 2(i_n-i_{n-1}) \big] + \big[ 2(N-i_n) + 6(i_n-i_{n-1}) \big] i_{n-1} + 3\sum_{k=1}^{n-1}(i_k-i_{k-1})^2 - 2i_{n-1} + 6\sum_{k=1}^{n-1}(i_k-i_{k-1})i_{k-1} \\
&= (N-i_n)^2 + 2(N-i_n)(i_n-i_{n-1}+i_{n-1}) + 3(i_n-i_{n-1})^2 + 3\sum_{k=1}^{n-1}(i_k-i_{k-1})^2 - 2(i_n-i_{n-1}+i_{n-1}) + 6(i_n-i_{n-1})i_{n-1} + 6\sum_{k=1}^{n-1}(i_k-i_{k-1})i_{k-1} \\
&= (N-i_n)^2 + 2(N-i_n)i_n + 3\sum_{k=1}^n (i_k-i_{k-1})^2 - 2i_n + 6\sum_{k=1}^n (i_k-i_{k-1})i_{k-1}
\end{align*}
where in the second equality we use Lemmas 3.3 and 3.4, and in the fourth equality we use the inductive hypothesis. $\Box$

Lemma 3.6.
\[
(N-i_n)^2 + 2(N-i_n)i_n + 3\sum_{k=1}^n (i_k-i_{k-1})^2 - 2i_n + 6\sum_{k=1}^n (i_k-i_{k-1})i_{k-1} \le 100\, n N^2
\]

Proof.
\begin{align*}
&(N-i_n)^2 + 2(N-i_n)i_n + 3\sum_{k=1}^n (i_k-i_{k-1})^2 - 2i_n + 6\sum_{k=1}^n (i_k-i_{k-1})i_{k-1} \\
&= N^2 - 2Ni_n + i_n^2 + 2Ni_n - 2i_n^2 + 3\sum_{k=1}^n (i_k^2 - 2i_k i_{k-1} + i_{k-1}^2) - 2i_n + 6\sum_{k=1}^n (i_k i_{k-1} - i_{k-1}^2) \\
&\le N^2 + 2N^2 + N^2 + 2N^2 + 2N^2 + 3n(N^2 + 2N^2 + N^2) + 2N^2 + 6n(N^2 + N^2) = 10N^2 + 24nN^2 \le 100\, n N^2. \qquad \Box
\end{align*}
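The closed form of Lemma 3.5 (which also exercises Lemmas 3.3 and 3.4) can be verified numerically on a small instance by computing the left-hand side as an explicit sum over $x_1,\dots,x_n$ with exact $d = 1$ transition probabilities. The sketch below is such a check, not part of the paper; the particular values $N = 9$ and $(i_1,i_2,i_3) = (2,5,8)$ are arbitrary.

```python
from math import comb
from itertools import product

def p0(n, x):
    """Exact d=1 simple random walk transition probability p_0(n, x)."""
    if abs(x) > n or (n + x) % 2:
        return 0.0
    return comb(n, (n + x) // 2) / 2 ** n

def lhs(N, times):
    """Left-hand side of Lemma 3.5 for given times 1 <= i_1 < ... < i_n <= N."""
    i = (0,) + tuple(times)
    n = len(times)
    total = 0.0
    for xs in product(range(-N, N + 1), repeat=n):
        x = (0,) + xs
        w = 1.0
        for k in range(1, n + 1):
            w *= p0(i[k] - i[k - 1], x[k] - x[k - 1])
        total += w * ((N - i[n]) ** 2 + 2 * (N - i[n]) * x[n] ** 2 + x[n] ** 4)
    return total

def rhs(N, times):
    """Right-hand side (closed form) of Lemma 3.5."""
    i = (0,) + tuple(times)
    n = len(times)
    return ((N - i[n]) ** 2 + 2 * (N - i[n]) * i[n]
            + 3 * sum((i[k] - i[k - 1]) ** 2 for k in range(1, n + 1)) - 2 * i[n]
            + 6 * sum((i[k] - i[k - 1]) * i[k - 1] for k in range(1, n + 1)))

if __name__ == "__main__":
    N, times = 9, (2, 5, 8)
    print(lhs(N, times), rhs(N, times))   # should agree up to floating-point rounding
```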
Lemma 3.7.
\[
\sum_{x_1,\dots,x_n} \prod_{k=1}^n p_0^2(i_k-i_{k-1},\, x_k-x_{k-1}) \left( \sum_x x^2\, p_0(N-i_n,\, x-x_n) \right)^2 \le c_1^n N^2 \prod_{k=1}^n (i_k-i_{k-1})^{-1/2}
\]

Proof. Using the uniform estimate (3.2) for $d = 1$ on the transition probability, as in the proof of Lemma 3.1, we get
\begin{align*}
&\sum_{x_1,\dots,x_n} \prod_{k=1}^n p_0^2(i_k-i_{k-1},\, x_k-x_{k-1}) \left( \sum_x x^2\, p_0(N-i_n,\, x-x_n) \right)^2 \\
&\le c_1^n\, i_1^{-1/2}(i_2-i_1)^{-1/2}\cdots(i_n-i_{n-1})^{-1/2} \sum_{x_1,\dots,x_n} \prod_{k=1}^n p_0(i_k-i_{k-1},\, x_k-x_{k-1}) \left( (N-i_n) + x_n^2 \right)^2 \\
&= c_1^n\, i_1^{-1/2}(i_2-i_1)^{-1/2}\cdots(i_n-i_{n-1})^{-1/2} \sum_{x_1,\dots,x_n} \prod_{k=1}^n p_0(i_k-i_{k-1},\, x_k-x_{k-1}) \left( (N-i_n)^2 + 2(N-i_n)x_n^2 + x_n^4 \right) \\
&= c_1^n\, i_1^{-1/2}(i_2-i_1)^{-1/2}\cdots(i_n-i_{n-1})^{-1/2} \left( (N-i_n)^2 + 2(N-i_n)i_n + 3\sum_{k=1}^n (i_k-i_{k-1})^2 - 2i_n + 6\sum_{k=1}^n (i_k-i_{k-1})i_{k-1} \right) \\
&\le c_1^n N^2 \prod_{k=1}^n (i_k-i_{k-1})^{-1/2}
\end{align*}
where in the first inequality we also use Lemma 3.3 to compute the second moment, in the second equality we use Lemma 3.5, and in the last inequality we use Lemma 3.6. $\Box$

Lemma 3.8.
\[
\sum_{1 \le i_1 < \cdots < i_n \le N} c_1^n c_{N,1}^{2n} N^2 \prod_{k=1}^n (i_k-i_{k-1})^{-1/2} \le N^2 \left( c_1 c_{N,1}^2 N^{1/2} \right)^n
\]

Proof. It follows from Lemma 3.2. $\Box$

We conclude by Lemma 2.2 that Proposition 1.2 ii) holds.

3.3. In this subsection, we are going to use Proposition 1.2 to show Theorem 1.1 for $d = 1$. We do so with a series of lemmas.

Lemma 3.9.
\[
E_Q\left( (Z(N)-1)^2 \right) \le \sum_{n=1}^N \left( c_1 c_{N,1}^2 N^{1/2} \right)^n
\]

Proof.
\begin{align*}
E_Q\left( (Z(N)-1)^2 \right) &= E_Q\left( Z^2(N) - 2Z(N) + 1 \right) \\
&= E_Q(Z^2(N)) - 2E_Q(Z(N)) + 1 \\
&= E_Q(Z^2(N)) - 2 + 1 \\
&= E_Q(Z^2(N)) - 1 \\
&= \sum_{n=1}^N c_{N,1}^{2n} \sum_{1 \le i_1 < \cdots < i_n \le N} \sum_{x_1,\dots,x_n} \prod_{k=1}^n p_0^2(i_k-i_{k-1},\, x_k-x_{k-1}) \\
&\le \sum_{n=1}^N \left( c_1 c_{N,1}^2 N^{1/2} \right)^n
\end{align*}
where in the third equality we use $E_Q(Z(N)) = 1$ because the $h(n,x)$ have mean 0, in the fifth equality we use that the $0$-th term in the second moment expansion of $Z(N)$ is 1 (see Lemma 2.1), and the last inequality follows from Proposition 1.2 i). $\Box$

Lemma 3.10.
\[
E_Q\left( (K(N)-N)^2 \right) \le N^2 \sum_{n=1}^N \left( c_1 c_{N,1}^2 N^{1/2} \right)^n
\]

Proof.
\begin{align*}
E_Q\left( (K(N)-N)^2 \right) &= E_Q\left( K^2(N) - 2K(N)N + N^2 \right) \\
&= E_Q(K^2(N)) - 2N E_Q(K(N)) + N^2 \\
&= E_Q(K^2(N)) - 2N^2 + N^2 \\
&= E_Q(K^2(N)) - N^2 \\
&= \sum_{n=1}^N c_{N,1}^{2n} \sum_{1 \le i_1 < \cdots < i_n \le N} \sum_{x_1,\dots,x_n} \prod_{k=1}^n p_0^2(i_k-i_{k-1},\, x_k-x_{k-1}) \left( \sum_x x^2\, p_0(N-i_n,\, x-x_n) \right)^2 \\
&\le N^2 \sum_{n=1}^N \left( c_1 c_{N,1}^2 N^{1/2} \right)^n
\end{align*}
where in the third equality we use $E_Q(K(N)) = N$, because the $h(n,x)$ have mean zero and the second moment of the simple random walk of length $N$ in dimension $d = 1$ is $N$; in the fifth equality we use that the $0$-th term in the second moment expansion of $K(N)$ is $N^2$ (see Lemma 2.2); and the last inequality follows from Proposition 1.2 ii). $\Box$

Lemma 3.11. $Z(N) \to 1$ in probability as $N \to \infty$.

Proof. For any $\epsilon > 0$, by Chebyshev's inequality and using Lemma 3.9, we have
\[
P(|Z(N)-1| > \epsilon) \le \frac{E_Q\left( (Z(N)-1)^2 \right)}{\epsilon^2} \le \frac{\sum_{n=1}^N \left( c_1 c_{N,1}^2 N^{1/2} \right)^n}{\epsilon^2}.
\]
Let $f(N) = c_1 c_{N,1}^2 N^{1/2}$. By the choice of $c_{N,1}$ (see (1.1)), $\lim_{N\to\infty} c_{N,1}^2 N^{1/2} = 0$, in particular $\lim_{N\to\infty} f(N) = 0$. This says that given $\delta > 0$ small, there exists $K$ such that for $N \ge K$, $f(N) < \delta$ (note that $f(N) \ge 0$), but then
\[
S_N := \sum_{n=1}^N f(N)^n < \sum_{n=1}^N \delta^n = \frac{\delta - \delta^{N+1}}{1-\delta} < \frac{\delta}{1-\delta}.
\]
Since $\delta > 0$ is arbitrary, $S_N$ is arbitrarily small for large $N$, so $S_N \to 0$ as $N \to \infty$. We conclude that $Z(N) \to 1$ in probability as $N \to \infty$. $\Box$

Lemma 3.12. $\frac{1}{Z(N)} \to 1$ in probability as $N \to \infty$.

Proof. By Lemma 3.11, for $\epsilon > 0$ small such that $1-\epsilon > 0$ and given $\delta > 0$, there exists $K$ such that for $N \ge K$, $P(|Z(N)-1| \le \epsilon) = 1 - P(|Z(N)-1| > \epsilon) > 1-\delta$. But
\begin{align*}
P(|Z(N)-1| \le \epsilon) &= P(1-\epsilon \le Z(N) \le 1+\epsilon) \\
&= P\left( \frac{1}{1+\epsilon} \le \frac{1}{Z(N)} \le \frac{1}{1-\epsilon} \right) \\
&= P\left( 1-\epsilon'' \le \frac{1}{Z(N)} \le 1+\epsilon' \right) \\
&\le P\left( 1-\hat\epsilon \le \frac{1}{Z(N)} \le 1+\hat\epsilon \right)
\end{align*}
where the first equality holds because we assume $\epsilon > 0$ is small enough that $1-\epsilon > 0$, and $\epsilon' = \frac{1}{1-\epsilon} - 1 > 0$, $\epsilon'' = 1 - \frac{1}{1+\epsilon} > 0$, $\hat\epsilon = \max\{\epsilon',\epsilon''\}$. So $1-\delta \le P\big( 1-\hat\epsilon \le \frac{1}{Z(N)} \le 1+\hat\epsilon \big)$, and since $\hat\epsilon \to 0$ as $\epsilon \to 0$, we have $P\big( \big|\frac{1}{Z(N)} - 1\big| > \epsilon \big) \to 0$ for all $\epsilon > 0$ small. But we note that for $\epsilon' > \epsilon$, $P\big( \big|\frac{1}{Z(N)} - 1\big| > \epsilon' \big) \le P\big( \big|\frac{1}{Z(N)} - 1\big| > \epsilon \big)$, so $P\big( \big|\frac{1}{Z(N)} - 1\big| > \epsilon \big) \to 0$ holds for any $\epsilon > 0$. Thus $\frac{1}{Z(N)} \to 1$ in probability as $N \to \infty$. $\Box$
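To see how fast the Chebyshev bound in Lemma 3.11 decays under the example scale $c_{N,1} = N^{-(1/4+\epsilon)}$, note that $f(N) = c_1 c_{N,1}^2 N^{1/2} = c_1 N^{-2\epsilon}$ and $S_N \le f(N)/(1-f(N))$ once $f(N) < 1$. The tiny script below is illustrative only: the constant $c_1$ of Proposition 1.2 is unknown and is set to 1 here, and the choices $\epsilon = 0.1$ and threshold $0.25$ are arbitrary. It evaluates the resulting bound on $P(|Z(N)-1| > 0.25)$ for a few values of $N$; the bound decays, but only like $N^{-2\epsilon}$.

```python
def chebyshev_bound(N, eps=0.1, c1=1.0, tol=0.25):
    """Upper bound on P(|Z(N) - 1| > tol) from Lemma 3.11, with the example
    scale c_{N,1} = N^{-(1/4+eps)}.  c1 stands in for the constant of
    Proposition 1.2 and is set to 1 purely for illustration."""
    f = c1 * N ** (-2 * eps)          # f(N) = c1 * c_{N,1}^2 * N^{1/2} = c1 * N^{-2*eps}
    S = f * (1 - f ** N) / (1 - f)    # S_N = sum_{n=1}^{N} f(N)^n (geometric series)
    return S / tol ** 2               # Chebyshev: P(|Z(N) - 1| > tol) <= S_N / tol^2

if __name__ == "__main__":
    for N in (10 ** 2, 10 ** 4, 10 ** 6, 10 ** 8):
        print(N, chebyshev_bound(N))
```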
Lemma 3.13. $\frac{K(N)}{N} \to 1$ in probability as $N \to \infty$.

Proof. For any $\epsilon > 0$, by Chebyshev's inequality and using Lemma 3.10, we have
\[
P\left( \left| \frac{K(N)}{N} - 1 \right| > \epsilon \right) = P(|K(N)-N| > N\epsilon) \le \frac{E_Q\left( (K(N)-N)^2 \right)}{\epsilon^2 N^2} \le \frac{N^2 \sum_{n=1}^N \left( c_1 c_{N,1}^2 N^{1/2} \right)^n}{\epsilon^2 N^2}.
\]
As before, since $\lim_{N\to\infty} \sum_{n=1}^N \left( c_1 c_{N,1}^2 N^{1/2} \right)^n = 0$, we see that $\frac{K(N)}{N} \to 1$ in probability as $N \to \infty$. $\Box$
