A NONCONVENTIONAL INVARIANCE PRINCIPLE FOR RANDOM FIELDS

YURI KIFER
INSTITUTE OF MATHEMATICS, THE HEBREW UNIVERSITY OF JERUSALEM

arXiv:1101.5752v2 [math.PR] 23 Jan 2012

Abstract. In [17] we obtained a nonconventional invariance principle (functional central limit theorem) for sufficiently fast mixing stochastic processes with discrete and continuous time. In this paper we derive a nonconventional invariance principle for sufficiently well mixing random fields.

1. Introduction

Nonconventional ergodic theorems (see [12]), known also after [3] as polynomial ergodic theorems, studied the limits of expressions having the form

  1/N Σ_{n=1}^N T^{q_1(n)} f_1 ⋯ T^{q_ℓ(n)} f_ℓ

where T is a weakly mixing measure preserving transformation, the f_i's are bounded measurable functions and the q_i's are polynomials taking on integer values on the integers. Originally, these results were motivated by applications to multiple recurrence for dynamical systems, taking the functions f_i to be indicators of some measurable sets. Later such results were extended to the case when the q_i's are polynomials on Z^ν (see [18]) and to some Z^ν actions (see [2]).

Using the language of probability, this kind of results may be called nonconventional laws of large numbers, and as a natural follow-up we arrived at the invariance principle (functional central limit theorem) in [17], showing convergence in distribution to Gaussian processes for expressions of the form

(1.1)  1/√N Σ_{0 ≤ n ≤ Nt} ( F(X(q_1(n)), …, X(q_ℓ(n))) − F̄ )

where X(n), n ≥ 0, is a sufficiently fast α, ρ or ψ-mixing vector valued process with some moment conditions and stationarity properties, F is a continuous function with polynomial growth and certain regularity properties, F̄ = ∫ F d(μ × ⋯ × μ), μ is the distribution of each X(n), q_j(n) = jn for j ≤ k ≤ ℓ, and q_j, j = k+1, …, ℓ, are positive functions taking on integer values on the integers with some growth conditions which are satisfied, for instance, when the q_j's are polynomials of growing degrees.
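As a concrete illustration of the kind of functional (1.1) studied here (our own sketch, not taken from the paper), the following minimal Python simulation takes the simplest case ℓ = 2, q_1(n) = n, q_2(n) = 2n, F(x_1, x_2) = x_1 x_2 and an i.i.d. standard Gaussian process X(n), which is trivially ψ-mixing and gives F̄ = 0. The empirical variance of the normalized sum staying bounded as N grows is exactly the N^{−1/2} scaling that the invariance principle formalizes.

```python
import random
import statistics

def nonconventional_sum(X, N, F, F_bar):
    """N^{-1/2} * sum_{n=1}^{N} (F(X(n), X(2n)) - F_bar): the case ell = 2,
    q_1(n) = n, q_2(n) = 2n of (1.1)."""
    return sum(F(X[n], X[2 * n]) - F_bar for n in range(1, N + 1)) / N ** 0.5

random.seed(0)
N, reps = 500, 200
samples = []
for _ in range(reps):
    # i.i.d. standard normal values on {0, ..., 2N}: a trivially mixing process
    X = [random.gauss(0.0, 1.0) for _ in range(2 * N + 1)]
    samples.append(nonconventional_sum(X, N, lambda a, b: a * b, 0.0))

# Under the N^{-1/2} normalization the empirical variance stays of order one
var = statistics.pvariance(samples)
print(round(statistics.mean(samples), 2), 0.1 < var < 10.0)
```

For this particular F the summands X(n)X(2n) are pairwise uncorrelated, so the variance of the normalized sum is close to 1; for general F the dependence between far away summands is exactly what makes the limit theory nonconventional.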
The goal of this paper is to prove an invariance principle type result when n ∈ Z^ν is multidimensional. This can be done either by considering functions q_i : Z_+^ν → Z with X(n), n ≥ 0, being again a vector valued stochastic process or, more generally, by considering maps q_i : Z^ν → Z^ν, taking now X(n), n ∈ Z^ν, to be a vector valued random field, which will be our setup in this paper. Namely, for t = (t_1, …, t_ν) ∈ [0,1]^ν and a positive integer N we consider expressions of the form

(1.2)  ξ_N(t) = N^{−ν/2} Σ_{n=(n_1,…,n_ν): 0 ≤ n_i ≤ Nt_i ∀i} ( F(X(q_1(n)), …, X(q_ℓ(n))) − F̄ )

where X(n), n ∈ Z^ν, is a sufficiently well mixing vector valued random field with some moment conditions and stationarity properties, F and F̄ are similar to the above, q_j(n) = jn for j ≤ k, and q_i : Z_+^ν → Z_+^ν, i = k+1, …, ℓ, map Z_+^ν = {n = (n_1, …, n_ν) ∈ Z^ν : n_i ≥ 0, i = 1, …, ν} into itself. Assuming some growth conditions on |q_i(n)|, i > k, in |n|, we will show that the random field ξ_N(t) converges in distribution to a Gaussian random field on [0,1]^ν.

In [17] we were able to obtain the latter result for one dimensional n relying on martingale approximations and martingale limit theorems, but for random fields this machinery is not readily available. Still, we are able to combine some of the mixingale technique from [19] and [20] together with an appropriate grouping of summands in (1.2) in order to obtain both convergence of finite dimensional distributions and tightness.

Date: January 24, 2012.
2000 Mathematics Subject Classification. Primary: 60F17; Secondary: 60G60.
Key words and phrases. Random fields, limit theorems, mixing.
Other known methods which work successfully when proving limit theorems for random fields (see, for instance, [5], [7], [8] and [21]) rely one way or another on characteristic functions (or other devices based on weak dependence), which are hard to deal with in the nonconventional setup, as demonstrated in [15], in view of the strong dependence of the summands in (1.2) on the far away members of the random field. For specific lattice models with sufficiently good mixing properties which fit our setup we refer the reader to [1] and references there.

2. Preliminaries and main results

Our setup consists of a ℘-dimensional vector random field {X(n), n ∈ Z^ν, X(n) ∈ R^℘} on a probability space (Ω, F, P) and of a family of σ-algebras F_Γ ⊂ F, Γ ⊂ Z^ν, such that F_Γ ⊂ F_Δ if Γ ⊂ Δ ⊂ Z^ν. It is often convenient to measure the dependence between two sub-σ-algebras G, H ⊂ F via the quantities

(2.1)  ϖ_{q,p}(G, H) = sup { ‖E(g | G) − Eg‖_p : g is H-measurable and ‖g‖_q ≤ 1 }

where the supremum is taken over real functions and ‖·‖_r is the L^r(Ω, F, P)-norm. Then the more familiar α, ρ, φ and ψ-mixing (dependence) coefficients can be expressed via the formulas (see [6], Ch. 4),

  α(G, H) = ¼ ϖ_{∞,1}(G, H),  ρ(G, H) = ϖ_{2,2}(G, H),
  φ(G, H) = ½ ϖ_{∞,∞}(G, H)  and  ψ(G, H) = ϖ_{1,∞}(G, H).

We set also

(2.2)  ϖ_{q,p}(r) = sup_{Γ,Δ: dist(Γ,Δ) ≥ r} ( |Γ ∪ Δ|^{−1} ϖ_{q,p}(F_Γ, F_Δ) )

where Γ and Δ are finite nonempty subsets of Z^ν, dist(Γ, Δ) = inf_{n∈Γ, ñ∈Δ} |n − ñ|, and we write |Γ| for the cardinality of a set Γ while, as usual, for numbers or vectors |·| will denote their absolute values or lengths. As shown in [10], imposing decay conditions on dependence coefficients which do not take into account the sizes of the sets Γ and Δ, as in (2.2), would exclude from our setup simple examples of Gibbs random fields. Define also

  α(l) = ¼ ϖ_{∞,1}(l),  ρ(l) = ϖ_{2,2}(l),  φ(l) = ϖ_{∞,∞}(l)  and  ψ(l) = ϖ_{1,∞}(l).
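The distance dist(Γ, Δ) and the normalization by |Γ ∪ Δ|^{−1} entering (2.2) are elementary to compute. The following sketch (ours; it uses the sup-norm as the metric |·|, a choice the paper leaves generic) illustrates both on two finite subsets of Z^2.

```python
from itertools import product

def dist(G, D):
    """dist(Γ, Δ) = inf over n in Γ, ñ in Δ of |n − ñ| (sup-norm chosen here for |·|)."""
    return min(max(abs(a - b) for a, b in zip(n, m)) for n, m in product(G, D))

# two finite nonempty subsets of Z^2, separated by a gap of size 3
Gamma = {(0, 0), (1, 0), (0, 1)}
Delta = {(4, 0), (5, 2)}
r = dist(Gamma, Delta)
norm = 1.0 / len(Gamma | Delta)   # the |Γ ∪ Δ|^{-1} factor from (2.2)
print(r, norm)
```

The sup in (2.2) then runs over all such pairs of sets at distance at least r; the size normalization is what keeps Gibbs-type examples inside the setup.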
Our setup includes also conditions on the approximation rate

(2.3)  β_p(r) = sup_{n∈Z^ν} ‖X(n) − E(X(n) | F_{U_r(n)})‖_p

where U_r(n) = {ñ ∈ Z^ν : |n − ñ| ≤ r} is the r-neighborhood of n in Z^ν. Furthermore, we do not require stationarity of the random field X(n), n ∈ Z^ν, assuming only that the distribution of X(n) does not depend on n and that the joint distribution of {X(n), X(n′)} depends only on n − n′, which we write for further reference as

(2.4)  X(n) ∼ μ  and  (X(n), X(n′)) ∼ μ_{n−n′}  for all n, n′,

where Y ∼ Z means that Y and Z have the same distribution.

Next, let F = F(x_1, …, x_ℓ), x_j ∈ R^℘, be a function on R^{℘ℓ} such that for some ι, K > 0, κ ∈ (0,1] and all x_i, y_i ∈ R^℘, i = 1, …, ℓ, we have

(2.5)  |F(x_1, …, x_ℓ) − F(y_1, …, y_ℓ)| ≤ K ( 1 + Σ_{j=1}^ℓ |x_j|^ι + Σ_{j=1}^ℓ |y_j|^ι ) Σ_{j=1}^ℓ |x_j − y_j|^κ

and

(2.6)  |F(x_1, …, x_ℓ)| ≤ K ( 1 + Σ_{j=1}^ℓ |x_j|^ι ).

Our assumptions on F enable us to include, for instance, products F(x_1, …, x_ℓ) = x_{11} x_{22} ⋯ x_{ℓℓ}, where x_i = (x_{i1}, …, x_{iℓ}) ∈ R^ℓ, which is sometimes useful. To simplify formulas we assume the centering condition

(2.7)  F̄ = ∫ F(x_1, …, x_ℓ) dμ(x_1) ⋯ dμ(x_ℓ) = 0,

which is not really a restriction since we can always replace F by F − F̄.

Our goal is to prove an invariance principle (functional central limit theorem) for ξ_N(t), t ∈ [0,1]^ν, defined by (1.2), where q_j(n) = jn if j = 1, 2, …, k ≤ ℓ for some given positive integers k, ℓ, and if k < ℓ then q_j : Z_+^ν → Z_+^ν, j = k+1, …, ℓ, satisfy the conditions below. Before we formulate them, observe that already the case k = ℓ is of major interest, and we add q_j's with j > k, which grow faster than linearly, mostly for the sake of completeness, which under appropriate assumptions does not cause substantial problems. We assume that |q_1(n)| < |q_2(n)| < ⋯ < |q_ℓ(n)| whenever |n| ≠ 0 and n ∈ Z_+^ν = {m = (m_1, …, m_ν) ∈ Z^ν : m_i ≥ 0 ∀i}.
Furthermore, we assume that for k+1 ≤ i ≤ ℓ,

(2.8)  inf_{|ñ| > |n|} ( |q_i(ñ)| − |q_i(n)| ) ( |ñ| − |n| )^{−1} > 0,

(2.9)  lim_{|n|→∞} inf_{ñ: ñ ≠ n} ( |q_i(ñ) − q_i(n)| − |ñ − n| ) = ∞,

(2.10)  q_i(n) ≠ q_i(ñ) if n ≠ ñ,  and  lim_{|n|→∞} min_{l < i} ( |q_i(n)| − |q_l(n)| − |n| ) = ∞,

and for any ε > 0,

(2.11)  lim_{|n|→∞} inf_{ñ: |ñ| ≥ ε|n|} min_{l < i} ( |q_i(ñ)| − |q_l(n)| − |ñ − n| ) = ∞.

For each θ > 0 set

(2.12)  γ_θ^θ = ‖X‖_θ^θ = E|X(n)|^θ = ∫ |x|^θ dμ.

Our main result relies on

2.1. Assumption. With d = (ℓ − 1)℘ there exist p, q ≥ 1, m ≥ 4 and δ > 0 with δ ≤ κ, pκ > d satisfying

(2.13)  Σ_{l=0}^∞ l^{5ν} ϖ_{q,p}(l) = θ(p, q) < ∞,

(2.14)  Σ_{r=0}^∞ r^{5ν} β_q^δ(r) < ∞,

(2.15)  γ_m < ∞,  γ_{2qι} < ∞  with  1/2 ≥ 1/p + (ι + 1)/m + δ/q.

In order to give a detailed statement of our main result, as well as for its proof, it will be essential to represent the function F = F(x_1, x_2, …, x_ℓ) in the form

(2.16)  F = F_1(x_1) + ⋯ + F_ℓ(x_1, x_2, …, x_ℓ)

where for i < ℓ,

(2.17)  F_i(x_1, …, x_i) = ∫ F(x_1, x_2, …, x_ℓ) dμ(x_{i+1}) ⋯ dμ(x_ℓ) − ∫ F(x_1, x_2, …, x_ℓ) dμ(x_i) ⋯ dμ(x_ℓ)

and

  F_ℓ(x_1, x_2, …, x_ℓ) = F(x_1, x_2, …, x_ℓ) − ∫ F(x_1, x_2, …, x_ℓ) dμ(x_ℓ),

which ensures, in particular, that

(2.18)  ∫ F_i(x_1, x_2, …, x_{i−1}, x_i) dμ(x_i) ≡ 0  ∀ x_1, x_2, …, x_{i−1}.

We write t = (t_1, …, t_ν) ≥ s = (s_1, …, s_ν) if t_i ≥ s_i for all i, and for such s, t ∈ [0,1]^ν we set Δ_N(s, t) = {n = (n_1, …, n_ν) ∈ Z^ν : Ns_i ≤ n_i ≤ Nt_i ∀i} and Δ_N(t) = Δ_N(0, t). These together with (2.16)–(2.18) enable us to represent ξ_N(t) given by (1.2) in the form

(2.19)  ξ_N(t) = Σ_{i=1}^k ξ_{i,N}(it) + Σ_{i=k+1}^ℓ ξ_{i,N}(t)

where for 1 ≤ i ≤ k,

(2.20)  ξ_{i,N}(t) = N^{−ν/2} Σ_{n∈Δ_N(t/i)} F_i(X(n), X(2n), …, X(in))

and for i ≥ k+1,

(2.21)  ξ_{i,N}(t) = N^{−ν/2} Σ_{n∈Δ_N(t)} F_i(X(q_1(n)), …, X(q_i(n))).

Next, we define a matrix (D_{i,j}, 1 ≤ i, j ≤ k) which appears in the limiting covariances formula in our main result below.
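The decomposition (2.16)–(2.18) can be verified numerically. The following sketch (ours) uses a toy discrete measure μ on three points and an arbitrary F with ℓ = 3; after centering F as in (2.7), it checks that the functions F_i built as in (2.17) sum back to F and that each F_i integrates to zero in its last variable as in (2.18).

```python
from itertools import product

# toy discrete stand-in for the paper's measure μ (weights sum to 1)
mu = {-1.0: 0.25, 0.5: 0.5, 2.0: 0.25}

def F(x1, x2, x3):
    return x1 * x2 + x3 ** 2 + 0.3 * x1   # any F satisfying (2.5)-(2.6) would do

def integrate_out(g, k):
    """Integrate the last k arguments of g against μ, returning a function of the rest."""
    def h(*xs):
        total = 0.0
        for ys in product(mu, repeat=k):
            w = 1.0
            for y in ys:
                w *= mu[y]
            total += w * g(*xs, *ys)
        return total
    return h

F_bar = integrate_out(F, 3)()                     # \bar F = ∫ F dμ(x_1)···dμ(x_3)
F0 = lambda x1, x2, x3: F(x1, x2, x3) - F_bar     # centered F, so (2.7) holds

# F_i per (2.17): ∫F dμ(x_{i+1})···dμ(x_ℓ) − ∫F dμ(x_i)···dμ(x_ℓ); F_ℓ = F − ∫F dμ(x_ℓ)
F1 = lambda x1: integrate_out(F0, 2)(x1) - integrate_out(F0, 3)()
F2 = lambda x1, x2: integrate_out(F0, 1)(x1, x2) - integrate_out(F0, 2)(x1)
F3 = lambda x1, x2, x3: F0(x1, x2, x3) - integrate_out(F0, 1)(x1, x2)

# (2.16): F_1 + F_2 + F_3 reproduces the centered F at every point
decomp_ok = all(abs(F1(a) + F2(a, b) + F3(a, b, c) - F0(a, b, c)) < 1e-12
                for a, b, c in product(mu, repeat=3))

# (2.18): ∫ F_i(x_1, …, x_{i−1}, x_i) dμ(x_i) ≡ 0 for every fixed x_1, …, x_{i−1}
cond_ok = (abs(integrate_out(F1, 1)()) < 1e-12
           and all(abs(integrate_out(F2, 1)(a)) < 1e-12 for a in mu)
           and all(abs(integrate_out(F3, 1)(a, b)) < 1e-12 for a, b in product(mu, repeat=2)))

print(decomp_ok, cond_ok)
```

Property (2.18) is what makes the summands in (2.20)–(2.21) behave like martingale-difference-type terms in the last coordinate, which is the point of the decomposition.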
For any i, j ≤ k set

(2.22)  D_{i,j} = Σ_{u∈Z^ν} c_{i,j}(u)

where υ is the greatest common divisor of i and j, c_{i,j}(u) = 0 if υ does not divide all components of u ∈ Z^ν, and for i′ = i/υ, j′ = j/υ,

(2.23)  c_{i,j}(υu) = ∫ F_i(x_1, …, x_i) F_j(y_1, …, y_j) Π_{α=1}^υ dμ_{αu}(x_{αi′}, y_{αj′}) Π_{σ∉{i′,2i′,…,υi′}} dμ(x_σ) Π_{σ′∉{j′,2j′,…,υj′}} dμ(y_{σ′})

with μ_0 being the diagonal measure, i.e. ∫ f(x, y) dμ_0(x, y) = ∫ f(x, x) dμ(x).

2.2. Theorem. Suppose that the conditions (2.4)–(2.11) and Assumption 2.1 hold true. Then each random field ξ_{i,N}(t), i = 1, 2, …, ℓ, converges in distribution as N → ∞ to a Gaussian random field η_i(t). Moreover, (η_1(t), η_2(t), …, η_ℓ(t)) is an ℓ-dimensional Gaussian random field such that the η_i(t), i ≤ k, have covariances

  E η_i(s) η_j(t) = D_{i,j} Π_{l=1}^ν min(s_l, t_l),  i, j ≤ k,

with the matrix D_{i,j} defined by (2.22), while the random fields η_i(t), i ≥ k+1, are independent of each other and of the η_j's with j ≤ k and have variances

  E|η_i(t)|^2 = ∫ |F_i(x_1, x_2, …, x_i)|^2 dμ(x_1) dμ(x_2) ⋯ dμ(x_i),  i ≥ k+1.

Finally, ξ_N(t) converges in distribution to a Gaussian random field ξ(t) which can be represented in the form

(2.24)  ξ(t) = Σ_{i=1}^k η_i(it) + Σ_{i=k+1}^ℓ η_i(t).

In order to understand our assumptions, observe that ϖ_{q,p} is clearly non-increasing in q and non-decreasing in p. Hence, for any pair p, q ≥ 1,

  ϖ_{q,p}(n) ≤ ψ(n).

Furthermore, by the real version of the Riesz–Thorin interpolation theorem or the Riesz convexity theorem (see [13], Section 9.3 and [11], Section VI.10.11), whenever θ ∈ [0,1], 1 ≤ p_0, p_1, q_0, q_1 ≤ ∞ and

  1/p = (1 − θ)/p_0 + θ/p_1,  1/q = (1 − θ)/q_0 + θ/q_1,

then

(2.25)  ϖ_{q,p}(n) ≤ 2 ( ϖ_{q_0,p_0}(n) )^{1−θ} ( ϖ_{q_1,p_1}(n) )^θ.

In particular, using the obvious bound ϖ_{q_1,p_1} ≤ 2, valid for any q_1 ≥ p_1, we obtain from (2.25) for the pairs (∞, 1), (2, 2) and (∞, ∞) that for all q ≥ p ≥ 1,

(2.26)  ϖ_{q,p}(n) ≤ (2α(n))^{1/p − 1/q},  ϖ_{q,p}(n) ≤ 2^{1 + 1/p − 1/q} (ρ(n))^{1 − 1/p + 1/q}  and  ϖ_{q,p}(n) ≤ 2^{1 + 1/p} (φ(n))^{1 − 1/p}.
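To indicate where the exponent 1/p − 1/q in the first bound of (2.26) comes from, one may specialize (2.25) as follows (a sketch of ours; the constants are not optimized to match (2.26) exactly):

```latex
\text{Take } (q_0,p_0)=(\infty,1),\ q_1=p_1 \text{ and } \theta\in[0,1] \text{ with}
\quad \frac{1}{p}=(1-\theta)+\frac{\theta}{p_1},\qquad \frac{1}{q}=\frac{\theta}{q_1},
\quad\text{so that}\quad 1-\theta=\frac{1}{p}-\frac{1}{q}.
\text{Then (2.25), the identity } \varpi_{\infty,1}(n)=4\alpha(n)
\text{ and the bound } \varpi_{q_1,p_1}(n)\le 2 \text{ yield}
\quad \varpi_{q,p}(n)\le 2\,\bigl(4\alpha(n)\bigr)^{\frac{1}{p}-\frac{1}{q}}\,2^{\theta},
\text{i.e. a bound of the form } c\,(\alpha(n))^{1/p-1/q}.
```

The ρ and φ bounds in (2.26) arise the same way from the pairs (2, 2) and (∞, ∞).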
We observe also that, by the Hölder inequality, for q ≥ p ≥ 1 and α ∈ (0, p/q),

(2.27)  β_q(r) ≤ 2^{1−α} [β_p(r)]^α γ_{pq(1−α)/(p−qα)}^{1−α}

with γ_θ defined in (2.12). Thus, we can formulate Assumption 2.1 in terms of the more familiar α, ρ, φ and ψ-mixing coefficients and with various moment conditions. It follows also from (2.25) that if ϖ_{q,p}(n) → 0 as n → ∞ for some q ≥ p ≥ 1, then

(2.28)  ϖ_{q,p}(n) → 0 as n → ∞ for all q > p ≥ 1,

and so (2.28) holds true under Assumption 2.1.

In order to prove Theorem 2.2 we will represent ξ_{i,N}(t) in the form Σ_{1≤l≤N} Z_{t,N}(l), where now l is one dimensional, which together with the estimates of the next section will enable us to apply central limit theorems for mixingale arrays (see [19] and [20]). This will lead to Gaussian one dimensional distributions in the limit, but combining this with a kind of Cramér–Wold argument, the covariances computation in Section 4 and the tightness estimates of Section 5 will yield appropriate Gaussian random fields, as asserted in the theorem. Recall (see [17]) that already in the one parameter case ν = 1 the process ξ(t), in general, does not have independent increments, so also in the random field case ξ(t), in general, is not a multiparameter Brownian motion.

2.3. Remark. As a part of the tightness estimates of Section 5 we will see that sup_{N≥1, t∈[0,1]^ν} E|ξ_{i,N}(t)|^4 ≤ C < ∞. Hence, applying the Borel–Cantelli lemma we obtain as a byproduct that if S_{i,N}(t) = N^{ν/2} ξ_{i,N}(t) and S_N(t) = N^{ν/2} Σ_{i=1}^ℓ ξ_{i,N}(t), then with probability one

  lim_{N→∞} N^{−ν} S_{i,N}(t) = 0 for each i,  and so  lim_{N→∞} N^{−ν} S_N(t) = 0.

Still, we observe that this strong law of large numbers can be obtained here under more general circumstances since, in particular, it does not need the convergence of covariances derived in Section 4, which requires, for instance, more specific assumptions on the q_j's.

3. Blocks and mixingale type estimates

We rely on the following result, which appears as Corollary 3.6 in [17].

3.1. Proposition.
Let G and H be σ-subalgebras on a probability space (Ω, F, P), let X and Y be d-dimensional random vectors and let f = f(x, ω), x ∈ R^d, be a collection of random variables measurable with respect to H and satisfying

(3.1)  ‖f(x, ω) − f(y, ω)‖_q ≤ C(1 + |x|^ι + |y|^ι)|x − y|^κ  and  ‖f(x, ω)‖_q ≤ C(1 + |x|^ι)

where q ≥ 1. Set g(x) = Ef(x, ω). Then

(3.2)  ‖E(f(X, ·) | G) − g(X)‖_υ ≤ c (1 + ‖X‖_{b(ι+2)}^{ι+2}) ( ϖ_{q,p}(G, H) + ‖X − E(X | G)‖_q^δ )

provided κ − d/p > 0 and 1/υ ≥ 1/p + 1/b + (1 + δ)/q, with c = c(C, ι, κ, δ, p, q, υ, d) > 0 depending only on the parameters in brackets. Moreover, let x = (w, z) and X = (W, Z), where W and Z are d_1- and (d − d_1)-dimensional random vectors, respectively, and let f(x, ω) = f(w, z, ω) satisfy (3.1) in x = (w, z). Set g̃(w) = Ef(w, Z(ω), ω). Then

(3.3)  ‖E(f(W, Z, ·) | G) − g̃(W)‖_υ ≤ c (1 + ‖X‖_{b(ι+2)}^{ι+2}) ( ϖ_{q,p}(G, H) + ‖W − E(W | G)‖_q^δ + ‖Z − E(Z | H)‖_q^δ ).

We will use the following notations:

(3.4)  F_{i,n,r} = F_{i,n,r}(x_1, x_2, …, x_{i−1}, ω) = E( F_i(x_1, x_2, …, x_{i−1}, X(n)) | F_{U_r(n)} ),
  Y_i(q_i(n)) = F_i(X(q_1(n)), …, X(q_i(n)))  and  Y_i(m) = 0 if m ≠ q_i(n) for any n,
  X_r(n) = E(X(n) | F_{U_r(n)}),  Y_{i,r}(q_i(n)) = F_{i,q_i(n),r}(X_r(q_1(n)), …, X_r(q_{i−1}(n)), ω)  and  Y_{i,r}(m) = 0 if m ≠ q_i(n) for any n;
  Ȳ_i(n) = Y_i(n) − EY_i(n),  Ȳ_{i,r}(n) = Y_{i,r}(n) − EY_{i,r}(n).

For each l ∈ Z_+ introduce the cubes □(l) = {n = (n_1, …, n_ν) ∈ Z^ν : 0 ≤ n_i ≤ l for i = 1, …, ν}, and for l < l̃ we set also Υ(l, l̃) = □(l̃) \ □(l). Fix some positive numbers satisfying (5/11)(τ + 1) < 3η < τ < 1 and set a(1) = 0, b(1) = 1 and, for j > 1,

(3.5)  a(j) = b(j − 1) + [(j − 1)^{2η}],  b(j) = a(j) + [j^τ]  and  r(j) = [j^η].

Set Δ_N^{(i)}(t) = Δ_N(t/i) if i ≤ k and Δ_N^{(i)}(t) = Δ_N(t) if i ≥ k + 1.
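The block scheme (3.5) is easy to tabulate. The sketch below (ours, with an arbitrary admissible choice of the exponents; the constraint (5/11)(τ+1) < 3η < τ < 1 forces τ > 5/6) builds a(j), b(j), r(j) and checks numerically the bound L(N) ≤ 2N^{1/(1+τ)} + 1 stated in (3.14).

```python
# Hypothetical admissible exponents for the block construction (3.5)
tau, eta = 0.9, 0.29
assert 5 * (tau + 1) / 11 < 3 * eta < tau < 1

def blocks(J):
    """Return lists a, b, r with a[j], b[j], r[j] per (3.5) for j = 1..J (index 0 unused)."""
    a, b, r = [None, 0], [None, 1], [None, 1]
    for j in range(2, J + 1):
        a.append(b[j - 1] + int((j - 1) ** (2 * eta)))   # a(j) = b(j-1) + [(j-1)^{2η}]
        b.append(a[j] + int(j ** tau))                   # b(j) = a(j) + [j^τ]
        r.append(int(j ** eta))                          # r(j) = [j^η]
    return a, b, r

N = 10_000
a, b, r = blocks(400)
L = next(j for j in range(1, 399) if a[j + 1] >= N)     # L(N) = min{j : a(j+1) ≥ N}
print(L, L <= 2 * N ** (1 / (1 + tau)) + 1)
```

The gaps Υ(b(l), a(l+1)) have side [l^{2η}], which is of strictly lower order than the block side [l^τ] since 2η < τ; this is what makes the gap contributions negligible later on.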
We now define

(3.6)  V_{i,t,N}(l) = Σ_{n ∈ Δ_N^{(i)}(t) ∩ Υ(a(l), b(l))} Y_{i,r(l)}(q_i(n))
  and  W_{i,t,N}(l) = Σ_{n ∈ Δ_N^{(i)}(t) ∩ Υ(b(l), a(l+1))} Y_{i,r(l)}(q_i(n)).

The sets Υ(b(l), a(l+1)) will play the role of gaps between Υ(a(l), b(l)) and Υ(a(l+1), b(l+1)), and we will see that the random variables W_{i,t,N}(l) can be disregarded for our purposes, while dealing with the random variables V_{i,t,N}(l) we will take advantage of our mixing conditions in order to show that their centered versions V̄_{i,t,N}(l) = V_{i,t,N}(l) − EV_{i,t,N}(l) satisfy mixingale estimates (see [19] and [20]) with respect to the nested family of σ-algebras G_l^{(i)} = F_{Γ_i(l)}, l = 0, 1, 2, …, where

  Γ_i(l) = { n ∈ Z_+^ν : dist( n, ∪_{j≤i} q_j(□(b(l))) ) ≤ r(l) }

and we take G_l^{(i)} to be the trivial σ-algebra {∅, Ω} for l < 0. Namely, for any u ∈ N we have

(3.7)  ‖E( V̄_{i,t,N}(l) | G_{l−u}^{(i)} )‖_2 ≤ Σ_{n ∈ Δ_N(t) ∩ Υ(a(l), b(l))} ‖E( Ȳ_{i,r(l)}(q_i(n)) | G_{l−u}^{(i)} )‖_2
  ≤ |Υ(a(l), b(l))| max_{n ∈ Υ(a(l), b(l))} ‖E( Ȳ_{i,r(l)}(q_i(n)) | G_{l−u}^{(i)} )‖_2

where |A| for a set A denotes its cardinality. Next, for u > l,

(3.8)  E( Ȳ_{i,r(l)}(q_i(n)) | G_{l−u}^{(i)} ) = 0,

while for all u ≥ 2, by the Cauchy inequality and the contraction property of conditional expectations,

(3.9)  ‖E( Ȳ_{i,r(l)}(q_i(n)) | G_{l−u}^{(i)} )‖_2 ≤ 2 ‖E( Y_{i,r(l)}(q_i(n)) | G_{l−2}^{(i)} )‖_2.

Observe that if n ∈ Υ(a(l), b(l)) then X = ( X_{r(l)}(q_1(n)), …, X_{r(l)}(q_{i−1}(n)) ) is G_l^{(i−1)}-measurable, and for large l we obtain also, by the definition of q_i for i ≤ k and by (2.8) and (2.11) for i > k, that

(3.10)  dist( Γ_i(l − 2) ∪ Γ_{i−1}(l), q_i(n) ) ≥ (l − 1)^τ,

taking into account that a(l) > (1 + τ)^{−1} (l − 1)^{1+τ} − (l − 1). We can write also that

(3.11)  b(l) ≤ 2(1 + τ)^{−1} (l + 1)^{1+τ},  |□(l)| = (l + 1)^ν  and  |Υ(l̃, l)| ≤ ν (l − l̃)(l + 1)^{ν−1} if l > l̃.
Thus, applying Proposition 3.1 to the right hand side of (3.9) with G = F_{Γ_i(l−2) ∪ Γ_{i−1}(l)} and H = F_{U_{r(l)}(q_i(n))}, together with (3.10), (3.11), Assumption 2.1 and the contraction property of conditional expectations, we obtain that

(3.12)  ‖E( Y_{i,r(l)}(q_i(n)) | G_{l−2}^{(i)} )‖_2 ≤ ‖E( Y_{i,r(l)}(q_i(n)) | G )‖_2 ≤ C ϖ_{q,p}(G, H) ≤ C̃ l^{ν(τ+η+1)} ϖ_{q,p}((l − 1)^τ − l^η)

for some C, C̃ > 0 independent of n, l, provided p, q and δ satisfy the conditions of Assumption 2.1. Collecting (3.6)–(3.9), (3.11) and (3.12) we obtain that for any u ≥ 2,

(3.13)  ‖E( V̄_{i,t,N}(l) | G_{l−u}^{(i)} )‖_2 ≤ C̃ 2ν(2 + τ) l^{ν(2τ+η+2)−1} ϖ_{q,p}((l − 1)^τ − l^η).

In order to incorporate (3.13) into the setup of mixingale arrays from [20] we consider the triangular array V̂_{i,t,N}(l) = N^{−ν/2} V̄_{i,t,N}(l), l = 1, 2, …, L(N); N = 1, 2, …, where L(N) = min{ j : a(j + 1) ≥ N }. Observe that

  N ≥ Σ_{1≤j≤L(N)−1} [j^τ] ≥ (1 + τ)^{−1} (L(N) − 1)^{1+τ},

and so

(3.14)  L(N) ≤ (N(1 + τ))^{1/(1+τ)} + 1 ≤ 2N^{1/(1+τ)} + 1.

Employing Lemma 4.2 from the next section together with (2.15), (3.11) and (3.14) we obtain that

(3.15)  ‖V_{i,t,N}(l)‖_2^2 ≤ C |Υ(a(l), b(l))| ≤ 2Cν(1 + τ)^{−1} l^τ (l + 1)^{(1+τ)(ν−1)} ≤ C_1 N^{ν − 1/(1+τ)}

for some C, C_1 > 0 independent of l ≤ L(N), N, i and t. Observe that by the choice of τ and η we have ν(2τ + η + 2) − 1 − 5ντ < −1, which together with (2.13) and (3.13) enables us to write

(3.16)  ‖E( V̄_{i,t,N}(l) | G_{l−u}^{(i)} )‖_2 ≤ C_2 l^{−1} ≤ C_2 u^{−1}

for some C_2 ≥ 1 independent of l, N, i, t and u = 2, 3, …, l. Set ψ_j = C_2 (max(1, j))^{−1} for j = 0, 1, 2, … and σ_{l,N} = C_1 N^{−1/(2(1+τ))} for l = 1, 2, …, L(N). Then (3.8), (3.15) and (3.16) yield the first standard mixingale estimate

(3.17)  ‖E( V̂_{i,t,N}(l) | G_{l−u}^{(i)} )‖_2 ≤ σ_{l,N} ψ_u

for all u = 0, 1, 2, …, and the conditions imposed on the ψ_j's and σ_{l,N}'s in [20] can be easily verified as well. The second standard mixingale estimate (see [20]) is trivial in our case, since V̄_{i,t,N}(l) is G_{l+u}^{(i)}-measurable for any u ≥ 1, and so

(3.18)  E( V̄_{i,t,N}(l) | G_{l+u}^{(i)} ) − V̄_{i,t,N}(l) = 0.

Next, we estimate the contribution of the small blocks (gaps) W_{i,t,N}(j), j ≥ 1.
Set

  Γ̂_i(l) = { n ∈ Z_+^ν : dist( n, ∪_{j≤i} q_j(□(a(l + 1))) ) ≤ r(l) },

G = F_{Γ̂_{i−1}(l) ∪ Γ̂_i(l−2)} and H = F_{U_{r(l)}(q_i(n))}, where n ∈ Υ(b(l), a(l + 1)). Observe that by the properties of the q_j's, for any such n and large enough l,

  dist( Γ̂_{i−1}(l) ∪ Γ̂_i(l − 2), q_i(n) ) ≥ l^τ.

Furthermore, if j ≤ l − 2 then W_{i,t,N}(j) is F_{Γ̂_i(l−2)}-measurable, and so, employing Proposition 3.1 with such G and H, we obtain similarly to (3.12), relying also on the Cauchy inequality, that

(3.19)  |E W_{i,t,N}(j) Y_{i,r(l)}(q_i(n))| = |E( W_{i,t,N}(j) E( Y_{i,r(l)}(q_i(n)) | G ) )| ≤ C_1 ‖W_{i,t,N}(j)‖_2 l^{ν(τ+η+1)} ϖ_{q,p}(l^τ − l^η)

for some C_1 > 0 independent of t, N, n, l and j ≤ l − 2. Hence, by (3.6) and (3.11),

(3.20)  |E W_{i,t,N}(j) W_{i,t,N}(l)| = |E( W_{i,t,N}(j) E( W_{i,t,N}(l) | G ) )|
  ≤ |Υ(b(l), a(l + 1))| max_{n ∈ Υ(b(l), a(l+1))} |E( W_{i,t,N}(j) E( Y_{i,r(l)}(q_i(n)) | G ) )|
  ≤ C_2 ‖W_{i,t,N}(j)‖_2 l^{ν(2τ+η+2)+2η−τ−1} ϖ_{q,p}(l^τ − l^η)

for some C_2 > 0 independent of t, N, n, l and j ≤ l − 2. Now we can write

(3.21)  E( Σ_{j=0}^{L(N)} W_{i,t,N}(j) )^2 ≤ Σ_{j=0}^{L(N)} ( 3 E W_{i,t,N}^2(j) + 2 Σ_{l=j+2}^{L(N)} |E W_{i,t,N}(j) W_{i,t,N}(l)| ) ≤ Σ_{j=0}^{L(N)} ( 3 ‖W_{i,t,N}(j)‖_2^2 + C_3 ‖W_{i,t,N}(j)‖_2 )

where, by (2.13), (3.20) and the choice of τ and η,

  C_3 = C_2 Σ_{1≤l≤L(N)} l^{ν(2τ+η+2)} ϖ_{q,p}(l^τ − l^η) ≤ C_2 Σ_{l=1}^∞ l^{5ντ} ϖ_{q,p}(l^τ − l^η) < ∞.

Relying on (2.3), (2.5), (2.15) and the Hölder inequality, we can estimate the error of replacement of Y_i(q_i(n)) by its r(j)-approximation Y_{i,r(j)}(q_i(n)) (see Lemma 3.12 in [17]):

(3.22)  ‖Y_i(q_i(n)) − Y_{i,r(j)}(q_i(n))‖_2 ≤ C_4 β_q^δ(r(j))

for some C_4 > 0 independent of i, j and n. Now set

  ζ_{i,N}(t) = N^{−ν/2} Σ_{1≤l≤L(N)} V_{i,t,N}(l).

Then, by (3.11), (3.21) and (3.22),

(3.23)  ‖ξ_{i,N}(t) − ζ_{i,N}(t)‖_2 ≤ C_5 N^{−ν/2} ( Σ_{1≤l≤L(N)} l^{ν(τ+1)−1} β_q^δ([l^η]) + ( Σ_{0≤l≤L(N)} ( ‖W_{i,t,N}(l)‖_2^2 + ‖W_{i,t,N}(l)‖_2 ) )^{1/2} ).
It follows from (2.15) and Lemma 4.2 of the next section that

(3.24)  ‖W_{i,t,N}(l)‖_2^2 = O(|Υ(b(l), a(l + 1))|).

By Assumption 2.1 and the choice of η and τ we obtain from (2.14), (3.11), (3.14), (3.23) and (3.24) that

(3.25)  ‖ξ_{i,N}(t) − ζ_{i,N}(t)‖_2 ≤ C_6 N^{−ν/2} ( Σ_{1≤l≤L(N)} l^{ν(τ+1−5η)−1} + ( 2 Σ_{1≤l≤L(N)} l^{(ν−1)(τ+1)+2η} )^{1/2} )
  ≤ C_7 ( N^{ν(1/2 − 5η/(1+τ))} + N^{(2η−τ)/(2(1+τ))} ) → 0 as N → ∞,

and so for each t the limits in distribution as N → ∞ of ξ_{i,N}(t) and of ζ_{i,N}(t) coincide (if they exist). Observe that, similarly to (3.7), (3.12) and (3.13), it follows from (2.13) and (3.11), together with the contraction property of conditional expectations, that

(3.26)  |E ζ_{i,N}(t)| ≤ N^{−ν/2} Σ_{1≤l≤L(N)} |E V_{i,t,N}(l)| ≤ C_8 N^{−ν/2} Σ_{1≤l≤L(N)} l^{ν(2τ+η+2)−1} ϖ_{q,p}((l − 1)^τ − l^η)
  ≤ C_9 N^{−ν/2} Σ_{1≤l≤L(N)} l^{ν(−3τ+η+2)−1} ≤ C_{10} N^{−11ν/18}

for some C_8, C_9, C_{10} > 0 independent of N, and so |E ζ_{i,N}(t)| → 0 as N → ∞. Hence, the Gaussian limiting behavior of ζ_{i,N}(t) − E ζ_{i,N}(t), which we will derive via mixingale limit theorems, yields the same Gaussian limiting behavior of ζ_{i,N}(t), and so, in view of (3.25), the same holds true for ξ_{i,N}(t) as well.

4. Limiting covariances

The first step in our limiting covariances computations is the following estimate of

  b_{i,j}(m, n) = E Y_i(q_i(m)) Y_j(q_j(n)),  m, n ∈ Z_+^ν,

where Y_i(q_i(n)) was defined in (3.4).

4.1. Lemma. (i) For i, j = 1, …, ℓ and any m, n ∈ Z_+^ν set

(4.1)  ŝ_{i,j}(m, n) = min( min_{1≤l≤j} |q_i(m) − q_l(n)|, |m| )

and s_{i,j}(m, n) = max( ŝ_{i,j}(m, n), ŝ_{j,i}(n, m) ). Then for all i, j ≤ k,

(4.2)  s_{i,j}(m, n) ≥ (1/(4k^2)) |im − jn|.

Furthermore, if i ≥ k + 1, then for any ε > 0 there exists M_ε > 0 such that if max(|m|, |n|) ≥ M_ε and m ≠ n, then

(4.3)  s_{i,i}(m, n) ≥ min( |m − n| + ε^{−1}, max(|m|, |n|) ) ≥ ½ |m − n|.
(ii) There exists a function h(l) ≥ 0 defined on the integers such that Σ_{l=1}^∞ l^{4ν} h(l) < ∞ and, for any i, j = 1, 2, …, ℓ and l = 0, 1, 2, …,

(4.4)  sup_{m,n ∈ Z_+^ν : s_{i,j}(m,n) ≥ l} |b_{i,j}(m, n)| ≤ h(l).

Proof. (i) Let i, j ≤ k and set u = |im − jn|. Then i|m| + j|n| ≥ u, and so either |m| ≥ u/(2i) or |n| ≥ u/(2j). Suppose, for instance, that |m| ≥ u/(2i). If ŝ_{i,j}(m, n) ≥ u/(4k^2) then s_{i,j}(m, n) ≥ u/(4k^2) and we are done. So assume that ŝ_{i,j}(m, n) < u/(4k^2). Then min_{1≤l≤j} |im − ln| < u/(4k^2), and so |im − l̂n| < u/(4k^2) for some l̂ ≤ j. Then ||im| − |l̂n|| < u/(4k^2), whence

  |n| > (1/l̂)( |im| − u/(4k^2) ) ≥ u/(2l̂) − u/(4k^2) ≥ u/(4k).

Next, let min_{1≤l<i} |jn − lm| = |jn − l̃m|, l̃ < i. Then

  |jn − l̃m| = | (j/l̂) l̂n − (j/l̂) im + (j/l̂) im − l̃m | ≥ ( ji/l̂ − l̃ ) |m| − (j/l̂) |l̂n − im| ≥ |m| − k u/(4k^2) ≥ u/(2i) − u/(4k) ≥ u/(4k).

It follows that ŝ_{j,i}(n, m) ≥ u/(4k), and so by the above either ŝ_{i,j}(m, n) ≥ u/(4k^2) or ŝ_{j,i}(n, m) ≥ u/(4k), whence s_{i,j}(m, n) ≥ u/(4k^2). The case |n| ≥ u/(2j) is dealt with in the same way, exchanging i and j as well as m and n in the above argument. In order to obtain (4.3) we rely on the definition (4.1) together with the assumptions (2.9) and (2.11).

(ii) By (2.10) there exists M such that |q_i(m) − q_l(m)| ≥ |m| for all l < i, provided |m| ≥ M. Hence, for such m,

  min( min_{1≤l≤j} |q_i(m) − q_l(n)|, min_{1≤l<i} |q_i(m) − q_l(m)| ) ≥ ŝ_{i,j}(m, n).

Assume that s_{i,j}(m, n) = ŝ_{i,j}(m, n) ≥ 2r and set

  b_{i,j}^{(r)}(m, n) = E Y_{i,r}(q_i(m)) Y_{j,r}(q_j(n)).
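The inequality (4.2) is purely arithmetic, so it can be brute-force checked in the linear case q_l(n) = ln, l ≤ k, which is the case it covers. The sketch below (ours; it again fixes the sup-norm for |·|) verifies (4.2) over a small grid in Z^2.

```python
from itertools import product

k, nu = 3, 2   # linear maps q_l(n) = l*n for l = 1..k, the case i, j <= k of Lemma 4.1

def norm(v):
    return max(abs(c) for c in v)   # sup-norm as the metric |·| (our choice)

def scale(l, n):
    return tuple(l * c for c in n)

def s_hat(i, j, m, n):
    """ŝ_{i,j}(m, n) from (4.1) with q_l(n) = ln."""
    return min(min(norm(tuple(a - b for a, b in zip(scale(i, m), scale(l, n))))
                   for l in range(1, j + 1)),
               norm(m))

def s(i, j, m, n):
    return max(s_hat(i, j, m, n), s_hat(j, i, n, m))

grid = list(product(range(4), repeat=nu))
ok = all(s(i, j, m, n) >= norm(tuple(i * a - j * b for a, b in zip(m, n))) / (4 * k ** 2)
         for i, j in product(range(1, k + 1), repeat=2)
         for m, n in product(grid, repeat=2))
print(ok)
```

Exhausting all i, j ≤ k and all m, n in the grid confirms the lower bound s_{i,j}(m, n) ≥ |im − jn|/(4k²) claimed in (4.2).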
