A central limit theorem for Lebesgue integrals of random fields

Jürgen Kampf

January 5, 2016

arXiv:1601.00513v1 [math.PR] 4 Jan 2016

Abstract

In this paper we show a central limit theorem for Lebesgue integrals of stationary BL(θ)-dependent random fields as the integration domain grows in Van Hove-sense. Our method is to use the (known) analogue result for discrete sums. As applications we obtain various multivariate versions of this central limit theorem.

1 Introduction

Random fields are collections of random variables indexed by the Euclidean space $\mathbb{R}^d$. They have applications in various branches of science, e.g. in medicine [1, 13], in geostatistics [5, 15] or in materials science [10, 14].

The aim of the present paper is to establish a central limit theorem for integrals $\int_{W_n} X(t)\,dt$, where $(W_n)_{n\in\mathbb{N}}$ is a sequence of compact subsets of $\mathbb{R}^d$ and $(X(t))_{t\in\mathbb{R}^d}$ is a random field. The sequence $(W_n)_{n\in\mathbb{N}}$ of integration domains is assumed to grow in Van Hove-sense (VH-sense), i.e.
$$\lim_{n\to\infty}\lambda_d\big((\operatorname{bd}W_n)+B^d\big)/\lambda_d(W_n)=0,$$
where $\lambda_d$ denotes the Lebesgue measure, $\operatorname{bd}W$ is the boundary of $W\subseteq\mathbb{R}^d$, $A+B:=\{a+b\mid a\in A,\ b\in B\}$ for two subsets $A,B\subseteq\mathbb{R}^d$ and $B^d:=\{x\in\mathbb{R}^d\mid\|x\|\le1\}$ is the closed Euclidean unit ball.

The main result of the present paper is the following (the notion of BL(θ)-dependence will be defined in Subsection 2.1).

Theorem 1. Let $\theta=(\theta_r)_{r\in\mathbb{N}}$ be a monotonically decreasing zero sequence. Let $(X(t))_{t\in\mathbb{R}^d}$ be a measurable, stationary, BL(θ)-dependent $\mathbb{R}$-valued random field such that
$$\int_{\mathbb{R}^d}\big|\operatorname{Cov}\big(X(0),X(t)\big)\big|\,dt<\infty.$$
Let $(W_n)_{n\in\mathbb{N}}$ be a VH-growing sequence of subsets of $\mathbb{R}^d$. Then
$$\frac{\int_{W_n}X(t)\,dt-EX(0)\,\lambda_d(W_n)}{\sqrt{\lambda_d(W_n)}}\to\mathcal N(0,\sigma^2),\qquad n\to\infty,$$
in distribution, where
$$\sigma^2:=\int_{\mathbb{R}^d}\operatorname{Cov}\big(X(0),X(t)\big)\,dt.$$

There is a wide literature on similar results, where mixing conditions are assumed instead of BL(θ)-dependence, see e.g. [4, 7, 8, 9]. For BL(θ)-dependent random fields there have been no central limit theorems for Lebesgue integrals up to now. However, there are such results for discrete sums [2] and for Lebesgue measures of excursion sets [3]. In the latter paper the random field is in fact assumed to be quasi-associated, which is a slightly stronger assumption than BL(θ)-dependence.

This paper is organized as follows: In Section 2 we collect preliminaries about associated random variables, random fields and functions of bounded variation. Section 3 is devoted to the proof of the main theorem. In Section 4 we present several examples of how the main result can be extended to a multivariate central limit theorem. The case that the random field is of the form $(f(X(t)))_{t\in\mathbb{R}^d}$ for some deterministic function $f:\mathbb{R}\to\mathbb{R}^s$ and some random $\mathbb{R}$-valued field $(X(t))_{t\in\mathbb{R}^d}$ will be of particular interest.
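As a quick illustration of the normalisation in Theorem 1, consider, for $d=1$, the stationary field $X(t):=\varepsilon_{\lfloor t+U\rfloor}$, where $U$ is uniform on $[0,1)$ and $(\varepsilon_j)_{j\in\mathbb{Z}}$ are i.i.d. standard normal variables independent of $U$. Then $EX(0)=0$, $\operatorname{Cov}(X(0),X(t))=\max(0,1-|t|)$ and $\sigma^2=1$. The following minimal Python sketch simulates $\int_0^n X(t)\,dt/\sqrt{n}$ over the windows $W_n=[0,n]$ and compares its empirical variance with $\sigma^2$; the field, the windows and all identifiers are chosen here purely for illustration, and the sketch does not verify the dependence assumptions of the theorem for this field (it only illustrates the scaling and the limiting variance).

import numpy as np

rng = np.random.default_rng(0)

def window_integral(n, rng):
    # Exact value of int_0^n X(t) dt for one realisation of X(t) = eps_{floor(t+U)}:
    # the cell [0, 1-U) carries eps_0, the n-1 interior unit cells carry eps_1, ..., eps_{n-1},
    # and the final piece of length U carries eps_n.
    U = rng.uniform()
    eps = rng.standard_normal(n + 1)
    return (1.0 - U) * eps[0] + eps[1:n].sum() + U * eps[n]

n, reps = 400, 20000
vals = np.array([window_integral(n, rng) for _ in range(reps)]) / np.sqrt(n)
print("empirical mean:    ", vals.mean())  # close to E X(0) = 0
print("empirical variance:", vals.var())   # close to sigma^2 = 1

Since values of this field at points more than distance 1 apart are independent, the example is 1-dependent; its only purpose is to make the statement of Theorem 1 concrete.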
2 Preliminaries

2.1 Association concepts

In this subsection we introduce different association concepts and discuss their relations. We start with the broadest one appearing in this paper, namely BL(θ)-dependence.

For finite subsets $I,J\subseteq\mathbb{R}^d$ we put $\operatorname{dist}(I,J):=\min\{\|x-y\|_1 : x\in I,\ y\in J\}$, where $\|\cdot\|_1$ is the $\ell_1$-norm. For two Lipschitz functions $f:\mathbb{R}^{n_1}\to\mathbb{R}$ and $g:\mathbb{R}^{n_2}\to\mathbb{R}$ we put
$$\Psi(n_1,n_2,f,g)=\min\{n_1,n_2\}\operatorname{Lip}(f)\operatorname{Lip}(g),$$
where
$$\operatorname{Lip}(f):=\sup\Big\{\frac{|f(x)-f(y)|}{\|x-y\|_1}\ \Big|\ x,y\in\mathbb{R}^n,\ x\ne y\Big\}$$
denotes the (optimal) Lipschitz constant of a Lipschitz function $f:\mathbb{R}^n\to\mathbb{R}$.

For a random field $(X(t))_{t\in\mathbb{R}^d}$, a finite subset $I=\{t_1,\dots,t_n\}\subseteq\mathbb{R}^d$ with $n$ elements and a function $f$ on $\mathbb{R}^n$ we abbreviate $f(X_I):=f(X(t_1),\dots,X(t_n))$. If such an abbreviation $X_I$ appears more than once within one formula, then always the same enumeration of the elements of $I$ has to be used.

For a set $M$ let $\#M$ denote the number of elements of $M$. Furthermore, for $\Delta>0$ we put
$$T(\Delta):=\{(j_1/\Delta,\dots,j_d/\Delta)\mid(j_1,\dots,j_d)\in\mathbb{Z}^d\}.$$

Definition 2. Let $\theta=(\theta_r)_{r\in\mathbb{N}}$ be a monotonically decreasing sequence with $\lim_{r\to\infty}\theta_r=0$.

(i) An $\mathbb{R}^s$-valued random field $(X(t))_{t\in\mathbb{R}^d}$ is called BL(θ)-dependent if for any $\Delta>1$ and any disjoint, finite sets $I,J\subseteq T(\Delta)$ with $\operatorname{dist}(I,J)\ge r$ and all bounded Lipschitz functions $f:\mathbb{R}^{s\cdot\#I}\to\mathbb{R}$ and $g:\mathbb{R}^{s\cdot\#J}\to\mathbb{R}$ we have
$$\operatorname{Cov}(f(X_I),g(X_J))\le\Psi(\#I,\#J,f,g)\,\Delta^d\,\theta_r.$$

(ii) An $\mathbb{R}^s$-valued random field $(X(t))_{t\in\mathbb{Z}^d}$ is called BL(θ)-dependent if for any disjoint, finite sets $I,J\subseteq\mathbb{Z}^d$ with $\operatorname{dist}(I,J)\ge r$ and all bounded Lipschitz functions $f:\mathbb{R}^{s\cdot\#I}\to\mathbb{R}$ and $g:\mathbb{R}^{s\cdot\#J}\to\mathbb{R}$ we have
$$\operatorname{Cov}(f(X_I),g(X_J))\le\Psi(\#I,\#J,f,g)\,\theta_r.$$

Lemma 3. Let $\theta=(\theta_r)_{r\in\mathbb{N}}$ be a monotonically decreasing sequence with $\lim_{r\to\infty}\theta_r=0$. For $T=\mathbb{Z}^d$ or $T=\mathbb{R}^d$, let $(X^{(n)}(t))_{t\in T}$, $n\in\mathbb{N}$, be a sequence of BL(θ)-dependent random fields such that the finite-dimensional distributions converge to those of a field $(X(t))_{t\in T}$. Then $(X(t))_{t\in T}$ is also BL(θ)-dependent.

Proof: By the definition of convergence in distribution, applied to the bounded continuous functions $f\cdot g$, $f$ and $g$, we get $\lim_{n\to\infty}Ef(X^{(n)}_I)g(X^{(n)}_J)=Ef(X_I)g(X_J)$, $\lim_{n\to\infty}Ef(X^{(n)}_I)=Ef(X_I)$ and $\lim_{n\to\infty}Eg(X^{(n)}_J)=Eg(X_J)$ for any finite sets $I,J\subseteq T$ and bounded Lipschitz continuous functions $f:\mathbb{R}^{\#I}\to\mathbb{R}$ and $g:\mathbb{R}^{\#J}\to\mathbb{R}$, which yields the assertion.

Lemma 4. Let $\theta=(\theta_r)_{r\in\mathbb{N}}$ be a monotonically decreasing sequence with $\lim_{r\to\infty}\theta_r=0$. Let $(X(t))_{t\in T}$ be a BL(θ)-dependent random field and let $f:\mathbb{R}^s\to\mathbb{R}^{s'}$ be a Lipschitz function. Then there is a monotonically decreasing sequence $\theta'=(\theta'_r)_{r\in\mathbb{N}}$ with $\lim_{r\to\infty}\theta'_r=0$ such that $(f(X(t)))_{t\in T}$ is BL(θ')-dependent.

Proof: For a function $f:V\to W$ and a finite set $I$ let $f_I$ denote the function $V^{\#I}\to W^{\#I}$, $(x_1,\dots,x_{\#I})\mapsto(f(x_1),\dots,f(x_{\#I}))$. We put $\theta'_r:=\operatorname{Lip}(f)^2\cdot\theta_r$. Let $I,J$ be two disjoint finite sets with $\operatorname{dist}(I,J)\ge r$ and let $\tilde f:\mathbb{R}^{s'\cdot\#I}\to\mathbb{R}$ and $g:\mathbb{R}^{s'\cdot\#J}\to\mathbb{R}$ be two Lipschitz functions. Then
$$\operatorname{Cov}\big(\tilde f(f_I(X_I)),g(f_J(X_J))\big)\le\min\{\#I,\#J\}\cdot\operatorname{Lip}(\tilde f\circ f_I)\cdot\operatorname{Lip}(g\circ f_J)\cdot\Delta^d\cdot\theta_r\le\min\{\#I,\#J\}\cdot\operatorname{Lip}(\tilde f)\cdot\operatorname{Lip}(g)\cdot\Delta^d\cdot\theta'_r.$$

An $\mathbb{R}^s$-valued random field $(X(t))_{t\in T}$ is called positively associated (PA) if
$$\operatorname{Cov}(f(X_I),g(X_J))\ge0$$
for any finite sets $I,J\subseteq T$ and functions $f:\mathbb{R}^{s\cdot\#I}\to\mathbb{R}$ and $g:\mathbb{R}^{s\cdot\#J}\to\mathbb{R}$ which are bounded and monotonically increasing in every coordinate.

For a Lipschitz function $f:\mathbb{R}^n\to\mathbb{R}$ we define coordinate-wise Lipschitz constants by
$$\operatorname{Lip}_k(f)=\sup\Big\{\frac{|f(x_1,\dots,x_{k-1},y_k,x_{k+1},\dots,x_n)-f(x_1,\dots,x_{k-1},z_k,x_{k+1},\dots,x_n)|}{|y_k-z_k|}\ \Big|\ x_1,\dots,x_{k-1},x_{k+1},\dots,x_n,y_k,z_k\in\mathbb{R},\ y_k\ne z_k\Big\},\qquad k\in\{1,\dots,n\}.$$

An $\mathbb{R}^s$-valued random field $(X(t))_{t\in T}$ with $E[X_k(t)^2]<\infty$, $t\in T$, $k=1,\dots,s$, is called quasi-associated (QA) if
$$|\operatorname{Cov}(f(X_I),g(X_J))|\le\sum_{t\in I}\sum_{k=1}^s\sum_{u\in J}\sum_{l=1}^s\operatorname{Lip}_{t,k}(f)\cdot\operatorname{Lip}_{u,l}(g)\,|\operatorname{Cov}(X_k(t),X_l(u))|$$
for any finite sets $I,J\subseteq T$ and Lipschitz continuous functions $f:\mathbb{R}^{s\cdot\#I}\to\mathbb{R}$ and $g:\mathbb{R}^{s\cdot\#J}\to\mathbb{R}$ (here the coordinates of $f$ and $g$ are indexed by the pairs $(t,k)$, $t\in I$, and $(u,l)$, $u\in J$).

It is well known that every PA random field is also QA, see e.g. Theorem 5.3 in [2, p. 89] (this theorem is only formulated in the special case $s=1$ and $T=\mathbb{Z}^d$, but the proof holds in the present setting).
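For instance, for $s=1$ the quasi-association inequality can be checked directly for linear functions, which may clarify its form: if $f(x)=\sum_{t\in I}a_tx_t$ and $g(y)=\sum_{u\in J}b_uy_u$ with arbitrary illustrative coefficients $a_t,b_u\in\mathbb{R}$, then $\operatorname{Lip}_t(f)=|a_t|$, $\operatorname{Lip}_u(g)=|b_u|$ and, by bilinearity of the covariance,
$$|\operatorname{Cov}(f(X_I),g(X_J))|=\Big|\sum_{t\in I}\sum_{u\in J}a_tb_u\operatorname{Cov}(X(t),X(u))\Big|\le\sum_{t\in I}\sum_{u\in J}\operatorname{Lip}_t(f)\operatorname{Lip}_u(g)\,|\operatorname{Cov}(X(t),X(u))|.$$
So every square-integrable field satisfies the defining inequality for linear $f$ and $g$; quasi-association requires it for all Lipschitz functions.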
Lemma 5. Let $(X(t))_{t\in\mathbb{R}^d}$ be an $\mathbb{R}^s$-valued QA random field. Assume that there are $c>0$ and $\epsilon>0$ with
$$\operatorname{Cov}(X_i(t_1),X_j(t_2))\le c\cdot\|t_1-t_2\|_\infty^{-d-\epsilon}$$
for $t_1,t_2\in\mathbb{R}^d$ and $i,j=1,\dots,s$. Then $(X(t))_{t\in\mathbb{R}^d}$ is BL(θ)-dependent for some monotonically decreasing zero sequence $\theta$.

Proof: Let $r>0$, $\Delta>1$ and let $I,J\subseteq T(\Delta)$ be finite with $\operatorname{dist}(I,J)\ge r$, w.l.o.g. $\#I\le\#J$. Moreover, let $f:\mathbb{R}^{s\cdot\#I}\to\mathbb{R}$ and $g:\mathbb{R}^{s\cdot\#J}\to\mathbb{R}$ be bounded and Lipschitz continuous. Then
\begin{align*}
\operatorname{Cov}(f(X_I),g(X_J))&\le\sum_{t\in I}\sum_{k=1}^s\sum_{u\in J}\sum_{l=1}^s\operatorname{Lip}_{t,k}(f)\cdot\operatorname{Lip}_{u,l}(g)\,|\operatorname{Cov}(X_k(t),X_l(u))|\\
&\le s^2\cdot\#I\cdot\operatorname{Lip}(f)\cdot\operatorname{Lip}(g)\cdot\max_{t\in I,\,k,l}\sum_{u\in J}|\operatorname{Cov}(X_k(t),X_l(u))|\\
&\le s^2\cdot\#I\cdot\operatorname{Lip}(f)\cdot\operatorname{Lip}(g)\cdot\max_{t\in I}\sum_{u\in J}c\cdot\|t-u\|_\infty^{-d-\epsilon}.
\end{align*}
We have for fixed $t\in I$, if $r>1$,
\begin{align*}
\sum_{u\in J}\|t-u\|_\infty^{-d-\epsilon}&\le\sum_{s=\lceil r\Delta\rceil}^\infty\Big(\frac{s}{\Delta}\Big)^{-d-\epsilon}\cdot\#\Big\{v\in T(\Delta)\ \Big|\ \|v\|_\infty=\frac{s}{\Delta}\Big\}\\
&=\sum_{s=\lceil r\Delta\rceil}^\infty\Big(\frac{s}{\Delta}\Big)^{-d-\epsilon}\big((2s+1)^d-(2s-1)^d\big)\\
&=\Delta^{d+\epsilon}\sum_{s=\lceil r\Delta\rceil}^\infty s^{-d-\epsilon}\sum_{\iota=0}^{d-1}\binom{d}{\iota}\big(1+(-1)^{d-\iota-1}\big)(2s)^\iota\\
&\le\Delta^{d+\epsilon}\int_{\lceil r\Delta\rceil-1}^\infty\sum_{\iota=0}^{d-1}\binom{d}{\iota}\big(1+(-1)^{d-\iota-1}\big)2^\iota s^{-d-\epsilon+\iota}\,ds\\
&\le\Delta^{d+\epsilon}\sum_{\iota=0}^{d-1}\binom{d}{\iota}\big(1+(-1)^{d-\iota-1}\big)2^\iota\frac{(r\Delta-\Delta)^{-d-\epsilon+\iota+1}}{d+\epsilon-\iota-1}\\
&\le\Delta^{d}\sum_{\iota=0}^{d-1}\binom{d}{\iota}\big(1+(-1)^{d-\iota-1}\big)2^\iota\frac{(r-1)^{-d-\epsilon+\iota+1}}{d+\epsilon-\iota-1}.
\end{align*}
Putting
$$\theta_r:=\begin{cases}c\cdot s^2\cdot\displaystyle\sum_{\iota=0}^{d-1}\binom{d}{\iota}\big(1+(-1)^{d-\iota-1}\big)2^\iota\frac{(r-1)^{-d-\epsilon+\iota+1}}{d+\epsilon-\iota-1}&\text{for }r>1,\\[1ex]
3^d\cdot s^2\cdot\displaystyle\max_{i=1,\dots,s}\operatorname{Var}(X_i(0))+\theta_2&\text{for }r=1,\end{cases}$$
we obtain
$$\operatorname{Cov}(f(X_I),g(X_J))\le\min\{\#I,\#J\}\cdot\operatorname{Lip}(f)\cdot\operatorname{Lip}(g)\cdot\Delta^d\theta_r.$$
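For instance, the decay condition of Lemma 5 is satisfied by any covariance function with exponential decay: if $|\operatorname{Cov}(X_i(t_1),X_j(t_2))|\le Ce^{-\|t_1-t_2\|_\infty}$ for some illustrative constant $C>0$, then for every $\epsilon>0$
$$|\operatorname{Cov}(X_i(t_1),X_j(t_2))|\le C\Big(\sup_{\rho>0}\rho^{d+\epsilon}e^{-\rho}\Big)\|t_1-t_2\|_\infty^{-d-\epsilon}=C\Big(\frac{d+\epsilon}{e}\Big)^{d+\epsilon}\|t_1-t_2\|_\infty^{-d-\epsilon},$$
so a QA field with exponentially decaying covariances is BL(θ)-dependent with this choice of $c$.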
2.2 Random fields

After having introduced the association concepts in Subsection 2.1, we will now collect various other preliminaries concerning random fields.

The following theorem (see [6, Ch. III, §3] and [12, Prop. 3.1]) says that for stationary random fields stochastic continuity and measurability are essentially equivalent.

Theorem 6. (i) Let $(X(t))_{t\in\mathbb{R}^d}$ be a stochastically continuous random field. Then there is a measurable modification of $(X(t))_{t\in\mathbb{R}^d}$.

(ii) Let $(X(t))_{t\in\mathbb{R}^d}$ be a stationary and measurable random field. Then $(X(t))_{t\in\mathbb{R}^d}$ is stochastically continuous.

Lemma 7. Let $(X(t))_{t\in\mathbb{R}^d}$ be a stationary, stochastically continuous random field with $E|X(0)|^j<\infty$ for some $j>0$. Then $(X(t))_{t\in\mathbb{R}^d}$ is continuous in $j$-mean.

Proof: Let $(t_n)_{n\in\mathbb{N}}$ be a sequence of points in $\mathbb{R}^d$ converging to a point $t\in\mathbb{R}^d$. Then
$$\lim_{n\to\infty}E|X(t_n)-X(t)|^j=\lim_{n\to\infty}\int_0^\infty P(|X(t_n)-X(t)|^j\ge x)\,dx=\int_0^\infty\lim_{n\to\infty}P(|X(t_n)-X(t)|\ge\sqrt[j]{x})\,dx=0.$$
We have been allowed to interchange limit and integral, since
$$P(|X(t_n)-X(t)|\ge\sqrt[j]{x})\le P\big(|X(t_n)|\ge\tfrac{\sqrt[j]{x}}{2}\big)+P\big(|X(t)|\ge\tfrac{\sqrt[j]{x}}{2}\big)=2P\big(|X(t)|\ge\tfrac{\sqrt[j]{x}}{2}\big)$$
due to the stationarity and
$$\int_0^\infty2P\big(|X(t)|\ge\tfrac{\sqrt[j]{x}}{2}\big)\,dx=\int_0^\infty2P(|2\cdot X(t)|^j\ge x)\,dx=2^{j+1}E|X(0)|^j<\infty.$$

Lemma 8. Let $(X(t))_{t\in\mathbb{R}^d}$ and $(Y(t))_{t\in\mathbb{R}^d}$ be two stochastically continuous and measurable random fields having the same distribution. Let $A_1,\dots,A_m\subseteq\mathbb{R}^d$ be bounded Borel sets. Assume that $\int_{A_i}X(t)\,dt$ is defined a.s. for $i=1,\dots,m$, i.e. not both the positive part and the negative part of these integrals are infinite. Then $\int_{A_1}Y(t)\,dt,\dots,\int_{A_m}Y(t)\,dt$ are defined a.s. as well and
$$\Big(\int_{A_1}X(t)\,dt,\dots,\int_{A_m}X(t)\,dt\Big)\stackrel{d}{=}\Big(\int_{A_1}Y(t)\,dt,\dots,\int_{A_m}Y(t)\,dt\Big).$$

Proof: By the Monotone Convergence Theorem, we may assume w.l.o.g. that there is some $N\in\mathbb{N}$ such that $X(t)\in[-N,N]$ and $Y(t)\in[-N,N]$ for all $t\in\mathbb{R}^d$.

We define processes $(X^n(t))_{t\in\mathbb{R}^d}$ and $(Y^n(t))_{t\in\mathbb{R}^d}$ by putting
$$X^n(t_1,\dots,t_d):=X\Big(\frac{z_1}{n},\dots,\frac{z_d}{n}\Big)\quad\text{for all }t_1\in\Big[\frac{z_1}{n},\frac{z_1+1}{n}\Big),\dots,t_d\in\Big[\frac{z_d}{n},\frac{z_d+1}{n}\Big),\ z_1,\dots,z_d\in\mathbb{Z},$$
and analogously for $Y^n$. We get
$$\Big\{\int_{A_i}X^n(t)\,dt\ \Big|\ i=1,\dots,m\Big\}=\Big\{\sum_{z_1,\dots,z_d\in\mathbb{Z}}\lambda_d\Big(A_i\cap\Big[\tfrac{z_1}{n},\tfrac{z_1+1}{n}\Big)\times\dots\times\Big[\tfrac{z_d}{n},\tfrac{z_d+1}{n}\Big)\Big)X\Big(\tfrac{z_1}{n},\dots,\tfrac{z_d}{n}\Big)\ \Big|\ i=1,\dots,m\Big\}\qquad(1)$$
$$\stackrel{d}{=}\Big\{\sum_{z_1,\dots,z_d\in\mathbb{Z}}\lambda_d\Big(A_i\cap\Big[\tfrac{z_1}{n},\tfrac{z_1+1}{n}\Big)\times\dots\times\Big[\tfrac{z_d}{n},\tfrac{z_d+1}{n}\Big)\Big)Y\Big(\tfrac{z_1}{n},\dots,\tfrac{z_d}{n}\Big)\ \Big|\ i=1,\dots,m\Big\}=\Big\{\int_{A_i}Y^n(t)\,dt\ \Big|\ i=1,\dots,m\Big\}.\qquad(2)$$
For $\epsilon,\delta>0$ we get
\begin{align*}
P\Big(\sum_{i=1}^m\Big|\int_{A_i}X^n(t)\,dt-\int_{A_i}X(t)\,dt\Big|>\epsilon\Big)&\le P\Big(\sum_{i=1}^m\int_{A_i}|X^n(t)-X(t)|\,dt>\epsilon\Big)\\
&\le\frac{E\big[\sum_{i=1}^m\int_{A_i}|X^n(t)-X(t)|\,dt\big]}{\epsilon}\\
&=\frac{\sum_{i=1}^m\int_{A_i}E|X^n(t)-X(t)|\,dt}{\epsilon}\\
&\le\frac{\sum_{i=1}^m\int_{A_i}\big(\delta+P(|X^n(t)-X(t)|>\delta)\cdot2N\big)\,dt}{\epsilon}\\
&\xrightarrow{n\to\infty}\frac{\sum_{i=1}^m\int_{A_i}\delta\,dt}{\epsilon}=\frac{\delta\cdot\sum_{i=1}^m\lambda_d(A_i)}{\epsilon}.
\end{align*}
The limit relation holds by the Majorized Convergence Theorem, since the assumption that $(X(t))_{t\in\mathbb{R}^d}$ is stochastically continuous implies that $X^n(t)$ converges to $X(t)$ in probability. Since $\delta>0$ was arbitrary, we get
$$P\Big(\sum_{i=1}^m\Big|\int_{A_i}X^n(t)\,dt-\int_{A_i}X(t)\,dt\Big|>\epsilon\Big)\xrightarrow{n\to\infty}0$$
and in the same way
$$P\Big(\sum_{i=1}^m\Big|\int_{A_i}Y^n(t)\,dt-\int_{A_i}Y(t)\,dt\Big|>\epsilon\Big)\xrightarrow{n\to\infty}0.$$
Now (2) yields the assertion.

2.3 Functions of bounded variation

A function $f:\mathbb{R}\to\mathbb{R}$ is said to be of locally bounded variation if there is a monotonically increasing function $\alpha:\mathbb{R}\to\mathbb{R}$ and a monotonically decreasing function $\beta:\mathbb{R}\to\mathbb{R}$ such that $f=\alpha+\beta$. We denote the set of such functions $\alpha$ and $\beta$ by $A$ resp. $B$. We put
$$f^+(x):=\begin{cases}\inf\{\alpha(x)\mid\alpha\in A,\ \alpha(0)=f(0)\}&\text{if }x>0,\\ f(0)&\text{if }x=0,\\ \sup\{\alpha(x)\mid\alpha\in A,\ \alpha(0)=f(0)\}&\text{if }x<0.\end{cases}$$
It is easy to see that $f^+\in A$ and $f^-:=f-f^+\in B$. We put $h_f:=f^+-f^-$.

Lemma 9. Let $f:\mathbb{R}\to\mathbb{R}$ be a function of locally bounded variation. Then $f=g\circ h_f$ for a Lipschitz continuous function $g:\mathbb{R}\to\mathbb{R}$ of Lipschitz constant 1.

Proof: For each $x\in\mathbb{R}$ for which there is $t\in\mathbb{R}$ with $h_f(t)=x$, define $g(x):=f(t)$. Now $g$ is well-defined, since for $t_1,t_2\in\mathbb{R}$ with $h_f(t_1)=h_f(t_2)$, $f$ is constant on $[t_1,t_2]$. Clearly, $f=g\circ h_f$. Moreover, $g$ (defined on a subset of $\mathbb{R}$ so far) is Lipschitz continuous with Lipschitz constant 1. Indeed, let $x_1,x_2\in\mathbb{R}$, $x_1<x_2$, be two points for which there are $t_1,t_2\in\mathbb{R}$ with $h_f(t_1)=x_1$ and $h_f(t_2)=x_2$. Then
$$h_f(t_2)-h_f(t_1)=f^+(t_2)-f^+(t_1)-\big(f^-(t_2)-f^-(t_1)\big)\ge\big|f^+(t_2)-f^+(t_1)+\big(f^-(t_2)-f^-(t_1)\big)\big|=|f(t_2)-f(t_1)|.$$
Hence $x_2-x_1\ge|g(x_2)-g(x_1)|$.

It remains to show that $g$ has a Lipschitz continuous extension to the whole of $\mathbb{R}$. The domain of $g$ is $\mathbb{R}$ minus the union of countably many, disjoint intervals. For a point $x$ lying on the boundary of the domain of $g$ but not in the domain of $g$, choose a sequence $(x_n)_{n\in\mathbb{N}}$ in the domain of $g$ converging to $x$. Then $(g(x_n))_{n\in\mathbb{N}}$ is a Cauchy sequence, since $g$ is Lipschitz continuous, and hence convergent. Since $(g(x_n))_{n\in\mathbb{N}}$ is convergent for every such sequence $(x_n)_{n\in\mathbb{N}}$, the limit is independent of the choice of the sequence. So we can put $g(x):=\lim_{n\to\infty}g(x_n)$. It is easy to see that this extension still has Lipschitz constant 1. Now all gaps in the domain of $g$ are open intervals. So they can be filled by affine functions. Clearly, the Lipschitz constant is preserved again.
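As an illustration of this decomposition (chosen here only to make the definitions concrete), take $f(x)=-|x|$. A decomposition is given by $\alpha(x)=\min(x,0)$ and $\beta(x)=-\max(x,0)$, and in fact $f^+(x)=\min(x,0)$ and $f^-(x)=-\max(x,0)$, so that
$$h_f(x)=f^+(x)-f^-(x)=\min(x,0)+\max(x,0)=x$$
and $f=g\circ h_f$ with $g=f$, which indeed has Lipschitz constant 1. For a monotonically increasing $f$ one simply gets $f^+=f$, $f^-=0$ and $h_f=f$.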
3 The univariate CLT

In this section we will prove Theorem 1.

Proof: For $j=(j_1,\dots,j_d)\in\mathbb{Z}^d$ we put $Q_j=\times_{i=1}^d[j_i,j_i+1)$ and $Z(j):=\int_{Q_j}X(t)\,dt-EX(0)$. We will show that this random field $(Z(j))_{j\in\mathbb{Z}^d}$ fulfills the assumptions of Theorem 1.12 of [2, p. 178].

The collection
$$Z_n(j):=\frac{1}{n^d}\sum_{k_1,\dots,k_d=1}^nX\Big(j_1+\frac{k_1}{n},\dots,j_d+\frac{k_d}{n}\Big)-EX(0),\qquad j\in\mathbb{Z}^d,$$
is BL(θ')-dependent for any $n\in\mathbb{N}$, where $\theta'_r:=\theta_{r-d}$. Indeed, let $I,J\subseteq\mathbb{Z}^d$ be disjoint finite sets with $\operatorname{dist}(I,J)\ge r$ and let $f:\mathbb{R}^{\#I}\to\mathbb{R}$ and $g:\mathbb{R}^{\#J}\to\mathbb{R}$ be bounded Lipschitz functions. Put $\tilde I=I+\{1/n,2/n,\dots,1\}^d$, $\tilde J=J+\{1/n,2/n,\dots,1\}^d$,
$$\tilde f:\mathbb{R}^{\#I\cdot n^d}\to\mathbb{R},\quad(x_{1,1},\dots,x_{\#I,n^d})\mapsto f\Big(\frac{1}{n^d}\sum_{\ell=1}^{n^d}x_{1,\ell}-EX(0),\dots,\frac{1}{n^d}\sum_{\ell=1}^{n^d}x_{\#I,\ell}-EX(0)\Big)$$
and
$$\tilde g:\mathbb{R}^{\#J\cdot n^d}\to\mathbb{R},\quad(x_{1,1},\dots,x_{\#J,n^d})\mapsto g\Big(\frac{1}{n^d}\sum_{\ell=1}^{n^d}x_{1,\ell}-EX(0),\dots,\frac{1}{n^d}\sum_{\ell=1}^{n^d}x_{\#J,\ell}-EX(0)\Big).$$
Then we have $f(Z_{n,I})=\tilde f(X_{\tilde I})$, $g(Z_{n,J})=\tilde g(X_{\tilde J})$, $\operatorname{Lip}(\tilde f)=\operatorname{Lip}(f)/n^d$, $\operatorname{Lip}(\tilde g)=\operatorname{Lip}(g)/n^d$ and $\operatorname{dist}(\tilde I,\tilde J)\ge\operatorname{dist}(I,J)-d$. So
\begin{align*}
\operatorname{Cov}\big(f(Z_{n,I}),g(Z_{n,J})\big)&=\operatorname{Cov}\big(\tilde f(X_{\tilde I}),\tilde g(X_{\tilde J})\big)\\
&\le\min\{\#I\cdot n^d,\#J\cdot n^d\}\operatorname{Lip}(\tilde f)\operatorname{Lip}(\tilde g)\,n^d\,\theta_{r-d}\\
&=\min\{\#I,\#J\}\operatorname{Lip}(f)\operatorname{Lip}(g)\,\theta'_r.
\end{align*}
By Lemma 3, the field $(Z(j))_{j\in\mathbb{Z}^d}$ is BL(θ')-dependent if we can show that the finite-dimensional distributions of $(Z_n(j))_{j\in\mathbb{Z}^d}$ converge to those of $(Z(j))_{j\in\mathbb{Z}^d}$. First we will show
$$\lim_{n\to\infty}E|Z_n(j)-Z(j)|=0,\qquad j\in\mathbb{Z}^d.\qquad(3)$$
Let $\epsilon>0$. Due to Lemma 7, the field $(X(t))_{t\in\mathbb{R}^d}$ is continuous in 1-mean and hence there is $n$ such that
$$E|X(0)-X(t)|<\epsilon\quad\text{for all }t\in[0,\tfrac1n]^d.$$
Since $(X(t))_{t\in\mathbb{R}^d}$ is stationary, this implies
$$E\Big|X\Big(j_1+\frac{k_1}{n},\dots,j_d+\frac{k_d}{n}\Big)-X(t)\Big|<\epsilon\quad\text{for all }t\in\Big[j_1+\frac{k_1-1}{n},j_1+\frac{k_1}{n}\Big]\times\dots\times\Big[j_d+\frac{k_d-1}{n},j_d+\frac{k_d}{n}\Big].$$
Hence $E|Z_n(j)-Z(j)|<\epsilon$, which finishes the proof of (3).

Now let $j^{(1)},\dots,j^{(r)}\in\mathbb{Z}^d$ and let $\delta>0$. From Markov's inequality we get
$$P\Big(\sum_{l=1}^r|Z_n(j^{(l)})-Z(j^{(l)})|>\delta\Big)\le\frac{\sum_{l=1}^rE|Z_n(j^{(l)})-Z(j^{(l)})|}{\delta}\to0,\qquad n\to\infty.$$
So the finite-dimensional distributions of $(Z_n(j))_{j\in\mathbb{Z}^d}$ converge to those of $(Z(j))_{j\in\mathbb{Z}^d}$ and hence $(Z(j))_{j\in\mathbb{Z}^d}$ is BL(θ')-dependent.

By Lemma 8, the assumption that $(X(t))_{t\in\mathbb{R}^d}$ is stationary implies that $(Z(j))_{j\in\mathbb{Z}^d}$ is stationary. Moreover, $(Z(j))_{j\in\mathbb{Z}^d}$ is centered, since
$$EZ(0)=E\int_{[0,1)^d}X(t)\,dt-EX(0)=\int_{[0,1)^d}EX(t)\,dt-EX(0)=0.$$
Further,
\begin{align*}
\sum_{j\in\mathbb{Z}^d}\operatorname{Cov}\big(Z(0),Z(j)\big)&=\sum_{j\in\mathbb{Z}^d}\int_{[0,1)^d}\int_{j+[0,1)^d}\operatorname{Cov}\big(X(s),X(t)\big)\,dt\,ds\\
&=\int_{[0,1)^d}\int_{\mathbb{R}^d}\operatorname{Cov}\big(X(0),X(t-s)\big)\,dt\,ds\\
&=\int_{[0,1)^d}\int_{\mathbb{R}^d}\operatorname{Cov}\big(X(0),X(t)\big)\,dt\,ds\\
&=\int_{\mathbb{R}^d}\operatorname{Cov}\big(X(0),X(t)\big)\,dt.
\end{align*}
We put $Q_n:=\{j\in\mathbb{Z}^d\mid j+[0,1)^d\subseteq W_n\}$ and $W_n^-:=\bigcup_{j\in Q_n}\big(j+[0,1)^d\big)$. As explained in the proof of [3, Theorem 1.2], the assumption that $(W_n)_{n\in\mathbb{N}}$ is VH-growing implies that $(Q_n)_{n\in\mathbb{N}}$ is regularly growing. Now Theorem 1.12 of [2, p. 178] implies that
$$\frac{\int_{W_n^-}X(t)\,dt-\lambda_d(W_n^-)EX(0)}{\sqrt{\lambda_d(W_n^-)}}=\frac{\sum_{j\in Q_n}Z(j)}{\sqrt{\#Q_n}}\to\mathcal N(0,\sigma^2),\qquad n\to\infty.$$
If we can show that
$$\frac{\int_{W_n\setminus W_n^-}X(t)\,dt-\lambda_d(W_n\setminus W_n^-)EX(0)}{\sqrt{\lambda_d(W_n)}}\stackrel{P}{\to}0,\qquad n\to\infty,\qquad(4)$$
then Slutsky's theorem will imply the assertion, since, clearly, $\sqrt{\lambda_d(W_n^-)}/\sqrt{\lambda_d(W_n)}\to1$. We get
\begin{align*}
\operatorname{Var}\Big(\int_{W_n\setminus W_n^-}X(t)\,dt\Big)&=\int_{W_n\setminus W_n^-}\int_{W_n\setminus W_n^-}\operatorname{Cov}(X(s),X(t))\,dt\,ds\\
&\le\int_{W_n\setminus W_n^-}\int_{\mathbb{R}^d}|\operatorname{Cov}(X(s),X(t))|\,dt\,ds\\
&=\lambda_d(W_n\setminus W_n^-)\int_{\mathbb{R}^d}|\operatorname{Cov}(X(0),X(t))|\,dt.
\end{align*}
Since $(W_n)_{n\in\mathbb{N}}$ is VH-growing, we get
$$\operatorname{Var}\Big(\frac{\int_{W_n\setminus W_n^-}X(t)\,dt}{\sqrt{\lambda_d(W_n)}}\Big)=\frac{\operatorname{Var}\big(\int_{W_n\setminus W_n^-}X(t)\,dt\big)}{\lambda_d(W_n)}\to0,\qquad n\to\infty.$$
By the Chebyshev inequality this implies (4).
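For the illustrative field $X(t)=\varepsilon_{\lfloor t+U\rfloor}$ from the sketch at the end of Section 1 (with $d=1$), the discretisation used in this proof can be carried out explicitly and serves as a small consistency check of the identity $\sum_j\operatorname{Cov}(Z(0),Z(j))=\sigma^2$: there $Z(j)=\int_j^{j+1}X(t)\,dt=(1-U)\varepsilon_j+U\varepsilon_{j+1}$, so $\operatorname{Var}(Z(0))=E[(1-U)^2+U^2]=\tfrac23$, $\operatorname{Cov}(Z(0),Z(\pm1))=E[U(1-U)]=\tfrac16$ and $\operatorname{Cov}(Z(0),Z(j))=0$ for $|j|\ge2$, whence
$$\sum_{j\in\mathbb{Z}}\operatorname{Cov}(Z(0),Z(j))=\tfrac23+2\cdot\tfrac16=1=\int_{\mathbb{R}}\operatorname{Cov}(X(0),X(t))\,dt=\sigma^2,$$
in accordance with the computation above.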
4 The multivariate CLT

In this section we extend Theorem 1 in various ways to multivariate central limit theorems.

Theorem 10. Let $\theta=(\theta_r)_{r\in\mathbb{N}}$ be a monotonically decreasing zero sequence. Let $(X(t))_{t\in\mathbb{R}^d}$ be an $\mathbb{R}^s$-valued random field. Assume that $(X(t))_{t\in\mathbb{R}^d}$ is stationary, measurable, BL(θ)-dependent and fulfills
$$\int_{\mathbb{R}^d}|\operatorname{Cov}(X_i(0),X_j(t))|\,dt<\infty,\qquad i,j=1,\dots,s.$$
Let $(W_n)_{n\in\mathbb{N}}$ be a VH-growing sequence of subsets of $\mathbb{R}^d$. Then
$$\Big(\frac{\int_{W_n}X_1(t)\,dt-EX_1(0)\lambda_d(W_n)}{\sqrt{\lambda_d(W_n)}},\dots,\frac{\int_{W_n}X_s(t)\,dt-EX_s(0)\lambda_d(W_n)}{\sqrt{\lambda_d(W_n)}}\Big)\to\mathcal N(0,\Sigma),\qquad n\to\infty,$$
in distribution, where $\Sigma$ is the matrix with entries
$$\int_{\mathbb{R}^d}\operatorname{Cov}(X_i(0),X_j(t))\,dt,\qquad i,j=1,\dots,s.$$

Proof: Let $u=(u_1,\dots,u_s)\in\mathbb{R}^s$. Then $(\langle X(t),u\rangle)_{t\in\mathbb{R}^d}$ is BL(θ')-dependent for a monotonically decreasing sequence $\theta'=(\theta'_r)_{r\in\mathbb{N}}$ with $\lim_{r\to\infty}\theta'_r=0$ due to Lemma 4. Obviously, $(\langle X(t),u\rangle)_{t\in\mathbb{R}^d}$ is stationary and measurable. We have
$$\int_{\mathbb{R}^d}\operatorname{Cov}\big(\langle X(0),u\rangle,\langle X(t),u\rangle\big)\,dt=\sum_{i=1}^s\sum_{j=1}^su_iu_j\int_{\mathbb{R}^d}\operatorname{Cov}(X_i(0),X_j(t))\,dt=u^T\Sigma u.$$
In particular, the integral is defined. So Theorem 1 implies
$$\Big\langle\Big(\frac{\int_{W_n}X_1(t)\,dt-EX_1(0)\lambda_d(W_n)}{\sqrt{\lambda_d(W_n)}},\dots,\frac{\int_{W_n}X_s(t)\,dt-EX_s(0)\lambda_d(W_n)}{\sqrt{\lambda_d(W_n)}}\Big),u\Big\rangle=\frac{\int_{W_n}\langle X(t),u\rangle\,dt-E\langle X(0),u\rangle\,\lambda_d(W_n)}{\sqrt{\lambda_d(W_n)}}\to\mathcal N(0,u^T\Sigma u),\qquad n\to\infty.$$
Since $\langle Y,u\rangle\sim\mathcal N(0,u^T\Sigma u)$ for a random vector $Y\sim\mathcal N(0,\Sigma)$, the Theorem of Cramér and Wold implies the assertion.

Corollary 11. Let $(X(t))_{t\in\mathbb{R}^d}$ be a stationary, measurable $\mathbb{R}$-valued random field and let $f_1,\dots,f_s:\mathbb{R}\to\mathbb{R}$ be functions. Let $(W_n)_{n\in\mathbb{N}}$ be a VH-growing sequence of subsets of $\mathbb{R}^d$. Assume that one of the following conditions holds:

(i) The field $(X(t))_{t\in\mathbb{R}^d}$ is BL(θ)-dependent for a monotonically decreasing zero sequence $\theta=(\theta_r)_{r\in\mathbb{N}}$, the maps $f_1,\dots,f_s$ are Lipschitz continuous and
$$\int_{\mathbb{R}^d}\big|\operatorname{Cov}\big(f_i(X(0)),f_j(X(t))\big)\big|\,dt<\infty,\qquad i,j=1,\dots,s.$$

(ii) The field $(X(t))_{t\in\mathbb{R}^d}$ is QA and there are $c>0$ and $\epsilon>0$ with
$$\operatorname{Cov}(X(0),X(t))\le c\cdot\|t\|_\infty^{-d-\epsilon},\qquad t\in\mathbb{R}^d.\qquad(5)$$
The maps $f_1,\dots,f_s$ are Lipschitz continuous.

(iii) The field $(X(t))_{t\in\mathbb{R}^d}$ is PA with $EX(0)^2<\infty$. The maps $f_1,\dots,f_s$ are of locally bounded variation with $E[h_{f_i}(X(0))^2]<\infty$, $i=1,\dots,s$, and there are $c>0$ and $\epsilon>0$ with
$$\operatorname{Cov}\big(h_{f_i}(X(0)),h_{f_j}(X(t))\big)\le c\cdot\|t\|_\infty^{-d-\epsilon},\qquad t\in\mathbb{R}^d,\ i,j=1,\dots,s.\qquad(6)$$

Then
$$\Big(\frac{\int_{W_n}f_1(X(t))\,dt-Ef_1(X(0))\lambda_d(W_n)}{\sqrt{\lambda_d(W_n)}},\dots,\frac{\int_{W_n}f_s(X(t))\,dt-Ef_s(X(0))\lambda_d(W_n)}{\sqrt{\lambda_d(W_n)}}\Big)\to\mathcal N(0,\Sigma),$$
as $n\to\infty$ in distribution, where $\Sigma$ is the matrix with entries
$$\int_{\mathbb{R}^d}\operatorname{Cov}\big(f_i(X(0)),f_j(X(t))\big)\,dt,\qquad i,j=1,\dots,s.$$

Part (i) of this corollary is an immediate consequence of Lemma 4 and Theorem 10.

Proof of Corollary 11(ii): The field $(X(t))_{t\in\mathbb{R}^d}$ is BL(θ)-dependent by Lemma 5 and thus Lemma 4 implies that the field $(f_1(X(t)),\dots,f_s(X(t)))_{t\in\mathbb{R}^d}$ is BL(θ')-dependent for some monotonically decreasing zero sequence $\theta'$.

In order to check the integrability assumptions from part (i), we put
$$f_j^{(N)}:x\mapsto\begin{cases}-N&\text{if }f_j(x)<-N,\\ f_j(x)&\text{if }f_j(x)\in[-N,N],\\ N&\text{if }f_j(x)>N.\end{cases}$$
Since $(X(t))_{t\in\mathbb{R}^d}$ is QA, we get
$$\big|\operatorname{Cov}\big(f_i^{(N)}(X(0)),f_j^{(N)}(X(t))\big)\big|\le\operatorname{Lip}(f_i^{(N)})\cdot\operatorname{Lip}(f_j^{(N)})\cdot|\operatorname{Cov}(X(0),X(t))|\le\operatorname{Lip}(f_i)\cdot\operatorname{Lip}(f_j)\cdot|\operatorname{Cov}(X(0),X(t))|.$$
By the Monotone Convergence Theorem, applied to both summands of $E[f_i^{(N)}(X(0))f_j^{(N)}(X(t))]-E[f_i^{(N)}(X(0))]\cdot E[f_j^{(N)}(X(t))]$, this yields
$$\big|\operatorname{Cov}\big(f_i(X(0)),f_j(X(t))\big)\big|\le\operatorname{Lip}(f_i)\cdot\operatorname{Lip}(f_j)\cdot|\operatorname{Cov}(X(0),X(t))|.$$
Moreover, (5) implies
$$\int_{\mathbb{R}^d}\big|\operatorname{Cov}\big(X(0),X(t)\big)\big|\,dt<\infty$$
and hence
$$\int_{\mathbb{R}^d}\big|\operatorname{Cov}\big(f_i(X(0)),f_j(X(t))\big)\big|\,dt<\infty,\qquad i,j=1,\dots,s.$$
So part (i) yields the assertion.

Proof of Corollary 11(iii): Since $(X(t))_{t\in\mathbb{R}^d}$ is PA, the random field $(h_{f_1}(X(t)),\dots,h_{f_s}(X(t)))_{t\in\mathbb{R}^d}$ is also PA, see Theorem 1.8(d) of [2, p. 7], and therefore QA. By Lemma 5 it is BL(θ)-dependent for some monotonically decreasing zero sequence $\theta$. Hence $(f_1(X(t)),\dots,f_s(X(t)))_{t\in\mathbb{R}^d}$ is BL(θ')-dependent for some monotonically decreasing zero sequence $\theta'$ by Lemma 9 and Lemma 4.

Clearly, the field $(f_1(X(t)),\dots,f_s(X(t)))_{t\in\mathbb{R}^d}$ is also stationary and measurable. Moreover, (6) implies
$$\int_{\mathbb{R}^d}\big|\operatorname{Cov}\big(h_{f_i}(X(0)),h_{f_j}(X(t))\big)\big|\,dt<\infty,\qquad i,j=1,\dots,s.$$
Now Lemma 9 and the QA property of $(h_{f_1}(X(t)),\dots,h_{f_s}(X(t)))_{t\in\mathbb{R}^d}$ give
$$\int_{\mathbb{R}^d}\big|\operatorname{Cov}\big(f_i(X(0)),f_j(X(t))\big)\big|\,dt\le\int_{\mathbb{R}^d}\big|\operatorname{Cov}\big(h_{f_i}(X(0)),h_{f_j}(X(t))\big)\big|\,dt<\infty,\qquad i,j=1,\dots,s.$$
So Theorem 10 yields the assertion.
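A typical application of part (iii) is to indicator functions (the levels below are chosen purely for illustration): if $f_i=\mathbf 1_{[u_i,\infty)}$ for levels $u_1<\dots<u_s$, then each $f_i$ is monotonically increasing, so $h_{f_i}=f_i$, the moment condition $E[h_{f_i}(X(0))^2]\le1<\infty$ holds automatically, and condition (6) becomes a decay condition on $\operatorname{Cov}\big(\mathbf 1\{X(0)\ge u_i\},\mathbf 1\{X(t)\ge u_j\}\big)$. Moreover,
$$\int_{W_n}f_i(X(t))\,dt=\lambda_d\big(\{t\in W_n\mid X(t)\ge u_i\}\big)$$
is the volume of the excursion set of $X$ in $W_n$ above the level $u_i$, so Corollary 11(iii) yields a multivariate central limit theorem for the excursion set volumes at several levels simultaneously, which is closely related to the setting of [3].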
References

[1] R.J. Adler, J.E. Taylor, K.J. Worsley: Applications of Random Fields and Geometry: Foundations and Case Studies. Springer Series in Statistics. New York: Springer. In preparation, 2009. http://webee.technion.ac.il/people/adler/hrf.pdf

[2] A. Bulinski, A. Shashkin: Limit Theorems for Associated Random Fields and Related Systems. World Scientific Publishing, New Jersey, 2007.

[3] A. Bulinski, E. Spodarev, F. Timmermann: Central limit theorems for the excursion sets volumes of weakly dependent random fields. Bernoulli 18, 2012, 100–118.

[4] A. Bulinski, Z. Zhurbenko: A central limit theorem for additive random functions. Theory of Probability and its Applications 21, 1976, 687–697.

[5] J. Chilès, P. Delfiner: Geostatistics: Modeling Spatial Uncertainty. Wiley, New York, 2007.

[6] I. Gikhman, A. Skorokhod: The Theory of Stochastic Processes I. Springer, 2004.

[7] V.V. Gorodetskii: The central limit theorem and an invariance principle for weakly dependent random fields. Soviet Mathematics. Doklady 29(3), 1984, 529–532.

[8] A.V. Ivanov, N.N. Leonenko: Statistical Analysis of Random Fields. Kluwer, Dordrecht, 1989.

[9] N.N. Leonenko: The central limit theorem for homogeneous random fields, and the asymptotic normality of estimators of the regression coefficients. Dopovīdī Akademīï Nauk Ukraïnskoï RSR, Seriya A, 1974, 699–702.

[10] K. Mecke, D. Stoyan: Morphology of Condensed Matter: Physics and Geometry of Spatially Complex Systems. Springer, 2002.

[11] C.N. Morris: Natural exponential families with quadratic variance functions. The Annals of Statistics 10, 1982, 65–80.

[12] P. Roy: Nonsingular group actions and stationary SαS random fields. Proceedings of the American Mathematical Society 138, 2010, 2195–2202.

[13] J.E. Taylor, K.J. Worsley: Detecting sparse signal in random fields, with an application to brain mapping. Journal of the American Statistical Association 102 (479), 2007, 913–928.

[14] S. Torquato: Random Heterogeneous Materials: Microstructure and Macroscopic Properties. Springer, 2001.

[15] H. Wackernagel: Multivariate Geostatistics: An Introduction with Applications. Springer, 2003.