EXPECTED NUMBER OF REAL ROOTS OF RANDOM TRIGONOMETRIC POLYNOMIALS

HENDRIK FLASCHE

Abstract. We investigate the asymptotics of the expected number of real roots of random trigonometric polynomials
\[
X_n(t) = u + \frac{1}{\sqrt{n}} \sum_{k=1}^{n} \bigl(A_k \cos(kt) + B_k \sin(kt)\bigr), \qquad t \in [0,2\pi],\ u \in \mathbb{R},
\]
whose coefficients $A_k, B_k$, $k \in \mathbb{N}$, are independent identically distributed random variables with zero mean and unit variance. If $N_n[a,b]$ denotes the number of real roots of $X_n$ in an interval $[a,b] \subseteq [0,2\pi]$, we prove that
\[
\lim_{n\to\infty} \frac{\mathbb{E} N_n[a,b]}{n} = \frac{b-a}{\pi\sqrt{3}} \exp\Bigl(-\frac{u^2}{2}\Bigr).
\]

1. Introduction

1.1. Main result. In this paper we are interested in the number of real roots of a random trigonometric polynomial $X_n : [0,2\pi] \to \mathbb{R}$ defined as
\[
(1) \qquad X_n(t) := u + \frac{1}{\sqrt{n}} \sum_{k=1}^{n} \bigl(A_k \cos(kt) + B_k \sin(kt)\bigr),
\]
where $n \in \mathbb{N}$, $u \in \mathbb{R}$, and the coefficients $(A_k)_{k\in\mathbb{N}}$ and $(B_k)_{k\in\mathbb{N}}$ are independent identically distributed random variables with
\[
(2) \qquad \mathbb{E} A_k = \mathbb{E} B_k = 0, \qquad \mathbb{E}[A_k^2] = \mathbb{E}[B_k^2] = 1.
\]
The random variable which counts the number of real roots of $X_n$ in an interval $[a,b] \subseteq [0,2\pi]$ is denoted by $N_n[a,b]$. By convention, the roots are counted with multiplicities, and a root at $a$ or $b$ is counted with weight $1/2$. The main result of this paper is as follows.

Theorem 1. Under assumption (2) and for arbitrary $0 \leq a < b \leq 2\pi$, the expected number of real roots of $X_n$ satisfies
\[
(3) \qquad \lim_{n\to\infty} \frac{\mathbb{E} N_n[a,b]}{n} = \frac{b-a}{\pi\sqrt{3}} \exp\Bigl(-\frac{u^2}{2}\Bigr).
\]

The number of real roots of random trigonometric polynomials has been much studied in the case when the coefficients $A_k, B_k$ are Gaussian; see [Dun66], [Das68], [Qua70], [Wil91], [Far90], [Sam78], to mention only a few references, and the books [Far98], [BRS86], where further references can be found. In particular, a proof of (3) in the Gaussian case can be found in [Dun66]. Recently, a central limit theorem for the number of real roots was obtained in [GW11] and then, by a different method employing Wiener chaos expansions, in [AL13].
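As a purely numerical illustration of Theorem 1 (not part of the paper's argument), one can estimate $\mathbb{E} N_n[0,2\pi]/n$ by Monte Carlo, counting sign changes of $X_n$ on a fine grid. The sketch below uses Rademacher ($\pm 1$) coefficients, a case covered by Theorem 1; the parameter choices ($n$, grid size, number of trials) are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def count_sign_changes(n, u, grid=4000):
    """Sample X_n with Rademacher coefficients and count sign changes on a
    fine grid over [0, 2*pi] -- a proxy for the number of real roots N_n."""
    t = np.linspace(0.0, 2.0 * np.pi, grid)
    k = np.arange(1, n + 1)
    A = rng.choice([-1.0, 1.0], size=n)  # zero mean, unit variance
    B = rng.choice([-1.0, 1.0], size=n)
    X = u + (A @ np.cos(np.outer(k, t)) + B @ np.sin(np.outer(k, t))) / np.sqrt(n)
    return np.count_nonzero(np.diff(np.sign(X)) != 0)

n, u, trials = 50, 0.0, 200
avg = np.mean([count_sign_changes(n, u) for _ in range(trials)])
# Theorem 1 with [a,b] = [0, 2*pi]: E N_n / n -> (2/sqrt(3)) * exp(-u^2/2)
limit = 2.0 / np.sqrt(3.0) * np.exp(-u ** 2 / 2.0)
print(avg / n, limit)
```

For moderate $n$ the empirical ratio is already close to the limit $2/\sqrt{3} \approx 1.155$ (there is a small positive finite-$n$ correction).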
For random trigonometric polynomials involving only cosines, the asymptotics for the variance (again, only in the Gaussian case) was obtained in [SS12].

All references mentioned above rely heavily on the Gaussian assumption, which allows for explicit computations. Much less is known when the coefficients are non-Gaussian. In the case when the coefficients are uniform on $[-1,1]$ and there are no terms involving the sine, an analogue of (3) was obtained in [Sam76]. The case when the third moment of the coefficients is finite has been studied in [ST83]. After the main part of this work was completed, we became aware of the work of Jamrom [Jam72] and a recent paper by Angst and Poly [AP]. Angst and Poly [AP] proved (3) (with $u = 0$) assuming that the coefficients $A_k$ and $B_k$ have finite 5-th moment and satisfy a certain Cramér-type condition. Although this condition is satisfied by some discrete probability distributions, it excludes the very natural case of $\pm 1$-valued Bernoulli random variables. Another recent work by Azaïs et al. [ADJ+] studies the local distribution of zeros of random trigonometric polynomials and also involves conditions stronger than just the existence of the variance. In the paper of Jamrom [Jam72], Theorem 1 (and even its generalization to coefficients from an $\alpha$-stable domain of attraction) is stated without proof. Since full details of Jamrom's proof do not seem to be available, and since there were at least three works following [Jam72] in which the result was established under more restrictive conditions (namely, [Sam76], [ST83], [AP]), it seems of interest to provide a full proof of Theorem 1.

1.2. Method of proof. The proof uses ideas introduced by Ibragimov and Maslova [IM71] (see also the paper by Erdös and Offord [EO56]), who studied the expected number of real zeros of a random algebraic polynomial of the form
\[
Q_n(t) := \sum_{k=1}^{n} A_k t^k.
\]
For an interval $[a,b] \subset [0,2\pi]$ and $n \in \mathbb{N}$ we introduce the random variable $N_n^*[a,b]$, which is the indicator of a sign change of $X_n$ at the endpoints of $[a,b]$ and is more precisely defined as follows:
\[
(4) \qquad N_n^*[a,b] := \frac{1}{2} - \frac{1}{2}\operatorname{sgn}\bigl(X_n(a)X_n(b)\bigr) =
\begin{cases}
0 & \text{if } X_n(a)X_n(b) > 0,\\
1 & \text{if } X_n(a)X_n(b) < 0,\\
1/2 & \text{if } X_n(a)X_n(b) = 0.
\end{cases}
\]
The proof of Theorem 1 consists of two main steps.

Step 1: Reduce the study of roots to the study of sign changes. Intuition tells us that $N_n[\alpha,\beta]$ and $N_n^*[\alpha,\beta]$ should not differ much if the interval $[\alpha,\beta]$ becomes small. More concretely, one expects that the number of real zeros of $X_n$ on $[0,2\pi]$ should be of order $n$, hence the distance between consecutive roots should be of order $1/n$. This suggests that on an interval $[\alpha,\beta]$ of length $\delta n^{-1}$ (with small $\delta > 0$) the event of having at least two roots (or a root with multiplicity at least 2) should be very improbable. The corresponding estimate will be given in Lemma 2. For this reason, it seems plausible that on intervals of length $\delta n^{-1}$ the events "there is at least one root", "there is exactly one root" and "there is a sign change" should almost coincide. A precise statement will be given in Lemma 5. This part of the proof relies heavily on the techniques introduced by Ibragimov and Maslova [IM71] in the case of algebraic polynomials.

Step 2: Count sign changes. We compute the limit of $\mathbb{E} N_n^*[\alpha_n,\beta_n]$ on an interval $[\alpha_n,\beta_n]$ of length $\delta n^{-1}$. This is done by establishing a bivariate central limit theorem stating that as $n \to \infty$ the random vector $(X_n(\alpha_n), X_n(\beta_n))$ converges in distribution to a Gaussian random vector with mean $(u,u)$, unit variance, and covariance $\delta^{-1}\sin\delta$. From this we conclude that $\mathbb{E} N_n^*[\alpha_n,\beta_n]$ converges to the probability of a sign change of this Gaussian vector. Approximating the interval $[a,b]$ by a lattice with mesh size $\delta n^{-1}$ and passing to the limits $n \to \infty$ and then $\delta \downarrow 0$ completes the proof.
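The indicator (4) and the lattice approximation used in the two steps are easy to state in code. The following sketch (illustrative only; the function names are our own) sums the indicator over consecutive lattice cells, which is the quantity whose expectation is later compared with $\mathbb{E} N_n[a,b]$.

```python
import numpy as np

def n_star(xa, xb):
    """Equation (4): 0 if X_n(a), X_n(b) have equal (nonzero) signs,
    1 if they have opposite signs, 1/2 if the product vanishes."""
    return 0.5 - 0.5 * np.sign(xa * xb)

def sign_changes_on_lattice(values):
    """Sum of the indicator over consecutive cells of a lattice on [a,b];
    this approximates N_n[a,b] when the mesh size is of order delta/n."""
    values = np.asarray(values, dtype=float)
    return float(np.sum(n_star(values[:-1], values[1:])))

print(n_star(-1.0, 2.0))                                # 1.0: a sign change
print(sign_changes_on_lattice([1.0, -0.5, -0.2, 3.0]))  # 2.0: two sign changes
```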
This part of the proof is much simpler than the corresponding argument of Ibragimov and Maslova [IM71].

Notation. The common characteristic function of the random variables $(A_k)_{k\in\mathbb{N}}$ and $(B_k)_{k\in\mathbb{N}}$ is denoted by
\[
\varphi(t) := \mathbb{E}\exp(itA_1), \qquad t \in \mathbb{R}.
\]
Due to the assumptions on the coefficients in (1), we can write
\[
(5) \qquad \varphi(t) = \exp\Bigl(-\frac{t^2}{2} H(t)\Bigr)
\]
for sufficiently small $|t|$, where $H$ is a continuous function with $H(0) = 1$. In what follows, $C$ denotes a generic positive constant which may change from line to line.

2. Estimate for $\mathbb{E} N_n[a,b] - \mathbb{E} N_n^*[a,b]$ on small intervals

In this section we investigate the expected difference between $N_n[\alpha,\beta]$ and $N_n^*[\alpha,\beta]$ on small intervals $[\alpha,\beta]$ of length $n^{-1}\delta$, where $\delta > 0$ is fixed.

2.1. Expectation and variance. The following lemma will be frequently needed.

Lemma 1. For $j \in \mathbb{N}_0$ let $X_n^{(j)}(t)$ denote the $j$th derivative of $X_n(t)$. The expectation and the variance of $X_n^{(j)}$ are given by
\[
\mathbb{E} X_n^{(j)}(t) = \begin{cases} u, & j = 0,\\ 0, & j \in \mathbb{N}, \end{cases}
\qquad
\mathbb{V} X_n^{(j)}(t) = \frac{1}{n}\sum_{k=1}^{n} k^{2j}.
\]

Proof. The $j$th derivative of $X_n$ reads as follows:
\[
X_n^{(j)}(t) - u\,\mathbb{1}_{j=0}
= \frac{1}{\sqrt{n}} \sum_{k=1}^{n} \Bigl( A_k \frac{d^j}{dt^j}\cos(kt) + B_k \frac{d^j}{dt^j}\sin(kt) \Bigr)
= \frac{1}{\sqrt{n}} \sum_{k=1}^{n} k^j
\begin{cases}
(-1)^{j/2} A_k \cos(kt) + (-1)^{j/2} B_k \sin(kt), & \text{if } j \text{ is even},\\
(-1)^{\frac{j+1}{2}} A_k \sin(kt) + (-1)^{\frac{j-1}{2}} B_k \cos(kt), & \text{if } j \text{ is odd}.
\end{cases}
\]
Recalling that $(A_k)_{k\in\mathbb{N}}$ and $(B_k)_{k\in\mathbb{N}}$ have zero mean and unit variance, we immediately obtain the required formula. ∎
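Lemma 1's variance formula is easy to check by simulation. The sketch below (illustrative only; parameters are arbitrary) samples the first derivative $X_n'(t)$, using the explicit representation from the proof for $j = 1$, and compares the empirical variance with $\frac{1}{n}\sum_{k=1}^{n} k^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, t, trials = 30, 1.3, 20000
k = np.arange(1, n + 1)

# For j = 1 the representation in the proof reads
# X_n'(t) = (1/sqrt(n)) * sum_k k * (-A_k sin(kt) + B_k cos(kt)).
A = rng.standard_normal((trials, n))  # any zero-mean, unit-variance law works
B = rng.standard_normal((trials, n))
samples = (k * (-A * np.sin(k * t) + B * np.cos(k * t))).sum(axis=1) / np.sqrt(n)

predicted = (k ** 2).sum() / n  # Lemma 1 with j = 1: V X_n'(t) = (1/n) sum k^2
print(samples.var(), predicted)
```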
2.2. Estimate for the probability that $X_n^{(j)}$ has many roots. Given any interval $[\alpha,\beta] \subset [0,2\pi]$, denote by $D_m^{(j)} = D_m^{(j)}(n;\alpha,\beta)$ the event that the $j$th derivative of $X_n(t)$ has at least $m$ roots in $[\alpha,\beta]$ (the roots are counted with their multiplicities, and the roots on the boundary are counted without the weight $1/2$). Here, $j \in \mathbb{N}_0$ and $m \in \mathbb{N}$. A key element in our proofs is an estimate for the probability of this event presented in the next lemma.

Lemma 2. Fix $j \in \mathbb{N}_0$ and $m \in \mathbb{N}$. For $\delta > 0$ and $n \in \mathbb{N}$ let $[\alpha,\beta] \subset [0,2\pi]$ be any interval of length $\beta - \alpha = n^{-1}\delta$. Then,
\[
\mathbb{P}\bigl(D_m^{(j)}\bigr) \leq C\bigl(\delta^{(2/3)m} + \delta^{-(1/3)m} n^{-(2j+1)/4}\bigr),
\]
where $C = C(j,m) > 0$ is a constant independent of $n$, $\delta$, $\alpha$, $\beta$.

Proof. For arbitrary $T > 0$ we may write
\[
\mathbb{P}\bigl(D_m^{(j)}\bigr)
\leq \mathbb{P}\Bigl(D_m^{(j)} \cap \Bigl\{\frac{|X_n^{(j)}(\beta)|}{n^j} > T\Bigr\}\Bigr)
+ \mathbb{P}\Bigl(\frac{|X_n^{(j)}(\beta)|}{n^j} \leq T\Bigr).
\]
The terms on the right-hand side will be estimated in Lemmas 3 and 4 below. Using these lemmas, we obtain
\[
\mathbb{P}\bigl(D_m^{(j)}\bigr) \leq C\Bigl[\frac{n^m(\beta-\alpha)^m}{T\, m!}\Bigr]^2 + C\bigl(T + T^{-1/2} n^{-(2j+1)/4}\bigr).
\]
Setting $T = \delta^{(2/3)m}$ yields the statement. ∎
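Numerically, the rarity of two roots in an interval of length $\delta/n$ asserted by Lemma 2 is visible already for moderate $n$. The sketch below (illustrative only; Gaussian coefficients, arbitrary parameters) estimates $\mathbb{P}(D_2^{(0)})$ by counting trials with at least two sign changes on a fine grid inside one such interval.

```python
import numpy as np

rng = np.random.default_rng(2)
n, delta, alpha, trials = 50, 0.5, 1.0, 1000
t = np.linspace(alpha, alpha + delta / n, 200)  # fine grid inside [alpha, beta]
k = np.arange(1, n + 1)
cos_kt, sin_kt = np.cos(np.outer(k, t)), np.sin(np.outer(k, t))

hits = 0
for _ in range(trials):
    A, B = rng.standard_normal(n), rng.standard_normal(n)
    X = (A @ cos_kt + B @ sin_kt) / np.sqrt(n)  # u = 0
    if np.count_nonzero(np.diff(np.sign(X)) != 0) >= 2:
        hits += 1

# Lemma 2 (j = 0, m = 2) bounds this probability by C * delta^{4/3} + ...;
# for these parameters the empirical frequency is close to zero.
print(hits / trials)
```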
Lemma 3. For all $j \in \mathbb{N}_0$, $m \in \mathbb{N}$ there exists a constant $C = C(j,m) > 0$ such that the estimate
\[
\mathbb{P}\Bigl(D_m^{(j)} \cap \Bigl\{\frac{|X_n^{(j)}(\beta)|}{n^j} > T\Bigr\}\Bigr) \leq C\Bigl[\frac{n^m(\beta-\alpha)^m}{T\, m!}\Bigr]^2
\]
holds for all $T > 0$, $n \in \mathbb{N}$ and all intervals $[\alpha,\beta] \subseteq [0,2\pi]$.

Proof. By Rolle's theorem, on the event $D_m^{(j)}$ we can find (random) $t_0 > \ldots > t_{m-1}$ in the interval $[\alpha,\beta]$ such that
\[
X_n^{(j+l)}(t_l) = 0 \quad \text{for all } l \in \{0,\ldots,m-1\}.
\]
Thus we may consider the random variable
\[
Y_n^{(j)} := \mathbb{1}_{D_m^{(j)}} \times \int_{t_0}^{\beta}\int_{t_1}^{x_1}\cdots\int_{t_{m-1}}^{x_{m-1}} X_n^{(j+m)}(x_m)\, dx_m \cdots dx_1.
\]
On the event $D_m^{(j)}$, the random variables $X_n^{(j)}(\beta)$ and $Y_n^{(j)}$ are equal. On the complement of $D_m^{(j)}$, $Y_n^{(j)} = 0$. Hence, it follows that
\[
\mathbb{P}\Bigl(D_m^{(j)} \cap \Bigl\{\frac{|X_n^{(j)}(\beta)|}{n^j} > T\Bigr\}\Bigr) \leq \mathbb{P}\Bigl(\frac{|Y_n^{(j)}|}{n^j} > T\Bigr).
\]
Markov's inequality yields
\[
\mathbb{P}\bigl(|Y_n^{(j)}| > Tn^j\bigr) \leq \frac{1}{T^2 n^{2j}}\, \mathbb{E}\Bigl|\int_{t_0}^{\beta}\int_{t_1}^{x_1}\cdots\int_{t_{m-1}}^{x_{m-1}} X_n^{(j+m)}(x_m)\, dx_m \cdots dx_1\Bigr|^2.
\]
Using Hölder's inequality we may proceed as follows:
\[
\mathbb{P}\bigl(|Y_n^{(j)}| > Tn^j\bigr)
\leq \frac{1}{T^2 n^{2j}} \frac{(\beta-\alpha)^m}{m!}\, \mathbb{E}\int_{t_0}^{\beta}\int_{t_1}^{x_1}\cdots\int_{t_{m-1}}^{x_{m-1}} |X_n^{(j+m)}(x_m)|^2\, dx_m \cdots dx_1
\leq \frac{1}{T^2 n^{2j}} \Bigl[\frac{(\beta-\alpha)^m}{m!}\Bigr]^2 \sup_{x\in[\alpha,\beta]} \mathbb{E}|X_n^{(j+m)}(x)|^2.
\]
It remains to find a suitable estimate for $\sup_{x\in[\alpha,\beta]} \mathbb{E}|X_n^{(j+m)}(x)|^2$. From Lemma 1 it follows that
\[
\mathbb{E}|X_n^{(j+m)}(x)|^2 = \mathbb{V} X_n^{(j+m)}(x) = \frac{1}{n}\sum_{k=1}^{n} k^{2(j+m)} \leq C(j,m)\, n^{2(j+m)}
\]
holds, whence the statement follows immediately. ∎

Lemma 4. Fix $j \in \mathbb{N}_0$. There exists a constant $C = C(j) > 0$ such that for all $n \in \mathbb{N}$, $T > 0$, $\beta \in [0,2\pi]$,
\[
(6) \qquad \mathbb{P}\Bigl(\frac{|X_n^{(j)}(\beta)|}{n^j} \leq T\Bigr) \leq C\bigl(T + T^{-1/2} n^{-(2j+1)/4}\bigr).
\]

Proof. For $\lambda > 0$ let $\eta$ be a random variable (independent of $X_n^{(j)}(\beta)$) with characteristic function
\[
\psi(t) := \mathbb{E}[\exp(it\eta)] = \frac{\sin^2(t\lambda)}{t^2\lambda^2}.
\]
That is, $\eta$ is the sum of two independent random variables which are uniformly distributed on $[-\lambda,\lambda]$. Consider the random variable
\[
\widetilde{X}_n^{(j)}(\beta) := n^{-j} X_n^{(j)}(\beta) + \eta.
\]
For all $T > 0$ we have
\[
(7) \qquad \mathbb{P}\Bigl(\frac{|X_n^{(j)}(\beta)|}{n^j} \leq T\Bigr)
\leq \mathbb{P}\Bigl(|\widetilde{X}_n^{(j)}(\beta)| \leq \frac{3}{2}T\Bigr) + \mathbb{P}\Bigl(|\eta| > \frac{1}{2}T\Bigr),
\]
and we estimate the terms on the right-hand side separately.

First term on the RHS of (7). The density of $\widetilde{X}_n^{(j)}(\beta)$ exists and can be expressed using the inverse Fourier transform of its characteristic function, denoted in the following by
\[
\widetilde\varphi_n(t) := \mathbb{E}\exp\bigl(it\widetilde{X}_n^{(j)}(\beta)\bigr).
\]
Using the representation for $X_n^{(j)}(\beta)$ obtained in the proof of Lemma 1 and recalling that $\varphi$ is the characteristic function of $A_k$ and $B_k$, we obtain
\[
|\widetilde\varphi_n(t)| = \psi(t) \prod_{k=1}^{n} \Bigl|\varphi\Bigl(k^j \frac{t\cos(k\beta)}{n^{j+1/2}}\Bigr)\Bigr|\,\Bigl|\varphi\Bigl(k^j \frac{t\sin(k\beta)}{n^{j+1/2}}\Bigr)\Bigr|.
\]
Using Fourier inversion, for every $y > 0$ we may write
\[
\mathbb{P}\bigl(|\widetilde{X}_n^{(j)}(\beta)| \leq y\bigr)
= \frac{2}{\pi}\int_0^\infty \frac{\sin(yt)}{t}\operatorname{Re}\widetilde\varphi_n(t)\, dt
\leq \frac{2y}{\pi}\int_0^\infty \psi(t) \prod_{k=1}^{n} \Bigl|\varphi\Bigl(k^j \frac{t\cos(k\beta)}{n^{j+1/2}}\Bigr)\Bigr|\,\Bigl|\varphi\Bigl(k^j \frac{t\sin(k\beta)}{n^{j+1/2}}\Bigr)\Bigr|\, dt.
\]
We used that $t^{-1}|\sin(yt)| \leq y$ for every $y > 0$ and $t \neq 0$. The coefficients $A_k$ and $B_k$ are supposed to have zero mean and unit variance. From this we can conclude that
\[
(8) \qquad |\varphi(t)| \leq \exp(-t^2/4) \quad \text{for } t \in [-c,c],
\]
where $c > 0$ is a sufficiently small constant. Let $\{\Gamma_l : l = 0,\ldots,n\}$ be a disjoint partition of $\mathbb{R}_+$ defined by
\[
\Gamma_l := \Bigl\{t : \frac{cn^{j+1/2}}{(l+1)^j} \leq t < \frac{cn^{j+1/2}}{l^j}\Bigr\} \quad \text{for } l = 1,\ldots,n-1,
\qquad
\Gamma_n := \bigl\{t : 0 \leq t < c\sqrt{n}\bigr\},
\qquad
\Gamma_0 := \bigl\{t : t \geq cn^{j+1/2}\bigr\}.
\]
We decompose the integral above as follows:
\[
\mathbb{P}\bigl(|\widetilde{X}_n^{(j)}(\beta)| \leq y\bigr) \leq \frac{2y}{\pi}\sum_{l=0}^{n} I_l,
\quad\text{where}\quad
I_l := \int_{\Gamma_l} \psi(t) \prod_{k=1}^{n} \Bigl|\varphi\Bigl(k^j \frac{t\cos(k\beta)}{n^{j+1/2}}\Bigr)\Bigr|\,\Bigl|\varphi\Bigl(k^j \frac{t\sin(k\beta)}{n^{j+1/2}}\Bigr)\Bigr|\, dt.
\]
For the integral over $\Gamma_0$ we may write, using $|\varphi(t)| \leq 1$ and $\sin^2(\lambda t) \leq 1$,
\[
I_0 \leq \int_{cn^{j+1/2}}^{\infty} \psi(t)\, dt = \int_{cn^{j+1/2}}^{\infty} \frac{\sin^2(\lambda t)}{\lambda^2 t^2}\, dt \leq \frac{1}{c\lambda^2}\, n^{-(j+1/2)}.
\]
The integral over $\Gamma_n$ is smaller than a positive constant $C > 0$ independent of $n$ because we can estimate all terms involving $\varphi$ by means of (8) as follows:
\[
I_n \leq \int_0^{c\sqrt{n}} \psi(t)\exp\Bigl(-\frac{t^2}{4n^{2j+1}}\sum_{k=1}^{n} k^{2j}\Bigr) dt \leq \int_0^{\infty} \exp\bigl(-t^2\gamma\bigr)\, dt \leq C,
\]
where $\gamma > 0$ is a small constant and we used that
\[
\sum_{k=1}^{n} k^{2j} \sim \frac{n^{2j+1}}{2j+1} \quad \text{as } n \to \infty.
\]
For $t \in \Gamma_l$ with $l = 1,\ldots,n-1$ we have
\[
\Bigl|l^j \frac{t\cos(l\beta)}{n^{j+1/2}}\Bigr| \leq \frac{t\, l^j}{n^{j+1/2}} \leq c,
\qquad
\Bigl|l^j \frac{t\sin(l\beta)}{n^{j+1/2}}\Bigr| \leq \frac{t\, l^j}{n^{j+1/2}} \leq c.
\]
Thus, we can estimate all factors with $k = 1,\ldots,l$ using (8), whereas for all other factors we use the trivial estimate $|\varphi(t)| \leq 1$:
\[
I_l \leq \int_{\Gamma_l} \psi(t)\exp\Bigl(-\frac{t^2}{4n^{2j+1}}\sum_{k=1}^{l} k^{2j}\Bigr) dt
\leq \int_{cn^{j+1/2}/(l+1)^j}^{cn^{j+1/2}/l^j} \frac{1}{\lambda^2 t^2}\exp\Bigl(-\gamma_1 t^2 \Bigl(\frac{l}{n}\Bigr)^{2j+1}\Bigr) dt
= \frac{1}{\lambda^2}\Bigl(\frac{l}{n}\Bigr)^{j+1/2} \int_{c\,l^{j+1/2}/(l+1)^j}^{c\sqrt{l}} \frac{1}{u^2}\exp\bigl(-\gamma_1 u^2\bigr)\, du
\leq \frac{C}{\lambda^2}\Bigl(\frac{l}{n}\Bigr)^{j+1/2}\exp(-\gamma_2 l),
\]
where $\gamma_1, \gamma_2 > 0$ are small constants and we substituted $u^2 = t^2(l/n)^{2j+1}$. Summing up yields
\[
\sum_{l=1}^{n-1} I_l \leq C\lambda^{-2} n^{-(j+1/2)} \sum_{l=1}^{n-1} l^{j+1/2}\exp(-\gamma_2 l) \leq C'\lambda^{-2} n^{-(j+1/2)}.
\]
Taking the estimates for $I_0,\ldots,I_n$ together, for every $y > 0$ we obtain
\[
(9) \qquad \mathbb{P}\bigl(|\widetilde{X}_n^{(j)}(\beta)| \leq y\bigr) \leq Cy\Bigl(\frac{n^{-(j+1/2)}}{\lambda^2} + 1\Bigr).
\]

Second term on the RHS of (7). The second term on the right-hand side of (7) can be estimated using Chebyshev's inequality (and $\mathbb{E}\eta = 0$). Namely, for every $z > 0$,
\[
(10) \qquad \mathbb{P}(|\eta| > z) \leq \frac{\mathbb{V}\eta}{z^2} = \frac{2\lambda^2}{3z^2}.
\]

Proof of (6). We arrive at the final estimate setting $y = 3T/2$ and $z = T/2$ in (9) and (10), respectively. We obtain that for every $\lambda > 0$ and $T > 0$ the inequality
\[
\mathbb{P}\Bigl(\frac{|X_n^{(j)}(\beta)|}{n^j} \leq T\Bigr) \leq C\Bigl(\frac{T}{\lambda^2}\, n^{-(j+1/2)} + T + \frac{\lambda^2}{T^2}\Bigr)
\]
holds for a positive constant $C = C(j) > 0$. This bound can be optimized by choosing a suitable $\lambda > 0$. Setting $\lambda = T^{3/4} n^{-(j/4+1/8)}$, the statement of the lemma follows. ∎

2.3. Roots and sign changes. The next lemma contains the main result of this section.

Lemma 5. For every $\delta \in (0,1/2)$ there exists $n_0 = n_0(\delta) \in \mathbb{N}$ such that for all $n \geq n_0$ and every interval $[\alpha,\beta] \subset [0,2\pi]$ of length $\beta - \alpha = \delta n^{-1}$ we have the estimate
\[
0 \leq \mathbb{E} N_n[\alpha,\beta] - \mathbb{E} N_n^*[\alpha,\beta] \leq C\bigl(\delta^{4/3} + \delta^{-7} n^{-1/4}\bigr),
\]
where $C > 0$ is a constant independent of $n$, $\delta$, $\alpha$, $\beta$. A crucial feature of this estimate is that the exponent $4/3$ of $\delta$ is $> 1$, while the exponent of $n$ is negative.

Proof. Let $D_m^{(j)}$ be the random event defined as in Section 2.2.
Observe that, due to the convention in which way $N_n[\alpha,\beta]$ counts the roots, the difference between $N_n[\alpha,\beta]$ and $N_n^*[\alpha,\beta]$ vanishes in the following cases:

• $X_n$ has no roots in $[\alpha,\beta]$ (in which case $N_n[\alpha,\beta] = N_n^*[\alpha,\beta] = 0$);
• $X_n$ has exactly one simple root in $(\alpha,\beta)$ and no roots on the boundary (in which case $N_n[\alpha,\beta] = N_n^*[\alpha,\beta] = 1$);
• $X_n$ has no roots in $(\alpha,\beta)$ and one simple root (counted as $1/2$) at either $\alpha$ or $\beta$ (in which case $N_n[\alpha,\beta] = N_n^*[\alpha,\beta] = 1/2$).

In all other cases (namely, on the event $D_2^{(0)}$, when the number of roots in $[\alpha,\beta]$, with multiplicities, but without $1/2$-weights on the boundary, is at least 2) we only have the trivial estimate $0 \leq N_n[\alpha,\beta] - N_n^*[\alpha,\beta] \leq N_n[\alpha,\beta]$. Since $D_2^{(0)} \supseteq D_3^{(0)} \supseteq \ldots$ and on the event $D_m^{(0)} \setminus D_{m+1}^{(0)}$ it holds that $N_n[\alpha,\beta] \leq m$, we obtain
\[
0 \leq \mathbb{E} N_n[\alpha,\beta] - \mathbb{E} N_n^*[\alpha,\beta]
\leq \mathbb{E}\bigl[N_n[\alpha,\beta]\,\mathbb{1}_{D_2^{(0)}}\bigr]
\leq \mathbb{P}\bigl(D_2^{(0)}\bigr) + \sum_{m=2}^{2n} \mathbb{P}\bigl(D_m^{(0)}\bigr)
\leq \mathbb{P}\bigl(D_2^{(0)}\bigr) + \sum_{m=2}^{21} \mathbb{P}\bigl(D_m^{(0)}\bigr) + \sum_{m=2}^{2n-20} \mathbb{P}\bigl(D_m^{(20)}\bigr),
\]
where in the last step we passed to the 20th derivative of $X_n$ using Rolle's theorem. The upper bounds for the first two terms on the right-hand side follow immediately from Lemma 2, namely
\[
\mathbb{P}\bigl(D_2^{(0)}\bigr) + \sum_{m=2}^{21} \mathbb{P}\bigl(D_m^{(0)}\bigr) \leq C\bigl(\delta^{4/3} + \delta^{-7} n^{-1/4}\bigr).
\]
Thus we focus on the last term. For every $\delta > 0$ (and $n$ big enough) we can find a number $k_0 = k_0(\delta,n) \in \{2,\ldots,2n\}$ such that
\[
n^2 \leq \delta^{-k_0/3} < \delta^{-2k_0/3} \leq n^5.
\]
For $m = 2,\ldots,k_0$ the estimate for the probability of $D_m^{(20)}$ presented in Lemma 2 is good enough, whereas for $m = k_0+1,\ldots,2n-20$ we use the fact that $D_{k_0}^{(20)} \supseteq D_{k_0+l}^{(20)}$ for all $l \in \mathbb{N}$. This yields
\[
\sum_{m=2}^{2n-20} \mathbb{P}\bigl(D_m^{(20)}\bigr)
\leq \sum_{m=2}^{k_0} \mathbb{P}\bigl(D_m^{(20)}\bigr) + \sum_{m=k_0+1}^{2n-20} \mathbb{P}\bigl(D_{k_0}^{(20)}\bigr)
\leq \sum_{m=2}^{k_0} C\bigl(\delta^{2m/3} + \delta^{-m/3} n^{-10}\bigr) + 2Cn\bigl(\delta^{2k_0/3} + \delta^{-k_0/3} n^{-10}\bigr)
\leq C\bigl(\delta^{4/3} + n^{-5}\bigr) + 2Cn\bigl(n^{-2} + n^{-5}\bigr)
\leq C\bigl(\delta^{4/3} + \delta^{-7} n^{-1/4}\bigr).
\]
Combining the above estimates yields the statement of the lemma. ∎

3. The related stationary Gaussian process

3.1. Convergence to the Gaussian case. In the following, let $(Z(t))_{t\in\mathbb{R}}$ denote the stationary Gaussian process with $\mathbb{E} Z(t) = u$, $\mathbb{V} Z(t) = 1$, and covariance
\[
\operatorname{Cov}[Z(t),Z(s)] = \frac{\sin(t-s)}{t-s}, \qquad t \neq s.
\]
The following lemma states the weak convergence of the bivariate distribution of $(X_n(\alpha), X_n(\beta))$ with $\beta - \alpha = n^{-1}\delta$ to $(Z(0), Z(\delta))$, as $n \to \infty$.

Lemma 6. Let $\delta > 0$ be arbitrary but fixed. For $n \in \mathbb{N}$ let $[\alpha_n,\beta_n] \subseteq [0,2\pi]$ be an interval of length $\beta_n - \alpha_n = n^{-1}\delta$. Then
\[
\begin{pmatrix} X_n(\alpha_n) \\ X_n(\beta_n) \end{pmatrix} \to \begin{pmatrix} Z(0) \\ Z(\delta) \end{pmatrix}
\quad \text{in distribution as } n \to \infty.
\]

Proof. To prove the statement it suffices to show the pointwise convergence of the corresponding characteristic functions. Let
\[
\varphi_n(\lambda,\mu) := \mathbb{E}\, e^{i(\lambda X_n(\alpha_n) + \mu X_n(\beta_n))}
\]
denote the characteristic function of $(X_n(\alpha_n), X_n(\beta_n))$. Recall that $\varphi$ represents the common characteristic function of the coefficients $(A_k)_{k\in\mathbb{N}}$ and $(B_k)_{k\in\mathbb{N}}$. Then the expression reads
\[
\varphi_n(\lambda,\mu) = e^{iu(\lambda+\mu)} \prod_{k=1}^{n} \varphi\Bigl(\frac{\lambda\cos(k\alpha_n)+\mu\cos(k\beta_n)}{\sqrt{n}}\Bigr)\,\varphi\Bigl(\frac{\lambda\sin(k\alpha_n)+\mu\sin(k\beta_n)}{\sqrt{n}}\Bigr).
\]
Using (5) we have $\varphi_n(\lambda,\mu) = e^{-S_n(\lambda,\mu)}$, where
\[
S_n(\lambda,\mu) := -iu(\lambda+\mu)
+ \frac{1}{2n}\sum_{k=1}^{n} \bigl(\lambda\cos(k\alpha_n)+\mu\cos(k\beta_n)\bigr)^2 H_1(n,k)
+ \frac{1}{2n}\sum_{k=1}^{n} \bigl(\lambda\sin(k\alpha_n)+\mu\sin(k\beta_n)\bigr)^2 H_2(n,k),
\]
and where we have shortened the writing by defining
\[
H_1(n,k) := H\Bigl(\frac{\lambda\cos(k\alpha_n)+\mu\cos(k\beta_n)}{\sqrt{n}}\Bigr),
\qquad
H_2(n,k) := H\Bigl(\frac{\lambda\sin(k\alpha_n)+\mu\sin(k\beta_n)}{\sqrt{n}}\Bigr).
\]
After elementary transformations and using that $\beta_n - \alpha_n = n^{-1}\delta$ we obtain
\[
S_n(\lambda,\mu) = -iu(\lambda+\mu) + \frac{1}{n}\sum_{k=1}^{n} H_1(n,k)\Bigl(\frac{\lambda^2}{2} + \frac{\mu^2}{2} + \lambda\mu\cos\Bigl(k\frac{\delta}{n}\Bigr)\Bigr) + R_n(\lambda,\mu),
\]
where we have abbreviated
\[
R_n(\lambda,\mu) := \frac{1}{2n}\sum_{k=1}^{n} \bigl(\lambda\sin(k\alpha_n)+\mu\sin(k\beta_n)\bigr)^2 \bigl(H_2(n,k) - H_1(n,k)\bigr).
\]
Since Riemann sums converge to Riemann integrals, we have
\[
\lim_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n} \Bigl(\frac{\lambda^2}{2} + \frac{\mu^2}{2} + \lambda\mu\cos\Bigl(k\frac{\delta}{n}\Bigr)\Bigr)
= \frac{\lambda^2}{2} + \frac{\mu^2}{2} + \lambda\mu\,\frac{\sin\delta}{\delta}.
\]
For $i = 1,2$ we have $\lim_{n\to\infty} H_i(n,k) = H(0) = 1$ uniformly in $k = 1,2,\ldots,n$. Hence,
\[
\Bigl|\frac{1}{n}\sum_{k=1}^{n} \bigl(H_1(n,k)-1\bigr)\Bigl(\frac{\lambda^2}{2}+\frac{\mu^2}{2}+\lambda\mu\cos\Bigl(k\frac{\delta}{n}\Bigr)\Bigr)\Bigr|
\leq \frac{C}{n}\sum_{k=1}^{n} |H_1(n,k)-1| \longrightarrow 0
\]
as $n \to \infty$. The remaining term of the sum,
\[
|R_n(\lambda,\mu)| \leq \frac{C}{2n}\sum_{k=1}^{n} |H_2(n,k)-H_1(n,k)| \longrightarrow 0,
\]
goes to $0$ for all fixed $\lambda,\mu$, as $n \to \infty$. Therefore we have
\[
(11) \qquad S_\infty(\lambda,\mu) := \lim_{n\to\infty} S_n(\lambda,\mu) = -iu(\lambda+\mu) + \frac{\lambda^2+\mu^2}{2} + \lambda\mu\,\frac{\sin(\delta)}{\delta},
\]
and $\varphi_\infty(\lambda,\mu) := \exp(-S_\infty(\lambda,\mu))$ is nothing but the characteristic function of $(Z(0),Z(\delta))$. This implies the statement. ∎

3.2. The Gaussian case. Denote by $\widetilde{N}^*[\alpha,\beta]$ the analogue of $N_n^*[\alpha,\beta]$ for the process $Z$, that is,
\[
(12) \qquad \widetilde{N}^*[\alpha,\beta] := \frac{1}{2} - \frac{1}{2}\operatorname{sgn}\bigl(Z(\alpha)Z(\beta)\bigr).
\]

Lemma 7. As $\delta \downarrow 0$, we have
\[
(13) \qquad \mathbb{E}\widetilde{N}^*[0,\delta] = \frac{1}{\pi\sqrt{3}}\exp\Bigl(-\frac{u^2}{2}\Bigr)\delta + o(\delta).
\]

Proof. The bivariate random vector $(Z(0),Z(\delta))$ is normally distributed with mean $(u,u)$ and covariance $\rho = \delta^{-1}\sin\delta$. We have
\[
\mathbb{E}\widetilde{N}^*[0,\delta] = \mathbb{P}\bigl(Z(0)Z(\delta) < 0\bigr)
= 2\,\mathbb{P}\bigl(Z(0)-u < -u,\ Z(\delta)-u > -u\bigr)
\sim \frac{\sqrt{1-\rho^2}}{\pi}\exp\Bigl(-\frac{u^2}{2}\Bigr)
\]
as $\delta \downarrow 0$ (equivalently, $\rho \uparrow 1$), where the last step will be justified in Lemma 8 below. Using the Taylor series of $\delta^{-1}\sin\delta$, which is given by
\[
(14) \qquad \frac{\sin(\delta)}{\delta} = 1 - \frac{\delta^2}{6} + o(\delta^2) \quad \text{as } \delta \downarrow 0,
\]
we obtain the required relation (13). ∎

Lemma 8. Let $(X,Y) \sim N(\mu,\Sigma)$ be bivariate normally distributed with parameters
\[
\mu = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\quad \text{and} \quad
\Sigma = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}.
\]
Let $u \in \mathbb{R}$ be arbitrary but fixed. Then,
\[
\mathbb{P}(X \leq -u,\ Y > -u) \sim \frac{\sqrt{1-\rho^2}}{2\pi}\exp\Bigl(-\frac{u^2}{2}\Bigr) \quad \text{as } \rho \uparrow 1.
\]
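The sign-change probability of Lemma 7 can be sanity-checked by simulating the pair $(Z(0), Z(\delta))$ directly. This is a sketch with arbitrary parameters; the exact constant is the content of the lemma, and the code only confirms the order of magnitude for one small $\delta$.

```python
import numpy as np

rng = np.random.default_rng(4)
delta, u, trials = 0.1, 0.5, 2_000_000
rho = np.sin(delta) / delta  # covariance of (Z(0), Z(delta))

# Sample the bivariate Gaussian: mean (u, u), unit variances, correlation rho
Z0 = rng.standard_normal(trials)
W = rng.standard_normal(trials)
Zd = rho * Z0 + np.sqrt(1.0 - rho ** 2) * W

emp = np.mean((Z0 + u) * (Zd + u) < 0.0)  # E N~*[0, delta] = P(Z(0) Z(delta) < 0)
pred = delta / (np.pi * np.sqrt(3.0)) * np.exp(-u ** 2 / 2.0)  # leading term in (13)
print(emp, pred)
```

For $\delta = 0.1$ the empirical frequency and the leading term of (13) agree to within a few percent, consistent with the $o(\delta)$ remainder.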
