Absolute continuity and convergence of densities for random vectors on Wiener chaos

Ivan Nourdin*, David Nualart† and Guillaume Poly‡

July 24, 2012

Abstract

The aim of this paper is to establish some new results on the absolute continuity and the convergence in total variation for a sequence of $d$-dimensional vectors whose components belong to a finite sum of Wiener chaoses. First we show that the probability that the determinant of the Malliavin matrix of such vectors vanishes is zero or one, and that this probability equals one if and only if the vector takes values in the set of zeros of a polynomial. We provide a bound for the degree of this annihilating polynomial, improving a result by Kusuoka [8]. On the other hand, we show that convergence in law implies convergence in total variation, extending to the multivariate case a recent result by Nourdin and Poly [11]. This follows from an inequality relating the total variation distance with the Fortet-Mourier distance. Finally, applications to some particular cases are discussed and several open questions are listed.

1 Introduction

The purpose of this paper is to establish some new results on the absolute continuity and the convergence of the densities in some $L^p(\mathbb{R}^d)$ for a sequence of $d$-dimensional random vectors whose components belong to a finite sum of Wiener chaoses. These results generalize previous works by Kusuoka [8] and by Nourdin and Poly [11], and are based on a combination of the techniques of Malliavin calculus, the Carbery-Wright inequality and some recent work on algebraic dependence for a family of polynomials.

*Institut Élie Cartan, Université de Lorraine, BP 70239, 54506 Vandoeuvre-lès-Nancy, France, [email protected]; IN is partially supported by the ANR grants ANR-09-BLAN-0114 and ANR-10-BLAN-0121.
†Department of Mathematics, University of Kansas, Lawrence, Kansas, 66045 USA, [email protected]; DN is supported by the NSF grant DMS-1208625.
‡Laboratoire d'Analyse et de Mathématiques Appliquées, UMR 8050, Université Paris-Est Marne-la-Vallée, 5 Bld Descartes, Champs-sur-Marne, 77454 Marne-la-Vallée Cedex 2, France, [email protected].

Let us describe our main results. Given two $d$-dimensional random vectors $F$ and $G$, we denote by $d_{TV}(F,G)$ the total variation distance between the laws of $F$ and $G$, defined by
$$d_{TV}(F,G) = \sup_{A \in \mathcal{B}(\mathbb{R}^d)} |P(F \in A) - P(G \in A)|,$$
where the supremum is taken over all Borel sets $A$ of $\mathbb{R}^d$. On the other hand, we denote by $d_{FM}(F,G)$ the Fortet-Mourier distance, given by
$$d_{FM}(F,G) = \sup_{\phi} |E[\phi(F)] - E[\phi(G)]|,$$
where the supremum is taken over all 1-Lipschitz functions $\phi : \mathbb{R}^d \to \mathbb{R}$ which are bounded by 1. It is well known that $d_{FM}$ metrizes the convergence in distribution.

Consider a sequence of random vectors $F_n = (F_{1,n},\dots,F_{d,n})$ whose components belong to $\bigoplus_{k=0}^q \mathcal{H}_k$, where $\mathcal{H}_k$ stands for the $k$th Wiener chaos, and assume that $F_n$ converges in distribution towards a random variable $F_\infty$. Denote by $\Gamma(F_n)$ the Malliavin matrix of $F_n$, and assume that $E[\det \Gamma(F_n)]$ is bounded away from zero. Then we prove that there exist constants $c, \gamma > 0$ (depending on $d$ and $q$) such that, for any $n \geq 1$,
$$d_{TV}(F_n, F_\infty) \leq c\, d_{FM}(F_n, F_\infty)^{\gamma}. \qquad (1.1)$$
So our result implies that the sequence $F_n$ converges not only in law but also in total variation. In [11] this result was proved for $d = 1$. In this case $\gamma = \frac{1}{2q+1}$, and one only needs that $F_\infty$ is not identically zero, which turns out to be equivalent to the fact that the law of $F_\infty$ is absolutely continuous. This equivalence is not true for $d \geq 2$. The proof of this result is based on the Carbery-Wright inequality for the law of a polynomial in Gaussian random variables and also on the integration-by-parts formula of Malliavin calculus. In the multidimensional case we make use of the integration-by-parts formula based on the Poisson kernel developed by Bally and Caramellino in [1].
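Inequality (1.1) is a genuinely nontrivial strengthening: in general, convergence in the Fortet-Mourier distance (equivalently, in law) does not imply convergence in total variation. The following small sketch (our own illustration, not code or an example from the paper) makes this concrete with $F_n$ uniform on the grid $\{1/n, 2/n, \dots, 1\}$ and $U \sim U(0,1)$: the coupling $F_n = \lceil nU \rceil / n$ gives $d_{FM}(F_n, U) \leq E|F_n - U| \approx 1/(2n) \to 0$, while $d_{TV}(F_n, U) = 1$ for every $n$ because the grid is Lebesgue-null.

```python
import math
import random

def fm_upper_bound(n, num_samples=100_000, seed=0):
    """Upper-bound d_FM(F_n, U) by E|F_n - U| under the coupling
    F_n = ceil(n*U)/n; valid because the test functions are 1-Lipschitz."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_samples):
        u = rng.random()                # U ~ Uniform(0,1)
        f_n = math.ceil(n * u) / n      # coupled grid variable F_n
        total += abs(f_n - u)
    return total / num_samples

# By contrast, d_TV(F_n, U) = 1 for every n: taking A = {1/n, ..., 1}
# gives P(F_n in A) - P(U in A) = 1 - 0.
for n in (10, 100, 1000):
    print(n, fm_upper_bound(n))         # roughly 1/(2n)
```

So the sequence converges in law at rate $O(1/n)$ while staying at maximal total variation distance; (1.1) rules this behaviour out for chaotic vectors with non-degenerate Malliavin matrices.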
The convergence in total variation is equivalent to the convergence of the densities in $L^1(\mathbb{R}^d)$. We improve this result by proving that, under the above assumptions on the sequence $F_n$, the densities converge in $L^p(\mathbb{R}^d)$ for some explicit $p > 1$ depending solely on $d$ and $q$.

Motivated by the above inequality, in the first part of the paper we discuss the absolute continuity of the law of a $d$-dimensional random vector $F = (F_1,\dots,F_d)$ whose components belong to a finite sum of Wiener chaoses $\bigoplus_{k=0}^q \mathcal{H}_k$. Our main result says that the three following conditions are equivalent:

1. The law of $F$ is not absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^d$.

2. There exists a nonzero polynomial $H$ in $d$ variables of degree at most $dq^{d-1}$ such that $H(F) = 0$.

3. $E[\det \Gamma(F)] = 0$.

Notice that the criterion of the Malliavin calculus for the absolute continuity of the law of a random vector $F$ says that $\det \Gamma(F) > 0$ almost surely implies the absolute continuity of the law of $F$. We prove the stronger result that $P(\det \Gamma(F) = 0)$ is zero or one; as a consequence, $P(\det \Gamma(F) > 0) = 1$ turns out to be equivalent to the absolute continuity. The equivalence with condition 2 improves a classical result by Kusuoka ([8]), in the sense that we provide a simple proof of the existence of the annihilating polynomial based on a recent result by Kayal [7], and we give an upper bound for the degree of this polynomial. Also, it is worthwhile noting that, compared to condition 2, condition 3 is often easier to check in practical situations; see also the end of Section 3.

The paper is organized as follows. Section 2 contains some preliminary material on Malliavin calculus, the Carbery-Wright inequality and the results on algebraic dependence that will be used in the paper. In Section 3 we provide equivalent conditions for absolute continuity in the case of a random vector in a sum of Wiener chaoses. Section 4 is devoted to establishing the inequality (1.1), and also the convergence in $L^p(\mathbb{R}^d)$ for some $p$.
Section 5 contains applications of these results in some particular cases. Finally, we list two open questions in Section 6.

2 Preliminaries

This section contains some basic elements of Gaussian analysis that will be used throughout this paper. We refer the reader to the books [10, 13] for further details.

2.1 Multiple stochastic integrals

Let $H$ be a real separable Hilbert space. We denote by $X = \{X(h), h \in H\}$ an isonormal Gaussian process over $H$. That means, $X$ is a centered Gaussian family of random variables defined on some probability space $(\Omega, \mathcal{F}, P)$, with covariance given by
$$E[X(h)X(g)] = \langle h, g \rangle_H$$
for any $h, g \in H$. We also assume that $\mathcal{F}$ is generated by $X$.

For every $k \geq 1$, we denote by $\mathcal{H}_k$ the $k$th Wiener chaos of $X$, defined as the closed linear subspace of $L^2(\Omega)$ generated by the family of random variables $\{H_k(X(h)), h \in H, \|h\|_H = 1\}$, where $H_k$ is the $k$th Hermite polynomial given by
$$H_k(x) = (-1)^k e^{\frac{x^2}{2}} \frac{d^k}{dx^k} e^{-\frac{x^2}{2}}.$$
We write by convention $\mathcal{H}_0 = \mathbb{R}$. For any $k \geq 1$, we denote by $H^{\otimes k}$ the $k$th tensor product of $H$. Then, the mapping $I_k(h^{\otimes k}) = H_k(X(h))$ can be extended to a linear isometry between the symmetric tensor product $H^{\odot k}$ (equipped with the modified norm $\sqrt{k!}\, \|\cdot\|_{H^{\otimes k}}$) and the $k$th Wiener chaos $\mathcal{H}_k$. For $k = 0$ we write $I_0(x) = c$, $c \in \mathbb{R}$. In the particular case $H = L^2(A, \mathcal{A}, \mu)$, where $\mu$ is a $\sigma$-finite measure without atoms, $H^{\odot k}$ coincides with the space $L^2_s(\mu^k)$ of symmetric functions which are square integrable with respect to the product measure $\mu^k$, and for any $f \in H^{\odot k}$ the random variable $I_k(f)$ is the multiple stochastic integral of $f$ with respect to the centered Gaussian measure generated by $X$.

Any random variable $F \in L^2(\Omega)$ admits an orthogonal decomposition of the form $F = \sum_{k=0}^{\infty} I_k(f_k)$, where $f_0 = E[F]$, and the kernels $f_k \in H^{\odot k}$ are uniquely determined by $F$.

Let $\{e_i, i \geq 1\}$ be a complete orthonormal system in $H$.
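The definition of $H_k$ above yields the three-term recurrence $H_{k+1}(x) = x H_k(x) - k H_{k-1}(x)$, which in turn makes the orthogonality relation $E[H_j(X)H_k(X)] = k!\,\delta_{jk}$, $X \sim N(0,1)$, behind the chaos decomposition easy to verify. The following self-contained sketch (our own code, not from the paper) does this with exact integer arithmetic, using the Gaussian moments $E[X^{2m}] = (2m-1)!!$:

```python
# Probabilists' Hermite polynomials via the recurrence
# H_{k+1}(x) = x*H_k(x) - k*H_{k-1}(x), stored as coefficient lists
# (index = power of x).

def hermite_coeffs(k):
    h_prev, h = [1], [0, 1]          # H_0 = 1, H_1 = x
    if k == 0:
        return h_prev
    for m in range(1, k):
        h_next = [0] + h             # multiply H_m by x
        for i, c in enumerate(h_prev):
            h_next[i] -= m * c       # subtract m * H_{m-1}
        h_prev, h = h, h_next
    return h

def gaussian_moment(p):
    """E[X^p] for X ~ N(0,1): 0 for odd p, (p-1)!! for even p."""
    if p % 2 == 1:
        return 0
    out = 1
    for j in range(1, p, 2):
        out *= j
    return out

def inner_product(a, b):
    """E[A(X) B(X)] computed exactly from the Gaussian moments."""
    return sum(ca * cb * gaussian_moment(i + j)
               for i, ca in enumerate(a) for j, cb in enumerate(b))

# Orthogonality: E[H_j(X) H_k(X)] = k! * delta_{jk}.
print(inner_product(hermite_coeffs(3), hermite_coeffs(3)))  # 6 = 3!
print(inner_product(hermite_coeffs(2), hermite_coeffs(3)))  # 0
```

For instance, `hermite_coeffs(2)` returns `[-1, 0, 1]`, i.e. $H_2(x) = x^2 - 1$, the building block of the second chaos.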
Given $f \in H^{\odot k}$ and $g \in H^{\odot j}$, for every $r = 0, \dots, k \wedge j$, the contraction of $f$ and $g$ of order $r$ is the element of $H^{\otimes(k+j-2r)}$ defined by
$$f \otimes_r g = \sum_{i_1,\dots,i_r=1}^{\infty} \langle f, e_{i_1} \otimes \cdots \otimes e_{i_r} \rangle_{H^{\otimes r}} \otimes \langle g, e_{i_1} \otimes \cdots \otimes e_{i_r} \rangle_{H^{\otimes r}}.$$
The contraction $f \otimes_r g$ is not necessarily symmetric, and we denote by $f \widetilde{\otimes}_r g$ its symmetrization.

2.2 Malliavin calculus

Let $\mathcal{S}$ be the set of all cylindrical random variables of the form
$$F = g(X(h_1), \dots, X(h_n)),$$
where $n \geq 1$, $h_i \in H$, and $g$ is infinitely differentiable such that all its partial derivatives have polynomial growth. The Malliavin derivative of $F$ is the element of $L^2(\Omega; H)$ defined by
$$DF = \sum_{i=1}^{n} \frac{\partial g}{\partial x_i}(X(h_1), \dots, X(h_n))\, h_i.$$
By iteration, for every $m \geq 2$, we define the $m$th derivative $D^m F$, which is an element of $L^2(\Omega; H^{\odot m})$. For $m \geq 1$ and $p \geq 1$, $\mathbb{D}^{m,p}$ denotes the closure of $\mathcal{S}$ with respect to the norm $\|\cdot\|_{m,p}$ defined by
$$\|F\|_{m,p}^p = E[|F|^p] + \sum_{j=1}^{m} E\left[\|D^j F\|_{H^{\otimes j}}^p\right].$$
We also set $\mathbb{D}^{\infty} = \bigcap_{m \geq 1} \bigcap_{p \geq 1} \mathbb{D}^{m,p}$.

As a consequence of the hypercontractivity property of the Ornstein-Uhlenbeck semigroup, all the $\|\cdot\|_{m,p}$-norms are equivalent on a finite sum of Wiener chaoses. This is a basic result that will be used along the paper.

We denote by $\delta$ the adjoint of the operator $D$, also called the divergence operator. An element $u \in L^2(\Omega; H)$ belongs to the domain of $\delta$, denoted $\mathrm{Dom}\,\delta$, if $|E[\langle DF, u \rangle_H]| \leq c_u \|F\|_{L^2(\Omega)}$ for any $F \in \mathbb{D}^{1,2}$, where $c_u$ is a constant depending only on $u$. Then, the random variable $\delta(u)$ is defined by the duality relationship
$$E[F\delta(u)] = E[\langle DF, u \rangle_H]. \qquad (2.2)$$
Given a random vector $F = (F_1, \dots, F_d)$ such that $F_i \in \mathbb{D}^{1,2}$, we denote by $\Gamma(F)$ the Malliavin matrix of $F$, which is a random nonnegative definite matrix defined by
$$\Gamma_{i,j}(F) = \langle DF_i, DF_j \rangle_H.$$
If $F_i \in \mathbb{D}^{2,2}$ and $\det \Gamma(F) > 0$ almost surely, then the law of $F$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^d$ (see, for instance, [13, Theorem 2.1.1]). This is our basic criterion for absolute continuity in this paper.
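For the deterministic direction $u = h \in H$ with $\|h\|_H = 1$, one has $\delta(h) = X(h)$, and the duality (2.2) reduces to the classical Gaussian integration-by-parts identity $E[g(X(h))\,X(h)] = E[g'(X(h))]$. A quick Monte Carlo sanity check of this identity (our own illustration, not code from the paper), with $g(x) = x^3$ so that both sides equal $E[X^4] = 3$:

```python
import random

def mc_expect(f, num_samples=200_000, seed=2):
    """Monte Carlo estimate of E[f(X)] for X ~ N(0,1)."""
    rng = random.Random(seed)
    return sum(f(rng.gauss(0, 1)) for _ in range(num_samples)) / num_samples

# Duality (2.2) with u = h deterministic, ||h|| = 1, delta(h) = X(h):
# E[g(X(h)) X(h)] = E[g'(X(h))]   (Gaussian integration by parts).
# With g(x) = x**3 both sides equal E[X**4] = 3.
lhs = mc_expect(lambda x: x ** 3 * x)   # E[F * delta(h)]
rhs = mc_expect(lambda x: 3 * x ** 2)   # E[<DF, h>_H]
print(lhs, rhs)
```

Both estimates should land near 3, up to Monte Carlo error of order $10^{-2}$.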
2.3 Carbery-Wright inequality

Along the paper we will make use of the following inequality due to Carbery and Wright [4]: there is a universal constant $c > 0$ such that, for any polynomial $Q : \mathbb{R}^n \to \mathbb{R}$ of degree at most $d$ and any $\alpha > 0$, we have
$$E[Q(X_1, \dots, X_n)^2]^{\frac{1}{2d}}\; P(|Q(X_1, \dots, X_n)| \leq \alpha) \leq c\, d\, \alpha^{\frac{1}{d}}, \qquad (2.3)$$
where $X_1, \dots, X_n$ are independent random variables with law $N(0,1)$.

2.4 Algebraic dependence

Let $\mathbb{F}$ be a field and let $f = (f_1, \dots, f_k) \in \mathbb{F}[x_1, \dots, x_n]$ be a set of $k$ polynomials of degree at most $d$ in $n$ variables over the field $\mathbb{F}$. These polynomials are said to be algebraically dependent if there exists a nonzero $k$-variate polynomial $A(t_1, \dots, t_k) \in \mathbb{F}[t_1, \dots, t_k]$ such that $A(f_1, \dots, f_k) = 0$. The polynomial $A$ is then called an $(f_1, \dots, f_k)$-annihilating polynomial. Denote by
$$Jf = \left( \frac{\partial f_i}{\partial x_j} \right)_{1 \leq i \leq k,\; 1 \leq j \leq n}$$
the Jacobian matrix of the set of polynomials in $f$. A classical result (see, e.g., Ehrenborg and Rota [6] for a proof) says that $f_1, \dots, f_k$ are algebraically independent if and only if the Jacobian matrix $Jf$ has rank $k$.

Suppose that the polynomials $f = (f_1, \dots, f_k)$ are algebraically dependent. Then the set of $f$-annihilating polynomials forms an ideal in the polynomial ring $\mathbb{F}[t_1, \dots, t_k]$. In a recent work, Kayal (see [7]) has established some properties of this ideal. In particular (see [7], Lemma 7) he has proved that if no proper subset of $f$ is algebraically dependent, then the ideal of $f$-annihilating polynomials is generated by a single irreducible polynomial. On the other hand (see [7], Theorem 11), the degree of this generator is at most $kd^{k-1}$.

3 Absolute continuity of the law of a system of multiple stochastic integrals

The purpose of this section is to extend a result by Kusuoka [8] on the characterization of the absolute continuity of a vector whose components are finite sums of multiple stochastic integrals, using techniques of Malliavin calculus.
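Before stating the main theorem, the Jacobian criterion of Ehrenborg and Rota recalled in Section 2.4 can be illustrated on a toy pair of polynomials (our own example, not from the paper): $f_1 = x_1 + x_2$ and $f_2 = (x_1 + x_2)^2$ are algebraically dependent, annihilated by $A(t_1, t_2) = t_2 - t_1^2$, and accordingly their Jacobian has rank $< 2$ at every point; replacing $f_2$ by $x_1 x_2$ restores algebraic independence and the rank jumps to 2 at generic points.

```python
import random

def jacobian_minor(grad1, grad2, x):
    """Determinant of the 2x2 Jacobian built from two gradients at x
    (n = 2 variables, so this single minor decides the rank)."""
    g1, g2 = grad1(x), grad2(x)
    return g1[0] * g2[1] - g1[1] * g2[0]

# f1 = x1 + x2 and f2 = (x1 + x2)**2: algebraically dependent,
# annihilated by A(t1, t2) = t2 - t1**2.
grad_f1 = lambda x: (1.0, 1.0)
grad_f2 = lambda x: (2 * (x[0] + x[1]), 2 * (x[0] + x[1]))

# g2 = x1 * x2 is algebraically independent of f1.
grad_g2 = lambda x: (x[1], x[0])

rng = random.Random(3)
pts = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(5)]
dep = [jacobian_minor(grad_f1, grad_f2, p) for p in pts]
ind = [jacobian_minor(grad_f1, grad_g2, p) for p in pts]
print(dep)   # identically 0: rank < 2 everywhere
print(ind)   # x1 - x2 at each point: generically nonzero, rank 2
```

This is exactly the mechanism exploited in the finite-dimensional step of the proof below: almost-sure vanishing of $\det \Gamma$ forces the Jacobian of the defining polynomials to be everywhere rank-deficient.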
Theorem 3.1 Fix $q, d \geq 1$, and let $F = (F_1, \dots, F_d)$ be a random vector such that $F_i \in \bigoplus_{k=0}^q \mathcal{H}_k$ for any $i = 1, \dots, d$. Let $\Gamma := \Gamma(F)$ be the Malliavin matrix of $F$. Then the following assertions are equivalent:

(a) The law of $F$ is not absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^d$.

(b) There exists $H \in \mathbb{R}[X_1, \dots, X_d] \setminus \{0\}$ of degree at most $D = dq^{d-1}$ such that, almost surely, $H(F_1, \dots, F_d) = 0$.

(c) $E[\det \Gamma] = 0$.

Proof of (a) $\Rightarrow$ (c). Let us prove $\neg$(c) $\Rightarrow$ $\neg$(a). Set $N = 2d(q-1)$ and let $\{e_k, k \geq 1\}$ be an orthonormal basis of $H$. Since $\det \Gamma \in \bigoplus_{k=0}^N \mathcal{H}_k$, there exists a sequence $\{Q_n, n \geq 1\}$ of real-valued polynomials of degree at most $N$ such that the random variables $Q_n(I_1(e_1), \dots, I_1(e_n))$ converge in $L^2(\Omega)$ and almost surely to $\det \Gamma$ as $n$ tends to infinity (see [11, Theorem 3.1, first step of the proof] for an explicit construction). Assume now that $E[\det \Gamma] > 0$. Then, for $n \geq n_0$, $E[|Q_n(I_1(e_1), \dots, I_1(e_n))|] > 0$. We deduce from the Carbery-Wright inequality (2.3) the existence of a universal constant $c > 0$ such that, for any $n \geq 1$,
$$P(|Q_n(I_1(e_1), \dots, I_1(e_n))| \leq \lambda) \leq cN\lambda^{1/N} \left( E[Q_n(I_1(e_1), \dots, I_1(e_n))^2] \right)^{-1/(2N)}.$$
Using the property
$$E[Q_n(I_1(e_1), \dots, I_1(e_n))^2] \geq \left( E[|Q_n(I_1(e_1), \dots, I_1(e_n))|] \right)^2,$$
we obtain
$$P(|Q_n(I_1(e_1), \dots, I_1(e_n))| \leq \lambda) \leq cN\lambda^{1/N} \left( E[|Q_n(I_1(e_1), \dots, I_1(e_n))|] \right)^{-1/N},$$
and letting $n$ tend to infinity we get
$$P(\det \Gamma \leq \lambda) \leq cN\lambda^{1/N} (E[\det \Gamma])^{-1/N}. \qquad (3.4)$$
Letting $\lambda \to 0$, we get that $P(\det \Gamma = 0) = 0$. As an immediate consequence of the absolute continuity criterion (see, for instance, [13, Theorem 2.1.1]), we get the absolute continuity of the law of $F$, and assertion (a) does not hold.

It is worthwhile noting that, in passing, we have proved that $P(\det \Gamma = 0)$ is zero or one.

Proof of (b) $\Rightarrow$ (a). Assume the existence of $H \in \mathbb{R}[X_1, \dots, X_d] \setminus \{0\}$ such that, almost surely, $H(F_1, \dots, F_d) = 0$. Since $H \not\equiv 0$, the zeros of $H$ constitute a closed subset of $\mathbb{R}^d$ with Lebesgue measure 0.
As a result, the vector $F$ cannot have a density with respect to the Lebesgue measure.

Proof of $\neg$(c) $\Rightarrow$ (b). Let $\{e_k, k \geq 1\}$ be an orthonormal basis of $H$, and set $G_k = I_1(e_k)$ for any $k \geq 1$. In order to illustrate the method of proof, we are going to deal first with the finite-dimensional case, that is, when $F_i = P_i(G_1, \dots, G_n)$, $i = 1, \dots, d$, and, for each $i$, $P_i \in \mathbb{R}[x_1, \dots, x_n]$ is a polynomial of degree at most $q$. In that case,
$$\langle DF_i, DF_k \rangle_H = \sum_{j=1}^{n} \frac{\partial P_i}{\partial x_j}(G_1, \dots, G_n)\, \frac{\partial P_k}{\partial x_j}(G_1, \dots, G_n),$$
and the Malliavin matrix $\Gamma$ of $F$ can be written as $\Gamma = AA^T$, where
$$A = \left( \frac{\partial P_i}{\partial x_j}(G_1, \dots, G_n) \right)_{1 \leq i \leq d,\; 1 \leq j \leq n}.$$
As a consequence, taking into account that the support of the law of $(G_1, \dots, G_n)$ is $\mathbb{R}^n$, if $\det \Gamma = 0$ almost surely, then the Jacobian $\left( \frac{\partial P_i}{\partial x_j}(y_1, \dots, y_n) \right)_{d \times n}$ has rank strictly less than $d$ for all $(y_1, \dots, y_n) \in \mathbb{R}^n$. Statement (b) is then a consequence of Theorem 2 and Theorem 11 in [7].

Consider now the general case. Any symmetric element $f \in H^{\otimes p}$ can be written as
$$f = \sum_{k_1, \dots, k_p = 1}^{\infty} a_{k_1, \dots, k_p}\, e_{k_1} \otimes \cdots \otimes e_{k_p}.$$
Setting $p_k = \#\{j : k_j = k\}$, the multiple stochastic integral of $e_{k_1} \otimes \cdots \otimes e_{k_p}$ can be written in terms of Hermite polynomials as
$$I_p(e_{k_1} \otimes \cdots \otimes e_{k_p}) = \prod_{k=1}^{\infty} H_{p_k}(G_k),$$
where the above product is finite. Thus,
$$I_p(f) = \sum_{k_1, \dots, k_p = 1}^{\infty} a_{k_1, \dots, k_p} \prod_{k=1}^{\infty} H_{p_k}(G_k),$$
where the series converges in $L^2$. This implies that we can write
$$I_p(f) = P(G_1, G_2, \dots), \qquad (3.5)$$
where $P : \mathbb{R}^{\mathbb{N}} \to \mathbb{R}$ is a function defined $\nu^{\otimes \mathbb{N}}$-almost everywhere, with $\nu$ the standard normal distribution. In other words, we can consider $I_p(f)$ as a random variable defined on the probability space $(\mathbb{R}^{\mathbb{N}}, \nu^{\otimes \mathbb{N}})$. On the other hand, for any $n \geq 1$ and for almost all $y_{n+1}, y_{n+2}, \dots$ in $\mathbb{R}$, the function $(y_1, \dots, y_n) \mapsto P(y_1, y_2, \dots)$ is a polynomial of degree at most $p$.
By linearity, from the representation (3.5) we deduce the existence of mappings $P_1, \dots, P_d : \mathbb{R}^{\mathbb{N}} \to \mathbb{R}$, defined $\nu^{\otimes \mathbb{N}}$-almost everywhere, such that for all $i = 1, \dots, d$,
$$F_i = P_i(G_1, G_2, \dots), \qquad (3.6)$$
and such that for all $n \geq 1$ and almost all $y_{n+1}, y_{n+2}, \dots$ in $\mathbb{R}$, the mapping $(y_1, \dots, y_n) \mapsto P_i(y_1, y_2, \dots)$ is a polynomial of degree at most $q$. With this notation, the Malliavin matrix $\Gamma$ can be expressed as $\Gamma = AA^T$, where
$$A = \left( \frac{\partial P_i}{\partial x_j}(G_1, G_2, \dots) \right)_{1 \leq i \leq d,\; j \geq 1}.$$
Consider the truncated Malliavin matrix $\Gamma_n = A_n A_n^T$, where
$$A_n = \left( \frac{\partial P_i}{\partial x_j}(G_1, G_2, \dots) \right)_{1 \leq i \leq d,\; 1 \leq j \leq n}.$$
From the Cauchy-Binet formula
$$\det \Gamma_n = \det(A_n A_n^T) = \sum_{J = \{j_1, \dots, j_d\} \subset \{1, \dots, n\}} (\det A_J)^2,$$
where, for $J = \{j_1, \dots, j_d\}$,
$$A_J = \left( \frac{\partial P_i}{\partial x_j}(G_1, G_2, \dots) \right)_{1 \leq i \leq d,\; j \in J},$$
we deduce that $\det \Gamma_n$ is increasing and converges to $\det \Gamma$. Therefore, if $\det \Gamma = 0$ almost surely, then for each $n \geq 1$, $\det \Gamma_n = 0$ almost surely.

Suppose that $E[\det \Gamma] = 0$, which implies that $\det \Gamma = 0$ almost surely. Then, for all $n \geq 1$, $\det \Gamma_n = 0$ almost surely. We can assume that for any proper subset $\{F_{i_1}, \dots, F_{i_r}\}$ of the random variables $\{F_1, \dots, F_d\}$ we have
$$E[\det \Gamma(F_{i_1}, \dots, F_{i_r})] \neq 0,$$
because otherwise we would work with a proper subset of this family. This implies that, for $n \geq n_0$ and for any such subset $\{F_{i_1}, \dots, F_{i_r}\}$,
$$E[\det \Gamma_n(F_{i_1}, \dots, F_{i_r})] \neq 0,$$
where $\Gamma_n$ denotes the truncated Malliavin matrix defined above. Then, applying the Carbery-Wright inequality, we can show that the probability $P(\det \Gamma_n(F_{i_1}, \dots, F_{i_r}) = 0)$ is zero or one, so we deduce that $\det \Gamma_n(F_{i_1}, \dots, F_{i_r}) > 0$ almost surely.

Fix $n \geq n_0$. We are going to apply the results by Kayal (see [7]) to the family of random polynomials
$$P_i^{(n)}(y_1, \dots, y_n) = P_i(y_1, \dots, y_n, G_{n+1}, G_{n+2}, \dots), \qquad 1 \leq i \leq d.$$
We can consider these polynomials as elements of the ring of polynomials $K[y_1, \dots, y_n]$, where $K$ is the field generated by all multiple stochastic integrals.
This field is well defined because, by a result of Shigekawa [14], if $F$ and $G$ are finite sums of multiple stochastic integrals and $G \not\equiv 0$, then $G$ is different from zero almost surely and $\frac{F}{G}$ is well defined. The Jacobian of this set of polynomials
$$J(y_1, \dots, y_n) = \left( \frac{\partial P_i^{(n)}}{\partial y_j}(y_1, \dots, y_n) \right)_{1 \leq i \leq d,\; 1 \leq j \leq n}$$
satisfies $J(G_1, \dots, G_n) = A_n$ almost surely and, therefore, it has determinant zero almost surely. Furthermore, for any proper subfamily of polynomials $\{P_{i_1}^{(n)}, \dots, P_{i_r}^{(n)}\}$, the corresponding Jacobian has nonzero determinant. As a consequence of the results by Kayal, there exists a nonzero irreducible polynomial $H_n \in K[x_1, \dots, x_d]$ of degree at most $D := dq^{d-1}$ which satisfies the following properties:

(i) The coefficients of $H_n$ are random variables measurable with respect to the $\sigma$-field $\sigma\{G_{n+1}, G_{n+2}, \dots\}$.

(ii) The coefficient of the largest monomial in antilexicographic order occurring in $H_n$ is 1.

(iii) For all $y_1, \dots, y_n \in \mathbb{R}$,
$$H_n(P_1^{(n)}(y_1, \dots, y_n), \dots, P_d^{(n)}(y_1, \dots, y_n)) = 0$$
almost surely.

(iv) If $A \in K[x_1, \dots, x_d]$ satisfies
$$A(P_1^{(n)}(y_1, \dots, y_n), \dots, P_d^{(n)}(y_1, \dots, y_n)) = 0$$
almost surely, then $A$ is a multiple of $H_n$, almost surely.

If we apply property (iii) to $n+1$ and substitute $y_{n+1}$ by $G_{n+1}$, we obtain
$$H_{n+1}(P_1^{(n+1)}(y_1, \dots, y_n, G_{n+1}), \dots, P_d^{(n+1)}(y_1, \dots, y_n, G_{n+1})) = 0.$$
From property (iv), and taking into account that for any $1 \leq i \leq d$,
$$P_i^{(n+1)}(y_1, \dots, y_n, G_{n+1}) = P_i^{(n)}(y_1, \dots, y_n)$$
almost surely, we deduce that $H_{n+1}$ is a multiple of $H_n$ almost surely. Using the fact that $H_{n+1}$ is irreducible and normalized, we deduce that $H_{n+1} = H_n$ almost surely for any $n \geq n_0$. The coefficients of these polynomials are random variables but, in view of condition (i) and using the Kolmogorov 0-1 law, we obtain that the coefficients are deterministic. Thus, there exists a polynomial $H \in \mathbb{R}[X_1, \dots, X_d] \setminus \{0\}$ of degree at most $D = dq^{d-1}$ such that $H(F_1, \dots, F_d) = 0$ almost surely.
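A minimal concrete instance of the theorem (our own illustration, not an example from the paper): take $d = q = 2$ and $F = (F_1, F_2) = (G, G^2 - 1)$ with $G = I_1(e_1)$, so that $F_2 = I_2(e_1 \otimes e_1)$. Then $DF_1 = e_1$ and $DF_2 = 2G\,e_1$, hence $\det \Gamma = 1 \cdot 4G^2 - (2G)^2 = 0$ identically, and indeed $H(t_1, t_2) = t_2 - t_1^2 + 1$ annihilates $F$, with $\deg H = 2 \leq dq^{d-1} = 4$. The sketch below checks both facts on simulated Gaussians:

```python
import random

# F = (G, G**2 - 1) with G = I_1(e_1):
# DF1 = e_1 and DF2 = 2*G*e_1, so Gamma is singular for every outcome.
rng = random.Random(4)
dets, anns = [], []
for _ in range(5):
    g = rng.gauss(0, 1)
    f1, f2 = g, g * g - 1
    gamma = [[1.0, 2 * g], [2 * g, 4 * g * g]]
    dets.append(gamma[0][0] * gamma[1][1] - gamma[0][1] * gamma[1][0])
    anns.append(f2 - f1 ** 2 + 1)    # H(F1, F2) with H = t2 - t1**2 + 1
print(dets)   # all 0: det Gamma vanishes identically
print(anns)   # all 0 up to rounding: H annihilates F
```

Consistently with the theorem, the law of $F$ is supported by the parabola $\{t_2 = t_1^2 - 1\}$ and is therefore not absolutely continuous on $\mathbb{R}^2$.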
The condition $E[\det \Gamma] > 0$ can be translated into a condition on the kernels of the multiple integrals appearing in the expansion of each component of the random vector $F$. Consider the following simple particular cases.

Example 1. Let $(F,G) = (I_1(f), I_q(g))$, with $q \geq 1$. Let $\Gamma$ be the Malliavin matrix of $(F,G)$. Let us compute $E[\det \Gamma]$. Applying the duality relationship (2.2) and the fact that $\delta(DG) = -LG = qG$, where $L$ is the Ornstein-Uhlenbeck operator, we deduce
$$E[\|DG\|_H^2] = E[G\delta(DG)] = qE[G^2] = q\, q!\, \|g\|_{H^{\otimes q}}^2,$$
so that
$$E[\det \Gamma] = \|f\|_H^2\, E[\|DG\|_H^2] - E[\langle f, DG \rangle_H^2] = \|f\|_H^2\, E[\|DG\|_H^2] - q^2 E[I_{q-1}(f \otimes_1 g)^2] = q\, q! \left( \|f\|_H^2 \|g\|_{H^{\otimes q}}^2 - \|f \otimes_1 g\|_{H^{\otimes(q-1)}}^2 \right).$$
We deduce that $E[\det \Gamma] > 0$ if and only if $\|f \otimes_1 g\|_{H^{\otimes(q-1)}} < \|f\|_H \|g\|_{H^{\otimes q}}$. Notice that when $q = 1$ the above formula for $E[\det \Gamma]$ reduces to $E[\det \Gamma] = \det C$, where $C$ is the covariance matrix of $(F,G)$.

Example 2. Let $(F,G) = (I_2(f), I_q(g))$, with $q \geq 2$. Let $\Gamma$ be the Malliavin matrix of $(F,G)$. Let us compute $E[\det \Gamma]$. We have
$$\|DG\|_H^2 = q^2 \sum_{r=1}^{q} (r-1)! \binom{q-1}{r-1}^2 I_{2q-2r}(g \otimes_r g) = \sum_{r=1}^{q} r\, r! \binom{q}{r}^2 I_{2q-2r}(g \otimes_r g),$$
so that
$$\langle DF, DG \rangle_H = 2q \left( I_q(f \otimes_1 g) + (q-1) I_{q-2}(f \otimes_2 g) \right),$$
$$\|DF\|_H^2 = 4\|f\|_{H^{\otimes 2}}^2 + 4 I_2(f \otimes_1 f),$$
$$\|DG\|_H^2 = q\, q!\, \|g\|_{H^{\otimes q}}^2 + (q-1)\, q\, q!\, I_2(g \otimes_{q-1} g) + \sum_{r=1}^{q-2} r\, r! \binom{q}{r}^2 I_{2q-2r}(g \otimes_r g).$$
We deduce
$$E[\det \Gamma] = E[\|DF\|_H^2 \|DG\|_H^2] - E[\langle DF, DG \rangle_H^2]$$
$$= 4q\, q!\, \|f\|_{H^{\otimes 2}}^2 \|g\|_{H^{\otimes q}}^2 + 8(q-1)\, q\, q!\, \langle f \otimes_1 f, g \otimes_{q-1} g \rangle_{H^{\otimes 2}} - 4q^2 q!\, \|f \widetilde{\otimes}_1 g\|_{H^{\otimes q}}^2 - 4q(q-1)\, q!\, \|f \otimes_2 g\|_{H^{\otimes(q-2)}}^2$$
$$= 4q\, q!\, \|f\|_{H^{\otimes 2}}^2 \|g\|_{H^{\otimes q}}^2 + 8(q-1)\, q\, q!\, \|f \otimes_1 g\|_{H^{\otimes q}}^2 - 4q^2 q!\, \|f \widetilde{\otimes}_1 g\|_{H^{\otimes q}}^2 - 4q(q-1)\, q!\, \|f \otimes_2 g\|_{H^{\otimes(q-2)}}^2. \qquad (3.7)$$
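For $q = 1$, Example 1 can be checked by direct simulation: realizing $H = \mathbb{R}^2$, $F = \langle f, Z \rangle$ and $G = \langle g, Z \rangle$ for a standard Gaussian vector $Z$, the matrix $\Gamma$ is deterministic and $E[\det \Gamma] = \|f\|^2\|g\|^2 - \langle f, g \rangle^2 = \det C$. A quick numerical sketch (our own check; the vectors $f$, $g$ below are arbitrary choices):

```python
import random

f = (1.0, 0.0)
g = (0.6, 0.8)          # unit vectors in H = R^2 with <f, g> = 0.6

dot = lambda a, b: a[0] * b[0] + a[1] * b[1]

# For q = 1: DF = f and DG = g, so Gamma is deterministic and
# det Gamma = |f|^2 |g|^2 - <f, g>^2 = 1 - 0.36 = 0.64.
det_gamma = dot(f, f) * dot(g, g) - dot(f, g) ** 2

# Compare with det C, where C is the covariance matrix of (F, G),
# estimated by Monte Carlo.
rng = random.Random(5)
n = 200_000
sxx = sxy = syy = 0.0
for _ in range(n):
    z = (rng.gauss(0, 1), rng.gauss(0, 1))
    F, G = dot(f, z), dot(g, z)
    sxx += F * F; sxy += F * G; syy += G * G
det_c = (sxx / n) * (syy / n) - (sxy / n) ** 2
print(det_gamma, det_c)   # 0.64 and a Monte Carlo estimate close to it
```

Since $0.64 > 0$, the criterion $E[\det \Gamma] > 0$ holds and the law of $(F, G)$ is absolutely continuous, as it must be for a non-degenerate Gaussian vector.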
