Martingale Expansion in Mixed Normal Limit*

Nakahiro YOSHIDA
University of Tokyo, Graduate School of Mathematical Sciences, 3-8-1 Komaba, Meguro-ku, Tokyo 153, Japan. e-mail: [email protected]

September 14, 2010. Revised version 1: March 14, 2012. Revised version 2: October 1, 2012. Revised version 3: December 31, 2012.

* This work was in part supported by Grants-in-Aid for Scientific Research No. 19340021, No. 24340015 (Scientific Research), No. 24650148 (Challenging Exploratory Research); the Global COE program "The Research and Training Center for New Development in Mathematics" of the Graduate School of Mathematical Sciences, University of Tokyo; the JST Basic Research Program PRESTO; and by a Cooperative Research Program of the Institute of Statistical Mathematics. The author thanks NS Solutions Corporation for its support. The main parts of this paper were presented at the Workshop on "Finance and Related Mathematical and Statistical Issues", September 3-6, 2008, Kyoto Research Park, Kyoto; the international conference "Statistique Asymptotique des Processus Stochastiques VII", Université du Maine, Le Mans, March 16-19, 2009; the MSJ Spring Meeting 2010, March 24-27, 2010, Keio University, Mathematical Society of Japan; and the international conference "DYNSTOCH Meeting 2010", Angers, June 16-19, 2010. The author thanks the organizers of these meetings for the opportunities to present the talks.

Summary. The quasi-likelihood estimator and the Bayesian type estimator of the volatility parameter are in general asymptotically mixed normal. In the case where the limit is normal, the asymptotic expansion was derived in [28] as an application of the martingale expansion. An expansion for the asymptotically mixed normal distribution is therefore indispensable for developing higher-order approximation and inference for the volatility. The classical approaches in limit theorems, where the limit is a process with independent increments or a simple mixture, do not work. We present an asymptotic expansion of a martingale with asymptotically mixed normal distribution. The expansion formula is expressed by the adjoint of a random symbol with coefficients described by the Malliavin calculus, differently from the standard invariance principle. Applications to a quadratic form of a diffusion process ("realized volatility") are discussed.

Keywords and phrases. Asymptotic expansion, martingale, mixed normal distribution, Malliavin calculus, random symbol, double Itô integral, quadratic form.

1 Introduction

The asymptotic expansion is a tool that gives a precise approximation to a probability distribution. As is commonly known, it is the foundation of contemporary fields in theoretical statistics, such as asymptotic decision theory, prediction, information criteria, bootstrap and resampling methods, information geometry, and stochastic numerical analysis with applications to finance, as well as of the higher-order approximation of distributions. The methodology of the asymptotic expansion has been well established, and historically it was developed especially for independent observations ([2]).

For stochastic processes, there are two principles of asymptotic expansion. The first is the mixing approach. It is also called the local approach because it takes advantage, more or less, of the factorization of the characteristic function by the Markovian property, and corresponds to the classical asymptotic expansion for independent models. A compilation of the studies for Markovian or near-Markovian chains is Götze and Hipp [5]. The local nondegeneracy, that is, the decay of each factor of the characteristic function, becomes an essential problem, and Götze and Hipp [6] showed it for time series models.
For continuous time, Kusuoka and Yoshida [12] and Yoshida [30] treated the $\epsilon$-Markov process to give an asymptotic expansion under the mixing condition. There, the local nondegeneracy of the Malliavin covariance works for objects expressed by stochastic analysis. Though this method is relatively new, there are already not a few applications: expansions for functionals of $\epsilon$-Markov processes, statistical estimators, information criteria, stochastic volatility models, and empirical distributions ([20], [24], [14], [13] among others).

Although the local approach is more efficient when one treats mixing processes, the asymptotic expansion for the ergodic diffusion was first derived through the martingale expansion ([28], [19], [29]). When the strong mixing coefficient decays sufficiently fast, under suitable moment conditions, the functionals appearing as the higher-order terms in the stochastic expansion of the functional have a jointly normal limit distribution, and this yields a classical expansion formula each of whose terms is a Hermite polynomial times the normal density. On the other hand, if we consider the sum of quadratics of the increments of a diffusion process, it is observed that in the higher order the variables have a non-Gaussian limit even when the sum is asymptotically normal in the first order. Such a phenomenon is observed in the estimation of the statistical parameter of an essentially linearly parametrized diffusion coefficient. Due to the non-Gaussianity, we cannot apply the mixing approach in this case. Nonetheless, the martingale expansion can be applied to obtain the expansion for the estimator of the volatility parameter; see [28]. This example shows that the martingale expansion is not inferior to the mixing method but even superior in some situations.

Estimation of the diffusion coefficient has been attracting statisticians' interest. There are many studies in theoretical statistics, such as [3], [17], [18], [27], [11], [22], [8], [23], [21] among others. The estimators, including the so-called realized volatility, are in general asymptotically mixed normal; see for example [3], [4], [7], [25]. Today a vast literature on this topic is available around financial data analysis. We refer the reader to [1] and [16] and the references therein for recent advances in the first-order asymptotic theory and for access to related papers. The statistical theory for mixed limits is called non-ergodic statistics since the Fisher information and the observed information are random even in the limit, differently from the classical cases where they are constant. In this sense, it may be said that the mixing approach belongs to the classical theory. As for the asymptotic expansion in non-ergodic statistics, it seems that there is room for study.

In order to explain the technical difficulties in this question, let us recall the method of proof of the martingale central limit theorem and of the classical martingale expansion. Let $M^n=(M^n_t)_{t\in[0,1]}$ be a continuous martingale; it would be possible to consider more general local martingales, but the existence of jumps, for example, does not change the situation essentially. Under the condition that $\langle M^n\rangle_1\to^p 1$ as $n\to\infty$, it suffices to show that
\[
E\big[e^{iuM^n_1}\big]\,e^{\frac{1}{2}u^2}\;=\;E\big[e^{iuM^n_1+\frac{1}{2}u^2}\big]\;\to\;1
\]
as $n\to\infty$ for every $u\in\mathbb{R}$. Since, for large $n$,
\[
E\big[e^{iuM^n_1+\frac{1}{2}u^2}\big]
\;=\;E\big[e^{iuM^n_1+\frac{1}{2}u^2\langle M^n\rangle_1}\,e^{-\frac{1}{2}u^2(\langle M^n\rangle_1-1)}\big]
\;\sim\;E\big[e^{iuM^n_1+\frac{1}{2}u^2\langle M^n\rangle_1}\big]\;=\;1
\]
by the martingale property of the exponential functional, if necessary after suitable localization. Thus we obtain the central limit theorem.
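The convergence $E[e^{iuM^n_1+\frac12u^2}]\to1$ just described can also be observed numerically. The following is a minimal Monte Carlo sketch, not part of the argument above, for a toy discrete-time martingale $M^n_1=n^{-1/2}\sum_{j\le n}\xi_j$ with i.i.d. centered, unit-variance increments, so that $\langle M^n\rangle_1=1$ exactly; the increment distribution, the sample sizes and the value of $u$ are illustrative assumptions only.

```python
# Minimal Monte Carlo sketch (illustrative assumptions only): for the toy
# martingale M_1^n = n^{-1/2} * (xi_1 + ... + xi_n) with i.i.d. centered,
# unit-variance increments, <M^n>_1 = 1 and E[exp(i*u*M_1^n + u^2/2)]
# should approach 1 as n grows.
import numpy as np

rng = np.random.default_rng(0)

def clt_gap(n, u, n_paths=200_000):
    """Estimate |E[exp(i*u*M_1^n + u^2/2)] - 1| by simulation."""
    m1 = np.zeros(n_paths)
    for _ in range(n):
        # centered exponential increments: mean 0, variance 1, non-Gaussian
        m1 += rng.exponential(1.0, size=n_paths) - 1.0
    m1 /= np.sqrt(n)
    return abs(np.exp(1j * u * m1 + 0.5 * u**2).mean() - 1.0)

for n in (16, 64, 256):
    print(n, clt_gap(n, u=1.0))
# The gap shrinks roughly like n^{-1/2}; quantifying such first correction
# terms is precisely what a martingale expansion is for.
```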
Roughly speaking, by evaluating the gap $\langle M^n\rangle_1-1$ more precisely, we can obtain the asymptotic expansion. In this standard proof in the classical theory of limit theorems for semimartingales converging to a process with independent increments, the commutativity of the expectation and $\mathcal{E}(M^\infty_1)(u)^{-1}$, where $\mathcal{E}(M^\infty_1)(u)$ is the characteristic function of the limit $M^\infty_1$ of $M^n_1$, was essential. However, when it is random, this method does not work. This explains the difficulty with the asymptotic expansion when the limit has random characteristics.

As already mentioned, the asymptotic expansion is indispensable for building modern theories in non-ergodic statistics, and so it seems natural to try to extend the classical theory of asymptotic expansion in ergodic statistics to a theory applicable to non-ergodic statistics. The aim of this article is to present a new martingale expansion that answers this question.

As a prototype problem, in Section 8, for a diffusion process satisfying the Itô integral equation
\[
X_t\;=\;X_0+\int_0^t b(X_s)\,ds+\int_0^t \sigma(X_s)\,dw_s, \tag{1}
\]
we will consider a quadratic form of the increments of $X$ with a strongly predictable kernel:
\[
U_n\;=\;\sum_{j=1}^n c(X_{t_{j-1}})(\Delta_j X)^2, \tag{2}
\]
where $c$ is a function, $\Delta_j X=X_{t_j}-X_{t_{j-1}}$ and $t_j=j/n$. Under suitable conditions, $U_n$ has the in-probability limit
\[
U_\infty\;=\;\int_0^1 c(X_s)\sigma(X_s)^2\,ds. \tag{3}
\]
The problem is then to derive a second-order approximation to the distribution of $\zeta_n=\sqrt{n}\,(U_n-U_\infty)$. When $c\equiv1$, this gives an asymptotic expansion of the realized volatility. It turns out that $\zeta_n$ admits a stochastic expansion with a double stochastic integral perturbed by higher-order terms. In Section 6, we will present an expansion formula for a general perturbed double stochastic integral. When the function $\sigma$ in (1) involves an unknown parameter $\theta$ and $b(X_t)$ is unobservable, this gives a semiparametric estimation problem. The error of the quasi-maximum likelihood estimator $\hat{\theta}_n$ of $\theta$ has a representation by a perturbed double stochastic integral whose kernel is different from that of the realized volatility. Our result provides an asymptotic expansion for $\hat{\theta}_n$, though we do not go into this question here. Readers who want to see quickly the results in typical problems of a quadratic form may access Section 6.3, Section 7.1 or Section 8 first; the results presented in the preceding sections are more general.
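Before turning to the organization of the paper, here is a small simulation sketch of (2) and (3). It is not the method of this paper, only a numerical look at the first-order mixed normal behaviour of $\zeta_n$: the driftless model $dX_t=\sigma(X_t)\,dw_t$ with $\sigma(x)=1+0.5\tanh x$, the weight $c\equiv1$, the grid sizes and the number of Monte Carlo paths are all assumptions made for illustration, and $\zeta_n$ is Studentized by the per-path quantity $2\int_0^1 c(X_s)^2\sigma(X_s)^4\,ds$, the standard first-order asymptotic variance in this setting.

```python
# Hedged simulation sketch of the quadratic form (2) and its limit (3).
# Assumptions for illustration only: b = 0, sigma(x) = 1 + 0.5*tanh(x),
# c = 1 (realized volatility), n = 100 observation times, m = 20 Euler
# substeps per observation, 4000 Monte Carlo paths.
import numpy as np

rng = np.random.default_rng(1)
sigma = lambda x: 1.0 + 0.5 * np.tanh(x)
c = lambda x: np.ones_like(x)

n, m, paths = 100, 20, 4000
nf, dt = n * m, 1.0 / (n * m)

# Euler scheme for dX = sigma(X) dw on the fine grid, all paths at once.
x = np.zeros((paths, nf + 1))
for k in range(nf):
    x[:, k + 1] = x[:, k] + sigma(x[:, k]) * rng.normal(0.0, np.sqrt(dt), paths)

xs = x[:, :-1]
u_inf = (c(xs) * sigma(xs) ** 2).sum(axis=1) * dt              # (3) by Riemann sum
xc = x[:, ::m]                                                 # coarse grid t_j = j/n
u_n = (c(xc[:, :-1]) * np.diff(xc, axis=1) ** 2).sum(axis=1)   # (2)
zeta = np.sqrt(n) * (u_n - u_inf)

v = 2.0 * (c(xs) ** 2 * sigma(xs) ** 4).sum(axis=1) * dt       # mixed-normal variance
t = zeta / np.sqrt(v)                                          # Studentized statistic
kurt = lambda y: np.mean(((y - y.mean()) / y.std()) ** 4)
print("Studentized mean/var:", round(t.mean(), 3), round(t.var(), 3))
print("kurtosis raw vs Studentized:", round(kurt(zeta), 2), round(kurt(t), 2))
# The raw zeta_n behaves like a normal variance mixture (kurtosis typically
# above 3 here), while the Studentized version is close to N(0,1); the
# remaining O(n^{-1/2}) discrepancies are the target of the second-order
# expansion developed below.
```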
The organization of this paper is as follows. In Section 2, we define our object and prepare some notation. We introduce the notion of random symbols and define an adjoint operation for random symbols in Section 3. The second-order part of the asymptotic expansion consists of two terms, and they are represented by certain random symbols. The first one (the adaptive symbol) corresponds to the second-order correction term of the classical martingale expansion in [28], while the second one (the anticipative symbol) is new. Since the characteristics of the targeted distribution are random, it is natural to consider symbols with randomness. An error bound for the asymptotic expansion will be presented in Section 4. At this stage, the expression of the second-order term associated with the anticipative random symbol is not yet explicit. In Section 6, we treat a variable whose principal part is given by a sum of double stochastic integrals. It is natural to consider such a variable because it appears in the context of statistical inference for diffusion coefficients and realized volatility. It turns out that the anticipative random symbol involves objects from the infinite-dimensional calculus. Section 7 gives the expansion for a simple quadratic form of the increments of a Brownian motion. A Studentization procedure is also discussed there. The arguments on nondegeneracy will be applied in Section 8, where we derive the asymptotic expansion for a quadratic form of the increments of a diffusion process. Precise approximation of the realized volatility in finance is one of the applications of our result, though our aim is to develop a new methodology in the theory of limit theorems.

2 Functionals

Let $(\Omega,\mathcal{F},\mathbf{F}=(\mathcal{F}_t)_{t\in[0,1]},P)$ be a stochastic basis with $\mathcal{F}=\mathcal{F}_1$. On $\Omega$, we will consider a sequence of $d$-dimensional functionals, each of which admits the decomposition
\[
Z_n\;=\;M_n+W_n+r_nN_n.
\]
For every $n\in\mathbb{N}$, $M^n=(M^n_t)_{t\in[0,1]}$ denotes a $d$-dimensional martingale with respect to $\mathbf{F}$, and $M_n$ denotes the terminal variable of $M^n$, i.e., $M_n=M^n_1$. In the above decomposition, $W_n,N_n\in\mathcal{F}(\Omega;\mathbb{R}^d)$, the set of $d$-dimensional measurable mappings, and $(r_n)_{n\in\mathbb{N}}$ is a sequence of positive numbers tending to zero as $n\to\infty$.

We will essentially treat a conditional expectation given a certain functional $F_n\in\mathcal{F}(\Omega;\mathbb{R}^{d_1})$, $n\in\mathbb{N}$. Later we will specify those functionals more precisely in order to validate computations involving conditional expectations. We write
\[
M_\infty=M^\infty_1,\qquad C^n_t=\langle M^n\rangle_t,\qquad C_n=\langle M^n\rangle_1.
\]
The process $M^\infty$ will be specified later. Let $C_\infty\in\mathcal{F}(\Omega;\mathbb{R}^d\otimes\mathbb{R}^d)$, $W_\infty\in\mathcal{F}(\Omega;\mathbb{R}^d)$ and $F_\infty\in\mathcal{F}(\Omega;\mathbb{R}^{d_1})$. The tangent random vectors are given by
\[
\mathring{C}_n=r_n^{-1}(C_n-C_\infty),\qquad
\mathring{W}_n=r_n^{-1}(W_n-W_\infty),\qquad
\mathring{F}_n=r_n^{-1}(F_n-F_\infty).
\]
Consider an extension
\[
(\bar{\Omega},\bar{\mathcal{F}},\bar{P})\;=\;(\Omega\times\mathring{\Omega},\ \mathcal{F}\times\mathring{\mathcal{F}},\ P\times\mathring{P})
\]
of $(\Omega,\mathcal{F},P)$ by a probability space $(\mathring{\Omega},\mathring{\mathcal{F}},\mathring{P})$. Let $M^\infty\in\mathcal{F}(\bar{\Omega};C([0,1];\mathbb{R}^d))$, $N_\infty\in\mathcal{F}(\bar{\Omega};\mathbb{R}^d)$, $\mathring{C}_\infty\in\mathcal{F}(\bar{\Omega};\mathbb{R}^d\otimes\mathbb{R}^d)$, $\mathring{W}_\infty\in\mathcal{F}(\bar{\Omega};\mathbb{R}^d)$ and $\mathring{F}_\infty\in\mathcal{F}(\bar{\Omega};\mathbb{R}^{d_1})$.

We assume the following condition ([B1](i) and (ii) are [R1](ii) and (iii) in [31], respectively).

[B1] (i) $(M_n,N_n,\mathring{C}_n,\mathring{W}_n,\mathring{F}_n)\to^{d_s(\mathcal{F})}(M_\infty,N_\infty,\mathring{C}_\infty,\mathring{W}_\infty,\mathring{F}_\infty)$.

(ii) $\mathcal{L}\{M^\infty_t\,|\,\mathcal{F}\}=N_d(0,C_\infty t)$.

The convergence in (i) is $\mathcal{F}$-stable convergence. In particular, $(C_n,W_n,F_n)\to^p(C_\infty,W_\infty,F_\infty)$, the variables $C_\infty$, $W_\infty$ and $F_\infty$ are $\mathcal{F}$-measurable, and
\[
(M_n,N_n,C_n,W_n,F_n,\mathring{C}_n,\mathring{W}_n,\mathring{F}_n)\ \to^d\ (M_\infty,N_\infty,C_\infty,W_\infty,F_\infty,\mathring{C}_\infty,\mathring{W}_\infty,\mathring{F}_\infty).
\]
Let $\check{\mathcal{F}}=\mathcal{F}\vee\sigma[M_\infty]$. Then there exists a measurable mapping $\check{C}_\infty:\Omega\times\mathbb{R}^d\to\mathbb{R}^d\otimes\mathbb{R}^d$ such that
\[
\check{C}_\infty(\omega,M_\infty)\;=\;E\big[\mathring{C}_\infty\,\big|\,\check{\mathcal{F}}\big],
\]
and we simply write it as $\check{C}_\infty(M_\infty)$. The uniqueness of the mapping $\check{C}_\infty$ is not necessary in what follows. In the same way, we define $\check{W}_\infty(\omega,z)$, $\check{F}_\infty(\omega,z)$ and $\check{N}_\infty(\omega,z)$ so that
\[
\check{W}_\infty(\omega,M_\infty)=E\big[\mathring{W}_\infty\,\big|\,\check{\mathcal{F}}\big],\qquad
\check{F}_\infty(\omega,M_\infty)=E\big[\mathring{F}_\infty\,\big|\,\check{\mathcal{F}}\big],\qquad
\check{N}_\infty(\omega,M_\infty)=E\big[N_\infty\,\big|\,\check{\mathcal{F}}\big].
\]
Further, we introduce the notation
\[
\tilde{C}_\infty(z)\;\equiv\;\tilde{C}_\infty(\omega,z)\;:=\;\check{C}_\infty(\omega,z-W_\infty)
\]
and similarly $\tilde{W}_\infty(\omega,z)$, $\tilde{F}_\infty(\omega,z)$ and $\tilde{N}_\infty(\omega,z)$.
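The following minimal numerical sketch, in dimension $d=1$ at time $t=1$ and with an arbitrary illustrative distribution for $C_\infty$, is meant only to visualize condition [B1](ii): the limit is Gaussian conditionally on $\mathcal{F}$, so its characteristic function equals $E[e^{-\frac12 u^2C_\infty}]$, which is not the characteristic function of any normal law when $C_\infty$ is genuinely random. This is the randomness of the limit characteristics referred to in the Introduction.

```python
# Mixed normal limit in dimension one (illustrative choices only):
# M_infty | F ~ N(0, C_infty) with a random variance C_infty, so the
# unconditional law is a normal variance mixture, not a normal law.
import numpy as np

rng = np.random.default_rng(2)
paths = 500_000
c_inf = rng.lognormal(mean=0.0, sigma=0.5, size=paths)   # random limit variance
m_inf = np.sqrt(c_inf) * rng.normal(size=paths)          # conditionally N(0, C_infty)

u = 1.5
lhs = np.exp(1j * u * m_inf).mean()                      # E[exp(i*u*M_infty)]
rhs = np.exp(-0.5 * u**2 * c_inf).mean()                 # E[exp(-u^2*C_infty/2)]
gauss = np.exp(-0.5 * u**2 * c_inf.mean())               # normal law with matched variance
print(abs(lhs - rhs))      # small up to Monte Carlo error: the two agree
print(abs(lhs - gauss))    # not small: the limit is mixed normal, not normal
print(((m_inf / m_inf.std()) ** 4).mean())               # kurtosis above 3
```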
3 Random symbols

We will need a notion of random symbols to express the asymptotic expansion formula.

Given an $r$-dimensional Wiener space $(\mathsf{W},P)$ over the time interval $[0,1]$, the Cameron-Martin subspace is denoted by $H$. We will assume that the probability space $(\Omega,\mathcal{F},P)$ admits the structure $\Omega=\Omega'\times\mathsf{W}$, $\mathcal{F}=\mathcal{F}'\otimes B(\mathsf{W})$ and $P=P'\times P$ for some probability space $(\Omega',\mathcal{F}',P')$, and we consider the partial Malliavin calculus based on the shifts in $H$. See Ikeda and Watanabe [9] and Nualart [15] for the Malliavin calculus. $\mathbb{D}_{\ell,p}$ denotes the Sobolev space on $\Omega$ with indices $\ell\in\mathbb{R}$ and $p\in(1,\infty)$. Let $\mathbb{D}_{\ell,\infty}=\cap_{p\ge2}\mathbb{D}_{\ell,p}$.

Let $\ell,m\in\mathbb{Z}_+$. Denote by $\mathcal{C}(\ell,m)$ the set of functions $c:\mathbb{R}^d\to\mathbb{D}_{\ell,\infty}$ satisfying the following conditions:

(i) $c\in C^m(\mathbb{R}^d;\mathbb{D}_{\ell,\infty})$, that is, for every $p\ge2$, $c:\mathbb{R}^d\to\mathbb{D}_{\ell,p}$ is Fréchet differentiable $m$ times in $z$ and all the derivatives (each element taking values in $\mathbb{D}_{\ell,p}$) up to order $m$ are continuous.

(ii) For every $p\ge2$,
\[
\|\partial_z^{\mu}c(z)\|_{\ell,p}\;\le\;C_{\ell,p}(1+|z|)^{C_{\ell,p}}\qquad(z\in\mathbb{R}^d,\ \mu\in\mathbb{Z}_+^d\ \text{with}\ |\mu|\le m)
\]
for some constant $C_{\ell,p}$ depending on $\ell$ and $p$.

Note that if $c=c(z)$ does not depend on $z$, then this condition reduces to $\|c\|_{\ell,p}<\infty$. The sum of the elements of $\alpha\in\mathbb{Z}_+^d$, i.e., the length of $\alpha$, is denoted by $|\alpha|$.

Let $\ell,m,n\in\mathbb{Z}_+$. A function $\varsigma:\Omega\times\mathbb{R}^d\times(i\mathbb{R}^d)\times(i\mathbb{R}^{d_1})\to\mathbb{C}$ is called a random symbol. We say that a random symbol $\varsigma$ is of class $(\ell,m,n)$ if $\varsigma$ admits a representation
\[
\varsigma(z,iu,iv)\;=\;\sum_j c_j(z)(iu)^{m_j}(iv)^{n_j}\quad\text{(finite sum)} \tag{4}
\]
for some $c_j\in\mathcal{C}(\ell,|m_j|)$, $m_j\in\mathbb{Z}_+^d$ ($|m_j|\le m$) and $n_j\in\mathbb{Z}_+^{d_1}$ ($|n_j|\le n$). We denote by $\mathcal{S}(\ell,m,n)$ the set of random symbols of class $(\ell,m,n)$.

The Malliavin covariance matrix of a multi-dimensional functional $F$ is denoted by $\sigma_F$, and we write $\Delta_F=\det\sigma_F$. Suppose that $\ell\ge\ell_0:=2([d_1/2]+1+[(n+1)/2])$ (the number $\ell_0=2[(n+d_1+2)/2]$ can also be used for this $\ell_0$ if we regard $\partial^{n_j}\delta_x$ as a Schwartz distribution) and that the following conditions are satisfied:

(i) $F_\infty\in\mathbb{D}_{\ell+1,\infty}(\mathbb{R}^{d_1})$, $W_\infty\in\mathbb{D}_{\ell,\infty}(\mathbb{R}^d)$ and $C_\infty\in\mathbb{D}_{\ell,\infty}(\mathbb{R}^d\otimes\mathbb{R}^d)$;

(ii) $\Delta_{F_\infty}^{-1},\ \det C_\infty^{-1}\in\cap_{p\ge2}L^p$;

(iii) $\varsigma\in\mathcal{S}(\ell,m,n)$.

For a random symbol $\varsigma\in\mathcal{S}(\ell,m,n)$ of the form (4), we define the adjoint operator $\varsigma^*$ of $\varsigma$ as follows. It applies to $\phi(z;W_\infty,C_\infty)\delta_x(F_\infty)$ as
\[
\varsigma(z,\partial_z,\partial_x)^*\big\{\phi(z;W_\infty,C_\infty)\delta_x(F_\infty)\big\}
\;=\;\sum_j(-\partial_z)^{m_j}(-\partial_x)^{n_j}\Big(c_j(z)\,\phi(z;W_\infty,C_\infty)\,\delta_x(F_\infty)\Big). \tag{5}
\]
Here $\phi(z;\mu,C)$ is the normal density with mean vector $\mu$ and covariance matrix $C$, and the derivatives are interpreted as the Fréchet derivatives in the space of the generalized Wiener functionals of Watanabe ([26], [9]). By the assumptions, $(-\partial_z)^{m_j}\phi(z;W_\infty,C_\infty)\in\mathbb{D}_{\ell,\infty}$ and $(-\partial_x)^{n_j}\delta_x(F_\infty)=(\partial^{n_j}\delta_x)(F_\infty)\in\mathbb{D}_{-\ell,\infty}$. Let $p^{F_\infty}$ denote the density of $F_\infty$. The generalized expectation of the formula (5) gives a formula with the usual expectation:
\[
\begin{aligned}
E\Big[\varsigma(z,\partial_z,\partial_x)^*\big\{\phi(z;W_\infty,C_\infty)\delta_x(F_\infty)\big\}\Big]
&=\sum_j(-\partial_z)^{m_j}(-\partial_x)^{n_j}E\Big[c_j(z)\,\phi(z;W_\infty,C_\infty)\,\delta_x(F_\infty)\Big]\\
&=\sum_j(-\partial_z)^{m_j}(-\partial_x)^{n_j}\Big(E\big[c_j(z)\,\phi(z;W_\infty,C_\infty)\,\big|\,F_\infty=x\big]\,p^{F_\infty}(x)\Big)\\
&=:E\Big[\varsigma(z,\partial_z,\partial_x)^*\big\{\phi(z;W_\infty,C_\infty)\big\}\,\Big|\,F_\infty=x\Big]\,p^{F_\infty}(x).
\end{aligned}
\]
For $m'\in\mathbb{Z}_+^d$ and $n'\in\mathbb{Z}_+^{d_1}$,
\[
\begin{aligned}
&\sup_{z,x}\Big|z^{m'}x^{n'}E\big[(-\partial_z)^{\mu}c_j(z)\cdot(-\partial_z)^{m_j-\mu}\phi(z;W_\infty,C_\infty)\,(-\partial_x)^{n_j}\delta_x(F_\infty)\big]\Big|\\
&\qquad\le\;\sup_{z,x}\|(-\partial_z)^{\mu}c_j(z)\|_{\ell,p}\,\|z^{m'}(-\partial_z)^{m_j-\mu}\phi(z;W_\infty,C_\infty)\|_{\ell,p_1}\,\|F_\infty^{n'}\|_{\ell,p_2}\,\|(-\partial_x)^{n_j}\delta_x(F_\infty)\|_{-\ell,p_3}\\
&\qquad\lesssim\;\sup_{z,x}\big\{(1+|z|)^{C_{\ell,p}}\cdot(1+|z|)^{-C_{\ell,p}}\cdot1\big\}\;<\;\infty,
\end{aligned} \tag{6}
\]
where $p,p_1,p_2,p_3>1$ with $p^{-1}+p_1^{-1}+p_2^{-1}+p_3^{-1}=1$. Consequently, we can apply the Fourier transform to obtain
\[
\begin{aligned}
\mathcal{F}_{(z,x)}\Big[E\Big[\varsigma(z,\partial_z,\partial_x)^*\big\{\phi(z;W_\infty,C_\infty)\delta_x(F_\infty)\big\}\Big]\Big](u,v)
&=\int\exp(iu\cdot z+iv\cdot x)\,E\Big[\varsigma(z,\partial_z,\partial_x)^*\big\{\phi(z;W_\infty,C_\infty)\delta_x(F_\infty)\big\}\Big]\,dz\,dx\\
&=E\bigg[\int_{\mathbb{R}^d}\exp\big(iu\cdot z+iF_\infty[v]\big)\,\phi(z;W_\infty,C_\infty)\,\varsigma(z,iu,iv)\,dz\bigg].
\end{aligned} \tag{7}
\]
More generally, a duality argument also yields
\[
\mathcal{F}_{(z,x)}\Big[\varsigma(z,\partial_z,\partial_x)^*\big\{\phi(z;W_\infty,C_\infty)\delta_x(F_\infty)\big\}\Big](u,v)
\;=\;\mathcal{F}_z\Big[\varsigma(z,iu,iv)\,\phi(z;W_\infty,C_\infty)\,e^{iv\cdot F_\infty}\Big](u).
\]
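The deterministic core of the duality used in (7) is the elementary integration-by-parts identity $\int e^{iuz}(-\partial_z)^{m}\{c(z)\phi(z;W,C)\}\,dz=(iu)^{m}\int e^{iuz}c(z)\phi(z;W,C)\,dz$. The following sketch checks it numerically in dimension one with non-random coefficients; the concrete $c$, $W$, $C$, $m$ and $u$ are arbitrary illustrative choices, and the genuinely random, Malliavin-calculus part of the adjoint is of course not reproduced.

```python
# Numerical check (one-dimensional, deterministic coefficients; all concrete
# values are illustrative) of the integration-by-parts duality behind (7).
import numpy as np

W, C, m, u = 0.3, 1.2, 2, 1.7
z = np.linspace(-12.0, 12.0, 20001)
dz = z[1] - z[0]

phi = np.exp(-(z - W) ** 2 / (2 * C)) / np.sqrt(2 * np.pi * C)
g = (1.0 + z ** 2) * phi                 # c(z)*phi(z; W, C) with c(z) = 1 + z^2

dg = g.copy()
for _ in range(m):                       # apply (-d/dz) m times by finite differences
    dg = -np.gradient(dg, dz)

lhs = np.sum(np.exp(1j * u * z) * dg) * dz
rhs = (1j * u) ** m * np.sum(np.exp(1j * u * z) * g) * dz
print(abs(lhs - rhs))                    # small, up to discretization error
```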
4 Asymptotic expansion formula

Nondegeneracy of the targeted distribution is indispensable for an asymptotic expansion. However, complete nondegeneracy, which implies absolute continuity, is not necessary, nor can we assume it in statistical inference. For example, the maximum likelihood estimator does not admit a density in general; even its existence is not ensured on the whole probability space. Besides, complete nondegeneracy is often hard to prove even when it holds, and this restricts the applications of the result. On the other hand, partial nondegeneracy is easier to work with. There the localization method plays an essential role. The localization is realized through a sequence of $\mathcal{F}$-measurable truncation functionals $\xi_n$. In applications, we construct a suitable $\xi_n$ to validate the necessary nondegeneracy of the functional in question. We will assume the following conditions ([B2]$_\ell$ (i), (ii) are [S]$_{\ell,m,n}$ (i), (ii) of [31], and [B3] (i), (ii), (iii) are [S]$_{\ell,m,n}$ (iii), (iv), (v) of [31], respectively).

[B2]$_\ell$ (i) $F_\infty\in\mathbb{D}_{\ell+1,\infty}(\mathbb{R}^{d_1})$, $W_\infty\in\mathbb{D}_{\ell+1,\infty}(\mathbb{R}^d)$ and $C_\infty\in\mathbb{D}_{\ell,\infty}(\mathbb{R}^d\otimes\mathbb{R}^d)$.

(ii) $M_n\in\mathbb{D}_{\ell+1,\infty}(\mathbb{R}^d)$, $F_n\in\mathbb{D}_{\ell+1,\infty}(\mathbb{R}^{d_1})$, $W_n\in\mathbb{D}_{\ell+1,\infty}(\mathbb{R}^d)$, $C_n\in\mathbb{D}_{\ell,\infty}(\mathbb{R}^d\otimes\mathbb{R}^d)$, $N_n\in\mathbb{D}_{\ell+1,\infty}(\mathbb{R}^d)$ and $\xi_n\in\mathbb{D}_{\ell,\infty}(\mathbb{R})$. Moreover,
\[
\sup_{n\in\mathbb{N}}\Big\{\|M_n\|_{\ell+1,p}+\|\mathring{C}_n\|_{\ell,p}+\|\mathring{W}_n\|_{\ell+1,p}+\|\mathring{F}_n\|_{\ell+1,p}+\|N_n\|_{\ell+1,p}+\|\xi_n\|_{\ell,p}\Big\}\;<\;\infty
\]
for every $p\ge2$.

[B3] (i) $\lim_{n\to\infty}P\big[|\xi_n|\le\tfrac12\big]=1$.

(ii) $|C_n-C_\infty|>r_n^{1-a}$ implies $|\xi_n|\ge1$, where $a\in(0,1/3)$ is a constant.

(iii) For every $p\ge2$,
\[
\limsup_{n\to\infty}E\Big[1_{\{|\xi_n|\le1\}}\,\Delta_{(M_n+W_\infty,F_\infty)}^{-p}\Big]\;<\;\infty,
\]
and moreover $\det C_\infty^{-1}\in\cap_{p\ge2}L^p$.

We have $\Delta_{F_\infty}^{-1}\in\cap_{p\ge2}L^p$ by Fisher's inequality and Fatou's lemma.

We will give an asymptotic expansion formula to approximate the joint distribution of $(Z_n,F_n)$. Let
\[
\underline{\sigma}(z,iu,iv)\;=\;\frac12\tilde{C}_\infty(z)^{j,k}(iu_j)(iu_k)+\tilde{W}_\infty(z)^j(iu_j)
+\tilde{N}_\infty(z)^j(iu_j)+\tilde{F}_\infty(z)^l(iv_l) \tag{8}
\]
for $u\in\mathbb{R}^d$ and $v\in\mathbb{R}^{d_1}$, with Einstein's summation rule for repeated indices.

The symbol (8) is the adaptive random symbol, which corresponds to the second-order correction term of the asymptotic expansion in the normal limit ([28]). In the mixed normal limit case, we need an additional one, referred to as the anticipative random symbol and denoted by $\overline{\sigma}(z,iu,iv)$. As will be seen in the applications in this paper, the anticipative random symbol is given by Malliavin derivatives, which shows the non-classical nature of the present asymptotic expansion, beyond the standard invariance principle.

We shall now define the anticipative random symbol. Let
\[
\Psi_\infty(u,v)\;=\;\exp\Big(iW_\infty[u]-\frac12C_\infty[u^{\otimes2}]+iF_\infty[v]\Big).
\]
Moreover, set $L^n_t(u)=e^n_t(u)-1$ with
\[
e^n_t(u)\;=\;\exp\Big(iM^n_t[u]+\frac12C^n_t[u^{\otimes2}]\Big).
\]
We write $\check{d}=d+d_1$. Let $\widehat{\partial}^\alpha=i^{-|\alpha|}\partial^\alpha$; more precisely, $\widehat{\partial}_u^{\alpha_1}=i^{-|\alpha_1|}\partial_u^{\alpha_1}$, $\widehat{\partial}_v^{\alpha_2}=i^{-|\alpha_2|}\partial_v^{\alpha_2}$ and $\widehat{\partial}^\alpha=\widehat{\partial}_u^{\alpha_1}\widehat{\partial}_v^{\alpha_2}$ for a multi-index $\alpha=(\alpha_1,\alpha_2)\in\mathbb{Z}_+^{\check{d}}$. Let $\psi\in C^\infty(\mathbb{R};[0,1])$ be such that $\psi(x)=1$ if $|x|\le1/2$ and $\psi(x)=0$ if $|x|\ge1$, and let $\psi_n=\psi(\xi_n)$ for $n\in\mathbb{N}$. Furthermore, let
\[
\Phi_n^{2,\alpha}(u,v)\;=\;\widehat{\partial}^\alpha E\big[L^n_1(u)\,\Psi_\infty(u,v)\,\psi_n\big].
\]
If $\psi_n$ is equal to one (in fact it usually is, up to an event whose probability is negligible in the large deviation sense) and $W_\infty$, $C_\infty$ and $F_\infty$ are constants, as is the case in the normal limit, then $\Phi_n^{2,\alpha}=0$ since $L^n(u)$ becomes a mean-zero martingale. However, $\Phi_n^{2,\alpha}$ does not vanish in general. We may intuitively say that $\Phi_n^{2,\alpha}$ measures the torsion of martingales under the shift of the measure $P$ by $\Psi_\infty(u,v)$. The effect of this torsion appears quite differently depending on the case. Thus we will treat the effect in a slightly abstract form for a while. The anticipative random symbol describes it, as in (ii) of the following condition ([B4]$_{\ell,m,n}$ (i), (ii) are [S]$_{\ell,m,n}$ (vi), (vii) of [31]). Let $\ell_*=2[d_1/2]+4$.

[B4]$_{\ell,m,n}$ (i) $\underline{\sigma}\in\mathcal{S}(\ell_*,2,1)$ (as in the earlier remark on $\ell_0$, the number $\ell_*=2[(d_1+3)/2]$ is also possible).

(ii) There exists a random symbol $\overline{\sigma}\in\mathcal{S}(\ell,m,n)$ admitting a representation
\[
\overline{\sigma}(iu,iv)\;=\;\sum_j c_j\,(iu)^{m_j}(iv)^{n_j}\quad\text{(finite sum)}
\]
for some random variables $c_j$ and satisfying
\[
\lim_{n\to\infty}r_n^{-1}\Phi_n^{2,\alpha}(u,v)
\;=\;\widehat{\partial}^\alpha E\bigg[\int_{\mathbb{R}^d}\exp\big(iu\cdot z+iF_\infty[v]\big)\,\phi(z;W_\infty,C_\infty)\,dz\ \overline{\sigma}(iu,iv)\bigg]
\;=\;\widehat{\partial}^\alpha E\bigg[\exp\Big(iW_\infty[u]-\frac12C_\infty[u^{\otimes2}]+iF_\infty[v]\Big)\,\overline{\sigma}(iu,iv)\bigg] \tag{9}
\]
for $u\in\mathbb{R}^d$, $v\in\mathbb{R}^{d_1}$ and $\alpha=(\alpha_1,\alpha_2)\in\mathbb{Z}_+^{\check{d}}$.

Set $\tilde{\Phi}^{2,\alpha}(u,v)=\lim_{n\to\infty}r_n^{-1}\Phi_n^{2,\alpha}(u,v)$.
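The truncation $\psi_n=\psi(\xi_n)$ above only requires a smooth cutoff $\psi\in C^\infty(\mathbb{R};[0,1])$ with $\psi=1$ on $[-1/2,1/2]$ and $\psi=0$ outside $(-1,1)$. One standard explicit construction, one of many possible choices, is sketched below via the bump function $e^{-1/t}$.

```python
# One possible smooth cutoff psi: 1 on [-1/2, 1/2], 0 outside (-1, 1).
import numpy as np

def _h(t):
    """exp(-1/t) for t > 0, extended by 0; smooth on all of R."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = np.exp(-1.0 / t[pos])
    return out

def psi(x):
    """Smooth cutoff with values in [0, 1]: 1 on [-1/2, 1/2], 0 for |x| >= 1."""
    x = np.asarray(x, dtype=float)
    t = 2.0 * (1.0 - np.abs(x))        # >= 1 when |x| <= 1/2, <= 0 when |x| >= 1
    return _h(t) / (_h(t) + _h(1.0 - t))

print(psi(np.array([0.0, 0.4, 0.6, 0.8, 1.1])))
# 1 and 1 on the plateau, strictly between 0 and 1 in the transition band, 0 outside.
```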
Remark 1. It is also possible to consider a more general random symbol $\overline{\sigma}(z,x,iu,iv)$.

Remark 2. As mentioned above, $\tilde{\Phi}^{2,\alpha}$ vanishes in the classical case of deterministic $\Psi_\infty(u,v)$, thanks to the local martingale property of $L^n_t(u)$. A non-vanishing case will appear later.

The full random symbol for the second-order terms is
\[
\sigma\;:=\;\underline{\sigma}+\overline{\sigma}. \tag{10}
\]
In order to approximate the joint local density of $(Z_n,F_n)$, we use the density function
\[
p_n(z,x)\;=\;E\big[\phi(z;W_\infty,C_\infty)\,\big|\,F_\infty=x\big]\,p^{F_\infty}(x)
+r_n\,E\Big[\sigma(z,\partial_z,\partial_x)^*\big\{\phi(z;W_\infty,C_\infty)\big\}\,\Big|\,F_\infty=x\Big]\,p^{F_\infty}(x).
\]
Note that $p_n(z,x)$ is well defined under [B2]$_\ell$, [B3] and [B4]$_{\ell,m,n}$ when $\ell\ge\ell_0$. We should remark that $p_n(z,x)$ is written in terms of the conditional expectation given $F_\infty$. This suggests that conditioning by $F_\infty$, or equivalently by $F_n$ under truncation, is essential in the validation of the formula. With Watanabe's delta functional, we can write
\[
p_n(z,x)\;=\;E\big[\phi(z;W_\infty,C_\infty)\,\delta_x(F_\infty)\big]
+r_n\,E\Big[\sigma(z,\partial_z,\partial_x)^*\big\{\phi(z;W_\infty,C_\infty)\,\delta_x(F_\infty)\big\}\Big]. \tag{11}
\]
For $M,\gamma>0$, let $\mathcal{E}(M,\gamma)$ denote the set of measurable functions $f:\mathbb{R}^{\check{d}}\to\mathbb{R}$ satisfying $|f(z,x)|\le M(1+|z|+|x|)^\gamma$. Let
\[
\Delta_n(f)\;=\;\bigg|E\big[f(Z_n,F_n)\big]-\int f(z,x)\,p_n(z,x)\,dz\,dx\bigg|.
\]
Let $\Lambda_n^0(d,q)=\{u\in\mathbb{R}^d;\,|u|\le r_n^{-q}\}$, where $q=(1-a)/2\in(1/3,1/2)$. Let
\[
\epsilon(k,n)\;=\;\max_{\alpha:|\alpha|\le k}\frac{r_n}{(2\pi)^{\check{d}}}\int_{\Lambda_n^0(\check{d},q)}
\Big|r_n^{-1}\Phi_n^{2,\alpha}(u,v)-\tilde{\Phi}^{2,\alpha}(u,v)\Big|\,du\,dv.
\]
The following theorem gives an extension of Theorem 4 of [28].

Theorem 1. Let $\ell=\ell_0\vee(\check{d}+3)$. Suppose that [B1], [B2]$_\ell$, [B3] and [B4]$_{\ell,m,n}$ are fulfilled. Let $M,\gamma\in(0,\infty)$ and $\theta\in(0,1)$ be arbitrary numbers. Then

(a) there exist constants $C_1=C(M,\gamma,\theta)$ and $C_2=C(M,\gamma)$ such that
\[
\sup_{f\in\mathcal{E}(M,\gamma)}\Delta_n(f)\;\le\;C_1\,P\Big[|\xi_n|>\frac{1-\theta}{2}\Big]
+C_2\,\epsilon\big([\gamma+\check{d}]+1,n\big)+o(r_n)
\]
as $n\to\infty$.

(b) If
\[
\sup_n\ \sup_{(u,v)\in\Lambda_n^0(\check{d},q)}r_n^{-1}\,|(u,v)|^{\check{d}+1-\epsilon}\,\big|\Phi_n^{2,\alpha}(u,v)\big|\;<\;\infty \tag{12}
\]
for every $\alpha\in\mathbb{Z}_+^{\check{d}}$ and some $\epsilon=\epsilon(\alpha)\in(0,1)$, then for some constant $C_1=C(M,\gamma,\theta)$,
\[
\sup_{f\in\mathcal{E}(M,\gamma)}\Delta_n(f)\;\le\;C_1\,P\Big[|\xi_n|>\frac{1-\theta}{2}\Big]+o(r_n) \tag{13}
\]
as $n\to\infty$.

In Theorem 1, $m$ is some number, which imposes a restriction when (12) is verified. Next, we shall present a version of Theorem 1. Let $s_n$ be a positive random variable on $\Omega$.

[B2$'$]$_\ell$ (i) $F_\infty\in\mathbb{D}_{\ell+1,\infty}(\mathbb{R}^{d_1})$, $W_\infty\in\mathbb{D}_{\ell+1,\infty}(\mathbb{R}^d)$ and $C_\infty\in\mathbb{D}_{\ell,\infty}(\mathbb{R}^d\otimes\mathbb{R}^d)$.

(ii) $M_n\in\mathbb{D}_{\ell+1,\infty}(\mathbb{R}^d)$, $F_n\in\mathbb{D}_{\ell+1,\infty}(\mathbb{R}^{d_1})$, $W_n\in\mathbb{D}_{\ell+1,\infty}(\mathbb{R}^d)$, $C_n\in\mathbb{D}_{\ell,\infty}(\mathbb{R}^d\otimes\mathbb{R}^d)$, $N_n\in\mathbb{D}_{\ell+1,\infty}(\mathbb{R}^d)$ and $s_n\in\mathbb{D}_{\ell,\infty}(\mathbb{R})$. Moreover,
\[
\sup_{n\in\mathbb{N}}\Big\{\|M_n\|_{\ell+1,p}+\|\mathring{C}_n\|_{\ell,p}+\|\mathring{W}_n\|_{\ell+1,p}+\|\mathring{F}_n\|_{\ell+1,p}+\|N_n\|_{\ell+1,p}+\|s_n\|_{\ell,p}\Big\}\;<\;\infty
\]
for every $p\ge2$.

[B3$'$] (i) $P\big[\Delta_{(M_n+W_\infty,F_\infty)}<s_n\big]=O(r_n^{1+\kappa})$ as $n\to\infty$ for some $\kappa>0$.

(ii) For every $p\ge2$, $\limsup_{n\to\infty}E\big[s_n^{-p}\big]<\infty$, and moreover $\det C_\infty^{-1}\in\cap_{p\ge2}L^p$.

Theorem 2. Let $\ell=\ell_0\vee(\check{d}+3)$. Suppose that [B1], [B2$'$]$_\ell$, [B3$'$] and [B4]$_{\ell,m,n}$ are fulfilled. Then for any $M,\gamma\in(0,\infty)$,

(a) there exists a constant $C$ such that
\[
\sup_{f\in\mathcal{E}(M,\gamma)}\Delta_n(f)\;\le\;C\,\epsilon\big([\gamma+\check{d}]+1,n\big)+o(r_n)
\]
as $n\to\infty$.

(b) If (12) is satisfied, then
\[
\sup_{f\in\mathcal{E}(M,\gamma)}\Delta_n(f)\;=\;o(r_n)
\]
as $n\to\infty$.

The above results can be applied to the expansion of the distribution of functions of $(Z_n,F_n)$, in particular $Z_n/F_n$ and $Z_n/\sqrt{F_n}$.

5 Proof of Theorems 1 and 2

5.1 Decomposition of the joint characteristic function

Let $\check{Z}_n=(Z_n,F_n)$.
Then, according to the multi-index notation, $\check{Z}_n^\alpha=Z_n^{\alpha_1}F_n^{\alpha_2}$ for $\alpha=(\alpha_1,\alpha_2)\in\mathbb{Z}_+^d\times\mathbb{Z}_+^{d_1}=\mathbb{Z}_+^{\check{d}}$, $\check{d}=d+d_1$. Let
\[
\hat{g}_n^\alpha(u,v)\;=\;E\big[\psi_n\,\check{Z}_n^\alpha\exp\big(iZ_n[u]+iF_n[v]\big)\big]
\]
for $u\in\mathbb{R}^d$ and $v\in\mathbb{R}^{d_1}$. We denote
\[
g_n^\alpha(z,x)\;=\;\frac{1}{(2\pi)^{\check{d}}}\int_{\mathbb{R}^{\check{d}}}e^{-iu\cdot z-iv\cdot x}\,\hat{g}_n^\alpha(u,v)\,du\,dv
\]
if the integral on the right-hand side exists. Let
\[
\mathring{e}(x)\;=\;\int_0^1 e^{sx}\,ds,
\]
so that $e^x=1+x\,\mathring{e}(x)$. For
\[
\Psi_n(u,v)\;=\;\exp\Big(iW_n[u]-\frac12C_n[u^{\otimes2}]+iF_n[v]\Big),
\]
we have
\[
\epsilon_n(u,v)\;=\;\log\big(\Psi_n(u,v)\,\Psi_\infty(u,v)^{-1}\big)+ir_nN_n[u]
\;=\;-\frac12(C_n-C_\infty)[u^{\otimes2}]+i\big(F_n[v]-F_\infty[v]\big)+i\big(W_n[u]-W_\infty[u]\big)+ir_nN_n[u].
\]
Then we have
\[
\hat{g}_n^\alpha(u,v)\;=\;\widehat{\partial}^\alpha E\big[\Psi_\infty(u,v)\psi_n\big]
+\widehat{\partial}^\alpha E\big[e^n_1(u)\,\Psi_\infty(u,v)\,\epsilon_n(u,v)\,\mathring{e}\big(\epsilon_n(u,v)\big)\,\psi_n\big]
+\widehat{\partial}^\alpha E\big[L^n_1(u)\,\Psi_\infty(u,v)\,\psi_n\big]
\;=:\;\Phi_n^{0,\alpha}(u,v)+\Phi_n^{1,\alpha}(u,v)+\Phi_n^{2,\alpha}(u,v). \tag{14}
\]
Set $B^n_t(u,v)=e^n_t(u)\Psi_\infty(u,v)$. Then
\[
B^n_t(u,v)\;=\;\exp\Big(iM^n_t[u]+\frac12C^n_t[u^{\otimes2}]\Big)\exp\Big(iW_\infty[u]-\frac12C_\infty[u^{\otimes2}]+iF_\infty[v]\Big)
\;=\;\exp\big(iM^n_t[u]\big)\exp\big(iW_\infty[u]+iF_\infty[v]\big)\exp\Big(\frac12(C^n_t-C_\infty)[u^{\otimes2}]\Big).
\]
Condition [B3] (ii) implies
\[
|B^n_t(u,v)|\;=\;\exp\Big(\frac12(C^n_t-C^n_1)[u^{\otimes2}]\Big)\exp\Big(\frac12(C_n-C_\infty)[u^{\otimes2}]\Big)
\;\le\;\exp\Big(\frac12r_n^{1-a}|u|^2\Big) \tag{15}
\]
and
\[
\mathrm{Re}\,\epsilon_n(u,v)\;\le\;\frac12r_n^{1-a}|u|^2 \tag{16}
\]
whenever $\psi_n>0$.

We shall consider the second-order term of type I.

Lemma 1. Suppose that [B1], [B2]$_\ell$ for some $\ell\ge0$, and [B3] (i), (ii) are fulfilled (it suffices to assume [R1] and [R2] of [31]). Then the limit
\[
\tilde{\Phi}^{1,\alpha}(u,v)\;=\;\lim_{n\to\infty}r_n^{-1}\Phi_n^{1,\alpha}(u,v)
\]
exists and takes the form
\[
\tilde{\Phi}^{1,\alpha}(u,v)\;=\;\widehat{\partial}^\alpha E\bigg[\int_{\mathbb{R}^d}\exp\big(iu\cdot z+iF_\infty[v]\big)
\Big(-\frac12\tilde{C}_\infty(z)[u^{\otimes2}]+i\tilde{W}_\infty(z)[u]+i\tilde{N}_\infty(z)[u]+i\tilde{F}_\infty(z)[v]\Big)\,\phi(z;W_\infty,C_\infty)\,dz\bigg]
\]
for every $\alpha\in\mathbb{Z}_+^{\check{d}}$.
