A NOTE ON MALLIAVIN FRACTIONAL SMOOTHNESS FOR LÉVY PROCESSES AND APPROXIMATION

CHRISTEL GEISS, STEFAN GEISS, AND EIJA LAUKKARINEN

Abstract. Assume a Lévy process $(X_t)_{t\in[0,1]}$ that is an $L_2$-martingale and let $Y$ be either its stochastic exponential or $X$ itself. For certain integrands $\varphi$ we investigate the behavior of
$$\Big\| \int_{(0,1]} \varphi_t \, dX_t - \sum_{k=1}^N v_{k-1}\,(Y_{t_k}-Y_{t_{k-1}}) \Big\|_{L_2},$$
where $v_{k-1}$ is $\mathcal F_{t_{k-1}}$-measurable, in dependence on the fractional smoothness in the Malliavin sense of $\int_{(0,1]} \varphi_t\,dX_t$. A typical situation where these techniques apply occurs if the stochastic integral is obtained by the Galtchouk-Kunita-Watanabe decomposition of some $f(X_1)$. Moreover, using the example $f(X_1) = 1_{(K,\infty)}(X_1)$ we show how fractional smoothness depends on the distribution of the Lévy process.

1. Introduction

We consider the quantitative Riemann approximation of stochastic integrals driven by Lévy processes and its relation to fractional smoothness in the Malliavin sense. Besides the interest on its own, the problem is of interest for numerical algorithms and for Stochastic Finance. To explain the latter aspect, assume a price process $(S_t)_{t\in[0,1]}$ given under the martingale measure by a diffusion
$$S_t = s_0 + \int_0^t \sigma(S_r)\,dW_r,$$
where $W$ is the Brownian motion and where the usual conditions on $\sigma$ are imposed. For a polynomially bounded Borel function $f:\mathbb R\to\mathbb R$ we obtain a representation
$$(1)\qquad f(S_1) = V_0 + \int_0^1 \varphi_t\,dS_t,$$
where $(\varphi_t)_{t\in[0,1)}$ is a continuous adapted process which can be obtained via the gradient of a solution to a parabolic backward PDE related to $\sigma$ with terminal condition $f$. The process $(\varphi_t)_{t\in[0,1)}$ is interpreted as a trading strategy.

The second and third author are supported by the Project 133914 of the Academy of Finland.
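To make (1) concrete in the simplest case (our own illustration, not taken from the paper), let $\sigma\equiv 1$ and $s_0=0$, so that $S=W$, and take $f(x)=x^2$. The backward PDE solution is $u(t,x)=x^2+(1-t)$, hence $V_0=u(0,0)=1$ and $\varphi_t=u_x(t,W_t)=2W_t$, and a fine-grid simulation confirms $f(W_1)=1+\int_0^1 2W_t\,dW_t$; all variable names below are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 5_000, 1_000
dt = 1.0 / n_steps

# Brownian paths on [0, 1]
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])

# representation (1) with f(x) = x^2:  V_0 = 1,  phi_t = 2 W_t
V0 = 1.0
integral = np.sum(2.0 * W[:, :-1] * dW, axis=1)   # fine-grid Ito integral
residual = float(np.mean(np.abs(W[:, -1] ** 2 - (V0 + integral))))

print(residual)   # discretization residual; shrinks like n_steps**(-1/2)
```

The residual is purely the fine-grid discretization error of the Itô integral; the identity $W_1^2 = 1 + \int_0^1 2W_t\,dW_t$ itself is exact.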
In practice one can trade only finitely many times, which corresponds to a replacement of the stochastic integral in (1) by the sum $\sum_{k=1}^N \varphi_{t_{k-1}}(S_{t_k}-S_{t_{k-1}})$ with $0=t_0<t_1<\cdots<t_N=1$. The error
$$(2)\qquad \int_0^1 \varphi_t\,dS_t - \sum_{k=1}^N \varphi_{t_{k-1}}(S_{t_k}-S_{t_{k-1}})$$
caused by this replacement is often measured in $L_2$ and has been studied by various authors, for example by Zhang [21], Gobet and Temam [11], S. Geiss [8], S. Geiss and Hujo [9] and C. Geiss and S. Geiss [7]. For results concerning $L_p$ with $p\in(2,\infty)$ we refer to [20]; the weak convergence is considered in [10] and [19] and by other authors. In particular, if $S$ is the Brownian motion or the geometric Brownian motion, S. Geiss and Hujo investigated in [9] the relation between the Malliavin fractional smoothness of $f(S_1)$ and the $L_2$-rate of the discretization error (2). It is natural to extend these results to Lévy processes. A first step was done by M. Brodén and P. Tankov [5] (see Remark 4.11).

The aim of this paper is to extend results of [9] in the following directions:

(a) The Brownian motion and the geometric Brownian motion are generalized to Lévy processes $(X_t)_{t\in[0,1]}$ that are $L_2$-martingales and their Doléans-Dade exponentials $S=\mathcal E(X)$,
$$S_t = 1 + \int_{(0,t]} S_{u-}\,dX_u,$$
respectively. For certain stochastic integrals
$$F = \int_{(0,1]} \varphi_{s-}\,dX_s$$
and for $Y\in\{X,\mathcal E(X)\}$ we study the connection between the Malliavin fractional smoothness of $F$ (introduced by the real interpolation method) and the behavior of
$$(3)\qquad a^{\mathrm{opt}}_Y\big(F;(t_k)_{k=0}^N\big) = \inf \Big\| F - \sum_{k=1}^N v_{k-1}\,(Y_{t_k}-Y_{t_{k-1}}) \Big\|_{L_2},$$
where the infimum is taken over $\mathcal F_{t_{k-1}}$-measurable $v_{k-1}$ such that $\mathbb E\, v_{k-1}^2 (Y_{t_k}-Y_{t_{k-1}})^2 < \infty$ and where $0=t_0<\cdots<t_N=1$ is a deterministic time-net.
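As a toy illustration of the size of (2) — our own sketch, not part of the paper — take again $S=W$ and $f(x)=x^2$, so that $\varphi_t=2W_t$. A direct Itô-isometry computation (ours) gives the exact $L_2$-error $\sqrt{2/N}$ for the equidistant net, which a Monte Carlo estimate reproduces; the function name and parameters are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def riemann_error(n_coarse, n_fine=1024, n_paths=5_000):
    """Monte Carlo estimate of the L2 error (2) for S = W (Brownian motion)
    and f(x) = x^2, where phi_t = 2 W_t.  The coarse net is equidistant."""
    dt = 1.0 / n_fine
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_fine))
    W = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])

    integral = np.sum(2.0 * W[:, :-1] * dW, axis=1)   # fine-grid Ito integral
    step = n_fine // n_coarse
    Wc = W[:, ::step]                                  # path on the coarse net
    riemann = np.sum(2.0 * Wc[:, :-1] * np.diff(Wc, axis=1), axis=1)
    return float(np.sqrt(np.mean((integral - riemann) ** 2)))

# Ito isometry gives the exact value sqrt(2/N) for the equidistant net.
for N in (4, 16, 64):
    print(N, riemann_error(N), np.sqrt(2.0 / N))
```

The $N^{-1/2}$ decay seen here is the generic rate for smooth $f$; for non-smooth terminal conditions the adapted nets of Section 2 become relevant.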
(b) In contrast to [9], where the reduction of the stochastic approximation problem to a deterministic one is based on Itô's formula and was done in [8, 7], we prove an analogous reduction in Theorems 3.3 and 3.4 by techniques based on the Itô chaos decomposition.

(c) One more principal difference to [9] is the fact that Lévy processes in general do not satisfy the representation property, and therefore there are $F\in L_2$ that cannot be approximated by sums of the form $\sum_{k=1}^N v_{k-1}(Y_{t_k}-Y_{t_{k-1}})$ in $L_2$. As a consequence, we have to use the (orthogonal) Galtchouk-Kunita-Watanabe projection that projects $L_2$ onto the subspace $I(X)$ of stochastic integrals $\int_{(0,1]} \lambda_s\,dX_s$ with $\mathbb E \int_0^1 |\lambda_s|^2\,ds < \infty$, which can be defined in our setting as the $L_2$-closure of
$$(4)\qquad \Big\{ \sum_{k=1}^N v_{a_{k-1}}(X_{a_k}-X_{a_{k-1}}) \,:\, 0=a_0<\cdots<a_N=1,\ v_{a_{k-1}}\in L_2(\mathcal F_{a_{k-1}}),\ N=1,2,\dots \Big\},$$
to deal with our approximation problem.

The paper is organized as follows. In Section 2 we recall some facts about real interpolation and Lévy processes. In Section 3 we investigate the discrete-time approximation. The basic statement is Theorem 3.3, which reduces the stochastic approximation problem to a deterministic one in the case of the Riemann approximation (2) (which we call simple approximation in the sequel). The difference between the simple and the optimal approximation (3) is shown in Theorem 3.4 to be sufficiently small. Theorem 3.5 provides a lower bound for the optimal $L_2$-approximation. Finally, Theorems 3.6 and 3.8 give the connection to the Besov spaces defined by real interpolation. We conclude with Section 4, where we use the example $f(x)=1_{(K,\infty)}(x)$ to demonstrate how the fractional smoothness depends on the underlying Lévy process.

2. Preliminaries

2.1. Notation.
Throughout this paper we will use for $A,B,C\ge 0$ and $c\ge 1$ the notation $A\sim_c B$ for $\frac1c B\le A\le cB$ and $A=B\pm C$ for $B-C\le A\le B+C$. The phrase càdlàg stands for a path which is right-continuous and has left limits. Given $q\in[1,\infty]$, the sequence space $\ell_q$ consists of all $\alpha=(\alpha_N)_{N\ge1}\subseteq\mathbb R$ such that $\|\alpha\|_{\ell_q} := (\sum_{N=1}^\infty |\alpha_N|^q)^{1/q} < \infty$ for $q<\infty$ and $\|\alpha\|_{\ell_\infty} := \sup_{N\ge1}|\alpha_N| < \infty$, respectively.

2.2. Real interpolation. First we recall some facts about the real interpolation method.

Definition 2.1. For Banach spaces $X_1\subseteq X_0$, where $X_1$ is continuously embedded into $X_0$, we define for $u>0$ the K-functional
$$K(u,x;X_0,X_1) := \inf_{x=x_0+x_1} \{ \|x_0\|_{X_0} + u\|x_1\|_{X_1} \}.$$
For $\theta\in(0,1)$ and $q\in[1,\infty]$ the real interpolation space $(X_0,X_1)_{\theta,q}$ consists of all elements $x\in X_0$ such that $\|x\|_{(X_0,X_1)_{\theta,q}} < \infty$, where
$$\|x\|_{(X_0,X_1)_{\theta,q}} := \begin{cases} \big( \int_0^\infty [u^{-\theta} K(u,x;X_0,X_1)]^q \,\frac{du}{u} \big)^{\frac1q}, & q\in[1,\infty), \\[2pt] \sup_{u>0} u^{-\theta} K(u,x;X_0,X_1), & q=\infty. \end{cases}$$
The spaces $(X_0,X_1)_{\theta,q}$ equipped with $\|\cdot\|_{(X_0,X_1)_{\theta,q}}$ become Banach spaces and form a lexicographical scale, i.e. for any $0<\theta_1<\theta_2<1$ and $q_1,q_2\in[1,\infty]$ it holds that
$$X_0 \supseteq (X_0,X_1)_{\theta_1,q_1} \supseteq (X_0,X_1)_{\theta_2,q_2} \supseteq (X_0,X_1)_{\theta_2,\min\{q_1,q_2\}} \supseteq X_1.$$
For more information the reader is referred to [3, 4].

2.3. The spaces $B^\theta_{2,q}(E)$.

Definition 2.2. For a sequence of Banach spaces $E=(E_n)_{n=0}^\infty$ with $E_n\neq\{0\}$ we let $\ell_2(E)$ and $d_{1,2}(E)$ be the Banach spaces of all $a=(a_n)_{n=0}^\infty$, $a_n\in E_n$, such that
$$\|a\|_{\ell_2(E)} := \Big( \sum_{n=0}^\infty \|a_n\|^2_{E_n} \Big)^{\frac12} \quad\text{and}\quad \|a\|_{d_{1,2}(E)} := \Big( \sum_{n=0}^\infty (n+1) \|a_n\|^2_{E_n} \Big)^{\frac12},$$
respectively, are finite. Moreover, for $\theta\in(0,1)$ and $q\in[1,\infty]$ we let
$$B^\theta_{2,q}(E) := \begin{cases} (\ell_2(E), d_{1,2}(E))_{\theta,q} & : \theta\in(0,1),\ q\in[1,\infty], \\ d_{1,2}(E) & : \theta=1,\ q=2. \end{cases}$$
It can be shown that (cf. [9, Remark A.1])
$$\|a\|^2_{B^\theta_{2,2}(E)} \sim_{c^2_\theta} \sum_{n=0}^\infty (n+1)^\theta \|a_n\|^2_{E_n}.$$
To describe the interpolation spaces $B^\theta_{2,q}(E)$ we use two types of functions. The first one is a generating function for $(\|a_n\|^2_{E_n})_{n=0}^\infty$, i.e.
for $a=(a_n)_{n=0}^\infty\in\ell_2(E)$ we let
$$T_a(t) := \sum_{n=0}^\infty \|a_n\|^2_{E_n}\, t^n.$$
The second function will be used to describe our stochastic approximation in a deterministic way: for $a\in\ell_2(E)$ and a deterministic time-net $\tau=(t_k)_{k=0}^N$ with $0=t_0\le\cdots\le t_N=1$ we let
$$A(a,\tau) := \Big( \sum_{k=1}^N \int_{t_{k-1}}^{t_k} (t_k - t)\,(T_a)''(t)\,dt \Big)^{\frac12}.$$
For the formulation of the next two theorems, which will connect approximation properties with fractional smoothness, special time-nets are needed. Given $\theta\in(0,1]$ and $N\ge1$, we let $\tau^\theta_N$ be the time-net
$$(5)\qquad t^{N,\theta}_k := 1 - \Big( 1 - \frac kN \Big)^{\frac1\theta} \quad\text{for } k=0,1,\dots,N,$$
for which one has (see [10, relation (4)])
$$(6)\qquad \frac{|t^{N,\theta}_k - t|}{(1-t)^{1-\theta}} \le \frac{|t^{N,\theta}_k - t^{N,\theta}_{k-1}|}{(1-t^{N,\theta}_{k-1})^{1-\theta}} \le \frac{1}{\theta N} \quad\text{for } k=1,\dots,N$$
and $t\in[t^{N,\theta}_{k-1}, t^{N,\theta}_k)$. For $\theta=1$ we obtain equidistant time-nets. The following two theorems are taken from [9]. For the convenience of the reader we comment on the proofs in Remark 2.5 below.

Theorem 2.3 ([9]). For $\theta\in(0,1)$, $q\in[1,\infty]$ and $a=(a_n)_{n=0}^\infty\in\ell_2(E)$ one has
$$\|a\|_{B^\theta_{2,q}(E)} \sim_c \|a\|_{\ell_2(E)} + \Big\| \big( N^{\frac\theta2-\frac1q} A(a,\tau^1_N) \big)_{N=1}^\infty \Big\|_{\ell_q},$$
where $c\in[1,\infty)$ depends at most on $(\theta,q)$ and the expressions may be infinite.

Theorem 2.4 ([9]). For $\theta\in(0,1]$ and $a=(a_n)_{n=0}^\infty\in\ell_2(E)$ the following assertions are equivalent:
(i) $a\in B^\theta_{2,2}(E)$.
(ii) $\int_0^1 (1-t)^{1-\theta}\, T_a''(t)\,dt < \infty$.
(iii) There exists a constant $c>0$ such that
$$A(a,\tau^\theta_N) \le \frac{c}{\sqrt N} \quad\text{for } N=1,2,\dots$$

Remark 2.5. We fix $a=(a_n)_{n=0}^\infty\in\ell_2(E)$ and $(\theta,q)$ according to Theorems 2.3 and 2.4. Then we let $\beta_n := \|a_n\|_{E_n}$ and define $f=\sum_{n=0}^\infty \beta_n h_n \in L_2(\mathbb R,\gamma)$, where $\gamma$ is the standard Gaussian measure and $(h_n)_{n=0}^\infty$ the orthonormal basis of Hermite polynomials. As before, let
$$A(\beta,\tau) := \Big( \sum_{k=1}^N \int_{t_{k-1}}^{t_k} (t_k - t)\,(T_\beta)''(t)\,dt \Big)^{\frac12} \quad\text{with}\quad T_\beta(t) := \sum_{n=0}^\infty \beta_n^2\, t^n.$$
Omitting the notation $(E)$ in the case $E=(\mathbb R,\mathbb R,\dots)$, we have $\|a\|_{\ell_2(E)} = \|\beta\|_{\ell_2}$ and $\|a\|_{d_{1,2}(E)} = \|\beta\|_{d_{1,2}}$. Moreover, [9, Theorem 2.2] gives that $\|a\|_{B^\theta_{2,q}(E)} \sim_{c(\theta,q)} \|\beta\|_{B^\theta_{2,q}}$ for $\theta\in(0,1)$ and $q\in[1,\infty]$ because of $T_a = T_\beta$. Hence [9, Lemmas 3.9 and 3.10, Theorem 3.5 (X=W)] imply Theorem 2.3 of this paper. The equivalence of (i) and (iii) of Theorem 2.4 follows in the same way by [9, Lemmas 3.9 and 3.10, Theorem 3.2 (X=W)]. Finally, the equivalence of (i) and (ii) of Theorem 2.4 is a consequence of the proof of [9, Theorem 3.2 (X=W)].

2.4. Lévy processes. We follow the setting and presentation of [17, Section 1.1] and assume a square integrable mean zero Lévy process $X=(X_t)_{t\in[0,1]}$ on a stochastic basis $(\Omega,\mathcal F,P,(\mathcal F_t)_{t\in[0,1]})$ satisfying the usual assumptions, i.e. $(\Omega,\mathcal F,P)$ is complete, the filtration $(\mathcal F_t)_{t\in[0,1]}$ is the augmented natural filtration of $X$ and therefore right-continuous, and $\mathcal F=\mathcal F_1$ is assumed without loss of generality. The Lévy measure $\nu$ with $\nu(\{0\})=0$ satisfies
$$\int_{\mathbb R} x^2\,\nu(dx) < \infty$$
by the square integrability of $X$ (see [16, Theorem 25.3]). Let $N$ be the associated Poisson random measure and $d\tilde N(t,x) = dN(t,x) - dt\,d\nu(x)$ be the compensated Poisson random measure. The Lévy-Itô decomposition (see [16, Theorem 19.2]) can be written under our assumptions as
$$X_t = \sigma W_t + \int_{(0,t]\times(\mathbb R\setminus\{0\})} x\,\tilde N(ds,dx).$$
We introduce the finite measures $\mu$ on $\mathcal B(\mathbb R)$ and $\mathbb m$ on $\mathcal B([0,1]\times\mathbb R)$ by
$$\mu(dx) := \sigma^2 \delta_0(dx) + x^2\nu(dx), \qquad \mathbb m(dt,dx) := dt\,\mu(dx),$$
where we agree that $\mu(\mathbb R)>0$ to avoid pathologies. For $B\in\mathcal B((0,1]\times\mathbb R)$ we define the random measure
$$M(B) := \sigma \int_{\{t\in(0,1] \,:\, (t,0)\in B\}} dW_t + \int_{B\cap((0,1]\times(\mathbb R\setminus\{0\}))} x\,\tilde N(dt,dx)$$
and let $L_2^n := L_2\big( ([0,1]\times\mathbb R)^n, \mathcal B(([0,1]\times\mathbb R)^n), \mathbb m^{\otimes n} \big)$ for $n\ge1$. By [12, Theorem 2] there is the chaos decomposition
$$L_2 := L_2(\Omega,\mathcal F,P) = \bigoplus_{n=0}^\infty I_n(L_2^n),$$
where $I_0(L_2^0)$ is the space of the a.s.
constant random variables and $I_n(L_2^n) := \{ I_n(f_n) : f_n\in L_2^n \}$ for $n=1,2,\dots$, where $I_n(f_n)$ denotes the multiple integral w.r.t. the random measure $M$. For properties of the multiple integral see [12, Theorem 1]. Especially, $\|I_n(f_n)\|^2_{L_2} = n!\, \|\tilde f_n\|^2_{L_2^n}$ and
$$\|F\|^2_{L_2} = \sum_{n=0}^\infty n!\, \|\tilde f_n\|^2_{L_2^n}$$
with $\tilde f_n$ being the symmetrization of $f_n$, i.e.
$$\tilde f_n(z_1,\dots,z_n) = \frac{1}{n!} \sum_\pi f_n(z_{\pi(1)},\dots,z_{\pi(n)})$$
for all $z_i=(t_i,x_i)\in[0,1]\times\mathbb R$, where the sum is taken over all permutations $\pi$ of $\{1,\dots,n\}$. For $F\in L_2$ the $L_2$-representation
$$F = \sum_{n=0}^\infty I_n(\tilde f_n)$$
with $I_0(f_0) = \mathbb E F$ a.s. is unique (note that $I_n(f_n) = I_n(\tilde f_n)$ a.s.).

2.5. Besov spaces. Here we recall the construction of Besov spaces (or spaces of random variables of fractional smoothness) based on the above chaos expansion.

Definition 2.6. Let $\mathbb D_{1,2}$ be the space of all $F=\sum_{n=0}^\infty I_n(f_n)\in L_2$ such that
$$\|F\|^2_{\mathbb D_{1,2}} := \sum_{n=0}^\infty (n+1)\, \|I_n(f_n)\|^2_{L_2} < \infty.$$
Moreover,
$$\mathbb B^\theta_{2,q} := \begin{cases} (L_2,\mathbb D_{1,2})_{\theta,q} & : \theta\in(0,1),\ q\in[1,\infty], \\ \mathbb D_{1,2} & : \theta=1,\ q=2. \end{cases}$$

2.6. The space of the random variables to approximate. We will approximate random variables from the following space $\mathcal M$:

Definition 2.7. The closed subspace $\mathcal M\subseteq L_2$ consists of all mean zero $F\in L_2$ such that there exists a representation
$$F = \sum_{n=1}^\infty I_n(f_n)$$
with symmetric $f_n$ such that there are $h_0\in\mathbb R$ and symmetric $h_n\in L_2(\mu^{\otimes n})$ for $n\ge1$ with
$$f_n((t_1,x_1),\dots,(t_n,x_n)) = h_{n-1}(x_1,\dots,x_{n-1}) \quad\text{for } 0<t_1<\cdots<t_n<1.$$
The orthogonal projection onto $\mathcal M$ is denoted by $\Pi : L_2 \to \mathcal M \subseteq L_2$.

Let us summarize some facts about the space $\mathcal M$:

(a) Representation of $\Pi$. For
$$G = \sum_{n=0}^\infty I_n(\alpha_n) \in L_2$$
with symmetric $\alpha_n\in L_2^n$ one computes the functions $h_{n-1}$ of the projection $F=\Pi(G)$ by
$$(7)\qquad h_{n-1}(x_1,\dots,x_{n-1}) = n! \int_0^1 \!\int_0^{t_n} \!\!\cdots \int_0^{t_2} \!\int_{\mathbb R} \alpha_n\big((t_1,x_1),\dots,(t_{n-1},x_{n-1}),(t_n,x_n)\big)\,\frac{\mu(dx_n)}{\mu(\mathbb R)}\,dt_1\cdots dt_n \quad\text{for } n\ge1.$$

(b) Integral representation of the elements of $\mathcal M$. Given $F\in\mathcal M$ with a representation as in Definition 2.7 (the functions $h_n$ are unique as elements of $L_2(\mu^{\otimes n})$), we define the martingale $\varphi=(\varphi_t)_{t\in[0,1)}$ by the $L_2$-sum
$$(8)\qquad \varphi_t := h_0 + \sum_{n=1}^\infty (n+1)\, I_n\big( h_n 1^{\otimes n}_{(0,t]} \big),$$
which we will assume to be path-wise càdlàg. It follows that
$$\|\varphi_t\|^2_{L_2} = h_0^2 + \sum_{n=1}^\infty (n+1)^2\, n!\, t^n \|h_n\|^2_{L_2(\mu^{\otimes n})} = h_0^2 + \frac{1}{\mu(\mathbb R)} \sum_{n=1}^\infty (n+1)^2\, n!\, t^n \|f_{n+1}\|^2_{L_2^{n+1}} = h_0^2 + \frac{1}{\mu(\mathbb R)} \sum_{n=1}^\infty t^n (n+1) \|I_{n+1}(f_{n+1})\|^2_{L_2},$$
so that
$$(9)\qquad \mu(\mathbb R) \sup_{t\in[0,1)} \|\varphi_t\|^2_{L_2} + \|F\|^2_{L_2} = \sum_{n=0}^\infty (n+1)\, \|I_n(f_n)\|^2_{L_2}.$$
Moreover, for $t\in[0,1]$ we get that, a.s.,
$$F_t := \mathbb E(F\,|\,\mathcal F_t) = \int_{(0,t]} \varphi_{s-}\,dX_s.$$
This is analogous to the Brownian motion case considered in [7] and [9], where the representation $F = \mathbb E F + \int_{(0,1]} \varphi_s\,dB_s$ was used together with the regularity assumption that $(\varphi_s)_{s\in[0,1)}$ is a martingale or close to a martingale in some sense.

(c) Basic examples of elements of $\mathcal M$ are provided by Lemma 4.2 below: let $\Pi_X : L_2 \to I(X) \subseteq L_2$ be the orthogonal projection onto $I(X)$ defined in (4) and let $f:\mathbb R\to\mathbb R$ be a Borel function with $f(X_1)\in L_2$; then
$$\Pi_X(f(X_1)) = \Pi(f(X_1)).$$
This means the elements of $\mathcal M$ occur naturally when applying the Galtchouk-Kunita-Watanabe projection. It should be noted that in the case $\sigma=0$ and $\nu=\alpha\delta_{x_0}$ with $\alpha>0$ and $x_0\in\mathbb R\setminus\{0\}$ we have a chaos decomposition of the form $f(X_1) = \mathbb E f(X_1) + \sum_{n=1}^\infty \beta_n I_n(1^{\otimes n}_{(0,1]})$ with $\beta_n\in\mathbb R$, so that already $f(X_1)\in\mathcal M$.

2.7. Doléans-Dade stochastic exponential.

Definition 2.8. For $0\le a\le t\le 1$ we let
$$S^a_t := 1 + \sum_{n=1}^\infty \frac{I_n(1^{\otimes n}_{(a,t]})}{n!},$$
where we can assume that all paths of $(S^a_t)_{t\in[a,1]}$ are càdlàg for any fixed $a\in[0,1]$. In particular, we let $S=(S_t)_{t\in[0,1]} := (S^0_t)_{t\in[0,1]}$.

The following lemma is standard and we omit its proof.

Lemma 2.9.
For $0\le a\le t\le 1$ one has that
(i) $S^a_t = 1 + \int_{(a,t]} S^a_{u-}\,dX_u$ a.s.,
(ii) $S_t = S^a_t\, S_a$ a.s.,
(iii) $S^a_t$ is independent from $\mathcal F_a$ and $\mathbb E(S^a_t)^2 = e^{\mu(\mathbb R)(t-a)}$.

3. Approximation of stochastic integrals

In the sequel we will use
$$\mathcal T_N := \{ \tau=(t_k)_{k=0}^N : 0=t_0<\cdots<t_N=1 \} \quad\text{and}\quad \mathcal T := \bigcup_{N=1}^\infty \mathcal T_N$$
as sets of deterministic time-nets and define $|\tau| := \max_{1\le k\le N} |t_k - t_{k-1}|$. We will consider the following approximations of a random variable $F\in\mathcal M$ with respect to the processes $X$ and $S$:

Definition 3.1. For $N\ge1$, $Y\in\{X,S\}$, $F=\int_{(0,1]}\varphi_{s-}\,dX_s\in\mathcal M$, $A=(A_k)_{k=1}^N\subseteq\mathcal F$ and $\tau\in\mathcal T_N$ we let
(i) $a^{\mathrm{sim}}_S(F;\tau,A) := \Big\| F - \sum_{k=1}^N \varphi_{t_{k-1}} 1_{A_k} \big(S^{t_{k-1}}_{t_k} - 1\big) \Big\|_{L_2}$,
(ii) $a^{\mathrm{opt}}_Y(F;\tau) := \inf \Big\| F - \sum_{k=1}^N v_{k-1}\,(Y_{t_k}-Y_{t_{k-1}}) \Big\|_{L_2}$, where the infimum is taken over all $\mathcal F_{t_{k-1}}$-measurable $v_{k-1} : \Omega\to\mathbb R$ such that $\mathbb E |v_{k-1}(Y_{t_k}-Y_{t_{k-1}})|^2 < \infty$.

Remark 3.2. (i) The definition of $a^{\mathrm{sim}}_S$ takes into account the additional sets $(A_k)_{k=1}^N$ to avoid problems with the case that $S$ vanishes. These extra sets $A_k$ in $a^{\mathrm{sim}}_S(F;\tau,A)$ play different roles in Theorem 3.3, Theorem 3.4, and in Theorems 3.5, 3.6 and 3.8. To recover a more standard form of $a^{\mathrm{sim}}_S$, assume that $(S_t)_{t\in[0,1]}$ and $(S_{t-})_{t\in[0,1]}$ are positive, so that we can write
$$F = \int_{(0,1]} \psi_{u-}\, (S_{u-}\,dX_u) \quad\text{with}\quad \psi_u := \frac{\varphi_u}{S_u}$$
and obtain that
$$F - \sum_{k=1}^N \varphi_{t_{k-1}} \big(S^{t_{k-1}}_{t_k} - 1\big) = F - \sum_{k=1}^N \psi_{t_{k-1}} S_{t_{k-1}} \big(S^{t_{k-1}}_{t_k} - 1\big) = F - \sum_{k=1}^N \psi_{t_{k-1}} (S_{t_k} - S_{t_{k-1}}),$$
which is what one expects.
(ii) In the sequel the crucial assumption will be
$$\Omega = \{ S_t \neq 0 \} \quad\text{for all } t\in[0,1].$$
This can be achieved by the condition $\nu((-\infty,-1]) = 0$, which implies the almost sure positivity of $S$, so that we can adjust $S$ on a set of measure zero; see [13, Theorem I.4.61] and [16, Theorem 19.2].
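Lemma 2.9 (iii) is easy to check numerically in the purely Gaussian case $\sigma=1$, $\nu=0$, where $\mu(\mathbb R)=\sigma^2=1$ and $S_t=\exp(W_t - t/2)$. The following Monte Carlo sketch is our own illustration, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Gaussian case: sigma = 1, nu = 0, hence mu(R) = sigma^2 = 1 and
# S_t = exp(W_t - t/2); Lemma 2.9 (iii) with a = 0 predicts E S_t^2 = e^t.
t = 0.7
W_t = rng.normal(0.0, np.sqrt(t), size=200_000)   # W_t ~ N(0, t)
S_t = np.exp(W_t - t / 2.0)

print(np.mean(S_t ** 2), np.exp(t))   # the two values should be close
```

The exact computation behind the check is $\mathbb E\, e^{2W_t - t} = e^{2t - t} = e^t$, matching $e^{\mu(\mathbb R)t}$ with $\mu(\mathbb R)=1$.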
Because of the martingale property of $(\varphi_t)_{t\in[0,1)}$ it is easy to check that
$$a^{\mathrm{opt}}_X(F;\tau) = \Big\| F - \sum_{k=1}^N \varphi_{t_{k-1}} (X_{t_k} - X_{t_{k-1}}) \Big\|_{L_2},$$
so that for $Y=X$ the simple and the optimal approximation coincide. The theorem below gives a description of the simple approximation by a function $H_Y(t)$ that describes, in some sense, the curvature of $F\in\mathcal M$ with respect to $Y$.
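The adapted time-nets (5) from Section 2, which enter the deterministic criterion of Theorem 2.4 (iii), are straightforward to generate. The sketch below (our own code, with hypothetical names) also checks the right-hand inequality of (6) numerically:

```python
import numpy as np

def theta_net(N, theta):
    """Time-net (5): t_k = 1 - (1 - k/N)^(1/theta) for k = 0, ..., N."""
    k = np.arange(N + 1)
    return 1.0 - (1.0 - k / N) ** (1.0 / theta)

# right-hand bound in (6): (t_k - t_{k-1}) / (1 - t_{k-1})^(1-theta) <= 1/(theta*N)
for theta in (0.25, 0.5, 0.75, 1.0):
    for N in (1, 5, 50):
        t = theta_net(N, theta)
        ratios = np.diff(t) / (1.0 - t[:-1]) ** (1.0 - theta)
        assert ratios.max() <= 1.0 / (theta * N) + 1e-12

print(theta_net(4, 0.5))   # for theta < 1 the points concentrate near t = 1
```

For $\theta=1$ the bound holds with equality, reproducing the equidistant case mentioned after (6).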
