arXiv:1201.2629v1 [math.PR] 12 Jan 2012

A Complete Representation Theorem for G-martingales

Shige Peng∗, Yongsheng Song†, Jianfeng Zhang‡

January 13, 2012

Abstract

In this paper we establish a complete representation theorem for G-martingales. Unlike the existing results in the literature, we provide the existence and uniqueness of the second order term, which corresponds to the second order derivative in the Markovian case. The main ingredient of the paper is a new norm for that second order term, which is based on an operator introduced by Song [13].

Key words: G-expectations, G-martingales, martingale representation theorem, nonlinear expectations

AMS 2000 subject classifications: 60H10, 60H30

1 Introduction

The notion of G-expectation, a type of nonlinear expectation proposed by Peng [6], [7], has received very strong attention in the literature in recent years. In the Markovian case, the G-expectation and the closely related Second Order Backward SDEs introduced by Soner, Touzi, and Zhang [11] are associated with fully nonlinear PDEs; see also Peng [8]. Their typical applications include, among others, economic/financial models with volatility uncertainty and numerical methods for high dimensional fully nonlinear PDEs.

G-expectation is a typical nonlinear expectation. It can be regarded as a nonlinear generalization of the Wiener probability space (Ω, F, P), where Ω = C([0,∞), R^d), F = B(Ω),

∗ School of Mathematics, Shandong University, [email protected]. Research partially supported by NSF of China (No. 10921101) and National Basic Research Program of China (973 Program) (No. 2007CB814900).
† Academy of Mathematics and Systems Science, CAS, Beijing, China, [email protected]. Research supported by the National Basic Research Program of China (973 Program) (No. 2007CB814902) and the Key Lab of Random Complex Structures and Data Science, Chinese Academy of Sciences (No. 2008DP173182).
‡ University of Southern California, Department of Mathematics, [email protected].
Research supported in part by NSF grant DMS 10-08873.

and P is a Wiener probability measure defined on (Ω, F). Recall that the Wiener measure is defined such that the canonical process B_t(ω) := ω_t, t ≥ 0, is a continuous process with stationary and independent increments, namely (B_t)_{t≥0} is a Brownian motion. The G-expectation E^G is a sublinear expectation on the same canonical space Ω, such that the same canonical process B is a G-Brownian motion, i.e., a continuous process with stationary and independent increments. One important feature of this notion is its time consistency. To be precise, let ξ be a random variable and let Y_t := E^G_t[ξ] denote the conditional G-expectation; then one has E^G_s[ξ] = E^G_s[E^G_t(ξ)] for any s < t. For this reason, we call the conditional G-expectation a G-martingale, or a martingale under G-expectation. It is well known that a martingale under the Wiener measure can be written as a stochastic integral against the Brownian motion. A very natural and fundamental question in this nonlinear G-framework is then:

  What is the structure of a G-martingale Y?  (1.1)

Peng [6] observed that, for Z ∈ H²_G and η ∈ M¹_G (see (2.12) and (2.17) below), the following process Y is always a G-martingale:

  dY_t = Z_t dB_t − G(η_t) dt + (1/2) η_t d⟨B⟩_t.  (1.2)

Here G is the deterministic function Peng [6] used to define G-expectations, and ⟨B⟩ is the quadratic variation of the G-Brownian motion B. We remark that, in a Markovian framework, we have Y_t = u(t, B_t), where u is a smooth function satisfying the following fully nonlinear PDE:

  ∂_t u + G(∂_{xx} u) = 0.  (1.3)

Then Z_t = ∂_x u(t, B_t) and η_t = ∂_{xx} u(t, B_t). In particular, if ξ = g(B_T), then by PDE arguments we see immediately that Y_t := E^G_t[ξ] has a representation (1.2). Peng was even able to prove that this (Z, η)-representation holds if ξ is in L_ip, a dense subspace of L^p_G (see (2.5) below).
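As a concrete illustration of (1.2)-(1.3) (our added example, not from the paper), take d = 1, so that G(γ) = (1/2)(σ̄² γ⁺ − σ² γ⁻), and let ξ = B_T²:

```latex
% With terminal data u(T,x)=x^2, the function
u(t,x) = x^2 + \overline{\sigma}^2 (T-t)
% solves (1.3), since \partial_{xx} u = 2 > 0 gives
% G(\partial_{xx} u) = \overline{\sigma}^2 = -\partial_t u.  Therefore
Y_t = \mathbb{E}^G_t[B_T^2] = B_t^2 + \overline{\sigma}^2 (T-t),
\qquad Z_t = \partial_x u(t,B_t) = 2B_t,
\qquad \eta_t = \partial_{xx} u(t,B_t) = 2,
% and (1.2) reads  dY_t = 2B_t\, dB_t - \overline{\sigma}^2\, dt + d\langle B\rangle_t,
% consistent with Ito's formula  d(B_t^2) = 2B_t\, dB_t + d\langle B\rangle_t.
```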
But observing that L_ip is not a complete space, a very interesting question was then raised: to give a complete (Z, η)-representation theorem for E^G_t[ξ].

The first partial answer was provided by Xu and Zhang [14]: if Y is a symmetric G-martingale, that is, both Y and −Y are G-martingales, then

  dY_t = Z_t dB_t for some process Z.  (1.4)

However, symmetric G-martingales capture only the linear part of this nonlinear framework, and it is essentially important to understand the structure of nonsymmetric G-martingales. By introducing a new norm ‖·‖_{𝕃²_G} (see (2.22) below), Soner, Touzi and Zhang [10] proved a more general representation theorem: for ξ ∈ 𝕃²_G,

  dY_t = Z_t dB_t − dK_t,  (1.5)

where K is an increasing process such that −K is a G-martingale. It has been proved independently in [10] and Song [12] that 𝕃^p_G ⊃ ⋃_{q>p} L^q_G, where ‖·‖_{L^q_G} is the norm introduced in [6]. In particular, [12] extended the representation (1.5) to the case p > 1.

Now the question is: when does the process K in (1.5) have the structure dK_t = G(η_t) dt − (1/2) η_t d⟨B⟩_t? Several efforts have been made in this direction. Hu and Peng [4] and Pham and Zhang [9] made some progress on the existence of η. However, there is no characterization of the process η, and in particular they do not provide an appropriate norm for η. On the other hand, Song [13] proved the uniqueness of η in the space M¹_G. A clever operator was introduced in that work, which successfully isolates the term (1/2) η_t d⟨B⟩_t from dK_t, and thus essentially captures the uncertainty of the underlying distributions. This idea turns out to be the building block of the present paper.

The main contribution of this paper is to introduce a norm for the process η, based on the work [13]. We shall prove the existence, uniqueness, and a priori norm estimates for η.
In particular, given ξ_1 and ξ_2 in an appropriate space, let (Y^i, Z^i, η^i), i = 1, 2, be the corresponding terms; we shall estimate the norms of Z¹ − Z² and η¹ − η² in terms of that of Y¹ − Y², where the latter is more tractable thanks to the representation formula Y_t = E^G_t[ξ]. Unlike [13], we prove the estimates via PDE arguments.

The rest of the paper is organized as follows. In Section 2 we introduce G-martingales and the involved spaces. In Section 3 we propose the new norm for η and provide some estimates. Finally, in Section 4 we establish the complete representation theorem for G-martingales.

2 Preliminaries

In this section we introduce G-expectations and G-martingales. We shall focus on a simple setting in which we will establish the martingale representation theorem. However, these notions can be extended to a much more general framework, as in many publications in the literature.

We start with some notations in the multi-dimensional setting. Fix a dimension d. Let R^d and S^d denote the sets of d-dimensional column vectors and d × d symmetric matrices, respectively. For σ_1, σ_2 ∈ S^d, σ_1 ≤ σ_2 (resp. σ_1 < σ_2) means that σ_2 − σ_1 is nonnegative (resp. positive) definite, and we denote by [σ_1, σ_2] the set of σ ∈ S^d satisfying σ_1 ≤ σ ≤ σ_2. Throughout the paper, we use 0 to denote the d-dimensional zero vector or zero matrix, and I_d the d × d identity matrix. For x, x̃ ∈ R^d and γ, γ̃ ∈ S^d, define

  x · x̃ := x^T x̃, |x| := √(x · x), and γ : γ̃ := tr(γ γ̃), |γ| := √(γ : γ),  (2.1)

where x^T denotes the transpose of x. One can easily check that

  |γ : γ̃| ≤ |γ| |γ̃|, and −γ ≤ γ̃ ≤ γ implies |γ̃| ≤ |γ|.  (2.2)

2.1 Conditional G-expectations

We fix a finite time interval [0, T] and two constant matrices 0 < σ < σ̄ in S^d. Define

  G(γ) := sup_{σ∈[σ,σ̄]} (1/2)(σ² : γ), for all γ ∈ S^d.  (2.3)

Let Ω := {ω ∈ C([0,T], R^d) : ω_0 = 0} be the canonical space, B the canonical process, and F := F^B the filtration generated by B.
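The supremum in (2.3) runs over a matrix interval and has no simple closed form in general; for d = 1, however, the map σ ↦ (1/2)σ²γ is monotone in σ², so the supremum is attained at an endpoint of [σ, σ̄]. A minimal sketch (our illustration; the bounds s_lo = 0.5, s_hi = 1.0 are arbitrary choices, not from the paper):

```python
def G(gamma, s_lo=0.5, s_hi=1.0):
    """d = 1 case of (2.3): sup over sigma in [s_lo, s_hi] of 0.5 * sigma**2 * gamma.

    The map is linear in sigma**2, so the sup is attained at s_hi when
    gamma >= 0 and at s_lo when gamma < 0.
    """
    return 0.5 * ((s_hi ** 2 if gamma >= 0 else s_lo ** 2) * gamma)
```

As a maximum of linear functions, G is sublinear: positively homogeneous and subadditive, which is exactly what makes E^G a sublinear expectation.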
For ξ = ϕ(B_T), where ϕ : R^d → R is a bounded and Lipschitz continuous function, following Peng [6] we define the conditional G-expectation E^G_t[ξ] := u(t, B_t), where u is the (unique) classical solution of the following PDE on [0, T]:

  ∂_t u + G(∂_{xx} u) = 0, u(T, x) = ϕ(x).  (2.4)

Let L_ip denote the set of random variables ξ = ϕ(B_{t_1}, ..., B_{t_n}) for some 0 ≤ t_1 < ··· < t_n ≤ T and some Lipschitz continuous function ϕ. One may define E^G_t[ξ] in the same spirit, by defining it backwardly over each interval [t_i, t_{i+1}]. In particular, when t = 0 we define E^G[ξ] := E^G_0[ξ].

For any p ≥ 1, define

  ‖ξ‖^p_{L^p_G} := E^G[|ξ|^p], ξ ∈ L_ip.  (2.5)

Clearly this defines a norm on L_ip. Let L^p_G denote the closure of L_ip under the norm ‖·‖_{L^p_G}, taking the quotient as in the standard literature. One can easily extend the conditional G-expectation to all ξ ∈ L¹_G.

We next provide an equivalent formulation of conditional G-expectations by using quasi-sure stochastic analysis, initiated by Denis and Martini [2] for the superhedging problem under volatility uncertainty. Let A denote the space of F-progressively measurable processes taking values in [σ, σ̄]. Denoting by P_0 the Wiener measure, we define

  P := { P^σ := P_0 ∘ (X^σ)^{−1} : σ ∈ A }, where X^σ_t := ∫_0^t σ_s dB_s, P_0-a.s.  (2.6)

Then B is a P-martingale for each P ∈ P. Following [2], we say

  a property holds P-quasi surely, abbreviated as P-q.s., if it holds P-a.s. for all P ∈ P.  (2.7)

It was proved in Denis, Hu and Peng [1] that:

  E^G[ξ] = sup_{P∈P} E^P[ξ], ξ ∈ L¹_G.  (2.8)

The result was extended by Soner, Touzi and Zhang [10] to conditional G-expectations: for any P ∈ P and any t ∈ [0, T],

  E^G_t[ξ] = ess sup^P_{P'∈P(t,P)} E^{P'}_t[ξ], P-a.s., ξ ∈ L¹_G, where P(t, P) := { P' ∈ P : P' = P on F_t }.  (2.9)

We remark that Peng [5] had similar ideas in the context of strong formulation. We finally note that E^G_t obviously satisfies the following subadditivity:

  E^G_t[ξ_1 + ξ_2] ≤ E^G_t[ξ_1] + E^G_t[ξ_2], for any ξ_1, ξ_2 ∈ L¹_G.  (2.10)
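Since the conditional G-expectation is defined through PDE (2.4), E^G[ϕ(B_T)] can be approximated numerically. Below is a minimal one-dimensional explicit finite-difference sketch (our illustration; the bounds s_lo = 0.5, s_hi = 1.0, the domain size, and the grid are arbitrary choices, not from the paper):

```python
import math

def G(gamma, s_lo=0.5, s_hi=1.0):
    # d = 1 case of (2.3)
    return 0.5 * ((s_hi ** 2 if gamma >= 0 else s_lo ** 2) * gamma)

def g_expectation(phi, T=1.0, L=5.0, nx=201, s_lo=0.5, s_hi=1.0):
    """Backward explicit scheme for d_t u + G(d_xx u) = 0, u(T, .) = phi.

    Returns the approximation of u(0, 0) = E^G[phi(B_T)].
    """
    dx = 2.0 * L / (nx - 1)
    nt = int(math.ceil(T * s_hi ** 2 / (0.4 * dx * dx)))  # CFL-stable step count
    dt = T / nt
    u = [phi(-L + i * dx) for i in range(nx)]             # terminal data
    for _ in range(nt):
        uxx = [(u[i - 1] - 2.0 * u[i] + u[i + 1]) / (dx * dx) for i in range(1, nx - 1)]
        uxx = [uxx[0]] + uxx + [uxx[-1]]                  # copy curvature to boundary nodes
        u = [u[i] + dt * G(uxx[i], s_lo, s_hi) for i in range(nx)]
    return u[nx // 2]
```

For ϕ(x) = x² the scheme reproduces the closed-form value E^G[B_T²] = σ̄²T, while −E^G[−B_T²] = σ²T: the two numbers differ, illustrating the sublinearity (2.10).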
2.2 Stochastic integrals

First notice that there exists an S^d-valued process ⟨B⟩ such that B_t B_t^T − ⟨B⟩_t is a G-martingale. In fact, under each P ∈ P, ⟨B⟩ coincides with the quadratic variation of the P-martingale B, and consequently

  σ² ≤ (d/dt)⟨B⟩_t ≤ σ̄², P-q.s.  (2.11)

Naturally we call ⟨B⟩ the quadratic variation of B. Next, we call an F-progressively measurable process Z with appropriate dimension an elementary process if it takes the form Z = Σ_{i=0}^{n−1} Z_{t_i} 1_{[t_i, t_{i+1})} for some 0 = t_0 < ··· < t_n ≤ T, where each component of Z_{t_i} is in L_ip. Let H⁰_G denote the space of R^d-valued elementary processes. For any p ≥ 1, define

  ‖Z‖^p_{H^p_G} := E^G[ ( ∫_0^T (Z_t Z_t^T) : d⟨B⟩_t )^{p/2} ], Z ∈ H⁰_G;  (2.12)

and let H^p_G denote the closure of H⁰_G under the norm ‖·‖_{H^p_G}. Now for each Z ∈ H⁰_G, we define its stochastic integral:

  ∫_0^t Z_s · dB_s := Σ_{i=0}^{n−1} Z_{t_i} · [B_{t_{i+1}∧t} − B_{t_i∧t}].  (2.13)

One can easily prove the Burkholder–Davis–Gundy inequality: for any p > 0, there exist constants 0 < c_p < C_p < ∞ such that

  c_p ‖Z‖^p_{H^p_G} ≤ E^G[ sup_{0≤t≤T} | ∫_0^t Z_s · dB_s |^p ] ≤ C_p ‖Z‖^p_{H^p_G}.  (2.14)

Then one can extend the stochastic integral to all Z ∈ H^p_G.

2.3 G-martingales

One important feature of conditional G-expectations is the time consistency, which can also be viewed as a dynamic programming principle:

  E^G_s[E^G_t(ξ)] = E^G_s[ξ], for all ξ ∈ L¹_G and 0 ≤ s < t ≤ T.  (2.15)

We recall that a process Y is called a G-martingale if

  E^G_s[Y_t] = Y_s for all 0 ≤ s < t ≤ T.  (2.16)

Therefore, Y is a G-martingale if and only if Y_t = E^G_t[ξ] for ξ = Y_T.

It is clear that ∫_0^t Z_s dB_s is a G-martingale for all Z ∈ H¹_G. In particular, the canonical process B is a G-martingale and is called a G-Brownian motion. However, G-martingales have a richer structure. Let M⁰_G be the space of S^d-valued elementary processes.
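The elementary stochastic integral (2.13) is a finite telescoping sum, so it can be checked mechanically. The sketch below (our illustration, scalar case, hypothetical helper name) evaluates it for a piecewise-constant integrand against any path given as a function of time:

```python
def elementary_integral(Z, B, partition, t):
    """Scalar version of (2.13): sum of Z[t_i] * (B(t_{i+1} ^ t) - B(t_i ^ t)),
    where ^ denotes minimum.  Z maps partition points to values; B is the path."""
    total = 0.0
    for ti, tj in zip(partition[:-1], partition[1:]):
        total += Z[ti] * (B(min(tj, t)) - B(min(ti, t)))
    return total
```

For Z ≡ 1 the sum telescopes to B(t) − B(0), consistent with the canonical process itself being a G-martingale; the integral is also linear in Z, as one expects.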
Define

  ‖η‖^p_{M^p_G} := E^G[ ( ∫_0^T |η_t| dt )^p ], η ∈ M⁰_G;  (2.17)

and let M^p_G denote the closure of M⁰_G under the norm ‖·‖_{M^p_G}. An interesting fact observed by Peng [6] is that the following decreasing process is also a G-martingale:

  K_t := (1/2) ∫_0^t η_s : d⟨B⟩_s − ∫_0^t G(η_s) ds, η ∈ M¹_G.  (2.18)

Consequently, the following process Y is always a G-martingale:

  Y_t = Y_0 + ∫_0^t Z_s · dB_s − [ ∫_0^t G(η_s) ds − (1/2) ∫_0^t η_s : d⟨B⟩_s ], Z ∈ H¹_G, η ∈ M¹_G.  (2.19)

On the other hand, for any ξ ∈ L_ip, by Peng [7] there exist Z ∈ H¹_G and η ∈ M¹_G such that Y_t := E^G_t[ξ] satisfies (2.19). In particular, when ξ = ϕ(B_T), for the classical solution u of PDE (2.4) we have:

  Y_t = u(t, B_t), Z_t = ∂_x u(t, B_t), η_t = ∂_{xx} u(t, B_t).  (2.20)

The goal of this paper is to answer the following natural question proposed by Peng [7]:

  For what ξ do there exist unique Z ∈ H¹_G and η ∈ M¹_G satisfying (2.19)?  (2.21)

The problem was partially solved by Soner, Touzi and Zhang [10], who introduced the following norm:

  ‖ξ‖^p_{𝕃^p_G} := E^G[ sup_{0≤t≤T} |E^G_t[ξ]|^p ], ξ ∈ L_ip.  (2.22)

Let 𝕃^p_G denote the closure of L_ip under the norm ‖·‖_{𝕃^p_G}. Then for any ξ ∈ 𝕃²_G, there exist a unique Z ∈ H²_G and an increasing process K with K_0 = 0 such that

  Y_t := E^G_t[ξ] = Y_0 + ∫_0^t Z_s · dB_s − K_t and ‖Z‖_{H²_G} + ‖K_T‖_{L²_G} ≤ C ‖ξ‖_{𝕃²_G}.  (2.23)

It was proved independently by [10] and Song [12] that ‖ξ‖_{𝕃^p_G} ≤ C_{p,q} ‖ξ‖_{L^q_G} for any 1 ≤ p < q. Moreover, the above representation was extended by [12] to the case p > 1.

2.4 Summary of notations

For the readers' convenience, we collect here some notations used in the paper:

• The inner product ·, the trace operator :, and the norms |x|, |γ| are defined by (2.1).
• The functions G, G^α and G_ε are defined by (2.3), (3.1), and (3.5), respectively.
• The class of probability measures P and the G-expectation E^G are defined by (2.6) and (2.8), respectively.
• The norms ‖ξ‖_{L^p_G} and ‖ξ‖_{𝕃^p_G} for ξ are defined in (2.5) and (2.22), respectively.
• The norm ‖Z‖_{H^p_G} for Z is defined in (2.12).
• The norm ‖η‖_{M^p_G} for η is defined in (2.17).
• The norm ‖Y‖_{D^p_G} for càdlàg processes Y (see also (2.22)) is defined by:

  ‖Y‖^p_{D^p_G} := E^G[ sup_{0≤t≤T} |Y_t|^p ].  (2.24)

• The operator E^α_{t_1,t_2} is defined by (3.2).
• The constants c_0, C_0 are defined by (3.4).
• The function δ_n is defined by (3.7).
• The new norms ‖η‖_{M_G} and ‖η‖_{M*_G} for η are defined by (3.11) and (3.16), respectively.
• The space M¹_{G0} and the class P_0 are defined by (3.17) and (3.18), respectively.
• The new metric d_{G,p}(ξ_1, ξ_2) for ξ is defined by (4.3), and L^{*p}_G is the corresponding closure space.
• For 0 ≤ s ≤ t ≤ T, the shifted canonical process B^s is defined by:

  B^s_t := B_t − B_s.  (2.25)

3 A new norm for η

The main contribution of this paper is to introduce a norm for η. For that purpose, we shall introduce two nonlinear operators, one via PDE arguments and the other via probabilistic arguments. The latter is strongly motivated by the work of Song [13], and the connection between the two operators is established in Lemma 3.4 below.

3.1 The nonlinear operator via PDE arguments

We first introduce a new nonlinear operator E^α on Lipschitz continuous functions, with a parameter α ∈ S^d. Define

  G^α(γ) := (1/2)[G(γ + 2α) + G(γ − 2α)], γ ∈ S^d.  (3.1)

Given 0 ≤ t_1 < t_2 ≤ T and a Lipschitz continuous function ϕ, define E^α_{t_1,t_2}(ϕ) := u^α(t_1, ·), where u^α is the unique viscosity solution of the following PDE on [t_1, t_2]:

  ∂_t u^α + G^α(∂_{xx} u^α) = 0, u^α(t_2, x) = ϕ(x).  (3.2)

Clearly G^α is strictly increasing and convex in γ. In particular, the above PDE is parabolic and well-posed. We collect below some obvious properties of G^α and E^α, whose proofs are omitted.

Lemma 3.1 For any α ∈ S^d:
(i) E^α satisfies the semigroup property:

  E^α_{t_1,t_2}( E^α_{t_2,t_3}(ϕ) ) = E^α_{t_1,t_3}(ϕ), for any 0 ≤ t_1 < t_2 < t_3 ≤ T.  (3.3)

(ii) G^{−α} = G^α ≥ G = G⁰.
(iii) If ϕ = c is a constant, then E^α_{t_1,t_2}(c) = c + G^α(0)(t_2 − t_1).
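In dimension one, the properties in Lemma 3.1(ii) follow directly from the definitions; the following sketch (our illustration, with hypothetical bounds s_lo, s_hi) encodes (2.3) and (3.1) so they can be spot-checked on a grid:

```python
def G(gamma, s_lo=0.5, s_hi=1.0):
    # d = 1 case of (2.3): the sup is attained at s_hi for gamma >= 0, at s_lo otherwise
    return 0.5 * ((s_hi ** 2 if gamma >= 0 else s_lo ** 2) * gamma)

def G_alpha(gamma, alpha):
    # definition (3.1)
    return 0.5 * (G(gamma + 2.0 * alpha) + G(gamma - 2.0 * alpha))
```

Symmetry of (3.1) in α gives G^{−α} = G^α, convexity of G gives G^α ≥ G, and α = 0 recovers G⁰ = G.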
The next property will be crucial for our estimates. Let

  c_0 := the smallest eigenvalue of (1/2)[σ̄² − σ²], and C_0 := (1/2)|σ̄² − σ²|.  (3.4)

Then clearly C_0 ≥ c_0 > 0 and σ² + c_0 I_d ≤ σ̄² − c_0 I_d. Denote, for ε ≤ c_0,

  G_ε(γ) := sup_{σ∈[σ_ε,σ̄_ε]} (1/2)(σ² : γ), where σ_ε² := σ² + ε I_d, σ̄_ε² := σ̄² − ε I_d.  (3.5)

Lemma 3.2
(i) For any 0 < ε ≤ c_0 and α, γ ∈ S^d, it holds that

  G_ε(γ) + ε|α| ≤ G^α(γ) ≤ G(γ) + C_0|α|.  (3.6)

(ii) Assume ϕ is a Lipschitz continuous function and 0 ≤ t_1 < t_2 ≤ T. Then

  E^{G_ε}[ϕ(x + B^{t_1}_{t_2})] + ε|α|(t_2 − t_1) ≤ E^α_{t_1,t_2}(ϕ)(x) ≤ E^G[ϕ(x + B^{t_1}_{t_2})] + C_0|α|(t_2 − t_1),

where E^{G_ε} denotes the G-expectation corresponding to the function G_ε.

Proof. (i) We first prove the left inequality. Let α_1, ..., α_d denote the eigenvalues of α, and α̂ the diagonal matrix with entries α_1, ..., α_d. Then |α| = (α_1² + ··· + α_d²)^{1/2}, and there exists an orthogonal matrix P such that P^T α P = α̂. Let ĉ_ε denote a diagonal matrix whose diagonal entries take values ε or −ε. Now for any σ ∈ [σ_ε, σ̄_ε], by (3.5) we have

  σ² + P ĉ_ε P^T ∈ [σ², σ̄²] and σ² − P ĉ_ε P^T ∈ [σ², σ̄²].

Then

  2G^α(γ) = G(γ + 2α) + G(γ − 2α)
  ≥ (1/2)[ (σ² + P ĉ_ε P^T) : (γ + 2α) + (σ² − P ĉ_ε P^T) : (γ − 2α) ]
  = σ² : γ + 2(P ĉ_ε P^T) : α = σ² : γ + 2 ĉ_ε : (P^T α P) = σ² : γ + 2 ĉ_ε : α̂.

By the arbitrariness of σ and ĉ_ε, we get

  G^α(γ) ≥ G_ε(γ) + ε Σ_{i=1}^d |α_i| ≥ G_ε(γ) + ε|α|.

We now prove the right inequality of (3.6). For any σ_1, σ_2 ∈ [σ, σ̄], we have

  σ_1² : (γ + 2α) + σ_2² : (γ − 2α) = (σ_1² + σ_2²) : γ + 2(σ_1² − σ_2²) : α.

Note that

  σ² ≤ (1/2)(σ_1² + σ_2²) ≤ σ̄², −[σ̄² − σ²] ≤ σ_1² − σ_2² ≤ σ̄² − σ².

Then, by (2.2),

  σ_1² : (γ + 2α) + σ_2² : (γ − 2α) ≤ 4G(γ) + 4C_0|α|.

Since σ_1, σ_2 are arbitrary, this proves the right inequality of (3.6), and hence (3.6).

(ii) One can easily check that

  E^{G_ε}[ϕ(x + B^{t_1}_{t_2})] + ε|α|(t_2 − t_1) = v^α(t_1, x),
  E^G[ϕ(x + B^{t_1}_{t_2})] + C_0|α|(t_2 − t_1) = v̄^α(t_1, x),

where v^α and v̄^α are the unique viscosity solutions of the following PDEs on [t_1, t_2]:

  ∂_t v^α + G_ε(∂_{xx} v^α) + ε|α| = 0, v^α(t_2, x) = ϕ(x);
  ∂_t v̄^α + G(∂_{xx} v̄^α) + C_0|α| = 0, v̄^α(t_2, x) = ϕ(x).
Then the statement follows directly from (3.6) and the comparison principle for PDEs. □

3.2 The nonlinear operator via probabilistic arguments

For any n ≥ 1, denote t^n_i := iT/n, i = 0, ..., n, and define

  δ_n(t) := Σ_{i=0}^{n−1} (−1)^i 1_{[t^n_i, t^n_{i+1})}(t), t ∈ [0, T].  (3.7)

This function was introduced in [13], where it plays a key role in constructing a new norm. According to [13], we have:

Lemma 3.3 For any η ∈ M¹_G, it holds that lim_{n→∞} E^G[ ∫_0^T G(η_t) δ_n(t) dt ] = 0.

The next lemma establishes the connection between δ_n and G^α.

Lemma 3.4 Let 0 ≤ s < t ≤ T and α ∈ S^d.
(i) For any γ ∈ S^d, we have

  lim_{n→∞} E^G_s[ ∫_s^t (α δ_n(r) + (1/2)γ) : d⟨B⟩_r ] = G^α(γ)(t − s).  (3.8)

(ii) For any x ∈ R^d and any Lipschitz continuous function ϕ, we have

  lim_{n→∞} E^G_s[ ∫_s^t δ_n(r) α : d⟨B⟩_r + ϕ(x + B^s_t) ] = E^α_{s,t}(ϕ)(x).  (3.9)

Proof. (i) Fix n such that 2T/n < t − s. Note that

  E^G_{t^n_{2i}}[ ∫_{t^n_{2i}}^{t^n_{2i+2}} (α δ_n(r) + (1/2)γ) : d⟨B⟩_r ]
  = E^G_{t^n_{2i}}[ ((1/2)γ + α) : (⟨B⟩_{t^n_{2i+1}} − ⟨B⟩_{t^n_{2i}}) + ((1/2)γ − α) : (⟨B⟩_{t^n_{2i+2}} − ⟨B⟩_{t^n_{2i+1}}) ]
  = E^G_{t^n_{2i}}[ ((1/2)γ + α) : (⟨B⟩_{t^n_{2i+1}} − ⟨B⟩_{t^n_{2i}}) + E^G_{t^n_{2i+1}}[ ((1/2)γ − α) : (⟨B⟩_{t^n_{2i+2}} − ⟨B⟩_{t^n_{2i+1}}) ] ]
  = E^G_{t^n_{2i}}[ ((1/2)γ + α) : (⟨B⟩_{t^n_{2i+1}} − ⟨B⟩_{t^n_{2i}}) + G(γ − 2α) T/n ]
  = E^G_{t^n_{2i}}[ ((1/2)γ + α) : (⟨B⟩_{t^n_{2i+1}} − ⟨B⟩_{t^n_{2i}}) ] + G(γ − 2α) T/n
  = G(γ + 2α) T/n + G(γ − 2α) T/n = G^α(γ)(t^n_{2i+2} − t^n_{2i}).

Similarly, for any i < j,

  E^G_{t^n_{2i}}[ ∫_{t^n_{2i}}^{t^n_{2j}} (α δ_n(r) + (1/2)γ) : d⟨B⟩_r ] = G^α(γ)(t^n_{2j} − t^n_{2i}).

Now assume t^n_{2i} ≤ s < t^n_{2i+2} ≤ t^n_{2j} ≤ t < t^n_{2j+2}. Then

  | E^G_s[ ∫_s^t (α δ_n(r) + (1/2)γ) : d⟨B⟩_r ] − G^α(γ)(t − s) |
  ≤ | E^G_s[ ∫_s^t (α δ_n(r) + (1/2)γ) : d⟨B⟩_r ] − E^G_s[ ∫_{t^n_{2i+2}}^{t^n_{2j}} (α δ_n(r) + (1/2)γ) : d⟨B⟩_r ] | + | G^α(γ)(t^n_{2j} − t^n_{2i+2}) − G^α(γ)(t − s) |
  ≤ E^G_s[ | ( ∫_s^{t^n_{2i+2}} + ∫_{t^n_{2j}}^t ) (α δ_n(r) + (1/2)γ) : d⟨B⟩_r | ] + (2T/n)|G^α(γ)|
  ≤ (2T/n)|σ̄²|( |α| + (1/2)|γ| ) + (2T/n)|G^α(γ)| → 0, as n → ∞.
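The two inequalities of Lemma 3.2(i), together with the positive-homogeneity identity 2G((1/2)γ + α) = G(γ + 2α) used implicitly in the computation above, can be sanity-checked numerically in dimension one, where c_0 = C_0 = (1/2)(σ̄² − σ²). The sketch below is our illustration, with arbitrary illustrative values for σ² and σ̄²:

```python
def G_gen(gamma, lo2, hi2):
    # sup over sigma^2 in [lo2, hi2] of 0.5 * sigma^2 * gamma  (d = 1)
    return 0.5 * ((hi2 if gamma >= 0 else lo2) * gamma)

S_LO2, S_HI2 = 0.25, 1.0      # illustrative sigma_lo^2, sigma_hi^2
C0 = 0.5 * (S_HI2 - S_LO2)    # (3.4); in d = 1, c_0 = C_0

def G(gamma):
    return G_gen(gamma, S_LO2, S_HI2)

def G_alpha(gamma, alpha):
    # definition (3.1)
    return 0.5 * (G(gamma + 2.0 * alpha) + G(gamma - 2.0 * alpha))

def G_eps(gamma, eps):
    # definition (3.5): both volatility bounds pulled inward by eps
    return G_gen(gamma, S_LO2 + eps, S_HI2 - eps)
```

Scanning a grid of (γ, α, ε) with 0 < ε ≤ c_0 confirms G_ε(γ) + ε|α| ≤ G^α(γ) ≤ G(γ) + C_0|α|, with equality on the left at ε = c_0, γ = 0.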