arXiv:1701.05920v1 [math.PR] 20 Jan 2017

Averaging principle for one dimensional stochastic Burgers equation

Zhao Dong^{a,∗}, Xiaobin Sun^{b,†}, Hui Xiao^{a,d,‡}, Jianliang Zhai^{c,§}

a. University of Chinese Academy of Sciences; Academy of Mathematics and Systems Science, Chinese Academy of Sciences (CAS), Beijing, 100190, China.
b. School of Mathematics and Statistics, Jiangsu Normal University, Xuzhou, 221116, China.
c. School of Mathematical Science, University of Science and Technology of China, Hefei, 230026, China.
d. Université de Bretagne-Sud, CNRS UMR 6205, LMBA, campus de Tohannic, 56000 Vannes, France.

Abstract. This paper considers the averaging principle for the one dimensional stochastic Burgers equation with slow and fast time-scales. Under suitable conditions, we show that the slow component strongly converges to the solution of the corresponding averaged equation. Meanwhile, when there is no noise in the slow equation, we also prove that the slow component weakly converges to the solution of the corresponding averaged equation, with order of convergence $1-r$ for any $0 < r < 1$.

Keywords: Stochastic Burgers equation; Averaging principle; Ergodicity; Invariant measure; Strong convergence; Weak convergence

Mathematics Subject Classification (2000). Primary: 34D08, 34D25; Secondary: 60H20

1 Introduction

Almost all physical systems have a certain hierarchy in which not all components evolve at the same rate: some components vary very rapidly, while others change very slowly; see [21]. The averaging principle provides an effective tool for analyzing such multiple time-scale dynamical systems. Moreover, the method enormously reduces the computational load, even though computer technology is highly efficient nowadays. The Michaelis–Menten approximation technique for enzyme activation reactions [20] is a famous and successful example on this topic.
The theory of the averaging principle has a long and rich history and has been applied in many fields, such as celestial mechanics, wireless communication, signal processing, oscillation theory and radiophysics. The first result, for deterministic systems, was obtained by Bogoliubov [1], followed by Volosov [24] for ordinary differential equations. In 1968, the averaging principle for stochastic differential equations driven by Brownian motion was first proved by Khasminskii [17]. Since then, the averaging principle for stochastic reaction-diffusion systems has become an active research area which has attracted a number of mathematicians. For example, Freidlin and Wentzell [11] established the theory of the averaging principle from a deeper understanding of this phenomenon. Cerrai and Freidlin [4] proved the averaging principle for a general class of stochastic reaction-diffusion systems, which extended the classical Khasminskii-type averaging principle from finite dimensional to infinite dimensional systems. We refer to [2, 3, 12, 13, 14, 15, 18, 19, 23] and the references therein for more interesting results on this topic.

In this paper, we consider the averaging principle for the one dimensional stochastic Burgers equation. To the best of our knowledge, this is the first article to deal with a highly nonlinear term on this topic. Consider the following stochastic fast-slow system on the bounded interval $[0,1]\subset\mathbb{R}$:

$$
\begin{cases}
\dfrac{\partial X_t^\varepsilon(\xi)}{\partial t}=\Delta X_t^\varepsilon(\xi)+\dfrac{1}{2}\dfrac{\partial}{\partial\xi}\big(X_t^\varepsilon(\xi)\big)^2+f\big(X_t^\varepsilon(\xi),Y_t^\varepsilon(\xi)\big)+\dfrac{\partial W^{Q_1}}{\partial t}(t,\xi), & X_0^\varepsilon=x,\\[2mm]
\dfrac{\partial Y_t^\varepsilon(\xi)}{\partial t}=\dfrac{1}{\varepsilon}\Big[\Delta Y_t^\varepsilon(\xi)+g\big(X_t^\varepsilon(\xi),Y_t^\varepsilon(\xi)\big)\Big]+\dfrac{1}{\sqrt{\varepsilon}}\dfrac{\partial W^{Q_2}}{\partial t}(t,\xi), & Y_0^\varepsilon=y,\\[2mm]
X_t^\varepsilon(0)=X_t^\varepsilon(1)=Y_t^\varepsilon(0)=Y_t^\varepsilon(1)=0,
\end{cases}\tag{1.1}
$$

where $\varepsilon$ is a small positive parameter describing the ratio of time scales between the slow component $X_t^\varepsilon$ and the fast component $Y_t^\varepsilon$. The coefficients $f$ and $g$ are suitable functions.

∗E-mail: [email protected]
†E-mail: [email protected]
‡E-mail: [email protected]
§E-mail: [email protected]
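The two-time-scale mechanism behind a system like (1.1) can be illustrated on a toy finite-dimensional analogue. Below is a minimal Euler–Maruyama sketch (all coefficients here are hypothetical illustrative choices, not those of the paper): the fast component is an Ornstein–Uhlenbeck process whose frozen invariant measure is $N(1,\tfrac12)$, so the averaged slow drift is $\bar f(x)=1-x$ and the averaged equation $d\bar X=(1-\bar X)\,dt$, $\bar X(0)=0$, gives $\bar X(T)=1-e^{-T}$.

```python
import numpy as np

def simulate_slow(eps, T=1.0, dt=1e-4, n_paths=200, seed=0):
    """Euler-Maruyama for a toy fast-slow system (hypothetical coefficients):
        dX = (Y - X) dt                               (slow component, no noise)
        dY = (1/eps)(1 - Y) dt + (1/sqrt(eps)) dW     (fast Ornstein-Uhlenbeck)
    The frozen invariant measure of Y is N(1, 1/2), so the averaged equation is
    dXbar = (1 - Xbar) dt, whence Xbar(T) = 1 - exp(-T) for Xbar(0) = 0.
    Returns the Monte Carlo average of X(T) over n_paths sample paths."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    X = np.zeros(n_paths)
    Y = np.zeros(n_paths)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        X = X + (Y - X) * dt                          # slow update uses current Y
        Y = Y + (1.0 - Y) * dt / eps + dW / np.sqrt(eps)
    return X.mean()

xbar_T = 1.0 - np.exp(-1.0)      # averaged-equation value at T = 1
x_eps = simulate_slow(eps=0.01)
print(abs(x_eps - xbar_T))       # gap shrinks as eps -> 0
```

Shrinking `eps` (with `dt` kept well below it for stability of the explicit scheme) drives the slow component toward the averaged trajectory, which is the phenomenon the paper quantifies for (1.1).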
$W^{Q_1}=\{W_t^{Q_1}\}_{t\ge0}$ and $W^{Q_2}=\{W_t^{Q_2}\}_{t\ge0}$ are $H(:=L^2(0,1))$-valued, mutually independent $Q_1$- and $Q_2$-Wiener processes on a complete probability space $(\Omega,\mathcal{F},\mathbb{P})$ with a filtration $\{\mathcal{F}_t,t\ge0\}$ satisfying the usual conditions. As $\varepsilon$ goes to $0$, the slow component $X^\varepsilon$ tends to a process $\bar X$, the solution of the averaged equation:

$$
\begin{cases}
d\bar X_t=\Delta\bar X_t\,dt+\dfrac{1}{2}\dfrac{\partial}{\partial\xi}(\bar X_t)^2\,dt+\bar f(\bar X_t)\,dt+dW^{Q_1}(t),\\
\bar X_0=x,
\end{cases}\tag{1.2}
$$

with the average $\bar f(x)=\int_H f(x,y)\,\mu^x(dy)$, where $\mu^x$ denotes the unique invariant measure of the fast motion equation with the slow variable $x\in H$ frozen (see equation (4.18) below).

We aim to study the speed at which $X^\varepsilon$ converges to $\bar X$, in the strong and in the weak sense respectively. Under suitable conditions, the strong convergence result is as follows:

• For any initial value $x\in H^\alpha$ with $\alpha\in(1,\tfrac32]$ and $y\in H$, $p,T>0$, we have

$$\mathbb{E}\Big[\sup_{0\le t\le T}|X_t^\varepsilon-\bar X_t|^{2p}\Big]\le C\Big(\frac{1}{-\log\varepsilon}\Big)^{\frac{1}{4p}}\to0\quad(\varepsilon\to0),\tag{1.3}$$

where $C$ is a positive constant which depends only on $T$, $p$, $|x|_\alpha$ and $|y|$.

Furthermore, if there is no noise in the slow equation of system (1.1) (i.e., $Q_1=0$), then under suitable conditions the weak convergence result is as follows:

• For any initial value $x\in H^\theta$ with $\theta\in(0,1]$, $y\in H$, $\phi\in C_b^2$, $r\in(0,1)$, $\delta\in(0,\tfrac12)$, we have

$$\big|\mathbb{E}[\phi(X^\varepsilon(t))]-\mathbb{E}[\phi(\bar X(t))]\big|\le C\big(1+t^{-\frac{\theta+1}{\theta}+2\delta}\big)\varepsilon^{1-r},\tag{1.4}$$

where $C$ is a positive constant which depends only on $T$, $|x|_\theta$, $|y|$, $\delta$ and $\phi$.

Compared with the strong convergence, the regularity required of the initial value $x$ for the weak convergence is weaker, while the rate of convergence is much better in this case. The idea of the proof follows the procedure inspired by [2], in which the authors consider a relatively simple framework (without the nonlinear term and with $f$ bounded). In our case, it is quite non-trivial to deal with the nonlinear term and the unbounded $f$. The main challenge in studying the strong convergence is how to handle the nonlinear term in the Burgers equation.
To overcome this difficulty, we first give some estimates of the slow component $X_t^\varepsilon$ and the fast component $Y_t^\varepsilon$ in $H$. Secondly, using the smoothness of the semigroup $e^{t\Delta}$ and an interpolation inequality, we further obtain

$$\sup_{\varepsilon\in(0,1)}\mathbb{E}\sup_{t\in[0,T]}|X_t^\varepsilon|_\alpha^p\le C_{p,T},$$

which is a key step in proving (1.3), where $|\cdot|_\alpha$ is a Sobolev norm. Finally, applying a stopping-time argument and following the procedure inspired by [12], we obtain the main result. We also remark that, in recent years, there have been many interesting results on the stochastic Burgers equation [5, 6, 7, 8, 9, 10, 16, 22].

The paper is organized as follows. In the next section, we introduce some notation and assumptions used throughout the paper. In Section 3, we state our main results. Sections 4 and 5 are devoted to proving the strong convergence and the weak convergence respectively.

Throughout the paper, $C$, $C_p$ and $C_{p,T}$ denote positive constants which may change from line to line, where $C_p$ depends on $p$, and $C_{p,T}$ depends on $p,T$.

2 Preliminaries

Denote by $|\cdot|_{L^p}$ the usual norm of the space $L^p(0,1)$, $p\ge1$, and by $|\cdot|_\infty$ the usual supremum norm of $L^\infty(0,1)$. We consider the separable Hilbert space $H=L^2(0,1)$ (with inner product denoted $\langle\cdot,\cdot\rangle$). As usual, for $k\in\mathbb{N}$, $p\ge1$, $W^{k,p}(0,1)$ is the Sobolev space of all functions in $L^p(0,1)$ whose differentials up to order $k$ belong to $L^p(0,1)$. Recall that the usual Sobolev space $W^{k,p}(0,1)$ can be extended to $W^{s,p}(0,1)$ for $s\in\mathbb{R}$. Set $H^k(0,1):=W^{k,2}(0,1)$, and let $H_0^1(0,1)$ be the subspace of $H^1(0,1)$ of all functions whose trace at $0$ and $1$ vanishes. We define the unbounded self-adjoint operator $A$ by

$$Ax=\Delta x=\frac{\partial^2}{\partial\xi^2}x,\qquad x\in D(A)=H^2(0,1)\cap H_0^1(0,1).$$

Note that the operator $A$ is the infinitesimal generator of a strongly continuous semigroup on $H$, which we denote by $e^{tA}$, $t\ge0$.
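As a numerical aside (ours, not part of the paper's argument): the Dirichlet Laplacian $A$ just defined has eigenfunctions $e_k(\xi)=\sqrt2\sin(k\pi\xi)$ with $Ae_k=-k^2\pi^2e_k$, spectral data used repeatedly below. A quick finite-difference sanity check:

```python
import numpy as np

N = 2000                                   # number of interior grid points
h = 1.0 / (N + 1)
xi = np.arange(1, N + 1) * h               # interior nodes of (0, 1)

def e(k):
    """Candidate eigenfunction e_k(xi) = sqrt(2) sin(k pi xi)."""
    return np.sqrt(2.0) * np.sin(k * np.pi * xi)

# Orthonormality in L^2(0,1): Riemann sum of <e_j, e_k> should be delta_{jk}
gram = np.array([[h * np.dot(e(j), e(k)) for k in (1, 2, 3)]
                 for j in (1, 2, 3)])

# Eigenrelation A e_k = -k^2 pi^2 e_k via the standard second difference,
# padding with the Dirichlet boundary values e_k(0) = e_k(1) = 0
k = 3
ek = e(k)
ek_pad = np.concatenate(([0.0], ek, [0.0]))
lap = (ek_pad[2:] - 2.0 * ek_pad[1:-1] + ek_pad[:-2]) / h**2
residual = np.max(np.abs(lap + (k * np.pi) ** 2 * ek))

print(np.round(gram, 6))   # close to the 3x3 identity
print(residual)            # small residual of the eigenvalue relation
```

On a uniform grid the sine modes are in fact exact eigenvectors of the discrete Laplacian, so the residual measures only the $O(h^2)$ gap between the discrete and continuous eigenvalues.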
It is well known that $e^{tA}$ ($t\ge0$) has smoothing properties: for any $s_1,s_2\in\mathbb{R}$ with $s_1\le s_2$ and $r\ge1$, $e^{tA}:W^{s_1,r}(0,1)\to W^{s_2,r}(0,1)$, and there exists a constant $C$ depending on $s_1,s_2,r$ such that

$$|e^{tA}z|_{W^{s_2,r}(0,1)}\le C\big(1+t^{(s_1-s_2)/2}\big)|z|_{W^{s_1,r}(0,1)},\qquad z\in W^{s_1,r}(0,1).\tag{2.5}$$

$(-A)^\alpha$ is the $\alpha$-th power of the operator $-A$, and $|\cdot|_\alpha$ is the norm of $D((-A)^{\alpha/2})$, which is equivalent to the norm of $H^\alpha(0,1):=W^{\alpha,2}(0,1)$. We have $|\cdot|_0=|\cdot|_{L^2}$, which we denote by $|\cdot|$ for simplicity. Then

$$e_k(\xi)=\sqrt2\sin(k\pi\xi),\qquad\xi\in[0,1],\;k\in\mathbb{N},$$

are the eigenfunctions of $A$, with $Ae_k=-\lambda_ke_k$ and $\lambda_k=k^2\pi^2$.

Define the bilinear operator

$$B(x,y):H\times H_0^1(0,1)\to H^{-1}(0,1),\qquad B(x,y)=x\cdot\partial_\xi y,$$

and the trilinear operator

$$b(x,y,z):H\times H_0^1(0,1)\times H\to\mathbb{R},\qquad b(x,y,z)=\int_0^1x(\xi)\,\partial_\xi y(\xi)\,z(\xi)\,d\xi.$$

It is convenient to put $B(x)=B(x,x)$ for $x\in H_0^1(0,1)$.

The following properties of $b(\cdot,\cdot,\cdot)$ and $B(\cdot,\cdot)$ are well known (see for example [9]) and will be used later on.

Lemma 2.1 For any $x,y\in H_0^1(0,1)$,

$$b(x,x,y)=-\frac{1}{2}b(x,y,x),\qquad b(x,y,y)=0.\qquad\Box$$

Lemma 2.2 Suppose $\alpha_i\ge0$ ($i=1,2,3$) satisfies one of the following conditions:
(1) $\alpha_i\ne\tfrac12$ ($i=1,2,3$), $\alpha_1+\alpha_2+\alpha_3\ge\tfrac12$;
(2) $\alpha_i=\tfrac12$ for some $i$, $\alpha_1+\alpha_2+\alpha_3>\tfrac12$.
Then $b$ is continuous from $H^{\alpha_1}(0,1)\times H^{\alpha_2+1}(0,1)\times H^{\alpha_3}(0,1)$ to $\mathbb{R}$, i.e.,

$$|b(x,y,z)|\le C|x|_{\alpha_1}|y|_{\alpha_2+1}|z|_{\alpha_3}.\qquad\Box$$

The following inequalities can be derived from the above lemma.

Corollary 2.1 For any $x\in H_0^1(0,1)$, we have
(1) $|B(x)|\le C|x|_1^2$;
(2) $|B(x)|_{-1}\le C|x|\cdot|x|_1$. $\Box$

Lemma 2.3 For any $x,y\in H_0^1(0,1)$, we have
(1) $|B(x)-B(y)|\le C|x-y|_1(|x|_1+|y|_1)$;
(2) $|B(x)-B(y)|_{-1}\le C|x-y|(|x|_1+|y|_1)$. $\Box$

With the notation introduced above, system (1.1) can be rewritten in the following form:

$$
\begin{cases}
dX_t^\varepsilon=[AX_t^\varepsilon+B(X_t^\varepsilon)+f(X_t^\varepsilon,Y_t^\varepsilon)]\,dt+dW_t^{Q_1}, & X_0^\varepsilon=x,\\
dY_t^\varepsilon=\frac{1}{\varepsilon}[AY_t^\varepsilon+g(X_t^\varepsilon,Y_t^\varepsilon)]\,dt+\frac{1}{\sqrt\varepsilon}\,dW_t^{Q_2}, & Y_0^\varepsilon=y,\\
X_t^\varepsilon(0)=X_t^\varepsilon(1)=Y_t^\varepsilon(0)=Y_t^\varepsilon(1)=0,
\end{cases}\tag{2.6}
$$

where the $Q_1$-Wiener process $W_t^{Q_1}$ is given by

$$W_t^{Q_1}=\sum_{k=1}^\infty\sqrt{\alpha_k}\,\beta_t^ke_k,\qquad t\ge0,$$

where $\alpha_k\ge0$ satisfies $Q_1e_k=\alpha_ke_k$ with $\sum_{k=1}^\infty\alpha_k<+\infty$, and $\{\beta^k\}_{k\in\mathbb{N}}$ is a sequence of mutually independent standard Brownian motions. We also assume $\mathrm{Tr}\,Q_2<\infty$.

Now we make the following basic assumptions on the coefficients $f$ and $g$, in force throughout this paper.

(H1) The functions $f,g:H\times H\to H$ satisfy a global Lipschitz condition, i.e., there exist positive constants $L_f$ and $L_g$ such that for any $x_1,x_2,y_1,y_2\in H$,

$$|f(x_1,y_1)-f(x_2,y_2)|\le L_f\big(|x_1-x_2|+|y_1-y_2|\big)\tag{2.7}$$

and

$$|g(x_1,y_1)-g(x_2,y_2)|\le L_g\big(|x_1-x_2|+|y_1-y_2|\big).\tag{2.8}$$

(H2) The growth rate of the nonlinear term $g$ in the fast equation is smaller than the decay rate of the operator $\Delta$, i.e.,

$$\eta:=\lambda_1-L_g>0.\tag{2.9}$$

Referring to [7], we have:

Theorem 2.1 Suppose that condition (H1) holds. For any given initial values $x,y\in H$, there exists a unique mild solution $\{(X_t^\varepsilon,Y_t^\varepsilon),t\ge0\}$ to system (2.6), and for all $T>0$, $(X^\varepsilon,Y^\varepsilon)\in C([0,T];H)\times C([0,T];H)$, $\mathbb{P}$-a.s.:

$$
\begin{cases}
X_t^\varepsilon=e^{tA}x+\displaystyle\int_0^te^{(t-s)A}B(X_s^\varepsilon)\,ds+\int_0^te^{(t-s)A}f(X_s^\varepsilon,Y_s^\varepsilon)\,ds+\int_0^te^{(t-s)A}\,dW_s^{Q_1},\\[2mm]
Y_t^\varepsilon=e^{tA/\varepsilon}y+\dfrac{1}{\varepsilon}\displaystyle\int_0^te^{(t-s)A/\varepsilon}g(X_s^\varepsilon,Y_s^\varepsilon)\,ds+\dfrac{1}{\sqrt\varepsilon}\int_0^te^{(t-s)A/\varepsilon}\,dW_s^{Q_2}.
\end{cases}\tag{2.10}
$$

At the end of this section, we state the classical Gronwall inequality and a Gronwall-type inequality, which will be used later.

Lemma 2.4 (Gronwall inequality) Let $\alpha,\beta,u$ be real-valued functions defined on $[0,T]$. Assume that $\beta$ and $u$ are continuous and that the negative part of $\alpha$ is integrable on every closed and bounded subinterval of $[0,T]$.
(a) If $\beta$ is non-negative and $u$ satisfies the integral inequality

$$u_t\le\alpha_t+\int_0^t\beta_su_s\,ds,\qquad\forall t\in[0,T],$$

then

$$u_t\le\alpha_t+\int_0^t\alpha_s\beta_s\exp\Big(\int_s^t\beta_r\,dr\Big)ds,\qquad\forall t\in[0,T].$$

(b) If, in addition, the function $\alpha$ is non-decreasing, then

$$u_t\le\alpha_t\exp\Big(\int_0^t\beta_r\,dr\Big),\qquad\forall t\in[0,T].$$

Lemma 2.5 (Gronwall-type inequality) For any given $\alpha\in(0,1)$, $\beta\in(0,1)$, if $f(t)$ is a non-negative real-valued integrable function on $[0,T]$ and the inequality

$$f(t)\le C_1t^{-\alpha}+C_2\int_0^t(t-s)^{-\beta}f(s)\,ds,\qquad\forall t\in[0,T],$$

holds for some positive constants $C_1,C_2$, then there exists some $k\in\mathbb{N}$ satisfying

$$f(t)\le CC_1t^{-\alpha}e^{CC_2^k},\qquad\forall t\in[0,T],$$

where $C$ is a positive constant depending on $\alpha,\beta$ and $T$.

Proof. By iteration and the Fubini theorem, we have

$$
\begin{aligned}
f(t)&\le C_1t^{-\alpha}+C_2\int_0^t(t-s)^{-\beta}\Big[C_1s^{-\alpha}+C_2\int_0^s(s-r)^{-\beta}f(r)\,dr\Big]ds\\
&\le C_1t^{-\alpha}+C_1C_2\int_0^t(t-s)^{-\beta}s^{-\alpha}\,ds+C_2^2\int_0^tf(r)\Big[\int_r^t(t-s)^{-\beta}(s-r)^{-\beta}\,ds\Big]dr\\
&\le CC_1(C_2+1)t^{-\alpha}+CC_2^2\int_0^t(t-r)^{1-2\beta}f(r)\,dr,
\end{aligned}
$$

where $C$ is a constant depending on $\alpha,\beta,T$. If $1-2\beta\ge0$, then we can easily obtain the result by Lemma 2.4. If instead $1-2\beta<0$, then after iterating finitely many times, there exist $\gamma\ge0$ and $k_1,k_2\in\mathbb{N}$ such that

$$f(t)\le CC_1(C_2^{k_1}+1)t^{-\alpha}+CC_2^{k_2}\int_0^t(t-r)^\gamma f(r)\,dr\le CC_1(C_2^{k_1}+1)t^{-\alpha}+CC_2^{k_2}\int_0^tf(r)\,dr.$$

Finally, by Lemma 2.4 we obtain

$$f(t)\le CC_1(C_2^{k_1}+1)t^{-\alpha}e^{CC_2^{k_2}}\le CC_1t^{-\alpha}e^{CC_2^k}$$

for some $k\in\mathbb{N}$. $\Box$

3 Main results

In this paper, we mainly study the speed at which $X^\varepsilon$ converges to $\bar X$, in the strong and in the weak sense respectively. We give the main results in this section and provide the proofs in the next two sections. Further conditions are needed for the strong and weak convergence, stated as follows:

(H3) There exist constants $\alpha\in(1,\tfrac32)$ and $\beta\in(0,\tfrac12)$ such that

$$\sum_{k=1}^\infty\alpha_k\lambda_k^{\alpha+2\beta-1}<+\infty.\tag{3.1}$$

(H4) Assume $f$ and $g$ are twice differentiable with respect to the first and second variables respectively, and the following inequalities hold:
(1) For any $x,y\in H$ and $h,k\in H$,

$$|D_{xx}^2f(x,y)(h,k)|\le C|h|\cdot|k|.$$

(2) For any $x,y\in H$ and $h,k\in H$,

$$|D_yg(x,y)\cdot h|\le C|h|,\qquad|D_{yy}^2g(x,y)(h,k)|\le C|h|\cdot|k|.$$

(3) For any $x,y\in H$, there exists a positive constant $C$ such that

$$|\langle f(x,y),x\rangle|\le C(1+|x|^2).$$

Now, our main results are as follows:

Theorem 3.1 (Strong convergence) Assume that (H1), (H2) and (H3) hold. Then for any $x\in H^\alpha$, $y\in H$, $p,T>0$, we have

$$\mathbb{E}\Big[\sup_{0\le t\le T}|X_t^\varepsilon-\bar X_t|^{2p}\Big]\le C\Big(\frac{1}{-\log\varepsilon}\Big)^{\frac{1}{4p}}\to0\quad(\varepsilon\to0),$$

where $C$ is a positive constant which depends only on $T$, $p$, $|x|_\alpha$ and $|y|$.

Theorem 3.2 (Weak convergence 1) Assume that (H1), (H2) and (H4) hold, and that $Q_1=0$. Then for any $\theta\in(0,1]$, $r\in(0,1)$, $x\in H^\theta$, $y\in H$, $\phi\in C_b^2$, $t\in(0,T]$, $\delta\in(0,\tfrac12)$, and for any small enough $\varepsilon\in(0,1)$, we have

$$\big|\mathbb{E}[\phi(X_t^\varepsilon)]-\mathbb{E}[\phi(\bar X_t)]\big|\le C\big(1+t^{-\frac{\theta+1}{\theta}+2\delta}\big)\varepsilon^{1-r},$$

where $C$ is a positive constant which depends only on $T$, $|x|_\theta$, $|y|$, $\delta$ and $\phi$.

Theorem 3.3 (Weak convergence 2) Under the same assumptions as in Theorem 3.2, but with $\theta\in(1,\tfrac32)$, for any $t\in(0,T]$ and $r\in(0,1)$ we obtain

$$\big|\mathbb{E}[\phi(X^\varepsilon(t))]-\mathbb{E}[\phi(\bar X(t))]\big|\le C\varepsilon^{1-r},$$

where $C$ is a positive constant which depends only on $T$, $|x|_\theta$, $|y|$ and $\phi$.

Remark 3.1 Theorem 3.3 shows that if the initial value $x$ has higher regularity, then the constant controlling the error becomes much simpler, with no singularity as $t\to0$.

Remark 3.2 We assume (3) of (H4) and $Q_1=0$ in studying the weak convergence because of several technical difficulties; see for example Lemma 5.8, which shows that for any $x\in H$ there exists a positive constant $C$ such that $\sup_{t\in[0,T]}|X^\varepsilon(t)|\le C(1+|x|)$. Uniform bounds of this type play an important role in proving Theorem 3.2 and Theorem 3.3; see Subsection 5.5. For the case $Q_1\ne0$, these estimates fail, and at present we do not know how to deal with that case.
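To get a feel for the difference between the two modes of convergence, note that the strong bound decays only logarithmically in $\varepsilon$, while the weak bound decays polynomially. A quick numerical comparison of the shapes of the two bounds (constants dropped; the particular choices $p=1$ and $r=0.1$ are ours, for illustration only):

```python
import math

def strong_bound(eps, p=1):
    """Shape (1/(-log eps))^(1/(4p)) of the strong convergence bound."""
    return (1.0 / (-math.log(eps))) ** (1.0 / (4 * p))

def weak_bound(eps, r=0.1):
    """Shape eps^(1-r) of the weak convergence bound."""
    return eps ** (1.0 - r)

for eps in (1e-2, 1e-4, 1e-8):
    print(eps, strong_bound(eps), weak_bound(eps))
```

Even at $\varepsilon=10^{-8}$ the logarithmic factor is still of order one, while $\varepsilon^{1-r}$ is already tiny, which is why the weak rate is described as much more pleasant.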
4 Strong convergence

In this section, we study the strong convergence. We first give some estimates of the solution $(X_t^\varepsilon,Y_t^\varepsilon)$. Secondly, following the idea inspired by Khasminskii [17], we introduce an auxiliary process $(\hat X_t^\varepsilon,\hat Y_t^\varepsilon)\in H\times H$, give its estimates, and bound the error $X_t^\varepsilon-\hat X_t^\varepsilon$. Finally, we study the averaged equation and, applying a stopping-time argument and following the procedure inspired by [12], bound the error $\hat X_t^\varepsilon-\bar X_t$. Hence the strong convergence is easily proved. Note that we always assume condition (H3) holds in this section.

4.1 Some a priori estimates of $(X_t^\varepsilon,Y_t^\varepsilon)$

First, we prove bounds, uniform with respect to $\varepsilon\in(0,1)$, for the $p$-th moments of the solutions $X_t^\varepsilon$ and $Y_t^\varepsilon$ of system (2.6).

Lemma 4.1 For any $x,y\in H$, $p\ge2$ and $T>0$, there exists a constant $C_{p,T}>0$ such that

$$\sup_{\varepsilon\in(0,1)}\sup_{0\le t\le T}\mathbb{E}|X_t^\varepsilon|^{2p}\le C_{p,T}\big(1+|x|^{2p}+|y|^{2p}\big)\tag{4.1}$$

and

$$\sup_{\varepsilon\in(0,1)}\sup_{0\le t\le T}\mathbb{E}|Y_t^\varepsilon|^{2p}\le C_{p,T}\big(1+|x|^{2p}+|y|^{2p}\big).\tag{4.2}$$

Proof. According to the Itô formula,

$$
\begin{aligned}
\mathbb{E}|Y_t^\varepsilon|^{2p}&=|y|^{2p}+\frac{2p}{\varepsilon}\int_0^t\mathbb{E}\big[|Y_s^\varepsilon|^{2p-2}\langle AY_s^\varepsilon,Y_s^\varepsilon\rangle\big]ds+\frac{2p}{\varepsilon}\int_0^t\mathbb{E}\big[|Y_s^\varepsilon|^{2p-2}\langle g(X_s^\varepsilon,Y_s^\varepsilon),Y_s^\varepsilon\rangle\big]ds\\
&\quad+\frac{p}{\varepsilon}\int_0^t\mathbb{E}\big[|Y_s^\varepsilon|^{2p-2}\big]\mathrm{Tr}Q_2\,ds+\frac{2p(p-1)}{\varepsilon}\int_0^t\mathbb{E}\big[|Y_s^\varepsilon|^{2p-4}|\sqrt{Q_2}\,Y_s^\varepsilon|^2\big]ds.
\end{aligned}
$$

Then there exists a constant $\gamma>0$ such that

$$
\begin{aligned}
\frac{d}{dt}\mathbb{E}|Y_t^\varepsilon|^{2p}&=\frac{2p}{\varepsilon}\mathbb{E}\big[|Y_t^\varepsilon|^{2p-2}\langle AY_t^\varepsilon,Y_t^\varepsilon\rangle\big]+\frac{2p}{\varepsilon}\mathbb{E}\big[|Y_t^\varepsilon|^{2p-2}\langle g(X_t^\varepsilon,Y_t^\varepsilon),Y_t^\varepsilon\rangle\big]\\
&\quad+\frac{p}{\varepsilon}\mathbb{E}\big[|Y_t^\varepsilon|^{2p-2}\big]\mathrm{Tr}Q_2+\frac{2p(p-1)}{\varepsilon}\mathbb{E}\big[|Y_t^\varepsilon|^{2p-4}|\sqrt{Q_2}\,Y_t^\varepsilon|^2\big]\\
&\le-\frac{2p\lambda_1}{\varepsilon}\mathbb{E}|Y_t^\varepsilon|^{2p}+\frac{2p}{\varepsilon}\mathbb{E}\Big\{|Y_t^\varepsilon|^{2p-2}\big[C|Y_t^\varepsilon|+L_g\big(|X_t^\varepsilon|\cdot|Y_t^\varepsilon|+|Y_t^\varepsilon|^2\big)\big]\Big\}\\
&\quad+\frac{p}{\varepsilon}\mathbb{E}\big[|Y_t^\varepsilon|^{2p-2}\big]\mathrm{Tr}Q_2+\frac{2p(p-1)}{\varepsilon}\mathbb{E}\big[|Y_t^\varepsilon|^{2p-2}\big]\mathrm{Tr}Q_2\\
&\le-\frac{p\gamma}{\varepsilon}\mathbb{E}|Y_t^\varepsilon|^{2p}+\frac{C_p}{\varepsilon}\mathbb{E}|X_t^\varepsilon|^{2p}+\frac{C_p}{\varepsilon},
\end{aligned}\tag{4.3}
$$

where the last inequality follows from the fact $\lambda_1-L_g>0$ in (H2) and the Young inequality. Hence, by the comparison theorem,

$$\mathbb{E}|Y_t^\varepsilon|^{2p}\le|y|^{2p}e^{-\frac{p\gamma}{\varepsilon}t}+\frac{C_p}{\varepsilon}\int_0^te^{-\frac{p\gamma}{\varepsilon}(t-s)}\big(1+\mathbb{E}|X_s^\varepsilon|^{2p}\big)ds.\tag{4.4}$$

By the Itô formula again,

$$
\begin{aligned}
|X_t^\varepsilon|^{2p}&=|x|^{2p}+2p\int_0^t|X_s^\varepsilon|^{2p-2}\langle AX_s^\varepsilon,X_s^\varepsilon\rangle ds+2p\int_0^t|X_s^\varepsilon|^{2p-2}\langle B(X_s^\varepsilon),X_s^\varepsilon\rangle ds\\
&\quad+2p\int_0^t|X_s^\varepsilon|^{2p-2}\langle f(X_s^\varepsilon,Y_s^\varepsilon),X_s^\varepsilon\rangle ds+2p\int_0^t|X_s^\varepsilon|^{2p-2}\langle X_s^\varepsilon,dW_s^{Q_1}\rangle\\
&\quad+p\int_0^t|X_s^\varepsilon|^{2p-2}\mathrm{Tr}Q_1\,ds+2p(p-1)\int_0^t|X_s^\varepsilon|^{2p-4}|\sqrt{Q_1}\,X_s^\varepsilon|^2ds.
\end{aligned}
$$

Then, arguing as in (4.3), we have

$$
\begin{aligned}
\frac{d}{dt}\mathbb{E}|X_t^\varepsilon|^{2p}&=2p\,\mathbb{E}\big[|X_t^\varepsilon|^{2p-2}\langle AX_t^\varepsilon,X_t^\varepsilon\rangle\big]+2p\,\mathbb{E}\big[|X_t^\varepsilon|^{2p-2}\langle f(X_t^\varepsilon,Y_t^\varepsilon),X_t^\varepsilon\rangle\big]\\
&\quad+p\,\mathbb{E}\big[|X_t^\varepsilon|^{2p-2}\big]\mathrm{Tr}Q_1+2p(p-1)\,\mathbb{E}\big[|X_t^\varepsilon|^{2p-4}|\sqrt{Q_1}\,X_t^\varepsilon|^2\big]\\
&\le C_p\,\mathbb{E}|X_t^\varepsilon|^{2p}+C_p\,\mathbb{E}|Y_t^\varepsilon|^{2p}+C_p.
\end{aligned}
$$

Hence, by the comparison theorem,

$$\mathbb{E}|X_t^\varepsilon|^{2p}\le|x|^{2p}e^{C_pt}+C_p\int_0^te^{C_p(t-s)}\big(1+\mathbb{E}|Y_s^\varepsilon|^{2p}\big)ds.\tag{4.5}$$

Combining (4.4) and (4.5), for any $t\le T$,

$$\mathbb{E}|Y_t^\varepsilon|^{2p}\le C_{p,T}\big(1+|x|^{2p}+|y|^{2p}\big)+\frac{C_p}{\varepsilon}\int_0^te^{-\frac{p\gamma}{\varepsilon}(t-s)}\int_0^s\mathbb{E}|Y_r^\varepsilon|^{2p}\,dr\,ds+\frac{C_p}{\varepsilon}\int_0^te^{-\frac{p\gamma}{\varepsilon}(t-s)}ds.$$

With a change of variables, we have

$$
\begin{aligned}
\mathbb{E}|Y_t^\varepsilon|^{2p}&\le C_{p,T}\big(1+|x|^{2p}+|y|^{2p}\big)+C_p\int_0^t\Big[\int_0^{\frac{t-r}{\varepsilon}}e^{-p\gamma s}ds\Big]\mathbb{E}|Y_r^\varepsilon|^{2p}\,dr+C_p\int_0^{t/\varepsilon}e^{-p\gamma s}ds\\
&\le C_{p,T}\big(1+|x|^{2p}+|y|^{2p}\big)+C_p\int_0^t\mathbb{E}|Y_r^\varepsilon|^{2p}\,dr.
\end{aligned}
$$

The Gronwall inequality implies

$$\mathbb{E}|Y_t^\varepsilon|^{2p}\le C_{p,T}\big(1+|x|^{2p}+|y|^{2p}\big),$$

which, together with (4.5), also gives

$$\mathbb{E}|X_t^\varepsilon|^{2p}\le C_{p,T}\big(1+|x|^{2p}+|y|^{2p}\big).$$

The proof is complete. $\Box$

In order to estimate the higher-order norm of $X_t^\varepsilon$, we recall the stochastic convolution

$$W_A(t):=\int_0^te^{(t-s)A}\,dW_s^{Q_1}.$$

Then we have the following result:

Lemma 4.2 For any $p,T>0$, we have

$$\mathbb{E}\sup_{0\le t\le T}|W_A(t)|_\alpha^{2p}\le C_{p,T},\tag{4.6}$$

where $\alpha$ is as in (H3).

Proof. By the Hölder inequality, it suffices to prove (4.6) when $p$ is large enough. Using the factorization method, we can write

$$W_A(t)=\frac{\sin\pi\beta}{\pi}\int_0^te^{(t-s)A}(t-s)^{\beta-1}Z_s\,ds,$$

where $\beta$ is as in (H3) and

$$Z_s=\int_0^se^{(s-r)A}(s-r)^{-\beta}\,dW_r^{Q_1}.$$

For any $T>0$, $t\in[0,T]$ and $p$ large enough such that $\frac{2p(1-\beta)}{2p-1}<1$, we have

$$|W_A(t)|_\alpha\le C\Big(\int_0^t(t-s)^{-\frac{2p(1-\beta)}{2p-1}}ds\Big)^{\frac{2p-1}{2p}}|Z|_{L^{2p}(0,T;H^\alpha)}\le C_pt^{\beta-\frac{1}{2p}}|Z|_{L^{2p}(0,T;H^\alpha)}.$$

Then

$$\sup_{t\le T}|W_A(t)|_\alpha^{2p}\le C_{p,T}|Z|_{L^{2p}(0,T;H^\alpha)}^{2p}.\tag{4.7}$$

Notice that $(-A)^{\alpha/2}Z_s\sim N(0,\tilde Q_s)$ is a Gaussian random variable with mean zero and covariance operator

$$\tilde Q_sx=\int_0^sr^{-2\beta}e^{rA}(-A)^\alpha Q_1e^{rA^*}x\,dr.$$

Then for any $p\ge1$, $s>0$, by [6, Corollary 2.17],

$$
\begin{aligned}
\mathbb{E}|(-A)^{\alpha/2}Z_s|^{2p}&\le C_p\big[\mathrm{Tr}(\tilde Q_s)\big]^p=C_p\Big(\sum_k\int_0^sr^{-2\beta}e^{-2r\lambda_k}\lambda_k^\alpha\alpha_k\,dr\Big)^p\\
&=C_p\Big(\sum_k\lambda_k^{\alpha+2\beta-1}\alpha_k\int_0^{2s\lambda_k}r^{-2\beta}e^{-r}\,dr\Big)^p\le C_p\Big(\sum_k\lambda_k^{\alpha+2\beta-1}\alpha_k\Big)^p,
\end{aligned}\tag{4.8}
$$

where the last inequality follows from the fact that

$$\int_0^{2s\lambda_k}r^{-2\beta}e^{-r}\,dr\le\int_0^\infty r^{-2\beta}e^{-r}\,dr<\infty.$$

Hence (4.7) and (4.8) imply

$$\mathbb{E}\sup_{0\le t\le T}|W_A(t)|_\alpha^{2p}\le C_{p,T}\int_0^T\mathbb{E}|Z_s|_\alpha^{2p}\,ds\le C_{p,T}.\qquad\Box$$

Lemma 4.3 For any $x\in H^\alpha$, $y\in H$, $T>0$ and $p>0$, there exists a positive constant $C_{p,T}$ independent of $\varepsilon$ such that

$$\mathbb{E}\sup_{t\in[0,T]}|X_t^\varepsilon|_\alpha^{2p}\le C_{p,T}\big(|x|_\alpha^{2p}+|y|^{2p}+1\big),\tag{4.9}$$

where $\alpha$ is as in (H3).

Proof. Again it suffices to prove (4.9) when $p$ is large enough. Recall that

$$X_t^\varepsilon=e^{tA}x+\int_0^te^{(t-s)A}B(X_s^\varepsilon)\,ds+\int_0^te^{(t-s)A}f(X_s^\varepsilon,Y_s^\varepsilon)\,ds+\int_0^te^{(t-s)A}\,dW_s^{Q_1}.$$

For the first term, it is well known that $|e^{tA}x|_\alpha^{2p}\le|x|_\alpha^{2p}$. For the second term, according to (2.5) and Lemma 2.2, we have the following estimate:

$$\Big|\int_0^te^{(t-s)A}B(X_s^\varepsilon)\,ds\Big|_\alpha\le\int_0^t\big|e^{(t-s)A}B(X_s^\varepsilon)\big|_\alpha\,ds$$
