Bernoulli 19(1), 2013, 137–153
DOI: 10.3150/11-BEJ396

Improving Brownian approximations for boundary crossing problems

ROBERT KEENER
Department of Statistics, University of Michigan, Ann Arbor, MI 48103, USA.
E-mail: [email protected]

Donsker's theorem shows that random walks behave like Brownian motion in an asymptotic sense. This result can be used to approximate expectations associated with the time and location of a random walk when it first crosses a nonlinear boundary. In this paper, correction terms are derived to improve the accuracy of these approximations.

Keywords: asymptotic expansion; Donsker's theorem; excess over the boundary; random walk; stopping times

This is an electronic reprint of the original article published by the ISI/BS in Bernoulli, 2013, Vol. 19, No. 1, 137–153. This reprint differs from the original in pagination and typographic detail. 1350-7265 © 2013 ISI/BS

1. Introduction and main results

Let $X, X_1, X_2, \ldots$ be i.i.d. with mean zero and unit variance; take $S_k = X_1 + \cdots + X_k$, $k \ge 1$, with $S_0 = 0$; and let $W(t)$, $t \ge 0$, be standard Brownian motion. By Donsker's theorem, if $W_n$ is continuous and piecewise linear with
\[
W_n(k/n) = S_k/\sqrt{n}, \qquad k = 0, 1, \ldots,
\]
then $W_n \Rightarrow W$ in $C[0,\infty)$ as $n \to \infty$. Let $b$ be a smooth function on $[0,\infty)$ with $b(0) > 0$, such that
\[
\tau_0 = \inf\{t \ge 0\colon W(t) \ge b(t)\}
\]
is finite almost surely, and define
\[
\tau_n = \inf\{k/n \ge 0\colon W_n(k/n) \ge b(k/n)\}.
\]
Defining boundary levels
\[
b_k = b_{k,n} = \sqrt{n}\, b(k/n), \tag{1}
\]
this stopping time can be written as
\[
\tau_n = \inf\{k \ge 0\colon S_k \ge b_k\}/n.
\]
As the form suggests, $\tau_n \Rightarrow \tau_0$ as $n \to \infty$. This can be established by introducing $\tilde\tau_n = \inf\{t \ge 0\colon W_n(t) \ge b(t)\}$, arguing that $\tau_n - \tilde\tau_n \to 0$ in probability, and using the continuous mapping theorem, Theorem 5.1 of Billingsley [2], to show that $\tilde\tau_n \Rightarrow \tau_0$. Note that the Brownian path $W(\cdot)$ will be a continuity point for the transformation $W(\cdot) \mapsto \tau_0$ whenever $\tau_0 = \inf\{t \ge 0\colon W(t) > b(t)\}$, and this holds with probability one by the strong Markov property. We also have $W_n(\tau_n) - b(\tau_n) \to 0$ in probability, and so
\[
(\tau_n, W_n(\tau_n)) \Rightarrow (\tau_0, W(\tau_0))
\]
as $n \to \infty$. Thus if $f$ is a bounded continuous function,
\[
Ef(\tau_n, W_n(\tau_n)) \to Ef(\tau_0, W(\tau_0)). \tag{2}
\]
For large $n$, the limit here provides a natural approximation for the expectation on the left-hand side. The main result of this paper provides correction terms of order $1/\sqrt{n}$, improving this approximation.

The excess over the boundary,
\[
R_n = S_{n\tau_n} - \sqrt{n}\, b(\tau_n) = \sqrt{n}\,[W_n(\tau_n) - b(\tau_n)],
\]
plays an important role in this analysis. The excess over the boundary also plays a central role in nonlinear renewal theory, where the law of large numbers drives the leading order approximation. See Woodroofe [20] or Siegmund [16] for a discussion and applications to sequential analysis. With the Brownian motion scaling considered in this paper, results on improved approximations and the excess over the boundary are given by Siegmund [15], Siegmund and Yuh [17], Yuh [21] and Hogan [6–9]. Siegmund [16] suggests various applications of this theory to sequential analysis; Broadie et al. [3], Broadie et al. [4] and Kou [12] use it to study options pricing; and Glasserman and Liu [5] consider its use for inventory control. With the exception of Hogan [6, 7], stopping boundaries in these papers are linear.
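The quantities just defined are straightforward to simulate. The following minimal sketch is an editorial illustration, not part of the paper's analysis; it assumes centred standard-exponential increments (so $EX = 0$ and $EX^2 = 1$) and the hypothetical boundary $b(t) = 1 - t/2$, runs the walk $S_k$ until the first $k$ with $S_k \ge b_k = \sqrt{n}\,b(k/n)$, and reports $\tau_n$, $W_n(\tau_n)$ and the excess $R_n$.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossing(n, b, max_steps=10**6):
    """Simulate S_k = X_1 + ... + X_k until the first k with S_k >= b_k,
    where b_k = sqrt(n) * b(k/n), and return the stopping time tau_n = k/n
    together with the excess R_n = S_k - b_k.  The increments are centred
    standard exponentials, an arbitrary mean-zero, unit-variance choice."""
    sqn, s = np.sqrt(n), 0.0
    for k in range(max_steps + 1):
        bk = sqn * b(k / n)
        if s >= bk:
            return k / n, s - bk
        s += rng.exponential(1.0) - 1.0
    raise RuntimeError("boundary not crossed within max_steps")

b = lambda t: 1.0 - 0.5 * t            # hypothetical smooth boundary, b(0) > 0
n = 100
tau_n, R_n = crossing(n, b)
W_tau = b(tau_n) + R_n / np.sqrt(n)    # W_n(tau_n) = S_{n tau_n} / sqrt(n)
print(f"tau_n = {tau_n:.2f}, W_n(tau_n) = {W_tau:.3f}, R_n = {R_n:.3f}")
```

A linearly decreasing boundary is used only so that the crossing happens after a moderate number of steps; any smooth $b$ with $b(0) > 0$ for which $\tau_0$ is finite almost surely would do.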
To appreciate the role of the excess $R_n$ in improving (2), note that if $f(t,x) = h(t)[x - b(t)]$, then $Ef(\tau_n, W_n(\tau_n)) = Eh(\tau_n)R_n/\sqrt{n}$. Hogan [6] derives the limiting joint distribution for $R_n$ and $\tau_n$; they are asymptotically independent, and the limiting distribution for $R_n$ has mean
\[
\rho = \frac{ES_{T_0}^2}{2ES_{T_0}}, \tag{3}
\]
where $T_0$ is the ladder time
\[
T_0 = \inf\{k > 0\colon S_k \ge 0\}.
\]
Hogan's argument is quite delicate. It is based on conditioning on a stopping time with a boundary just slightly less than the boundary for $\tau_n$. By contrast, the approach pursued here is more global and analytic in character, but relies on smoothness of $f$ and $b$ to a greater extent. Formulas to calculate $\rho$ numerically are given by Siegmund [16] and Keener [11].

An important special case of (2) would be first passage probabilities, $P(\tau_n \le t)$. The regularity conditions here require differentiable $f$, so this case is formally excluded (although our result would suggest an approximation). Refined approximations for these probabilities are also suggested by Hogan [6], but his derivation is heuristic and assumes $EX^3 = 0$.

The limit in (2) can be found by solving the heat equation. To describe its relevance, let $Y = Y(t,x)$ be a process starting at time $t$ and position $x$ given by
\[
Y_s = Y_s(t,x) = x + W(s - t), \qquad s \ge t;
\]
let $\tau = \tau(t,x)$ be stopping times given by
\[
\tau = \tau(t,x) = \inf\{s \ge t\colon Y_s \ge b(s)\};
\]
and define
\[
u(t,x) = Ef(\tau, Y_\tau), \qquad t \ge 0,\ x \le b(t).
\]
Noting that $\tau_0 = \tau(0,0)$ and $W(\tau_0) = Y_{\tau(0,0)}$, the limit $Ef(\tau_0, W(\tau_0))$ in (2) is $u(0,0)$. By the Feynman–Kac formula (Kac [10]), $u$ satisfies the heat equation
\[
u_t + \tfrac{1}{2} u_{xx} = 0
\]
in the region $\{(t,x)\colon t \ge 0,\ x < b(t)\}$, with boundary condition $u(t, b(t)) = f(t, b(t))$. Furthermore, $u$ is the unique solution in a suitable class of functions; see Krylov [13] or Bass [1]. In practice, $u(0,0)$ can be computed by numerical solution of the heat equation. In the sequel, continuity and differentiability of $u$ will play an important role.

Boundary effects associated with the excess $R_n$ only arise (to order $o(1/\sqrt{n})$) when $f_x$ and $u_x$ disagree along the boundary. Let $\Delta(t)$ denote the difference
\[
\Delta(t) = f_x(t, b(t)) - u_x(t, b(t)-), \qquad t > 0,
\]
and decompose $f$ as the sum $f_0 + f_1$ with
\[
f_0(t,x) = f(t,x) - \Delta(t)(x - b(t))
\]
and
\[
f_1(t,x) = \Delta(t)(x - b(t)).
\]
Since $u$ and $u_x$ agree with $f_0$ and $\partial f_0/\partial x$ along the boundary, it seems appropriate to view $u(0,0)$ as an approximation for $Ef_0(\tau_n, W_n(\tau_n))$. It is then natural and convenient to extend $u$ above the boundary, defining
\[
\bar u(t,x) = \begin{cases} u(t,x), & x \le b(t); \\ f_0(t,x), & x > b(t). \end{cases}
\]
With this convention, $\bar u$ and $\bar u_x$ are both continuous at the boundary. Note also that
\[
Ef_1(\tau_n, W_n(\tau_n)) = \frac{1}{\sqrt{n}} ER_n \Delta(\tau_n).
\]

Theorem 1.1. Assume:

1. The distribution of $X$ is strongly non-lattice (or satisfies Cramér's condition C),
\[
\limsup_{|t| \to \infty} |Ee^{itX}| < 1,
\]
and $EX = 0$, $EX^2 = 1$ and $EX^4 < \infty$.
2. The stopping times $\tau_n$, $n \ge 1$, are uniformly integrable.
3. The boundary function $b$ has a bounded first derivative and $b(0) > 0$.
4. The function $f$ and its first and second order partial derivatives are bounded and continuous.
5. The functions $u$, $u_x$, $u_{xx}$, $u_{xxx}$, $u_{xxxx}$, $u_t$ and $u_{tt}$ are bounded and continuous.

Then
\[
Ef(\tau_n, W_n(\tau_n)) = Ef(\tau_0, W(\tau_0)) + \frac{EX^3}{6\sqrt{n}}\, E\int_0^{\tau_0} u_{xxx}(t, W(t))\,\mathrm{d}t + \frac{\rho}{\sqrt{n}}\, E\Delta(\tau_0) + o(1/\sqrt{n})
\]
as $n \to \infty$.

The second assumption will hold if $b(t) + \epsilon t \to -\infty$ for some $\epsilon > 0$. If $b$ and $f$ are sufficiently smooth, then the final assumption follows from standard Hölder estimates for solutions of parabolic differential equations; see, for instance, Problem 4.5 of Lieberman [14].
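The constant $\rho$ in the last correction term of Theorem 1.1 is defined by (3) through the ladder variable $T_0$; Siegmund [16] and Keener [11] give formulas for evaluating it. The sketch below is only a crude Monte Carlo alternative under an illustrative increment law (centred standard exponentials, not prescribed by the paper): it uses the fact that the ladder epochs of a mean-zero walk are its record times, so successive record values differ by independent copies of $S_{T_0}$. For this particular increment law the ladder height is standard exponential by memorylessness, so the estimate should land near $\rho = 1$.

```python
import numpy as np

rng = np.random.default_rng(1)

# One long mean-zero, unit-variance random walk; centred standard-exponential
# increments are an arbitrary illustrative choice for X.
S = np.concatenate(([0.0], np.cumsum(rng.exponential(1.0, 5_000_000) - 1.0)))

# Ladder epochs are the record times of the walk started at S_0 = 0, so the
# spacings between successive record values are i.i.d. copies of S_{T_0}.
running_max = np.maximum.accumulate(S)
heights = np.diff(S[S >= running_max])

rho_hat = np.mean(heights**2) / (2.0 * np.mean(heights))
print(f"{heights.size} ladder heights; Monte Carlo estimate of rho: {rho_hat:.3f}")
```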
The heat equation for $u$ can be derived, at least informally, by conditioning a short time interval into the future. There is an analogous equation in discrete time. Define
\[
\tau_n(t,x) = \inf\{t + k/n\colon x + S_k/\sqrt{n} \ge b(t + k/n),\ k = 0, 1, \ldots\}
\]
and
\[
u_n(t,x) = Ef_0(\tau_n(t,x), x + S_{n\tau_n(t,x)}/\sqrt{n}).
\]
Conditioning on $X_1$,
\[
u_n(t,x) = \begin{cases} f_0(t,x), & x \ge b(t); \\ Eu_n(t + 1/n, x + X/\sqrt{n}), & x < b(t). \end{cases} \tag{4}
\]
Unfortunately, with integration against the distribution of $X$, this convolution-type equation is usually less tractable numerically than the heat equation.

Theorem 1.1 evolved from my attempts to improve $\bar u$ as an approximation for $u_n$ by imitating the matched asymptotic expansions used to study boundary effects in partial differential equations. The method might also be viewed as martingale approximation, with bounds for potential or renewal measures playing a central role in the proofs.

To study the error of $u(0,0) = \bar u(0,0)$ as an approximation for $Ef_0(\tau_n, W_n(\tau_n))$, define functions
\[
e_n(t,x) = \begin{cases} E\bar u(t + 1/n, x + X/\sqrt{n}) - \bar u(t,x), & x < b(t); \\ 0, & x \ge b(t). \end{cases} \tag{5}
\]
Writing
\[
\begin{aligned}
Ef_0(\tau_n, W_n(\tau_n)) &= E\bar u(\tau_n, W_n(\tau_n)) \\
&= \bar u(0,0) + E\sum_{k=0}^{n\tau_n - 1} [\bar u((k+1)/n, S_{k+1}/\sqrt{n}) - \bar u(k/n, S_k/\sqrt{n})] \\
&= \bar u(0,0) + E\sum_{k=0}^{n\tau_n - 1} e_n(k/n, S_k/\sqrt{n}),
\end{aligned} \tag{6}
\]
a correction term for the approximation $\bar u(0,0)$ will be sought by approximating the expected sum in this equation. Details for this calculation are given in Section 2. The approximation for $Ef_1(\tau_n, W_n(\tau_n))$ is derived in Section 3.

2. An approximation for $Ef_0(\tau_n, W_n(\tau_n))$

Lemma 2.1. Under the assumptions of Theorem 1.1,
\[
E\bar u(t, x + X/\sqrt{n}) = \bar u(t,x) + \frac{\bar u_{xx}(t,x)}{2n} + \frac{EX^3 \bar u_{xxx}(t,x)}{6n\sqrt{n}} + O(1/n^2) + O\!\left(\frac{1/n}{1 + n[b(t) - x]^2}\right) \tag{7}
\]
as $n \to \infty$, uniformly for $t \ge 0$, $x < b(t)$. From this,
\[
e_n(t,x) = \frac{EX^3 \bar u_{xxx}(t,x)}{6n\sqrt{n}} + O(1/n^2) + O\!\left(\frac{1/n}{1 + n[b(t) - x]^2}\right)
\]
as $n \to \infty$, uniformly for $t \ge 0$, $x < b(t)$.

Proof. By Taylor expansion of $u$, on $\{x + X/\sqrt{n} \le b(t)\}$ we have
\[
\bar u(t, x + X/\sqrt{n}) = \bar u(t,x) + \frac{X\bar u_x(t,x)}{\sqrt{n}} + \frac{X^2\bar u_{xx}(t,x)}{2n} + \frac{X^3\bar u_{xxx}(t,x)}{6n\sqrt{n}} + O(X^4/n^2). \tag{8}
\]
Lagrange's formula for the remainder will involve $\bar u_{xxxx}$ at an intermediate value $x^*$ between $x$ and $x + X/\sqrt{n}$, and from this it is clear that this equation holds uniformly for $t \ge 0$, $x < b(t)$. Since $\bar u_{xx}(t,x)$ exists unless $x = b(t)$ and is bounded, on $\{x + X/\sqrt{n} > b(t)\}$,
\[
\bar u(t, x + X/\sqrt{n}) = \bar u(t,x) + \frac{X\bar u_x(t,x)}{\sqrt{n}} + O(X^2/n), \tag{9}
\]
as $n \to \infty$. Again, this will hold uniformly for $t \ge 0$, $x < b(t)$. Noting that
\[
\frac{|X|^3}{n\sqrt{n}} \le \frac{X^2}{n} + \frac{X^4}{n^2},
\]
we can combine (8) and (9) to obtain
\[
\bar u(t, x + X/\sqrt{n}) = \bar u(t,x) + \frac{X\bar u_x(t,x)}{\sqrt{n}} + \frac{X^2 \bar u_{xx}(t,x)}{2n} + \frac{X^3 \bar u_{xxx}(t,x)}{6n\sqrt{n}} + O(X^4/n^2) + O(X^2/n)\, I\{X > \sqrt{n}(b(t) - x)\}.
\]
The first assertion (7) follows by integrating against the distribution of $X$, noting that
\[
E[X^2;\ X > \sqrt{n}(b(t) - x)] \le \min\left\{\frac{EX^4}{n[b(t) - x]^2},\ EX^2\right\} \le \frac{1 + EX^4}{1 + n[b(t) - x]^2}.
\]
Here and in the sequel, $E[Y; B] \stackrel{\mathrm{def}}{=} E(Y 1_B)$.

If $x < b(t)$ and $x < b(t + 1/n)$, then, by (5) and (7),
\[
e_n(t,x) = \bar u(t + 1/n, x) - \bar u(t,x) + \frac{\bar u_{xx}(t + 1/n, x)}{2n} + \frac{EX^3 \bar u_{xxx}(t + 1/n, x)}{6n\sqrt{n}} + O(1/n^2) + O\!\left(\frac{1/n}{1 + n[b(t + 1/n) - x]^2}\right).
\]
In this case, the second assertion follows by the Taylor expansion
\[
\bar u(t + 1/n, x) = \bar u(t,x) + \frac{1}{n}\bar u_t(t,x) + O(1/n^2) = \bar u(t,x) - \frac{1}{2n}\bar u_{xx}(t,x) + O(1/n^2),
\]
and because
\[
\frac{1 + n[b(t) - x]^2}{1 + n[b(t + 1/n) - x]^2}
\]
is uniformly bounded as $b'$ is bounded. If, instead, $x \ge b(t + 1/n)$, but $x < b(t)$, then $n[b(t) - x]^2 \to 0$ and $n[b(t + 1/n) - x]^2 \to 0$, and the asymptotic bound holds because $\bar u(t + 1/n, x) - \bar u(t,x) = O(1/n)$. □

Define
\[
N_d = N_d(n) = \#\{k < n\tau_n\colon S_k > b_k - d\},
\]
the number of times the walk is within distance $d$ of the boundary before stopping.
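Before bounding $EN_d$, here is a quick Monte Carlo look at how this count behaves; it is an illustration only, reusing the hypothetical boundary and increment law from the earlier sketch, and it simulates on a fixed horizon, which introduces a small truncation bias. Lemma 2.2 below bounds $EN_d$ by a constant multiple of $1 + d^2$, tabulated alongside for comparison.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_Nd(n, d_values, b, reps=2000, horizon=6.0):
    """Monte Carlo estimate of E N_d = E #{k < n*tau_n : S_k > b_k - d} for
    several d at once.  Walks that have not crossed by time `horizon` are
    truncated there (a small bias this sketch ignores).  Increments are
    centred standard exponentials, an illustrative choice."""
    m = int(horizon * n)
    bk = np.sqrt(n) * b(np.arange(m) / n)                      # boundary levels b_k
    X = rng.exponential(1.0, (reps, m)) - 1.0
    S = np.concatenate([np.zeros((reps, 1)), np.cumsum(X, axis=1)[:, :-1]], axis=1)
    alive = np.cumsum(S >= bk, axis=1) == 0                    # marks k < n*tau_n
    return {d: np.sum(alive & (S > bk - d), axis=1).mean() for d in d_values}

b = lambda t: 1.0 - 0.5 * t
for d, est in mean_Nd(100, (1.0, 2.0, 4.0, 8.0), b).items():
    print(f"d = {d:4.1f}   E N_d ~ {est:7.1f}   1 + d^2 = {1 + d*d:5.1f}")
```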
The following result is essentially due to Hogan [6]. It slightly improves a bound given in the proof for Lemma 1.1 in his paper.

Lemma 2.2. With the assumptions of Theorem 1.1, there exists a finite constant $K \ge 0$ such that
\[
EN_d \le K(1 + d^2),
\]
for all $n \ge 1$ and $d > 0$. Also, if
\[
M_B(\alpha) = \#\{k \le \alpha n\tau_n\colon b_k - S_k \in B\},
\]
then there exists a finite constant $K > 0$ such that
\[
EM_B(\alpha) \le K P(M_B(\alpha) \ge 1)(1 + (\sup B)^2),
\]
for all $n \ge 1$, all $\alpha > 0$ and all $B \subset \mathbb{R}$.

Proof. Without loss of generality, let $d$ be a positive integer. By the central limit theorem,
\[
P\{S_{n^2} > (1 + \|b'\|_\infty)n\} \ge \gamma > 0, \tag{10}
\]
for all $n$ sufficiently large, say $n \ge n_0$. Since the $\tau_n$ are uniformly integrable (by the second assumption of Theorem 1.1) and $N_d \le n\tau_n$, we can assume that $n_0 d^2 \le n$. Define
\[
N_{m,d} = \#\{k \le m\colon k < n\tau_n,\ S_k > b_k - d\},
\]
and let
\[
\nu_{j,d} = \inf\{m\colon N_{m,d} = j\},
\]
so the $j$th time the walk is within $d$ of the boundary happens on step $\nu_{j,d}$. Note that $N_d \ge j + n_0 d^2$ implies the walk is below the boundary at time $\nu_{j,d} + n_0 d^2$, that is,
\[
S_{\nu_{j,d} + n_0 d^2} < b_{\nu_{j,d} + n_0 d^2},
\]
which, in turn, implies
\[
S_{\nu_{j,d} + n_0 d^2} - S_{\nu_{j,d}} < b_{\nu_{j,d} + n_0 d^2} - b_{\nu_{j,d}} + d \le d + \frac{n_0 d^2 \|b'\|_\infty}{\sqrt{n}} \le \sqrt{n_0}\, d(1 + \|b'\|_\infty).
\]
But $S_{\nu_{j,d} + n_0 d^2} - S_{\nu_{j,d}}$ is independent of $\{N_d \ge j\}$. So using this bound and (10),
\[
P(N_d \ge j + n_0 d^2) \le P(N_d \ge j)(1 - \gamma).
\]
Iterating this,
\[
P(N_d \ge 1 + jn_0 d^2) = P(N_d \ge 1 + (j-1)n_0 d^2 + n_0 d^2) \le P(N_d \ge 1 + (j-1)n_0 d^2)(1 - \gamma) \le \cdots \le P(N_d \ge 1)(1 - \gamma)^j, \qquad j = 0, 1, \ldots.
\]
Hence
\[
EN_d = \int_0^\infty P(N_d \ge x)\,\mathrm{d}x \le P(N_d \ge 1)\left[1 + \int_1^\infty (1 - \gamma)^{\lfloor (x-1)/(n_0 d^2) \rfloor}\,\mathrm{d}x\right] = P(N_d \ge 1)\left[1 + \frac{n_0 d^2}{\gamma}\right].
\]
The proof of the bound for $EM_B(\alpha)$ is the same. □

Corollary 2.3. Let $c_k = c_{k,n}$, $k \ge 0$, $n \ge 1$ be constants. Define
\[
\Lambda = \sup_{k,n}(b_k - c_k),
\]
and let $g$ be a non-negative function on $(-\infty, \Lambda]$. If $\Lambda < \infty$, $\|g\|_\infty < \infty$, and $g(x) \to 0$ as $x \to -\infty$,
\[
\frac{1}{n} E\sum_{k=0}^{n\tau_n - 1} g(S_k - c_k) \to 0,
\]
as $n \to \infty$. If, in addition, $g$ is non-decreasing,
\[
E\sum_{k=0}^{n\tau_n - 1} g(S_k - c_k) \le K\left[g(\Lambda) + 2\int_{-\infty}^0 |x|\, g(x + \Lambda)\,\mathrm{d}x\right],
\]
where $K$ is the constant in Lemma 2.2.

When this corollary is used later, $c_k$ will be either $b_k$ or $b_{k+1}$. When $c_k = b_k$, $\Lambda$ is zero, and when $c_k = b_{k+1}$, $\Lambda \le \|b'\|_\infty$.

Proof of Corollary 2.3. For the first assertion, for any $d > 0$,
\[
g(S_k - c_k) \le \|g\|_\infty I\{S_k > b_k - d\} + \sup_{x \in (-\infty, \Lambda - d]} g(x).
\]
Summing over $k$ and bounding the expectation using Lemma 2.2,
\[
\frac{1}{n} E\sum_{k=0}^{n\tau_n - 1} g(S_k - c_k) \le \frac{1}{n}\|g\|_\infty K(1 + d^2) + \sup_{x \in (-\infty, \Lambda - d]} g(x)\, E\tau_n,
\]
and the result follows because $d$ can be arbitrarily large.

In the second assertion, we can assume without loss of generality that $g$ is right continuous and write
\[
g(y) = \int I\{x \le y\}\,\mathrm{d}g(x).
\]
By Fubini's theorem and Lemma 2.2,
\[
\begin{aligned}
E\sum_{k=0}^{n\tau_n - 1} g(S_k - c_k) &\le E\sum_{k=0}^{n\tau_n - 1} g(S_k - b_k + \Lambda) \\
&\le \int E\sum_{k \ge 0} I\{x < S_k - b_k + \Lambda,\ k < n\tau_n\}\,\mathrm{d}g(x) \\
&= \int EN_{\Lambda - x}\,\mathrm{d}g(x) \\
&\le K\int_{-\infty}^\Lambda [1 + (\Lambda - x)^2]\,\mathrm{d}g(x) \\
&= K\left[g(\Lambda) + 2\int_{-\infty}^0 |x|\, g(x + \Lambda)\,\mathrm{d}x\right]. \qquad\Box
\end{aligned}
\]

The second assertion in Corollary 2.3 is useless when the integral in the bound diverges, but, in certain cases, it gives sharper results than the first assertion. The next corollary considers a specific function of interest later.

Corollary 2.4. With the assumptions of Theorem 1.1,
\[
E\sum_{k=0}^{n\tau_n - 1} \frac{1}{1 + [b_k - S_k]^2} = O(\log n)
\]
as $n \to \infty$.

Proof. If $0 \le b_k - S_k \le \sqrt{n}$,
\[
\frac{1}{1 + [b_k - S_k]^2} = \frac{1}{1 + n} + \int I\{b_k - S_k < x < \sqrt{n}\}\, \frac{2x}{(1 + x^2)^2}\,\mathrm{d}x,
\]
and so
\[
\sum_{k=0}^{n\tau_n - 1} \frac{1}{1 + [b_k - S_k]^2} \le \tau_n + \int_0^{\sqrt{n}} \frac{2xN_x}{(1 + x^2)^2}\,\mathrm{d}x.
\]
Using Lemma 2.2,
\[
E\sum_{k=0}^{n\tau_n - 1} \frac{1}{1 + [b_k - S_k]^2} \le E\tau_n + O(1)\int_0^{\sqrt{n}} \frac{2x}{1 + x^2}\,\mathrm{d}x = O(\log n). \qquad\Box
\]
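The logarithmic growth in Corollary 2.4 is also easy to probe numerically. The sketch below is again an editorial illustration under the same hypothetical boundary and increment law, with walks truncated at a fixed horizon (a small bias the sketch ignores); it estimates the expected sum for several $n$ and prints $\log n$ alongside.

```python
import numpy as np

rng = np.random.default_rng(3)

def proximity_sum(n, b, reps=250, horizon=6.0):
    """Monte Carlo estimate of E sum_{k < n*tau_n} 1/(1 + (b_k - S_k)^2), the
    quantity bounded in Corollary 2.4.  Walks not crossed by `horizon` are
    truncated there; increments are centred standard exponentials
    (illustrative choice)."""
    m = int(horizon * n)
    bk = np.sqrt(n) * b(np.arange(m) / n)
    X = rng.exponential(1.0, (reps, m)) - 1.0
    S = np.concatenate([np.zeros((reps, 1)), np.cumsum(X, axis=1)[:, :-1]], axis=1)
    alive = np.cumsum(S >= bk, axis=1) == 0                    # marks k < n*tau_n
    return np.sum(alive / (1.0 + (bk - S) ** 2), axis=1).mean()

b = lambda t: 1.0 - 0.5 * t
for n in (25, 100, 400, 1600):
    print(f"n = {n:5d}   E sum ~ {proximity_sum(n, b):7.2f}   log n = {np.log(n):.2f}")
```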
The final corollary gives uniform integrability for moments of $R_n$.

Corollary 2.5. With the assumptions of Theorem 1.1, if $E|X|^{p+2} < \infty$, $R_n^p$, $n \ge 1$, are uniformly integrable.

Proof. Conditioning on $\mathcal{F}_k = \sigma(X_1, \ldots, X_k)$, if $c > 0$,
\[
E[R_n^p;\ R_n \ge c] = \sum_{k \ge 0} E[(S_k + X_{k+1} - b_{k+1})^p;\ k < n\tau_n,\ S_k + X_{k+1} - b_{k+1} \ge c] = E\sum_{k < n\tau_n} g(S_k - b_{k+1}),
\]
where
\[
g(x) = E[(x + X)^p;\ x + X \ge c].
\]
This function is increasing and right continuous. Taking $\Lambda = \sup_{k,n}(b_k - b_{k+1}) \le \|b'\|_\infty$, by Fubini's theorem,
\[
\int_{-\infty}^0 |x|\, g(x + \Lambda)\,\mathrm{d}x = E\int |x|(x + \Lambda + X)^p I\{c - X - \Lambda \le x < 0\}\,\mathrm{d}x
= E\left[\frac{(X + \Lambda)^{p+2} - c^{p+2}}{(p+1)(p+2)} - \frac{c^{p+1}(X + \Lambda - c)}{p+1};\ X + \Lambda \ge c\right].
\]
This expectation tends to zero as $c \to \infty$ by dominated convergence, as does $g(\Lambda)$, and uniform integrability follows from the bound in Corollary 2.3. □

Theorem 2.6. Under the assumptions of Theorem 1.1,
\[
Ef_0(\tau_n, W_n(\tau_n)) = \bar u(0,0) + \frac{EX^3}{6\sqrt{n}}\, E\int_0^{\tau_0} \bar u_{xxx}(t, W(t))\,\mathrm{d}t + o(1/\sqrt{n}).
\]

Proof. Because
\[
\frac{1}{n}\sum_{k=0}^{n(T \wedge \tau_n) - 1} \bar u_{xxx}(k/n, S_k/\sqrt{n}) = \int_0^{T \wedge \tau_n} \bar u_{xxx}(\lfloor nt \rfloor/n, W_n(\lfloor nt \rfloor/n))\,\mathrm{d}t,
\]
