ebook img

Jump Locations of Jump-Diffusion Processes with State-Dependent Rates PDF

0.52 MB·
Save to my drive
Quick download
Download
Most books are stored in the elastic cloud where traffic is expensive. For this reason, we have a limit on daily download.

Preview Jump Locations of Jump-Diffusion Processes with State-Dependent Rates

Jump Locations of Jump-Diffusion Processes with State-Dependent Rates Christopher E. Miles∗ and James P. Keener Department of Mathematics, University of Utah 155 South 1400 East, Salt Lake City, Utah 84112, USA (Dated: January 10, 2017) We propose a general framework for studying jump-diffusion systems driven by both Gaussian noiseandajumpprocesswithstate-dependentintensity. Ofparticularnaturalinterestarethejump locations: thesystemevaluatedatthejumptimes. However,thestate-dependenceofthejumprate provides direct coupling between the diffusion and jump components, making disentangling the two to study individually difficult. We provide an iterative map formulation of the sequence of 7 distributions of jump locations. Computation of these distributions allows for the extraction of the 1 interjump time statistics. These quantities reveal a relationship between the long-time distribution 0 of jump location and the stationary density of the full process. We provide a few examples to 2 demonstrate the analytical and numerical tools stemming from the results proposed in the paper, n including an application that shows a non-monotonic dependence on the strength of diffusion. a J 8 I. INTRODUCTION modifying the state, which jump-diffusion allows. Diffu- sion with switching, which bares resemblance to jump- h] Stochastic models driven by both Gaussian noise (dif- diffusion, has also been studied [18] but is distinctly dif- c fusion)andadiscretejumpcomponent,orsocalledjump- ferent in the role of discrete noise and not addressed in e diffusion models have been used in a wide variety of ap- this work. m plications. Perhapsmostcommonly,thesejump-diffusion It is worth mentioning the issue of blowup in these - models are used in mathematical finance [2–5] to de- models. That is, for some choices of the diffusion and t a scribe frequent small transactions with occasional larger jump components, the system will experience an infinite st ones. Jump-diffusiondescriptionshavealsoserveduseful number of jumps or blowup (or both) in finite time, or . in neuroscience, typically describing the noisy buildup is said to be unstable. Although thoroughly studied[19– t a of voltage in a neuron leading to stochastic firing [6– 21] and shown subtle even for relatively simple models, m 9]. Other applications include biophsyical descriptions stability is deliberately not focused upon in this work. - of movement of chromosomes [10] and the interaction However, in the appendix of this work, we present some d between soil moisture and rainfall events [11]. Also rele- thoughtsonhowtheproposedframeworkcouldprovidea n vant to the results described in this paper is a subset of unique perspective in studying these phenomena, as this o c jump-diffusion models that neglect Gaussian noise and is primarily a partial differential equations (PDEs) for- [ are driven by the dynamics of the jump process alone. mulation rather than the existing path-wise approaches. This type of models appears in an equally eclectic vari- If a jump process is state-dependent, then the distri- 1 v ety of applications [12–15]. bution (or statistics) of the jump locations can be stud- 0 Of particular interest in this work are jump-diffusion ied. That is, we refer to the locations as the state of the 2 processeswithstate-dependentjumprates,whichappear system evaluated at the jump times. For instance, this 9 in some previous work [13, 14, 16]. This is a natural coulddescribethestatisticsofthevoltageatwhichaneu- 1 generalization, as any realistic system is unlikely to have ron fires or the price at which a jump occurs in finance. 0 a constant jump rate, but rather one that depends on This seemingly natural quantity of interest is seemingly . 1 thecurrentstateofthesystem. Forinstance,aneuronis unstudied and the main focus of this work. Although 0 more likely to fire as the voltage increases or, in finance, efficient Monte Carlo methods exist for jump-diffusion 7 an asset to crash as it rises in price. and L´evy processes [22, 23], the jump component may 1 Previous work has thoroughly investigated jump- be rare and therefore difficult to find an accurate distri- : v diffusion models for constant jump rates [1, 11]. The butionof. AfullPDEdescriptionoftheprocesscanalso Xi interjump time distributions for discrete noise alone has becomputed, butitisseeminglydifficult(orimpossible) alsobeenstudied[13,16]primarilyusingrenewaltheory. to disentangle the jump component alone. r a However, in general, state-dependent jump-diffusion is In this work, we present a general (accommodating a not arenewalprocess,assequentialdistributionsarenot wide variety of models) framework for studying jump- independent. The theory of Cox processes [17] is also diffusion systems and particularly focus on studying the well-established and refers to a diffusion process driv- sequenceofjumplocations. Wepresentaniterativemap ing a state-dependent Poisson process, but this does not formulation explicitly describing the distribution of the complete the feedback loop of the jump process further ith jump location, which allows for statistics to be ex- tracted about the interjump times even with diffusion, a limitation of previous works [13]. By assuming the processreachesstationarity,wefindidentifytermbehav- ∗ [email protected] ior of the distribution of jump locations and statistics 2 of the interjump times. The stationary jump locations B. Diffusion Component turn out to be closely related to the full stationary pro- cess, being equal if and only if the system is trivially Consider the classical one-dimensional SDE driven by state dependent. A few examples are provided, illustrat- Brownian motion described by ing the usefulness of the theory toward analytical and (cid:112) numerical results. One example provides an interesting dY =A(Y )+ 2D(Y )dW , (2) t t T t non-monotonic effect of diffusion on the interjump time statistics in a particular model. where W is a Brownian motion. The Fokker-Planck t (forwardChapman-Kolmogorov)equationdescribingthe evolution of the probability density function p(y,t) cor- responding to the path-wise description (2) is II. FORMULATION ∂ p(y,t)=Lp:=−∂ {A(y)p}+D(y)∂ p. t y yy WeusetheFokker-Planck operator Ltocharacterizethe A. Jump-Diffusion diffusion process for the remainder of the paper. Let X denote a state-dependent jump-diffusion pro- t cess and p˜(x,t) be the probability density (PDF) of this C. Jump Component process. The evolution of the density is described by the forward Champman-Kolmorgov equation [24, 25] LetZ beaninhomogeneous(state-dependent)Poisson t process with intensity (rate) λ(X ). Also necessary is a t description of what actually occurs at the jumps, some- ∂ p˜(x,t)=Lp˜−λ(x)p˜+Jλ(x)p˜, (1) t times referred to as the reset map [20]. The behavior of the reset map is characterized by the jump operator J, where the operator L characterizes the diffusion compo- which is a probability density flux, ensuring that (1) in- nent, λ(x) is the state-dependent intensity at which the deeddescribestheevolutionofaprobabilitydensity. We jump process occurs, and the operator J describes the now describe a few possible choices of this operator. jump. Afeatureweseektoemphasizeisthestatedepen- dence of the jump-rate λ(x), resulting in direct coupling between the diffusion and jump components of the pro- 1. Jump Operator Examples cess. 1. Constantjumpsize. Atthejumptimes,theprocess increments by a fixed amount ∆, so 0.8 10 X →X+∆. 0.6 8 ] 1− Then, the corresponding jump operator J is then st] 0.4 me a shift by that fixed quantity X [dit 0.2 46 X) [tit Jp:=p(x−∆). ( λ 0.0 2 2. Reset. Fundamentally different than the previous example, rather than jumping by a fixed displace- 0.2 0 ment,theprocessresetstoadeterministicquantity, 0.0 0.2 0.4 0.6 0.8 1.0 η, so t [time] X →η. FIG. 1. An example realization of a jump-diffusion process X with Lp = ∂ {axp}+Dp , λ(x) = αexp(cid:8)−x2/β(cid:9) and t x xx The corresponding J operator is Jp=p(x−∆). Alsoshownisthestatedependentjumprate λ(x). Inthisparticularrealization,3jumpsoccur. Parameter (cid:90) ∞ values used a=.1,D=1,α=10,β =10,∆=1. Jp:=δ(x−η) p(x,t)dx, −∞ whereδ(x)istheDiracdeltafunction,againnoting Although the particular change choices of L and J that the scaling ensures flux preservation. significantly change the behavior and characteristics of the process, the description (1) is flexible enough to ac- 3. Random jump size. A generalization of the first commodate a wide variety of models, which we will now example, the particle can also jump a random dis- elaborate upon. placement, where the size of the jump is described 3 by the probability density µ(∆,x), which can also which means the first jump time density p (t) is then τ be state-dependent. The J operator is ds (cid:90) ∞ (cid:90) ∞ pτ(t)=−dt =− ∂tpdx. Jp:= p(x−∆,t)µ(∆,x)d∆. −∞ −∞ Then, using (3) and the fact that p→0 as x→±∞, we obtain 4. Other examples. Although not directly considered in this paper, reset maps of the form (cid:90) ∞ (cid:90) ∞ p (t)= λpdx= ∂ qdx. τ t X →γX =⇒ Jp:=p(γx) ∞ ∞ Duetothelackofspatialfluxinq(x,t),thedensityofexit have been studied in other work [20] and can be locations is q(x,∞), which can obtained by integrating formulatedinourframeworkinthedescribedman- over all possible jump times ner. A generalization of the reset case could also be made to allow for resets to a random location, (cid:90) ∞ p (x)= ∂ qdt=q(x,∞)−q(x,0)=q(x,∞) but this does not seem to provide additionally in- j t 0 teresting structure. by noting that q(x,0)≡0. It is worth noting that in the reset case, the system is indeed a renewal process, as the distributions between Although this provides useful information about the jumps are entirely independent, meaning that the tools distributions (in time and space) at which a particular of [11, 13, 16] can be utilized. This reset case has also (thefirst)jumpoccurs, weareinterestedinstudyingthe been studied more explicitly in [26, 27]. distributions of all jumps. III. RESULTS B. Jump Location Sequential Mapping A. Jump Distributions We instead study the interjump dynamics by noting thatbetweenjumps,thefullprocess(1)canbedescribed by the absorbing process (3). Define t to be the ith A more convenient process to study is what we refer i jump time and let X := X be the ith jump location. to as the absorbing process, described by i ti Consider the iterative definition of p (x) for i=2,3,... i (cid:40) ∂ p(x,t)=Lp−λp  ∂tq(x,t)=λp. (3) ∂tpˆi(x,t)=Lpˆi−λpˆi t pˆ(x,0)=Jp (4) i i−1 p (x)=(cid:82)∞λpˆ dt. Notethatwedistinguishbetweenthisandthefullprocess i+1 0 i p˜(x,t),whichwerefertohenceforthasthereinjectedpro- Thereisaslightclumsinesswiththefirstiterationofthis cess. Whiletheabsorbingprocessdoesnotcaptureallof procedure. Let p be some known starting distribution the behavior of the full reinjected process, the absorbing 0 of the process, then the first iterate becomes process fixes the distribution (in both space and time) of jump events and consequently serves more fruitful in  ∂ pˆ (x,t)=Lpˆ −λpˆ disentangling the jump component from the full process.  t 1 1 1 pˆ (x,0)=p 1 0 Theorem 1. The densities of the first jump location p (x)=(cid:82)∞λpˆ dt, 1 0 1 p (x) and first jump time p (t) of (3) are described by j τ noting the difference in the starting conditions. (cid:90) ∞ p (x)=q(x,∞)= λ(x)p(x,t)dt j Theorem 2. The distribution of the ith jump location 0 X is p (x) and is described iteratively by (4). ti i and ThisconstructionfollowsfromdirectlyfromTheorem (cid:90) ∞ 1. Effectively,pˆ servesasintermediatequantitytracking i p (t)= λ(x)p(x,t)dx. τ thedistributionofallpossiblejumplocations(andtimes) −∞ for that particular iterate. Then, to start the next iter- Proof. Define the survival probability s(t) be the proba- ate, the distribution of jump locations must be modified bility that the first jump has not occurred by time τ jump procedure characterized by J. Although tracking the jump locations rather than the jump times seems (cid:90) ∞ counterintuitive at first, no natural analogous formula- s(t):=P[τ >t]= p(x,t)dx, tion is apparent. −∞ 4 Define a more convenient quantity to study Elaborating a bit on this relationship: while (9) and (8)mayappeartoproducethesameresult,thesolutions u :=p (x)/λ(x). (5) i i toeachrequiredifferentscalings. Sincepˆ isaprobability s If λ(x) = 0 for some x, then necessarily p (x) = 0 since density, it must be that i λ(x) = 0 implies no jump can occur at this location, (cid:90) ∞ hence this quotient causes no difficulties. pˆ dx=1. (10) s Theorem 3. The description (4) is equivalent to −∞ Tui+1 :=[λ−L]ui+1 =Jλui, However, u(cid:63)λ=p(cid:63), where p(cid:63) is a probability density, so this means that or more explicitly (cid:90) ∞ ui+1 =T−1Jλui. (6) u(cid:63)(x)λ(x)dx=1. (11) −∞ Proof. Integratingbothsidesof(4)withrespecttotand noting that pˆ(x,∞)=0 we are left with i C. Moments (cid:90) ∞ (cid:90) ∞ pˆ(x,0)= Lpˆ dt−λ(x) pˆ. i i i 0 0 Define τ to be the ith interjump time. That is, i The linear operator L is a differential operator in x and τ := t −t . Also, let τ be the interjump time for i i i−1 (cid:63) consequently commutes with the time integral. Using the stationary process. (cid:82)∞ the initial condition and p /λ = pˆ dt, we obtain the i 0 i Theorem 5. The mean interjump time τ can be recov- desired result. i ered from the distribution of the ith jump location and The map (6) provides an explicit description for the is sequence (cid:90) ∞ (cid:90) ∞ p (x) u1(x)→u2(x)→···→u(cid:63)(x)→u(cid:63)(x)→··· , (7) (cid:104)τi(cid:105)= uidx= λi(x) dx. (12) −∞ −∞ where, we are assuming the process reaches stationarity. Also, the mean stationary interjump time can be com- Convergence of the sequence (7) corresponds to a fixed puted from the stationary distribution of the jump loca- point of the map (6). Although we seek not to focus on tions p this aspect, a natural question to ask is under what con- (cid:63) ditions the process reaches stationarity, or equivalently, (cid:90) ∞ (cid:90) ∞ p (x) when does u converge to some u . In Appendix B, we (cid:104)τ (cid:105)= u dx= (cid:63) dx. (13) i (cid:63) (cid:63) (cid:63) λ(x) provide a brief commentary about how the relationship −∞ −∞ (6) can be thought of as an iterated linear non-negative Proof. We integrate both sides of (4) with respect to x, integral operator which can be studied as such. Results again noting that pˆ →0 as x→±∞, resulting in from theoretical ecology literature are then cited which i provideaheuristicanalysisofconditionsforconvergence. (cid:90) ∞ (cid:90) ∞ Proceeding assuming the iterations converge to sta- ∂t pˆidx=− λpˆidx. tionarity,wecanassumethat(6)hasafixedpoint,which −∞ −∞ must be of the following form. However, from Theorem 1, we see that the right-hand Theorem 4. The stationary distribution of the jump lo- sideisexactlythedistributionoftheinterjumptimepτi, cations p (x) is described by so we have (cid:63) (cid:90) ∞ 0=Lu −λu +Jλu , (8) (cid:63) (cid:63) (cid:63) ∂ pˆ dx=−p (t) (14) t i τi where u :=p /λ. −∞ (cid:63) (cid:63) Corollary 1. The stationary distribution of the jump Taking the mean on both sides with respect to τi, locations p is the same as the stationary distribution of (cid:63) (cid:90) ∞(cid:90) ∞ the full process if and only if λ is constant. (cid:104)τ (cid:105)=− t∂ pˆ dxdt, i t i Proof. This is an immediate consequence of taking the 0 −∞ full process to be in stationarity, so dpˆ/dt = 0 in (1), (cid:82)∞ after integrating by parts and noting that pˆ dt = which satisfies 0 i p /λ, we get the desired result. i 0=Lpˆ −λpˆ +Jλpˆ . (9) s s s While the first moment (mean) of the interjump time This is exactly the same relationship as (8). Recalling can be computed directly with a quadrature of the pre- that λu = p and that both p and pˆ are both prob- sumed known p , the higher order moments are less (cid:63) (cid:63) (cid:63) s i ability densities, the only way that p = pˆ is if u is a straightforward. Appendix A describes how, in theory, (cid:63) s (cid:63) rescaling of p or, equivalently, λ is constant. knowledge of the p can be used to extract higher order (cid:63) i 5 moments of τ . Solving for higher order moments using where I,K are Bessel functions [28] such that I → 0 i this approach requires solving a hierarchy of differential as x → −∞ and K → 0 as x → ∞ (and are linearly equations, meaning in practicality, this may not be so independent). The jump condition (18) then becomes feasible. However, a more practical numerical approach may be to solve for pi and then solve the absorbing (3) D{c2u(cid:48)R(0)−c1u(cid:48)L(0)}=−1. (19) process run to extract explicitly p using the relation- τi However, we also need to impose continuity of u , so we ships from Theorem 1. (cid:63) also have the requirement c u (0)=c u (0). (20) IV. EXAMPLES 1 L 2 R Theconditions(19),(20)provideustwoequationsfortwo In this section, we briefly provide some examples to il- unknowns, which yield lustrate the usefulness of the proposed results. The first, a relatively simple case demonstrates the ability to de- 2 √ 2 √ c = 2K (2/ D), c = 2I (2/ D). rive analytical quantities for models which were difficult 1 D a/D 2 D a/D tostudypreviously. Thesecondexampleprovidesabrief Thus far, our solution is then defined only up to a con- analysis identifying that certain models may not be fea- stant, butweknowthatu mustsatisfythescaling(11), sible to study with Fourier analysis directly but become (cid:63) which our choice of c ,c serendipitously already satisfy. so when formulated in our framework. The third and 1 2 final example, which is studied numerically from the re- 0.5 0.7 lationships proposed in this work, provides interesting p (x) pˆ (x) s interactions between the state-dependence of the jump 0.4 Xti 0.6 Xt process and the strength of diffusion. 0.5 y y nc0.3 nc0.4 e e u u A. Constant drift with reset to origin freq0.2 freq0.3 0.2 0.1 0.1 Theresetcasehasbeenstudiedinotherworks[26,27], butwillserveheretoillustratetheabilitytoderiveexact 0.0 0.0 4 2 0 2 4 6 4 2 0 2 4 quantities utilizing the proposed framework. Consider a x [dist] x [dist] jump-diffusion model with constant drift and diffusion withjumpcomponenttoberesettozerowithrateλ(x)= FIG. 2. A comparison of Monte Carlo simulations of the ex- exp{x}. Note that the reset structure means that the ampledescribedby(15)andanalyticallyderivedcorrespond- ing quantities. Left: The stationary jump location density processisarenewalprocess,andconsequentlyp =p for i (cid:63) p (x)iscomparedwithahistogramofrealizationsofthestate all i. That is, the process is in stationarity after a single (cid:63) oftheprocessX evaluatedatthejumptimest . Right: The jump. Then, in stationarity, the fundamental relation in t i derived stationary distribution of the full process pˆ is com- this work (8) becomes s pared with the histogram of a realization of the process X . t (cid:90) ∞ Parameter values used are a=1,D=1. 0=−∂ {au }+D∂ xu −exu +δ(x) exu dx. x (cid:63) x (cid:63) (cid:63) (cid:63) −∞ By computing u explicitly, we can use (5) to immedi- (15) (cid:63) ately obtain the distribution of jump locations p . From From (11), the integral scaling the δ function is simply (cid:63) (13),wecouldalso,inprinciple,computestatisticsabout equal to 1, so the interjump times. Finally, from Corollary 1, we also δ(x)=−∂ {au }+D∂ xu −exu . (16) have the stationary density of the full process, pˆ , which x (cid:63) x (cid:63) (cid:63) s is a rescaling of q such that (10) is satisfied. We seek to This can be computed in a similar manner to a Green’s (cid:63) emphasizethatbeingabletocomputethedistributionof function, noting that for x(cid:54)=0, we have stationary jump locations or the stationary density itself 0=−∂ {au }+D∂ xu −exu , (17) (either analytically as done in this example or numeri- x (cid:63) x (cid:63) (cid:63) cally) allows for the extraction of the other, illustrating and integrating (16) from (−ε,ε) and using the fact that the usefulness and practicality of our results. u must be continuous, we get the matching condition (cid:63) In Figure 2, a comparison is shown between Monte D(cid:8)u(cid:48)(0+)−u(cid:48)(0−)(cid:9)=−1. (18) CarlosimulationsoftheprocessXtcorrespondingto(15) and the analytically derived distributions in this section. By solving (17), we get that our solution is of the form Specifically,anEuler-Maruyama[29]schemecorrespond-  (cid:16) √ (cid:17) ingtothepath-wisedescriptionwassimulated,recording c1uL(x):=c1ea2xbIa/b 2 bbex x<0 all of the values on the realization of the path as well as u(cid:63) =c2uR(x):=c2ea2xbK−a/b(cid:16)2√bbex(cid:17) x>0, tluhsetrvaatleusesthoaftXbotthatththeeanjuamlyptictaimllyesdetir.iveTdheexpfirgeusrseionils- 6 for the stationary jump locations p (x) and the station- partnershipwithFourieranalysis. Forthisexample,take (cid:63) ary distribution of the full process pˆ (x) agree with the aconstantdriftanddiffusionwithrandomjumpsizedis- s Monte Carlo simulations. tribution µ(∆). Then, in stationarity, (8) becomes B. Constant drift with random jump size Thisexampleispresentedonlybrieflytoillustratethat this framework can provide some insight to problems in (cid:90) ∞ 0=−∂ {au }+D∂ u −λ(x)u + µ(∆)λ(x−∆)u (x−∆)d∆. (21) x (cid:63) xx (cid:63) (cid:63) (cid:63) −∞ One possible quantity of interest is the first moment of However, recall from (12) that U(0) = (cid:104)τ (cid:105), the mean (cid:63) jump locations, so interjump time, so (cid:90) ∞ (cid:90) ∞ (cid:104)µ(cid:105) (cid:104)j(cid:63)(cid:105)= xp(cid:63)dx= xu(cid:63)λdx. (cid:104)τ(cid:63)(cid:105)=− a . (24) −∞ −∞ This result is quite interesting for two reasons. For one, Althoughλ(x)isleftarbitrary,theFouriertransformcan we have an explicit expression for the mean interjump surprisingly be used to gain some insight. Define the time revealed by the Fourier transform, independent of Fourier transform of u(cid:63) to be the choice of λ(x). More interestingly is the sign of the (cid:90) ∞ quantity. Since (24) represents an interjump time, it U(k):=F[u ]= e−ikxu (x)dx. must be non-negative, meaning this quantity only exists (cid:63) (cid:63) −∞ if a,(cid:104)µ(cid:105) differ in sign. This is intuitive as the drift and jump process must oppose each other to have a chance Also define the transformed quantities of reaching stationarity, which this demonstrates. Λ(k):=F[λu ], M(k):=F[µ] (cid:63) C. Ornstein-Uhlenbeck with Gaussian jump rate from which, we note that Λ(cid:48)(0)=−i(cid:104)j(cid:105), M(cid:48)(0)=−i(cid:104)µ(cid:105). This example serves to identify curious subtle be- haviors of the coupling between diffusion and state- Taking the transform of (21), dependent jumps illuminated by our results. Consider, for this example, an Ornstein-Uhlenbeck diffusion with 0=−aikU −Dk2U −Λ(k)+M(k)Λ(k), (22) drift −ax and diffusion coefficient D. The jump rate is taken to be Gaussian which, evaluated at k =0 yields (cid:114) α2β λ(x)= exp{−βx2/2} (25) [1−M(0)]Λ(0)=0. (23) 2π However,weknowµ(∆)andu λ=p arebothprobabil- and the jump is a fixed distance ∆. This could perhaps (cid:63) (cid:63) ity densities and consequently M(0) = 1 and Λ(0) = 1. be thought of as a molecular motor [30] attached via a Thus, (23) is trivially satisfied. This is not seemingly Hookean linker, causing a relaxation to an un-stretched useful on its own, but we can couple the higher order x = 0 state. As the motor becomes less stretched, it moments by taking a k derivative of (22) to yield preferentially takes a step of size ∆ at a faster rate. Of possible interest would be to study how the cargo diffu- 0=−aikU(cid:48)−aiU −2DkU −2k2U(cid:48)−Λ(cid:48)+M(cid:48)Λ+MΛ(cid:48), sion coefficient D effects the effective rate at which the motor steps. The resulting stationary relationship (8) which, at k =0 and using M(0)=1,Λ(0)=1 yields becomes 0=−aiU −Λ(cid:48)(0)+M(cid:48)(0)+Λ(cid:48)(0), 0=∂x{axu(cid:63)}+D∂xxu(cid:63)−λ(x)u(cid:63)+λ(x−∆)u(cid:63)(x−∆). (26) which says that necessarily Theredoesnotappeartobeafruitfulapproachtostudy- ing(26)analytically. However,wesolveitnumericallyus- M(cid:48)(0)=−i(cid:104)µ(cid:105)=−aiU(0) ing a Crank-Nicolson upwind scheme [31]. From this, u (cid:63) 7 andconsequentlyp isobtained. Thefirstmomentofthe V. CONCLUSION (cid:63) interjumptime(cid:104)τ (cid:105)isimmediateafteraquadrature(13), (cid:63) but higher moments require solving differential equation Inthiswork,wehavepresentedamathematicalframe- solving, asdescribedby Appendix A.Insteadoftaking work for studying jump-diffusion systems with state- thisapproach,oncep isobtained,Jp isusedasastart- (cid:63) (cid:63) dependentjumprates. Theformulationisflexibleenough ingconditionfortheabsorbingprocess3,whichissolved to accommodate a variety of behaviors, providing rele- numerically using the same PDE solver. The advantage vance to a wide range of applications. The particular ofthisisthatthefulldistributionofτ isobtainedusing (cid:63) objectsoffocusinthisworkareisthedistributionofthe the relationship described by Theorem 1. jump locations: the values of X at the jump times t . t i We first reformulate the full process into another that isequivalentbetweenjumps. Usingthisformulation, the x10-1 explicit distribution of the jump locations can be ex- tracted and related to the previous jump location. This relation allows us to derive an iterated map description of the sequence of jump locations. With explicit knowl- edge of the distribution of jump locations, moments of the interjump times can be computed in more generality than previous work [13]. Assuming the process reaches stationarity, the distribution of the full process and the stationaryjumplocationsareshowntobecloselyrelated. A brief discussion of possible conditions on convergence x10-2 tostationarityinthelensofthisframeworkareprovided, utilizing tools from theoretical ecology motivated by the observation that the iterated map is effectively a non- negative integral linear operator. Three examples of particular choices of the diffusion and jump components are briefly discussed. The first showsthatmodelswithrelativelysimplefunctionalforms can be solved exactly using results from our work and likely would not be possible or difficult to derive other- wise. The second example shows that although studying full jump-diffusion models directly using Fourier space FIG.3. Thefirstandsecondmomentsofthestationaryjump may be infeasible, translating the problem to our pro- locations j and interjump times τ of the process described posed framework and then studying via the Fourier (cid:63) (cid:63) by (25),(26) as the strength of diffusion is varied, computed transform may serve useful. The last example is com- by solving (26) numerically. A non-monotonic dependence puted numerically from results derived in the paper and on diffusion is seen in the mean interjump time. Parameter shows that diffusion can have a non-monotonic influence values used are α=1,β =100. on when coupled with state-dependent jump processes, showing that our results are useful in finding subtle be- haviors even when only numerical approaches are feasi- The results of this procedure as a function of the dif- ble. fusionstrengthD areshowninFigure3. Plottedarethe first and second moments of the stationary jump loca- tions j and interjump times τ . A noteworthy behav- (cid:63) (cid:63) ior is observed: for certain parameter values, the mean VI. ACKNOWLEDGEMENTS interjump time is a non-monotonic function of the dif- fusion strength, D. Although this is perhaps counterin- This research was partially supported by NSF grant tuitive, discrete events with non-monotonic dependence DMS 1122297 and DMS-RTG 1148230. on the strength of diffusion have been observed in other biophysical phenomena [32]. The interjump times seem to change significantly for different values of the drift, the jump locations seem to change relatively little. In- Appendix A: Higher order interjump time moments tuitively, the variance of both quantities increases with diffusion strength and lower drift. Intheory,higherordermomentsoftheinterjumptimes This example provides an illustration that even when canbecomputedwithknowledgeofthesequenceofjump analyticalsolutionscannotbefound,theframeworkpro- distributions pi as was discussed with the first moment vides relationships that can be computed numerically to in (12). illuminatenewbehaviorsnotidentifiedinpreviouswork. To illustrate this, consider (14) and take the second 8 moment with respect to τ on both sides, resulting in Now,iterationsofourmapcorrespondtoiteratingthe i integral operator A with kernel G˜. This formulation is (cid:90) ∞ (cid:90) ∞ exactly that of the so-called integral projection models (cid:104)τ2(cid:105)=− t2∂ pˆ dtdx. i t i (IPM) in theoretical ecology. [33] Stability results from −∞ 0 the IPM literature can be used to understand the con- (cid:82)∞ vergence of our iterative procedure. Again, noting that u = p /λ = pˆ dt, after integrat- i i 0 i The main result comes from [34] and is effectively a ing by parts, we get statement of the Krein-Rutman theorem [35, 36], the in- (cid:90) ∞ finite dimensional analog of the Perron-Frobenius theo- (cid:104)τ2(cid:105)=− v dx, rem. [25, 37] i i −∞ Although L1 seems like the natural function space to study the spectral properties of these operators, since which looks similar to (12), however, vi satisfies they must preserve probability, L2 turns out to be far more accessible due to issues establishing compactness [λ−L]vi =ui. (A1) inL1. InAppendixCof[33],theauthorsprovideamore thorough commentary on these complications. In L2, we Thus, computation of the second moment requires solv- have the property ing the differential relationship (A1). Continuing to Theorem B.1 ([36]). If higher order moments using same approach yields a fur- therhierarchycoupledinadifferentialmanner. Interest- (cid:90)(cid:90) (cid:12) (cid:12)2 ingly, the differential operator (left hand side) of (A1) is (cid:12)G˜(x,ξ)(cid:12) dxdξ <∞, (cid:12) (cid:12) exactlyT,thesameas(6),meaningtheGreen’sfunction R2 (effectively T−1, discussed more in Appendix B) could then the integral operator with kernel G˜ maps from L2 to perhaps be reused to compute these quantities. L2. We then cite the main theorem which establishes the existence of a dominant eigenvalue for A. Appendix B: Spectral properties of iterated map Theorem B.2 (Easterling1998,[34]). SupposethatG˜ ∈ Note that (6) involves the inverse of a differential lin- L2 andisnon-negative, thenifthereexistsanα>0,β > ear operator T, which is exactly determined by its cor- 0,u such that 0 responding Green’s function. Let G(x,ξ) be the corre- sponding Green’s function to T, meaning that α(x)u (ξ)≤G˜(x,ξ)≤β(x)u (ξ) 0 0 for all x,ξ, then the integral operator A has a dominant T G(x,ξ)=δ(x−ξ). x eigenvalue with associated eigenfunction. Then, (6) can be rewritten as This establishes conditions for the existence of a dom- inant eigenvalue and eigenvector, which establish the (cid:90) ∞ long-term behavior. u = G(x,ξ)Jλu (ξ)dξ. i+1 i −∞ Theorem B.3 (Easterling1998,[34]). Assuming that A satisfies the previous condition, then the stationary dis- After a change of variables, define G˜ by tribution u is given by the eigenfunction φ associated (cid:63) 1 with the dominant eigenvalue λ , (cid:90) ∞ (cid:90) ∞ 1 u = G˜(x,ζ)u (ζ)dζ := G(x,ξ)Jλu (ξ)dξ. i+1 i i u −∞ −∞ lim i =κφ , i→∞λi 1 Forexample,forfixedjumpsizes: Jp=p(x−∆),then 1 where κ is a scaling parameter. G˜(x,ζ)=G(x,ζ+∆)λ(ζ+∆). The previous theorem suggests that the sequence u i converges if and only if |λ | = 1. Thus, this summary Abbreviate this linear non-negative integral operator 1 providesaroughheuristicandnovelangleindetermining (cid:90) ∞ whether the sequence ui (and the full process) approach Aq := G˜(x,ζ)q(ζ)dζ. stationarity, although this is not the focus of this work. −∞ [1] F. Hanson, Stochastic Processes and Control for Jump- (SIAM, Philadelphia, 2007). Diffusions, Advances in Design and Control, Vol. 13 9 [2] G.YanandF.B.Hanson,Am.ControlConf.2006,2989 arXiv:1303.6999v1. (2006). [22] Y. Xia and M. B. Giles, in Springer Proc. Math. Stat., [3] S. G. Kou, Manage. Sci. 48, 1086 (2002). Vol. 23 (2012) pp. 695–708. [4] S.G.Kou,inHandbooksOper.Res.Manag.Sci.,Vol.15 [23] P. Glasserman and N. Merener, Proc. R. Soc. A Math. (2007) pp. 73–116. Phys. Eng. Sci. 460, 111 (2004). [5] J. Figueroa-Lo´pez, Handb. Comput. Financ., edited by [24] C. Gardiner, Stochastic Methods: A Handbook for the J.-C. Duan, W. K. Ha¨rdle, and J. E. Gentle (Springer Natural and Social Sciences, 4th ed., Springer Series in Berlin Heidelberg, Berlin, Heidelberg, 2012) pp. 1–28. Synergetics, Vol. 13 (Springer Berlin Heidelberg, 2009) [6] W.GerstnerandW.M.Kistler,Spiking Neuron Models: p. 447. Single Neurons, Populations, Plasticity (2002) p. 494. [25] P. C. Bressloff, Stochastic Processes in Cell Biology, In- [7] L. Sacerdote and M. T. Giraudo, in Stoch. Biomath. terdisciplinary Applied Mathematics, Vol. 41 (Springer Model., Lecture Notes in Mathematics, Vol. 2058 International Publishing, 2014) p. 679. (Springer Berlin Heidelberg, 2013) pp. 99–148, [26] M.R.EvansandS.N.Majumdar,Phys.Rev.Lett.106, arXiv:1101.5539v1. 1 (2011), arXiv:1102.2704. [8] B. Lindner, B. Doiron, and A. Longtin, Phys. Rev. E [27] J.M.Meylahn,S.Sabhapandit, andH.Touchette,Phys. 72, 061919 (2005). Rev. E 92, 1 (2015), arXiv:1510.02431. [9] V.DiMaio,P.La´nsky´, andR.Rodriguez,Gen.Physiol. [28] M. Abramowitz and I. A. Stegun, Handbook of Mathe- Biophys. 23, 21 (2004). maticalFunctions,DoverBooksonMathematics,Vol.55 [10] B. Shtylla and J. P. Keener, J. Theor. Biol. 263, 455 (Dover Publications, 1966). (2010). [29] P. P. E. Kloeden and E. Platen, Stochastics An Int. J. [11] E. Daly and A. Porporato, Phys. Rev. E 73, 026108 Probab. Stoch. Process., Stochastic Modelling and Ap- (2006). plied Probability, Vol. 23 (Springer, 1992) pp. 1–7. [12] Y. Wu and W. Q. Zhu, Phys. Rev. E 77, 1 (2008). [30] J. Howard, Mechanics of Motor Proteins and the Cy- [13] E. Daly and A. Porporato, Phys. Rev. E 75, 1 (2007). toskeleton (Sinauer Associates, 2001) p. 367. [14] P. D’Odorico, F. Laio, and L. Ridolfi, Am. Nat. 167, [31] R. J. LeVeque, Finite Difference Methods for Ordinary E79 (2006). and Partial Differential Equations (SIAM, Philadelphia, [15] M. S. Bartlett, E. Daly, J. J. McDonnell, A. J. Parolari, 2007). andA.Porporato,Proc.R.Soc.AMath.Phys.Eng.Sci. [32] C.E.MilesandJ.P.Keener, (2016),arXiv:1605.06488. 471, 20150389 (2015). [33] S. P. Ellner and M. Rees, Am. Nat. 167, 410 (2006). [16] E. Daly and A. Porporato, Phys. Rev. E 74, 041112 [34] M.R.Easterling,The integral projection model: Theory, (2006). analysis and application, Ph.D. thesis, North Carolina [17] D. R. Cox, J. R. Stat. Soc. Ser. B 17, 129 (1955). State University (1998). [18] S. D. Lawley, J. C. Mattingly, and M. C. Reed, SIAM [35] M.G.KreinandM.A.Rutman,Am.Math.Soc.Transl. J. Math. Anal. 47, 3035 (2015), arXiv:1407.2264. 26, 128 (1950). [19] F. Xi, Stoch. Process. their Appl. 119, 2198 (2009). [36] M. A. Krasnosel’skii, Positive Solutions of Operator [20] L. DeVille, S. Dhople, A. D. Dom´ınguez-Garc´ıa, and Equations (P. Noordhoff, 1964). J. Zhang, SIAM J. Appl. Dyn. Syst. 15, 526 (2016), [37] J. P. Keener, Principles of Applied Mathematics: Tran- arXiv:1411.6600. formation and Approximation (WestviewPress,2000)p. [21] B. Cloez and M. Hairer, Bernoulli 21, 505 (2015), 603.

See more

The list of books you might like

Most books are stored in the elastic cloud where traffic is expensive. For this reason, we have a limit on daily download.