arXiv:0801.2106v1 [math.PR] 14 Jan 2008

The Local Time of the Classical Risk Process*

F. Cortes, J.A. León, and J. Villa

Abstract. In this paper we give an explicit expression for the local time of the classical risk process and associate it with the density of an occupation measure. To do so, we approximate the local time by a suitable sequence of absolutely continuous random fields. Also, as an application, we analyze the mean of the times $s \in [0,T]$ such that $0 \le X_s \le X_{s+\varepsilon}$ for some given $\varepsilon > 0$.

2000 Mathematics Subject Classification. 60J55, 91B30.
Keywords and phrases. Classical risk process, crossing process, local time, occupation measure, Tanaka-like formula.
* Partially supported by the CONACyT grant 45684-F, and by the UAA grants PIM 05-3 and PIM 08-2.

1. Introduction and main results

Henceforth, $X = \{X_t, t \ge 0\}$ represents the classical risk process. More precisely,
\[
X_t = x_0 + ct - \sum_{k=1}^{N_t} R_k, \qquad t \ge 0,
\]
where $x_0 \ge 0$ is the initial capital, $c > 0$ is the premium income per unit of time, $N = \{N_t, t \ge 0\}$ is a homogeneous Poisson process with rate $\alpha$, and $\{R_k, k \in \mathbb{N}\}$ is a sequence of i.i.d. non-negative random variables, which is independent of $N$. $N_t$ is interpreted as the number of claim arrivals during time $t$ and $R_k$ as the amount of the $k$-th claim. We suppose that $R_1$ has finite mean and is an absolutely continuous random variable with respect to the Lebesgue measure.

The risk process has been studied extensively because it is often used to describe the capital of an insurance company. Indeed, among the properties of $X$ considered by several authors, we can mention that the local time of $X$ has been analyzed by Kolkovska et al. [7], that the double Laplace transform of an occupation measure of $X$ has been obtained by Chiu and Yin [3], and that the probability of ruin has been one of the most important goals of risk theory (see, for example, Asmussen [1], Grandell [8], Rolski et al.
[11] and the references therein for an idea of the analysis carried out on this subject). In this paper we are interested in continuing the development of the local time $L$ of $X$ and of its applications as an occupation density, in order to improve the understanding of $X$.

Note that $X$ is a Lévy process because $\sum_{k=1}^{N_\cdot} R_k$ is a compound Poisson process. Thus, we can apply different criteria for general Lévy processes to guarantee the existence of $L$. For example, we can use Hawkes' result [9] when $R_k$ is exponentially distributed (see also [2] and references therein for related works). However, we cannot obtain in general the form of $L$ via these results. Moreover, in the literature there exist different characterizations of the local time (see Fitzsimmons and Port [6] and the references therein). For instance, the local time has been introduced in [6] (resp. [7]) as an $L^2(\Omega)$-derivative (resp. derivative in probability) of some occupation measure. Nevertheless, in [6, 7], some properties of the involved local time are not analyzed by means of this "approximation of $L$".

The purpose of this paper is to associate the local time of $X$ with the crossing process when $L$ is interpreted as a density of the occupation measure (see Theorem 1.c) below). The relation between the local time and the crossing process was conjectured by Lévy [10] for the Brownian motion case (i.e., when $X$ is a Wiener process). In this article we use the ideas of the proof of Tanaka's formula for the Brownian motion (see Chung [4], Chapter 7) to obtain a sequence of absolutely continuous random fields (in time) that converges with probability 1 (w.p.1 for short) to
\[
L_t(x) = \frac{1}{c}\Big(\tfrac{1}{2}\mathbf{1}_{\{x\}}(X_t) + \mathbf{1}_{(x,\infty)}(X_t) - \tfrac{1}{2}\mathbf{1}_{\{x\}}(x_0) - \mathbf{1}_{(x,\infty)}(x_0) - \sum_{0<s\le t}\big\{\mathbf{1}_{(x,\infty)}(X_s) - \mathbf{1}_{(x,\infty)}(X_{s-})\big\}\Big), \quad t\ge 0 \text{ and } x\in\mathbb{R}. \tag{1.1}
\]
This approximation allows us to prove that this $L$ is the density of the occupation measure (see (1.3) below) and, therefore, to deal with some problems related to occupation measures.
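Since formula (1.1) depends on the path only through its jumps, it can be evaluated exactly for a simulated path of the classical risk process. The following Python sketch is ours, not the paper's: the function and helper names (`simulate_claims`, `local_time`, `h`, `up`) are our own, and the path is encoded by its claim times and sizes. For a path with no claims the formula returns $1/c$ at levels between $x_0$ and $X_t$, matching the intuition that the drift spends $dx/c$ units of time at each level.

```python
import random

def simulate_claims(alpha, claim_sampler, t_max, rng):
    """Claim arrival times (Exp(alpha) gaps) and i.i.d. sizes on [0, t_max]."""
    times, sizes = [], []
    t = rng.expovariate(alpha)
    while t < t_max:
        times.append(t)
        sizes.append(claim_sampler(rng))
        t += rng.expovariate(alpha)
    return times, sizes

def local_time(x, t, x0, c, jump_times, jump_sizes):
    """Evaluate (1.1) for the piecewise-linear path x0 + c*s minus the
    claims arrived by time s (helper names are ours)."""
    def h(y):   # (1/2) 1_{x}(y) + 1_{(x, inf)}(y)
        return 0.5 if y == x else (1.0 if y > x else 0.0)

    def up(y):  # 1_{(x, inf)}(y)
        return 1.0 if y > x else 0.0

    loss, jump_sum = 0.0, 0.0
    for s, r in zip(jump_times, jump_sizes):
        if s > t:
            break
        pre = x0 + c * s - loss      # X_{s-}
        loss += r                    # X_s = pre - r
        jump_sum += up(pre - r) - up(pre)
    return (h(x0 + c * t - loss) - h(x0) - jump_sum) / c
```

On a jump-free stretch each upward pass through a level contributes $1/c$, so a path that is thrown back below a level by a claim and then drifts up again accumulates local time $2/c$ there; the sketch reproduces this.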
Notice that $L$ given by (1.1) is well-defined because $X$ is càdlàg and
\[
P(N_t < +\infty \text{ for all } t > 0) = 1, \tag{1.2}
\]
which implies that only a finite number of summands in (1.1) are different from zero.

In the following result we not only relate $L$ to the number of crossings with a certain level, but also to the occupation measure
\[
Y_t(A) = \int_0^t \mathbf{1}_A(X_s)\,ds, \qquad t \ge 0 \text{ and } A \in \mathcal{B}(\mathbb{R}), \tag{1.3}
\]
where $\mathcal{B}(\mathbb{R})$ is the Borel $\sigma$-algebra of $\mathbb{R}$. Toward this end, we need the following:

Definition 1. We say that there exists a crossing with the level $x \in \mathbb{R}$ at time $s \in (0,+\infty)$ if for every open interval $I$ such that $s \in I$, $x$ is an interior point of $\{X_t : t \in I\}$; that is, $x \in (X(I))^\circ$. Moreover, the number of crossings with the level $x$ in the interval $(0,t)$ is denoted by $C_t(x)$. $C$ is known as the crossing process of $X$. Observe that if $x \in \mathbb{R}$ is a crossing point at time $s$, then $X$ is continuous at time $s$ and $X_s = x$.

Now we can state the main result of the paper.

Theorem 1. Let $t > 0$ and $x \in \mathbb{R}$. Then, the random field $L$ defined in (1.1) has the following properties:
a) $L_t(x) \ge 0$ and $L_\cdot(x)$ is non-decreasing w.p.1.
b) $L_t(x) = \frac{1}{c}\big(\tfrac{1}{2}\mathbf{1}_{\{X_t\}}(x) - \tfrac{1}{2}\mathbf{1}_{\{X_0\}}(x) + C_t(x)\big)$ w.p.1.
c) For every bounded and Borel measurable function $g : \mathbb{R} \to \mathbb{R}$, we have
\[
\int_0^t g(X_s)\,ds = \int_{\mathbb{R}} g(y)L_t(y)\,dy \quad \text{w.p.1.} \tag{1.4}
\]

Note that Statement b) implies that the number of crossings $C$ of $X$ introduced in Definition 1 satisfies
\[
C_t(x) = \mathbf{1}_{(-\infty,X_t)}(x) - \mathbf{1}_{(-\infty,X_0)}(x) + \sum_{0<s\le t}\mathbf{1}_{(X_s,X_{s-})}(x) \quad \text{w.p.1},
\]
for $t > 0$ and $x \in \mathbb{R}$. Also note that, from (1.4) and Statement a), the random field $L$ can be interpreted as an occupation density relative to the Lebesgue measure on $\mathbb{R}$. Hence, $L$ in (1.1) is called the local time and the expression
\[
L_t(x) = \frac{1}{c}\Big(\tfrac{1}{2}\mathbf{1}_{\{X_t\}}(x) - \tfrac{1}{2}\mathbf{1}_{\{X_0\}}(x) + \mathbf{1}_{(-\infty,X_t)}(x) - \mathbf{1}_{(-\infty,X_0)}(x) + \int_{(0,t]} f(x,X_s)\,dX_s\Big)
\]
is known as a Tanaka-like formula for $L_t(x)$. Here
\[
f(x,X_s) = \begin{cases} \dfrac{\mathbf{1}_{(X_s,X_{s-})}(x)}{\Delta X_s}, & \Delta X_s \neq 0, \\ 0, & \Delta X_s = 0. \end{cases}
\]
On the other hand, relation (1.4) can be extended to some occupational results.
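The identity for $C_t(x)$ after Theorem 1 can be spot-checked numerically: between claims the path rises linearly, so a continuous crossing of a level $x$ in the sense of Definition 1 occurs exactly when a linear segment straddles $x$, while each claim that jumps over $x$ contributes one term to $\sum_{0<s\le t}\mathbf{1}_{(X_s,X_{s-})}(x)$. A minimal sketch, with both function names ours:

```python
def count_crossings(x, t, x0, c, jump_times, jump_sizes):
    """Crossings of level x on (0, t) per Definition 1: between claims the
    path rises linearly, so only segments straddling x count."""
    count, level, prev = 0, x0, 0.0
    for s, r in zip(jump_times, jump_sizes):
        if s >= t:
            break
        pre = level + c * (s - prev)        # X_{s-}
        if level < x < pre:
            count += 1
        level, prev = pre - r, s            # X_s, jump instant
    if level < x < level + c * (t - prev):  # final segment, up to t
        count += 1
    return count

def crossings_formula(x, t, x0, c, jump_times, jump_sizes):
    """Right-hand side of the identity after Theorem 1:
    1_{(-inf,X_t)}(x) - 1_{(-inf,X_0)}(x) + sum of 1_{(X_s,X_{s-})}(x)."""
    loss, jumps_over = 0.0, 0
    for s, r in zip(jump_times, jump_sizes):
        if s > t:
            break
        pre = x0 + c * s - loss             # X_{s-}
        loss += r
        if pre - r < x < pre:               # claim jumps over the level
            jumps_over += 1
    x_t = x0 + c * t - loss
    return (1 if x < x_t else 0) - (1 if x < x0 else 0) + jumps_over
```

For instance, with $x_0 = 0$, $c = 2$ and a single claim of size $1.5$ at time $0.5$, the path rises to $1$, drops to $-0.5$ and rises again to $0.5$ at $t = 1$; both functions count two crossings of the level $0.25$ and one crossing of $0.75$.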
Indeed, as an example, we can state the following result, which leads us to obtain some averages of the pathwise behavior of $X$.

Theorem 2. Let $g : \mathbb{R}\times\mathbb{R} \to \mathbb{R}$ be a bounded and Borel measurable function. Then, for each $\varepsilon > 0$,
\[
E\Big[\int_0^t g(X_s, X_{s+\varepsilon}-X_s)\,ds\Big] = \int_{\mathbb{R}} E[g(x, X_\varepsilon - x_0)]\,E[L_t(x)]\,dx. \tag{1.5}
\]

An application of this theorem is to answer the question: for how long, on average, is the capital of an insurance company non-negative and no larger than its value twelve months later?

The paper is organized as follows. In Section 2 we provide the tool needed to prove Theorem 1. In particular, we approximate the local time by a sequence of suitable random fields. The proof of Theorem 1 is given in Section 3. Finally, in Section 4, we prove Theorem 2 and answer the above question in the case that the claim $R_1$ has an exponential distribution.

2. Main tool

In this section we provide the tool needed to show that Theorem 1 holds. In particular, we construct the announced sequence converging to the local time $L$.

In the remainder of this paper, $T_i$ denotes the $i$-th jump time of $N$, with $T_0 = 0$. It is known that $T_i$ has a gamma distribution with parameters $(i,\alpha)$, $i \ge 1$.

We will use the following technical result in the proofs of this section.

Lemma 1. Let $x \in \mathbb{R}$, $s > 0$, $\Omega_1(s) = \{\Delta X_s \neq 0\}$, and
\[
\Omega_2 = \{X_{s-} = x,\ \Delta X_s \neq 0 \text{ for some } s > 0\} \cup \{X_s = x,\ \Delta X_s \neq 0 \text{ for some } s > 0\}.
\]
Then, $P(\Omega_1(s)) = 0$ and $P(\Omega_2) = 0$.

Proof. By the law of total probability,
\[
P(\Omega_1(s)) = \sum_{k=0}^{\infty} P(N_s = k)\,P(\Omega_1(s)\,|\,N_s = k).
\]
Notice that
\[
P(\Omega_1(s)\,|\,N_s = k) = P(\Delta X_s \neq 0\,|\,N_s = k) = P(T_k = s) = 0.
\]
On the other hand, let $\nu \in \mathbb{N}$ and define
\[
\tilde{\Omega}_\nu = \{X_{s-} = x,\ \Delta X_s \neq 0 \text{ for some } 0 < s < \nu\} \cup \{X_s = x,\ \Delta X_s \neq 0 \text{ for some } 0 < s < \nu\}.
\]
For $k = 0$,
\[
P(\tilde{\Omega}_\nu\,|\,N_\nu = 0) = P(\emptyset\,|\,N_\nu = 0) = 0,
\]
and for $k \ge 1$,
\[
P(\tilde{\Omega}_\nu\,|\,N_\nu = k) \le P(X_{T_j-} = x \text{ for some } j \in \{1,\dots,k\}\,|\,N_\nu = k) + P(X_{T_j} = x \text{ for some } j \in \{1,\dots,k\}\,|\,N_\nu = k) \le \sum_{j=1}^{k}\big(P(X_{T_j-} = x\,|\,N_\nu = k) + P(X_{T_j} = x\,|\,N_\nu = k)\big).
\]
For $j = 1$ we get
\[
P(X_{T_1-} = x\,|\,N_\nu = k) = P(T_1 = (x-x_0)c^{-1}\,|\,N_\nu = k) = 0,
\]
\[
P(X_{T_1} = x\,|\,N_\nu = k) = P(cT_1 - R_1 = x - x_0\,|\,N_\nu = k) = 0;
\]
this is because $T_1$ and $R_1$ are independent and absolutely continuous random variables. Let $P(\cdot\,|\,N_\nu = k) = P^*(\cdot)$. When $j > 1$ we have
\[
P^*(X_{T_j-} = x) = \int_{\mathbb{R}} P^*(X_{T_j-} = x\,|\,X_{T_{j-1}-} = y)\,P^*(X_{T_{j-1}-} \in dy) = \int_{\mathbb{R}} P^*(R_{j-1} = y - (x - (T_j - T_{j-1})c))\,P^*(X_{T_{j-1}-} \in dy) = 0
\]
and
\[
P^*(X_{T_j} = x) = \int_{\mathbb{R}} P^*(X_{T_j} = x\,|\,X_{T_{j-1}} = y)\,P^*(X_{T_{j-1}} \in dy) = \int_{\mathbb{R}} P^*((T_j - T_{j-1})c - R_j = x - y)\,P^*(X_{T_{j-1}} \in dy) = 0.
\]
Here we have used the fact that $R_{j-1}$ has an absolutely continuous distribution. Finally, notice that $P(\Omega_2) \le \sum_{\nu=1}^{\infty} P(\tilde{\Omega}_\nu) = 0$. □

2.1. An approximating sequence of the local time. Now we approximate the local time $L$ by a sequence of suitable random fields, which allows us to see that Theorem 1.a) is true. Toward this end, let $x \in \mathbb{R}$ be arbitrary and fixed. For each $n \in \mathbb{N}$ define $\varphi_{x,n} : \mathbb{R} \to \mathbb{R}$ by
\[
\varphi_{x,n}(y) = \begin{cases} 0, & y < x - 1/n, \\ (n(y-x)+1)/2, & x - 1/n \le y \le x + 1/n, \\ 1, & x + 1/n < y. \end{cases}
\]
Notice that
\[
\lim_{n\to\infty} \varphi_{x,n}(y) = \begin{cases} 0, & y < x, \\ 1/2, & y = x, \\ 1, & y > x, \end{cases} \;=\; \tfrac{1}{2}\mathbf{1}_{\{x\}}(y) + \mathbf{1}_{(x,+\infty)}(y), \tag{2.1}
\]
and
\[
\varphi'_{x,n}(y) = \begin{cases} 0, & y < x - 1/n, \\ n/2, & x - 1/n < y < x + 1/n, \\ 0, & x + 1/n < y. \end{cases}
\]
For each $n \in \mathbb{N}$, we define the random field
\[
L^n_t(x) = \frac{1}{c}\Big(\varphi_{x,n}(X_t) - \varphi_{x,n}(X_0) - \sum_{s\le t}\big\{\varphi_{x,n}(X_s) - \varphi_{x,n}(X_{s-}) - \varphi'_{x,n}(X_{s-})\Delta X_s\big\}\Big),
\]
where $\Delta X_t = X_t - X_{t-}$. As in (1.1), we have by (1.2) that $L^n$ is well-defined.

Before proving that $\{L^n, n \in \mathbb{N}\}$ is the sequence that we are looking for, we need to approximate the function $\varphi_{x,n}$ by a sequence of smooth functions. To do so, set
\[
\Omega' = \big(\{X_{s-} = x \pm 1/m \neq X_s \text{ for some } s > 0,\ m \in \mathbb{N}\} \cup \{N_s < +\infty \text{ for all } s > 0\}^c \cup \Omega_2\big)^c.
\]
Since, by Lemma 1,
\[
P(X_{s-} = x \pm 1/m \neq X_s \text{ for some } s > 0,\ m \in \mathbb{N}) \le \sum_{m=1}^{\infty} P(X_{s-} = x \pm 1/m \neq X_s \text{ for some } s > 0) = 0,
\]
we have $P(\Omega') = 1$. Let $\psi : \mathbb{R} \to \mathbb{R}$ be a symmetric function in $C^\infty(\mathbb{R})$ with compact support on $[-1,1]$ and
\[
\int_{-1}^{1} \psi(y)\,dy = 1.
\]
Define the sequence $(\psi_m)$ by
\[
\psi_m(y) = m\psi(my), \qquad y \in \mathbb{R},
\]
and
\[
\varphi^m_{x,n}(y) = (\psi_m * \varphi_{x,n})(y) = \int_{\mathbb{R}} \varphi_{x,n}(y-z)\psi_m(z)\,dz.
\]
Since $\psi_m \in C^\infty(\mathbb{R})$, then $\varphi^m_{x,n} \in C^\infty(\mathbb{R})$ and, moreover,
\[
(\varphi^m_{x,n})_m \text{ converges uniformly on compacts to } \varphi_{x,n}, \tag{2.2}
\]
\[
((\varphi^m_{x,n})')_m \text{ converges pointwise, except on } x \pm 1/n, \text{ to } (\varphi_{x,n})'. \tag{2.3}
\]
Let us use the notation
\[
L^{n,m}_t(x) = \frac{1}{c}\int_{(0,t]} (\varphi^m_{x,n})'(X_{s-})\,dX_s.
\]
Then, by the change of variable theorem for the Lebesgue-Stieltjes integral, we have
\[
cL^{n,m}_t(x) = \varphi^m_{x,n}(X_t) - \varphi^m_{x,n}(x_0) - \sum_{0<s\le t}\big\{\varphi^m_{x,n}(X_s) - \varphi^m_{x,n}(X_{s-}) - (\varphi^m_{x,n})'(X_{s-})\Delta X_s\big\}. \tag{2.4}
\]
Now we can give the relation between $L^n$ and $\{L^{n,m}, m \in \mathbb{N}\}$.

Proposition 1. Let $n \in \mathbb{N}$. Then,
\[
\lim_{m\to\infty} L^{n,m}_t(x) = L^n_t(x), \quad \text{for all } t > 0, \text{ w.p.1.}
\]

Proof. For $\omega \in \Omega'$ we have that (1.2) and (2.2) imply
\[
\lim_{m\to\infty}\Big(\varphi^m_{x,n}(X_t) - \varphi^m_{x,n}(x_0) - \sum_{0<s\le t}\big\{\varphi^m_{x,n}(X_s) - \varphi^m_{x,n}(X_{s-})\big\}\Big) = \varphi_{x,n}(X_t) - \varphi_{x,n}(x_0) - \sum_{0<s\le t}\big\{\varphi_{x,n}(X_s) - \varphi_{x,n}(X_{s-})\big\}.
\]
Now we analyze the remaining term in the definition of $L^{n,m}_t(x)$. Notice that for each $\omega \in \Omega'$ we have $X_{s-}(\omega) \neq x \pm 1/k$, $k \in \mathbb{N}$. Therefore, from (2.3),
\[
\lim_{m\to\infty} \sum_{0<s\le t} (\varphi^m_{x,n})'(X_{s-})\Delta X_s = \sum_{0<s\le t} (\varphi_{x,n})'(X_{s-})\Delta X_s.
\]
From this and (2.4) the result follows. □

Now we are ready to state the properties of $\{L^n, n \in \mathbb{N}\}$ that we use in Section 3.

Proposition 2. The sequence $\{L^n, n \in \mathbb{N}\}$ fulfills:
a) $L^n_t(x) = \int_0^t \varphi'_{x,n}(X_{s-})\,ds + \frac{1}{c}\sum_{0<s\le t} \varphi'_{x,n}(X_{s-})\Delta X_s$, for all $n \in \mathbb{N}$ and $t > 0$, w.p.1.
b) $\lim_{n\to\infty} L^n_t(x) = L_t(x)$, for all $t > 0$, w.p.1.

Proof. We first deal with Statement a). Fix $t \ge 0$ and let $\omega \in \Omega' \cap \Omega_1(t)^c$. Then there is $k \in \mathbb{N}$ such that $N_t(\omega) = k$. Thus
\[
\int_0^t (\varphi^m_{x,n})'(X_{s-})\,dX_s = \sum_{i=1}^{k}\int_{(T_{i-1},T_i]} (\varphi^m_{x,n})'(X_{s-})\,dX_s + \int_{(T_k,t]} (\varphi^m_{x,n})'(X_{s-})\,dX_s = \sum_{i=1}^{k}\int_{(T_{i-1},T_i]} (\varphi^m_{x,n})'(X_{s-})\,c\,ds + \int_{[T_k,t)} (\varphi^m_{x,n})'(X_s)\,c\,ds + \sum_{i=1}^{k}(\varphi^m_{x,n})'(X_{T_i-})(X_{T_i} - X_{T_i-}) = c\int_0^t (\varphi^m_{x,n})'(X_{s-})\,ds + \sum_{i=1}^{k}(\varphi^m_{x,n})'(X_{T_i-})(X_{T_i} - X_{T_i-}).
\]
Notice that on each $(T_{i-1},T_i]$ and $(T_k,t]$ there is at most one $s$ such that $X_{s-} = x \pm 1/n$. Hence, by (2.3), we have
\[
\lim_{m\to\infty} (\varphi^m_{x,n})'(X_{s-}) = (\varphi_{x,n})'(X_{s-}), \quad \lambda\text{-a.s.}
\]
Therefore, by the dominated convergence theorem, we deduce
\[
\lim_{m\to\infty}\int_0^t (\varphi^m_{x,n})'(X_{s-})\,dX_s = c\int_0^t (\varphi_{x,n})'(X_{s-})\,ds + \sum_{i=1}^{k}(\varphi_{x,n})'(X_{T_i-})\Delta X_{T_i}, \quad \text{a.s.}
\]
Consequently, the fact that $L^n_t(x)$ and the right-hand side of the last equality are càdlàg processes implies that Statement a) holds.

Now we consider Statement b) in order to finish the proof of the proposition. Let $\omega \in \Omega'$ and $t \ge 0$. Then there exists $k \in \mathbb{N}$ such that $N_t(\omega) = k$. Hence
\[
\lim_{n\to\infty}\Big(\varphi_{x,n}(X_t(\omega)) - \varphi_{x,n}(x_0) - \sum_{i=1}^{k}\big\{\varphi_{x,n}(X_{T_i}(\omega)) - \varphi_{x,n}(X_{T_i-}(\omega))\big\}\Big) = \tfrac{1}{2}\mathbf{1}_{\{x\}}(X_t(\omega)) + \mathbf{1}_{(x,\infty)}(X_t(\omega)) - \tfrac{1}{2}\mathbf{1}_{\{x\}}(x_0) - \mathbf{1}_{(x,\infty)}(x_0) - \sum_{i=1}^{k}\Big\{\tfrac{1}{2}\mathbf{1}_{\{x\}}(X_{T_i}(\omega)) + \mathbf{1}_{(x,\infty)}(X_{T_i}(\omega)) - \tfrac{1}{2}\mathbf{1}_{\{x\}}(X_{T_i-}(\omega)) - \mathbf{1}_{(x,\infty)}(X_{T_i-}(\omega))\Big\}.
\]
On the other hand,
\[
\omega \in \{X_{s-} = x \neq X_s \text{ for some } s > 0\}^c
\]
implies $X_{T_i-}(\omega) \neq x$, $i = 0,1,\dots,k$. Here there exists a finite number of indexes $i$ such that $X_{T_i}(\omega) \le x < X_{T_i-}(\omega)$. For large enough $n$ we have
\[
X_{T_i-}(\omega) \notin (x-1/n,\, x+1/n), \qquad i = 0,1,\dots,k.
\]
Therefore
\[
\lim_{n\to\infty} \sum_{0<s\le t} (\varphi_{x,n})'(X_{s-}(\omega))\Delta X_s(\omega) = \lim_{n\to\infty} \sum_{i=1}^{k} (\varphi_{x,n})'(X_{T_i-}(\omega))\Delta X_{T_i}(\omega) = \lim_{n\to\infty} \sum_{i=1}^{k} \frac{n}{2}\,\mathbf{1}_{(x-1/n,x+1/n)}(X_{T_i-}(\omega))\,\Delta X_{T_i}(\omega) = 0.
\]
Hence the proof is complete. □

3. Proof of Theorem 1

The purpose of this section is to give the proof of Theorem 1. This proof will be divided into three steps. It is worth mentioning that the proof of Statement a) gives us a sequence of absolutely continuous random fields that converges to $L$, namely the sequence $\{\int_0^t \varphi'_{x,n}(X_{s-})\,ds,\ n \in \mathbb{N}\}$.

3.1. Proof of part a) of Theorem 1. From parts a) and b) of Proposition 2 we have
\[
L_t(x) = \lim_{n\to\infty} L^n_t(x) = \lim_{n\to\infty} \int_0^t \varphi'_{x,n}(X_{s-})\,ds,
\]
which yields that $L_\cdot(x)$ is non-negative and non-decreasing.

3.2. Proof of part b) of Theorem 1.
Since $X_s \le X_{s-}$ we have
\[
L_t(x) = \frac{1}{c}\Big(\tfrac{1}{2}\mathbf{1}_{\{X_t\}}(x) - \tfrac{1}{2}\mathbf{1}_{\{x_0\}}(x) + \mathbf{1}_{(-\infty,X_t)}(x) - \mathbf{1}_{(-\infty,x_0)}(x) + \sum_{0<s\le t}\mathbf{1}_{(X_s,X_{s-})}(x)\Big). \tag{3.1}
\]
Suppose, for example, that $x_0 > x$, $X_t < x$ and $C_t(x) = n$. Let $c_1,\dots,c_n$ be the crossing times with the level $x$. Then, by hypothesis, there exist jump times $s_1 \in (0,c_1),\dots,s_{n+1} \in (c_n,t)$ such that $x \in (X_{s_i},X_{s_i-})$. Hence
\[
\mathbf{1}_{(-\infty,X_t)}(x) - \mathbf{1}_{(-\infty,x_0]}(x) + \sum_{0<s\le t}\mathbf{1}_{(X_s,X_{s-})}(x) = 0 - 1 + (n+1) = C_t(x).
\]

3.3. Proof of part c) of Theorem 1. For each $a, b \in \mathbb{R}$ define
\[
\mathbf{1}_{\langle\langle a,b\rangle\rangle} = \begin{cases} \mathbf{1}_{(a,b]}, & \text{if } a \le b, \\ -\mathbf{1}_{(b,a]}, & \text{if } b < a, \end{cases} \;=\; \mathbf{1}_{(-\infty,b]} - \mathbf{1}_{(-\infty,a]}.
\]
From this definition it immediately follows that
\[
\mathbf{1}_{\langle\langle a,b\rangle\rangle} = \mathbf{1}_{(a,c]} - \mathbf{1}_{(b,c]}, \qquad a,b \le c. \tag{3.2}
\]
Using induction on $n$, we can prove that, for real numbers $a_1,\dots,a_n$,
\[
\mathbf{1}_{\langle\langle a_1,a_2\rangle\rangle} + \cdots + \mathbf{1}_{\langle\langle a_{n-1},a_n\rangle\rangle} = \mathbf{1}_{\langle\langle a_1,a_n\rangle\rangle}. \tag{3.3}
\]
On the other hand, for almost all $\omega \in \Omega'$, there exists $k \in \mathbb{N}\cup\{0\}$ such that $\omega \in \{N_t = k\}$. Therefore,
\[
\int_0^t g(X_s)\,ds = \sum_{i=1}^{k}\int_{(T_{i-1},T_i]} g(X_s)\,ds + \int_{(T_k,t]} g(X_s)\,ds = \sum_{i=1}^{k}\int_{(T_{i-1},T_i]} g(X_{T_{i-1}} + c(s-T_{i-1}))\,ds + \int_{(T_k,t]} g(X_{T_k} + c(s-T_k))\,ds.
\]
Taking $x = X_{T_{i-1}} + (s-T_{i-1})c$, we can write
\[
\int_0^t g(X_s)\,ds = \sum_{i=1}^{k}\int_{(X_{T_{i-1}},\,X_{T_{i-1}}+c(T_i-T_{i-1})]} g(x)\,\frac{dx}{c} + \int_{(X_{T_k},\,X_{T_k}+c(t-T_k)]} g(x)\,\frac{dx}{c} = \sum_{i=1}^{k}\int_{(X_{T_{i-1}},\,X_{T_i-}]} g(x)\,\frac{dx}{c} + \int_{(X_{T_k},\,X_t]} g(x)\,\frac{dx}{c} = \frac{1}{c}\int_{\mathbb{R}} g(x)\sum_{i=1}^{k}\mathbf{1}_{(X_{T_{i-1}},\,X_{T_i-}]}(x)\,dx + \frac{1}{c}\int_{\mathbb{R}} g(x)\,\mathbf{1}_{(X_{T_k},\,X_t]}(x)\,dx.
\]
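The telescoping identity (3.3) holds because $\mathbf{1}_{\langle\langle a,b\rangle\rangle}(y) = \mathbf{1}_{(-\infty,b]}(y) - \mathbf{1}_{(-\infty,a]}(y)$ is a difference of the same step function evaluated at the two endpoints, so consecutive terms cancel. A quick sketch verifying this at sample points (the name `ind_signed` is ours):

```python
def ind_signed(a, b, y):
    """1_{<<a,b>>}(y) written as 1_{(-inf,b]}(y) - 1_{(-inf,a]}(y); it equals
    +1 on (a,b] when a <= b and -1 on (b,a] when b < a."""
    return (1 if y <= b else 0) - (1 if y <= a else 0)
```

Summing `ind_signed` over consecutive pairs $(a_1,a_2),\dots,(a_{n-1},a_n)$ collapses to `ind_signed(a_1, a_n, y)` for every $y$, which is exactly (3.3).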
