KENDALL RANDOM WALK, WILLIAMSON TRANSFORM AND THE CORRESPONDING WIENER-HOPF FACTORIZATION

B.H. Jasiulis-Gołdyn¹ and J.K. Misiewicz²

Abstract. The paper gives some properties of hitting times and an analogue of the Wiener-Hopf factorization for the Kendall random walk. We also show that the Williamson transform is the best tool for problems connected with the Kendall generalized convolution.

1. Introduction

In this paper we study positive and negative excursions for the Kendall random walk $\{X_n : n \in \mathbb{N}_0\}$, which can be defined by the following recurrence construction:

Definition 1.1. The stochastic process $\{X_n : n \in \mathbb{N}_0\}$ is a discrete time Kendall random walk with parameter $\alpha > 0$ and step distribution $\nu$ if there exist

1. $(Y_k)$ i.i.d. random variables with distribution $\nu$,
2. $(\xi_k)$ i.i.d. random variables with uniform distribution on $[0,1]$,
3. $(\theta_k)$ i.i.d. random variables with symmetric Pareto distribution with density $\pi_{2\alpha}(dy) = \alpha |y|^{-2\alpha-1} \mathbf{1}_{[1,\infty)}(|y|)\, dy$,
4. the sequences $(Y_k)$, $(\xi_k)$ and $(\theta_k)$ are independent,

such that
$$ X_0 = 1, \qquad X_1 = Y_1, \qquad X_{n+1} = M_{n+1}\, r_{n+1} \bigl[ I(\xi_{n+1} < \varrho_{n+1}) + \theta_{n+1}\, I(\xi_{n+1} > \varrho_{n+1}) \bigr], $$
where $\theta_{n+1}$ and $M_{n+1}$ are independent,
$$ M_{n+1} = \max\{|X_n|, |Y_{n+1}|\}, \qquad m_{n+1} = \min\{|X_n|, |Y_{n+1}|\}, \qquad \varrho_{n+1} = \frac{m_{n+1}^{\alpha}}{M_{n+1}^{\alpha}}, $$
and $r_{n+1} = \{\operatorname{sgn}(u) : \max\{|X_n|, |Y_{n+1}|\} = |u|\}$.

¹ Institute of Mathematics, University of Wrocław, pl. Grunwaldzki 2/4, 50-384 Wrocław, Poland, e-mail: [email protected]
² Faculty of Mathematics and Information Science, Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warsaw, Poland, e-mail: [email protected]

Key words and phrases: generalized convolution, Kendall convolution, Markov process, Pareto distribution, random walk, weakly stable distribution, Williamson transform, Wiener-Hopf factorization.
Mathematics Subject Classification: 60G50, 60J05, 44A35, 60E10.

The Kendall random walk is an example of a discrete time Lévy random walk under a generalized convolution, studied in [1]. Some basic properties of this particular process are described in [3]. In the second section we present the definition and main properties of the Williamson transform, together with its unexpectedly simple inverse formula. We also give the definition of the Kendall convolution ([4]), which is an example of a generalized convolution in the sense defined by Urbanik [8, 9]. This section is included only for the completeness of the paper, since all the facts presented there are known, but rather difficult to find in the literature. In the third section we recall another definition of the Kendall random walk, following the multidimensional-distributions approach given in [1]. From this construction it is clear that the Kendall random walk is a Markov process. We then study hitting times and positive and negative excursions for this process. The main result of the paper is an analogue of the Wiener-Hopf factorization.

For simplicity we use the notation $T_a$ for the rescaling operator (dilation) defined by $(T_a \lambda)(A) = \lambda(A/a)$ for every Borel set $A$ when $a \neq 0$, and $T_0 \lambda = \delta_0$. Throughout the paper the parameter $\alpha > 0$ is fixed.
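The recurrence in Definition 1.1 is easy to simulate. Below is a minimal Python sketch (our illustration, not part of the paper; the helper name `kendall_walk` and the choice of a two-point step distribution are ours). The Pareto variable $\theta_{n+1}$ is drawn by inversion, since $\mathbf{P}(|\theta| > y) = y^{-2\alpha}$ for $y \geq 1$.

```python
import numpy as np

def kendall_walk(n_steps, alpha, step_sampler, rng):
    """Simulate a path of the Kendall random walk via Definition 1.1."""
    x = step_sampler(rng)                     # X_1 = Y_1
    path = [1.0, x]                           # X_0 = 1 as in Definition 1.1
    for _ in range(n_steps - 1):
        y = step_sampler(rng)                 # Y_{n+1}
        big = max(abs(x), abs(y))             # M_{n+1}
        small = min(abs(x), abs(y))           # m_{n+1}
        rho = (small / big) ** alpha          # varrho_{n+1} = (m/M)^alpha
        r = np.sign(x) if abs(x) >= abs(y) else np.sign(y)   # r_{n+1}
        # theta_{n+1}: symmetric Pareto with P(|theta| > y) = y^(-2 alpha), y >= 1
        theta = rng.choice([-1.0, 1.0]) * rng.uniform() ** (-1.0 / (2 * alpha))
        xi = rng.uniform()                    # xi_{n+1} ~ U[0, 1]
        x = big * r * (1.0 if xi < rho else theta)
        path.append(x)
    return np.array(path)

rng = np.random.default_rng(0)
two_point = lambda rng: rng.choice([-1.0, 1.0])   # nu = (delta_1 + delta_{-1}) / 2
print(kendall_walk(10, alpha=0.5, step_sampler=two_point, rng=rng))
```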
2. Williamson transform and Kendall convolution

Let us start by recalling the Williamson transform (see [12]) and its basic properties. In this paper we apply it to symmetric probability measures on $\mathbb{R}$ (the set $\mathcal{P}_s$), but equivalently it can be considered on the set $\mathcal{P}_+$ of measures on $[0,\infty)$.

Definition 2.1. By the Williamson transform we understand the operation $\nu \to \widehat{\nu}$ given by
$$ \widehat{\nu}(t) = \int_{\mathbb{R}} (1 - |xt|^{\alpha})_{+}\, \nu(dx), \qquad \nu \in \mathcal{P}_s, $$
where $a_{+} = a$ if $a > 0$ and $a_{+} = 0$ otherwise.

For convenience we use the following notation:
$$ \Psi(t) = (1 - |t|^{\alpha})_{+}, \qquad G(t) = \widehat{\nu}(1/t). $$

The next lemma is almost evident and well known. It contains the inverse of the Williamson transform, which is surprisingly simple.

Lemma 2.1. The correspondence between a measure $\nu \in \mathcal{P}_s$ and its Williamson transform is one-to-one. Moreover, denoting by $F$ the cumulative distribution function of $\nu$, $\nu(\{0\}) = 0$, we have
$$ F(t) = \begin{cases} \frac{1}{2\alpha} \bigl[ \alpha (G(t) + 1) + t\, G'(t) \bigr] & \text{if } t > 0; \\ 1 - F(-t) & \text{if } t < 0, \end{cases} $$
except for countably many $t \in \mathbb{R}$.

Proof. Of course the assumption $\nu(\{0\}) = 0$ is only a technical simplification. Notice first that for $t > 0$, integrating by parts, we obtain
$$ G(t) = \int_{\mathbb{R}} (1 - |x/t|^{\alpha})_{+}\, dF(x) = 2 \Bigl[ \bigl( 1 - x^{\alpha} t^{-\alpha} \bigr) F(x) \Big|_0^t + \alpha t^{-\alpha} \int_0^t x^{\alpha-1} F(x)\, dx \Bigr] = -1 + 2\alpha t^{-\alpha} \int_0^t x^{\alpha-1} F(x)\, dx. $$
Since
$$ \lim_{h \to 0^{+}} \frac{1}{h} \Bigl[ \int_0^{t+h} x^{\alpha-1} F(x)\, dx - \int_0^{t} x^{\alpha-1} F(x)\, dx \Bigr] = t^{\alpha-1} F(t+) $$
and
$$ \lim_{h \to 0^{-}} \frac{1}{h} \Bigl[ \int_0^{t+h} x^{\alpha-1} F(x)\, dx - \int_0^{t} x^{\alpha-1} F(x)\, dx \Bigr] = t^{\alpha-1} F(t-), $$
then, except at the points of jump of $F$, we can differentiate the equality
$$ 2\alpha \int_0^t x^{\alpha-1} F(x)\, dx = t^{\alpha} \bigl( G(t) + 1 \bigr) $$
and obtain $2\alpha t^{\alpha-1} F(t) = \alpha t^{\alpha-1} (G(t)+1) + t^{\alpha} G'(t)$. ♣

In the following we use the notation: $\widetilde{\delta}_x = \frac{1}{2}\delta_x + \frac{1}{2}\delta_{-x}$, and $\pi_{2\alpha}(dy) = \alpha |y|^{-2\alpha-1} \mathbf{1}_{[1,\infty)}(|y|)\, dy$ is the symmetric Pareto distribution.

Definition 2.2. By the Kendall convolution we understand the operation $\triangle_{\alpha} \colon \mathcal{P}_s^2 \to \mathcal{P}_s$ defined for discrete measures by
$$ \widetilde{\delta}_x \triangle_{\alpha} \widetilde{\delta}_y := T_M \bigl( \varrho^{\alpha} \pi_{2\alpha} + (1 - \varrho^{\alpha})\, \widetilde{\delta}_1 \bigr), $$
where $M = \max\{|x|,|y|\}$, $m = \min\{|x|,|y|\}$, $\varrho = m/M$. The extension of $\triangle_{\alpha}$ to the whole of $\mathcal{P}_s$ is given by
$$ (\nu_1 \triangle_{\alpha} \nu_2)(A) = \int_{\mathbb{R}^2} \bigl( \widetilde{\delta}_x \triangle_{\alpha} \widetilde{\delta}_y \bigr)(A)\, \nu_1(dx)\, \nu_2(dy). $$

It is easy to see that the operation $\triangle_{\alpha}$ is symmetric, associative, commutative and

• $\nu \triangle_{\alpha} \delta_0 = \nu$ for each $\nu \in \mathcal{P}_s$;
• $(p\nu_1 + (1-p)\nu_2) \triangle_{\alpha} \nu = p(\nu_1 \triangle_{\alpha} \nu) + (1-p)(\nu_2 \triangle_{\alpha} \nu)$ for each $p \in [0,1]$ and all $\nu, \nu_1, \nu_2 \in \mathcal{P}_s$;
• if $\lambda_n \to \lambda$ and $\nu_n \to \nu$ then $(\lambda_n \triangle_{\alpha} \nu_n) \to (\lambda \triangle_{\alpha} \nu)$, where $\to$ denotes weak convergence;
• $T_a (\nu_1 \triangle_{\alpha} \nu_2) = (T_a \nu_1) \triangle_{\alpha} (T_a \nu_2)$ for all $\nu_1, \nu_2 \in \mathcal{P}_s$.

The next proposition, which is only a reformulation of a known property of the Williamson transform, shows that this transform plays the same role for the Kendall convolution as the Fourier transform does for the classical convolution.

Proposition 2.2. Let $\nu_1, \nu_2 \in \mathcal{P}_s$ be probability measures with Williamson transforms $\widehat{\nu}_1, \widehat{\nu}_2$. Then
$$ \int_{\mathbb{R}} \Psi(xt)\, \bigl( \nu_1 \triangle_{\alpha} \nu_2 \bigr)(dx) = \widehat{\nu}_1(t)\, \widehat{\nu}_2(t). $$

Proof. Notice first that for $M = \max\{|x|,|y|\}$, $m = \min\{|x|,|y|\}$, $\varrho = m/M$ we have
$$ \int_{\mathbb{R}} \Psi(ut)\, \bigl( \widetilde{\delta}_x \triangle_{\alpha} \widetilde{\delta}_y \bigr)(du) = \int_{\mathbb{R}} \Psi(Mut)\, \bigl( \widetilde{\delta}_{\varrho} \triangle_{\alpha} \widetilde{\delta}_1 \bigr)(du) $$
$$ = \varrho^{\alpha} \cdot 2 \int_1^{\infty} \Psi(Mut)\, \frac{\alpha\, du}{u^{2\alpha+1}} + (1 - \varrho^{\alpha})\, \Psi(Mt)\, \mathbf{1}_{[-1,1]}(Mt) $$
$$ = \varrho^{\alpha} \bigl( 1 - |Mt|^{\alpha} \bigr)^2 \mathbf{1}_{[-1,1]}(Mt) + (1 - \varrho^{\alpha}) \bigl( 1 - |Mt|^{\alpha} \bigr) \mathbf{1}_{[-1,1]}(Mt) $$
$$ = \bigl( 1 - |Mt|^{\alpha} \bigr) \bigl( 1 - |mt|^{\alpha} \bigr) \mathbf{1}_{[-1,1]}(Mt) = \Psi(xt)\, \Psi(yt). $$
Now, by the definition of the Kendall convolution, we see that
$$ \int_{\mathbb{R}} \Psi(xt)\, \bigl( \nu_1 \triangle_{\alpha} \nu_2 \bigr)(dx) = \int_{\mathbb{R}^3} \Psi(ut)\, \bigl( \widetilde{\delta}_x \triangle_{\alpha} \widetilde{\delta}_y \bigr)(du)\, \nu_1(dx)\, \nu_2(dy) = \int_{\mathbb{R}^2} \Psi(xt)\, \Psi(yt)\, \nu_1(dx)\, \nu_2(dy) = \widehat{\nu}_1(t)\, \widehat{\nu}_2(t), $$
which was to be shown. ♣
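Proposition 2.2 lends itself to a quick numerical check. The following sketch (ours; the sampler is a hypothetical helper, not from the paper) draws from $\widetilde{\delta}_x \triangle_{\alpha} \widetilde{\delta}_y$ using the mixture representation of Definition 2.2 and compares the Monte Carlo average of $\Psi(Zt)$ with the product $\Psi(xt)\Psi(yt)$.

```python
import numpy as np

def psi(t, alpha):
    """Psi(t) = (1 - |t|^alpha)_+ ."""
    return np.maximum(1.0 - np.abs(t) ** alpha, 0.0)

def sample_kendall_pair(x, y, alpha, size, rng):
    """Sample delta~_x kendall delta~_y = T_M(rho^a pi_{2a} + (1 - rho^a) delta~_1)."""
    big, small = max(abs(x), abs(y)), min(abs(x), abs(y))
    rho_alpha = (small / big) ** alpha
    signs = rng.choice([-1.0, 1.0], size=size)                 # symmetric signs
    pareto = rng.uniform(size=size) ** (-1.0 / (2 * alpha))    # |pi_{2alpha}| by inversion
    radius = np.where(rng.uniform(size=size) < rho_alpha, pareto, 1.0)
    return big * signs * radius

rng = np.random.default_rng(1)
alpha, x, y, t = 0.7, 0.8, 2.0, 0.3
z = sample_kendall_pair(x, y, alpha, size=200_000, rng=rng)
print(psi(x * t, alpha) * psi(y * t, alpha))   # exact product Psi(xt) Psi(yt)
print(psi(z * t, alpha).mean())                # Monte Carlo estimate; should agree
```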
Proposition 2.3. There exists a sequence of positive numbers $(c_n)$ such that $T_{c_n} \widetilde{\delta}_1^{\triangle_{\alpha} n}$ converges weakly to a measure $\sigma \neq \delta_0$ (here $\nu^{\triangle_{\alpha} n} = \nu \triangle_{\alpha} \cdots \triangle_{\alpha} \nu$ denotes the Kendall convolution of $n$ copies of $\nu$).

Proof. Since the Williamson transform of $\widetilde{\delta}_1$ equals $\Psi(t)$, putting $c_n = n^{-1/\alpha}$ we obtain
$$ \int_{\mathbb{R}} \Psi(xt)\, \bigl( T_{c_n} \widetilde{\delta}_1^{\triangle_{\alpha} n} \bigr)(dx) = \int_{\mathbb{R}} \Psi(c_n xt)\, \widetilde{\delta}_1^{\triangle_{\alpha} n}(dx) = \bigl( 1 - |c_n t|^{\alpha} \bigr)_{+}^{n} \longrightarrow e^{-|t|^{\alpha}} \quad \text{as } n \to \infty. $$
The function $e^{-|t|^{\alpha}}$ is symmetric and monotonically decreasing as a function of $t^{\alpha}$ on the positive half-line; thus there exists a measure $\sigma \in \mathcal{P}_s$ such that
$$ e^{-|t|^{\alpha}} = \int_{\mathbb{R}} (1 - |xt|^{\alpha})_{+}\, \sigma(dx). $$
According to Lemma 2.1 we can calculate the cumulative distribution function $F$ of the measure $\sigma$, obtaining $F(0) = \frac{1}{2}$ and
$$ F(t) = \begin{cases} \frac{1}{2} \bigl( 1 + t^{-\alpha} + e^{t^{-\alpha}} \bigr) e^{-t^{-\alpha}} & \text{if } t > 0; \\ 1 - F(-t) & \text{if } t < 0. \end{cases} $$ ♣

Remark. Notice that the properties of the Kendall convolution, together with the one described in the last proposition, show that the Kendall convolution is an example of a generalized convolution in the sense defined by K. Urbanik. More about generalized convolutions can be found in [2, 5, 6, 7, 8, 9, 10, 11].

As a simple consequence of Lemma 2.1 and Proposition 2.2 we obtain the following fact:

Proposition 2.4. Let $\nu \in \mathcal{P}_s$. For each natural number $n \geq 2$ the cumulative distribution function $F_n$ of the measure $\nu^{\triangle_{\alpha} n}$ is equal to
$$ F_n(t) = \frac{1}{2\alpha} \bigl[ \alpha \bigl( G(t)^n + 1 \bigr) + t\, n\, G(t)^{n-1} G'(t) \bigr], \qquad t > 0, $$
and $F_n(t) = 1 - F_n(-t)$ for $t < 0$, where $G(t) = \widehat{\nu}(1/t)$.

Examples.

1. Let $\nu$ be the uniform distribution on $[-1,1]$. Then
$$ G(t) = \frac{\alpha}{\alpha+1}\, |t|\, \mathbf{1}_{(-1,1)}(t) + \Bigl( 1 - \frac{|t|^{-\alpha}}{\alpha+1} \Bigr) \mathbf{1}_{(1,\infty)}(|t|), $$
$$ F_n(t) = \frac{1}{2} + \frac{1}{2} \operatorname{sgn}(t) \Bigl( \frac{\alpha}{\alpha+1} \Bigr)^{n} \Bigl( 1 + \frac{n}{\alpha} \Bigr) |t|^{n}\, \mathbf{1}_{[0,1)}(|t|) + \frac{1}{2} \operatorname{sgn}(t) \Bigl( 1 - \frac{|t|^{-\alpha}}{\alpha+1} \Bigr)^{n-1} \Bigl( 1 + \frac{(n-1)\, |t|^{-\alpha}}{\alpha+1} \Bigr) \mathbf{1}_{[1,\infty)}(|t|). $$

2. Let $\nu = \widetilde{\delta}_1 := \frac{1}{2}\delta_1 + \frac{1}{2}\delta_{-1}$; thus $G(t) = (1 - |t|^{-\alpha})_{+}$. For each natural number $n \geq 2$ the probability measure $\nu^{\triangle_{\alpha} n}$ has the density
$$ dF_n(t) = \frac{\alpha n (n-1)}{2\, |t|^{2\alpha+1}} \bigl( 1 - |t|^{-\alpha} \bigr)^{n-2}\, \mathbf{1}_{[1,\infty)}(|t|)\, dt. $$

3. Let $\nu = p\, \widetilde{\delta}_1 + (1-p)\, \pi_p$, where $p \in (0,1]$ and $\pi_p$ is the symmetric Pareto distribution with density $\pi_p(dy) = \frac{p}{2}\, |y|^{-p-1} \mathbf{1}_{[1,\infty)}(|y|)\, dy$. For each natural number $n \geq 2$ we have
$$ F_n(t) = \frac{1}{2} + \frac{1}{2} \operatorname{sgn}(t) \Bigl[ 1 - \frac{\alpha(1-p)}{\alpha-p}\, |t|^{-p} + \frac{p(1-\alpha)}{\alpha-p}\, |t|^{-\alpha} \Bigr]^{n-1} \times \Bigl[ 1 + \frac{(1-p)(np-\alpha)}{\alpha-p}\, |t|^{-p} - \frac{p(1-\alpha)(n-1)}{\alpha-p}\, |t|^{-\alpha} \Bigr] \mathbf{1}_{[1,\infty)}(|t|). $$

4. Let $\nu = \pi_{2\alpha}$ for $\alpha \in (0,1]$. Since $\widetilde{\delta}_1 \triangle_{\alpha} \widetilde{\delta}_1 = \pi_{2\alpha}$, using Example 2 we arrive at
$$ dF_n(t) = \frac{\alpha n (2n-1)}{|t|^{2\alpha+1}} \bigl( 1 - |t|^{-\alpha} \bigr)^{2(n-1)}\, \mathbf{1}_{[1,\infty)}(|t|)\, dt. $$

3. Kendall random walk

The direct construction of the symmetric Kendall random walk $\{X_n : n \in \mathbb{N}_0\}$, based on the sequence $(Y_k)$ of i.i.d. unit steps with distribution $\nu$, has already been presented in Definition 1.1. A slightly different direct construction was given in [3]. We recall here one more definition of the Kendall random walk, following [1], where only the multidimensional distributions of this process are considered. This approach is more convenient for studying positive and negative excursions, which we consider in this section.

Definition 3.1. The Kendall random walk, or discrete time Lévy process under the Kendall generalized convolution, is the Markov process $\{X_n : n \in \mathbb{N}_0\}$ with $X_0 \equiv 0$ and transition probabilities
$$ P_n(x, A) = \mathbf{P} \bigl\{ X_{n+k} \in A \,\big|\, X_k = x \bigr\} = \bigl( \delta_x \triangle_{\alpha} \nu^{\triangle_{\alpha} n} \bigr)(A), \qquad n, k \in \mathbb{N}, $$
where the measure $\nu \in \mathcal{P}_s$ is called the step distribution.
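Under Definition 3.1 the variable $X_n$ has distribution $\nu^{\triangle_{\alpha} n}$, so the closed formulas of Proposition 2.4 can be tested against a direct simulation of the recurrence from Definition 1.1. The sketch below (ours, not from the paper) does this for Example 2: integrating the Example 2 density gives the CDF $F_n(t) = \frac{1}{2} + \frac{1}{2}(1 - t^{-\alpha})^{n-1}(1 + (n-1)t^{-\alpha})$ for $t > 1$ (our computation, used only for this check), which is compared with the empirical CDF.

```python
import numpy as np

def kendall_step(x, y, alpha, rng):
    """One vectorized transition X_n -> X_{n+1} of the walk in Definition 1.1."""
    big = np.maximum(np.abs(x), np.abs(y))
    small = np.minimum(np.abs(x), np.abs(y))
    rho = (small / big) ** alpha
    r = np.where(np.abs(x) >= np.abs(y), np.sign(x), np.sign(y))
    theta = rng.choice([-1.0, 1.0], size=x.shape) * rng.uniform(size=x.shape) ** (-1.0 / (2 * alpha))
    return big * r * np.where(rng.uniform(size=x.shape) < rho, 1.0, theta)

rng = np.random.default_rng(2)
alpha, n, t, n_paths = 0.5, 5, 3.0, 200_000
x = rng.choice([-1.0, 1.0], size=n_paths)          # X_1 = Y_1 with nu = delta~_1
for _ in range(n - 1):
    x = kendall_step(x, rng.choice([-1.0, 1.0], size=n_paths), alpha, rng)
u = t ** -alpha
f_n = 0.5 + 0.5 * (1.0 - u) ** (n - 1) * (1.0 + (n - 1) * u)  # CDF from Example 2 density
print(f_n)               # exact F_n(t)
print((x < t).mean())    # empirical CDF at t; should agree
```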
For any real $a$ we introduce the first hitting times of the half-lines $(a,\infty)$ and $(-\infty,a)$ for the random walk $\{X_n : n \in \mathbb{N}_0\}$:
$$ \tau_a^{+} = \min\{ n \geq 1 : X_n > a \}, \qquad \tau_a^{-} = \min\{ n \geq 1 : X_n < a \}, $$
with the convention $\min \emptyset = \infty$. We want to find the joint distribution of the random vector $(\tau_0^{+}, X_{\tau_0^{+}})$. In order to do this we need some lemmas.

Lemma 3.1.
$$ h(x,y,t) := \bigl( \widetilde{\delta}_x \triangle_{\alpha} \widetilde{\delta}_y \bigr)(0,t) = \frac{1}{2} \Bigl( 1 - \Bigl| \frac{xy}{t^2} \Bigr|^{\alpha} \Bigr) \mathbf{1}_{\{|x|<t,\, |y|<t\}} = \frac{1}{2} \Bigl[ \Psi\Bigl(\frac{x}{t}\Bigr) + \Psi\Bigl(\frac{y}{t}\Bigr) - \Psi\Bigl(\frac{x}{t}\Bigr) \Psi\Bigl(\frac{y}{t}\Bigr) \Bigr] \mathbf{1}_{\{|x|<t,\, |y|<t\}}, $$
$$ \bigl( \widetilde{\delta}_x \triangle_{\alpha} \nu \bigr)(0,t) = P_1(x, [0,t)) = \Bigl[ \Psi\Bigl(\frac{x}{t}\Bigr) \Bigl( F(t) - \frac{1}{2} \Bigr) + \frac{1}{2} G(t) - \frac{1}{2} \Psi\Bigl(\frac{x}{t}\Bigr) G(t) \Bigr] \mathbf{1}_{\{|x|<t\}}. $$

Proof. With the notation $a = \min\{|x|,|y|\}$, $b = \max\{|x|,|y|\}$ and $\varrho = (a/b)^{\alpha}$ we have
$$ \bigl( \widetilde{\delta}_x \triangle_{\alpha} \widetilde{\delta}_y \bigr)(0,t) = T_b \bigl( (1-\varrho)\, \widetilde{\delta}_1 + \varrho\, \pi_{2\alpha} \bigr)(0,t) = \frac{1}{2} (1-\varrho)\, \mathbf{1}(b < t) + \frac{1}{2} \varrho\, \mathbf{1}(b < t) \int_1^{t/b} \frac{2\alpha\, ds}{s^{2\alpha+1}} $$
$$ = \frac{1}{2} \Bigl( 1 - \varrho \Bigl( \frac{b}{t} \Bigr)^{2\alpha} \Bigr) \mathbf{1}(b < t) = \frac{1}{2} \Bigl( 1 - \Bigl( \frac{ab}{t^2} \Bigr)^{\alpha} \Bigr) \mathbf{1}(b < t) = \frac{1}{2} \Bigl( 1 - \Bigl| \frac{xy}{t^2} \Bigr|^{\alpha} \Bigr) \mathbf{1}(|x| < t, |y| < t). $$
The final form trivially follows from the identity $1 - ab = 1 - a + 1 - b - (1-a)(1-b)$. The second formula follows from
$$ \bigl( \widetilde{\delta}_x \triangle_{\alpha} \nu \bigr)(0,t) = \int_{\mathbb{R}} \bigl( \widetilde{\delta}_x \triangle_{\alpha} \widetilde{\delta}_y \bigr)(0,t)\, \nu(dy). $$ ♣

The next lemma is crucial for further considerations.

Lemma 3.2. Let $\{X_n : n \in \mathbb{N}_0\}$ be the Kendall random walk. Then
$$ \Phi_n(t) := \mathbf{P} \bigl\{ X_1 \leq 0, \ldots, X_{n-1} \leq 0,\ 0 < X_n < t \bigr\} = \frac{1}{2^n}\, G(t)^{n-1} \Bigl[ 2n \Bigl( F(t) - \frac{1}{2} \Bigr) - (n-1)\, G(t) \Bigr]. $$

Proof. For $k = 1$ we have $\Phi_1(t) = \mathbf{P}\{0 < X_1 < t\} = F(t) - \frac{1}{2}$. For $k = 2$,
$$ \Phi_2(t) = \int_{-\infty}^{0} \int_{0}^{t} P_1(x_1, dx_2)\, P_1(0, dx_1) = \int_{-\infty}^{0} P_1(x_1, (0,t))\, \nu(dx_1) $$
$$ = \frac{1}{2} \int_{-t}^{0} \Bigl[ 2 \Psi(x_1/t) \Bigl( F(t) - \frac{1}{2} \Bigr) + G(t) - \Psi(x_1/t)\, G(t) \Bigr] \nu(dx_1) = \frac{1}{4} \Bigl[ 4 G(t) \Bigl( F(t) - \frac{1}{2} \Bigr) - G(t)^2 \Bigr]. $$
In order to calculate $\Phi_3$, notice first that by Proposition 2.2
$$ \int_{\mathbb{R}} \Psi(y/t)\, \bigl( \delta_x \triangle_{\alpha} \nu \bigr)(dy) = \Psi(x/t)\, G(t), $$
and, applying the second formula from Lemma 3.1 to the measure $P_1(x_1, \cdot) = \delta_{x_1} \triangle_{\alpha} \nu$, we have
$$ \int_{-\infty}^{0} \int_{0}^{t} P_1(x_2, dx_3)\, P_1(x_1, dx_2) = \int_{-\infty}^{0} \bigl( \delta_{x_2} \triangle_{\alpha} \nu \bigr)(0,t)\, \bigl( \delta_{x_1} \triangle_{\alpha} \nu \bigr)(dx_2) $$
$$ = \frac{1}{2} \int_{-t}^{0} \Bigl[ 2 \Psi(x_2/t) \Bigl( F(t) - \frac{1}{2} \Bigr) + G(t) - \Psi(x_2/t)\, G(t) \Bigr] \bigl( \delta_{x_1} \triangle_{\alpha} \nu \bigr)(dx_2) $$
$$ = \frac{1}{2} \Bigl[ \Psi(x_1/t)\, G(t) \Bigl( F(t) - \frac{1}{2} \Bigr) + G(t)\, \bigl( \delta_{x_1} \triangle_{\alpha} \nu \bigr)(0,t) - \frac{1}{2} \Psi(x_1/t)\, G(t)^2 \Bigr] $$
$$ = \frac{1}{4} \Bigl[ 4 \Psi(x_1/t)\, G(t) \Bigl( F(t) - \frac{1}{2} \Bigr) + G(t)^2 - 2 \Psi(x_1/t)\, G(t)^2 \Bigr]. $$
Consequently,
$$ \Phi_3(t) = \int_{-\infty}^{0} \int_{-\infty}^{0} \int_{0}^{t} P_1(x_2, dx_3)\, P_1(x_1, dx_2)\, P_1(0, dx_1) = \frac{1}{4} \int_{-t}^{0} \Bigl[ 4 \Psi(x_1/t)\, G(t) \Bigl( F(t) - \frac{1}{2} \Bigr) + G(t)^2 - 2 \Psi(x_1/t)\, G(t)^2 \Bigr] \nu(dx_1) $$
$$ = \frac{1}{4} \Bigl[ 2 G(t)^2 \Bigl( F(t) - \frac{1}{2} \Bigr) + G(t)^2 \Bigl( F(t) - \frac{1}{2} \Bigr) - G(t)^3 \Bigr] = \frac{1}{2^3} \Bigl[ 6 G(t)^2 \Bigl( F(t) - \frac{1}{2} \Bigr) - 2 G(t)^3 \Bigr]. $$
A simple, but laborious, application of mathematical induction ends the proof. ♣

Lemma 3.3. The random variable $\tau_0^{+}$ (and, by the symmetry of the Kendall random walk, also the variable $\tau_0^{-}$) has the geometric distribution $\mathbf{P}(\tau_0^{+} = k) = \frac{1}{2^k}$, $k = 1, 2, \ldots$; its moment generating function is
$$ \mathbf{E}\, s^{\tau_0^{+}} = \frac{\frac{s}{2}}{1 - \frac{s}{2}}, \qquad 0 \leq s < 2. $$

Proof. Notice that $\lim_{t\to\infty} G(t) = 1$ since $\lim_{t\to\infty} \Psi(1/t) = 1$. Now it is enough to apply Lemma 3.2:
$$ \mathbf{P}(\tau_0^{+} = k) = \mathbf{P}\{X_1 \leq 0, \ldots, X_{k-1} \leq 0, X_k > 0\} = \Phi_k(\infty) = \frac{1}{2^k}. $$ ♣

Lemma 3.4.
$$ \mathbf{P} \bigl\{ X_{\tau_0^{+}} < t \bigr\} = \frac{4F(t) - 2 - G(t)^2}{(2 - G(t))^2}. $$

Proof. It is enough to apply Lemma 3.2:
$$ \mathbf{P} \bigl\{ X_{\tau_0^{+}} < t \bigr\} = \sum_{k=1}^{\infty} \mathbf{P} \bigl\{ X_{\tau_0^{+}} < t,\ \tau_0^{+} = k \bigr\} = \sum_{k=1}^{\infty} \Phi_k(t) = \sum_{n=1}^{\infty} \frac{1}{2^n}\, G(t)^{n-1} \Bigl[ 2n \Bigl( F(t) - \frac{1}{2} \Bigr) - (n-1)\, G(t) \Bigr] = \frac{4F(t) - 2 - G(t)^2}{(2 - G(t))^2}. $$ ♣
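Lemmas 3.3 and 3.4 can also be verified by simulation. The sketch below (ours, not from the paper) uses $\nu = \widetilde{\delta}_1$, for which $F(t) = 1$ and $G(t) = 1 - t^{-\alpha}$ when $t > 1$, and estimates both the distribution of $\tau_0^{+}$ and the distribution function of $X_{\tau_0^{+}}$.

```python
import numpy as np

def kendall_step(x, y, alpha, rng):
    """One scalar transition of the Kendall random walk (Definition 1.1)."""
    big, small = max(abs(x), abs(y)), min(abs(x), abs(y))
    r = np.sign(x) if abs(x) >= abs(y) else np.sign(y)
    if rng.uniform() < (small / big) ** alpha:
        return big * r
    return big * r * rng.choice([-1.0, 1.0]) * rng.uniform() ** (-1.0 / (2 * alpha))

rng = np.random.default_rng(3)
alpha, n_runs = 0.5, 100_000
taus, hits = np.empty(n_runs), np.empty(n_runs)
for i in range(n_runs):
    x, k = rng.choice([-1.0, 1.0]), 1          # X_1 = Y_1, nu = delta~_1
    while x <= 0:                              # run until the walk becomes positive
        x = kendall_step(x, rng.choice([-1.0, 1.0]), alpha, rng)
        k += 1
    taus[i], hits[i] = k, x
print([(taus == k).mean() for k in (1, 2, 3)])        # Lemma 3.3: approx 1/2, 1/4, 1/8
t = 3.0
G = 1.0 - t ** -alpha                                 # G(t) for delta~_1, t >= 1
print((hits < t).mean(), (4.0 - 2.0 - G**2) / (2.0 - G) ** 2)   # Lemma 3.4 with F(t) = 1
```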
Definition 3.2. Let $\{X_n : n \in \mathbb{N}_0\}$ be the Kendall random walk, independent of the random variable $N_{s/2}$ with the geometric distribution $\mathbf{P}\{N_{s/2} = k\} = \bigl(\frac{s}{2}\bigr)^{k-1} \bigl( 1 - \frac{s}{2} \bigr)$, $0 \leq s \leq 1$, for $k = 1, 2, \ldots$. By the geometric-Kendall random variable we understand
$$ Z_{s/2} = \sum_{k=1}^{\infty} X_k\, \mathbf{1}_{\{N_{s/2} = k\}}. $$

Notice that $\mathbf{E}\, \Psi(X_k/t) = G(t)^k$ by Proposition 2.2; thus we have
$$ \mathbf{E}\, \Psi(Z_{s/2}/t) = \sum_{k=1}^{\infty} \mathbf{E}\, \Psi(X_k/t) \Bigl( \frac{s}{2} \Bigr)^{k-1} \Bigl( 1 - \frac{s}{2} \Bigr) = \sum_{k=1}^{\infty} G(t)^k \Bigl( \frac{s}{2} \Bigr)^{k-1} \Bigl( 1 - \frac{s}{2} \Bigr) = \frac{G(t) \bigl( 1 - \frac{s}{2} \bigr)}{1 - \frac{s}{2}\, G(t)}. $$

We are now ready to prove the main result of the paper: the Wiener-Hopf factorization for the Kendall random walk. It gives the Laplace-Williamson transform $\mathbf{E}\, s^{\tau_0^{+}} \Psi(u X_{\tau_0^{+}})$ and thus, consequently, also the joint distribution of $(\tau_0^{+}, X_{\tau_0^{+}})$.

Theorem 3.5. Let $\{X_n : n \in \mathbb{N}_0\}$ be a random walk under the Kendall convolution with unit step $X_1 \sim F$ such that $F(0) = \frac{1}{2}$. Then
$$ \mathbf{E}\, s^{\tau_0^{+}} \Psi \bigl( u X_{\tau_0^{+}} \bigr) = \frac{\frac{s}{2}}{1 - \frac{s}{2}} \cdot \frac{\bigl( 1 - \frac{s}{2} \bigr) G(1/u)}{1 - \frac{s}{2}\, G(1/u)} = \mathbf{E}\, s^{\tau_0^{+}}\, \mathbf{E}\, \Psi \bigl( u Z_{s/2} \bigr). $$

Proof. Let $H(s,u) := \mathbf{E}\, s^{\tau_0^{+}} \Psi(u X_{\tau_0^{+}})$. Then
$$ H(s,u) = \sum_{k=1}^{\infty} s^k \int_0^{\infty} \Psi(ux)\, d\bigl( \Phi_k(x) \bigr)_x = \int_0^{\infty} \Psi(ux)\, d\Bigl( \sum_{k=1}^{\infty} s^k \Phi_k(x) \Bigr)_x $$
$$ = \int_0^{\infty} \Psi(ux)\, d\Bigl( \sum_{k=1}^{\infty} \Bigl[ \frac{s\, G(x)}{2} \Bigr]^k \frac{1}{G(x)} \Bigl[ 2k \Bigl( F(x) - \frac{1}{2} \Bigr) - (k-1)\, G(x) \Bigr] \Bigr)_x $$
$$ = \int_0^{\infty} \Psi(ux)\, d\Bigl( \frac{s \bigl( F(x) - \frac{1}{2} \bigr) - \frac{1}{4} s^2 G(x)^2}{\bigl( 1 - \frac{1}{2} s\, G(x) \bigr)^2} \Bigr)_x = \alpha u^{\alpha} \int_0^{1/u} x^{\alpha-1}\, \frac{s \bigl( F(x) - \frac{1}{2} \bigr) - \frac{1}{4} s^2 G(x)^2}{\bigl( 1 - \frac{1}{2} s\, G(x) \bigr)^2}\, dx. $$
In these calculations we write $d(f(s,x))_x$ for the differential of the function $f(s,x)$ with respect to $x$. To get the last expression we need to know that $G(0) = 0$. Now we calculate $G'(x)$. Since
$$ G(x) = \int_{\mathbb{R}} \Psi \Bigl( \frac{z}{x} \Bigr)\, \nu(dz) = 2 \int_0^x \Bigl( 1 - \Bigl| \frac{z}{x} \Bigr|^{\alpha} \Bigr)\, \nu(dz), $$
then, integrating by parts, we obtain
$$ \bigl( G(x) + 1 \bigr) x^{\alpha} = 2\alpha \int_0^x z^{\alpha-1} F(z)\, dz. $$
Now it is enough to differentiate both sides of this equality to obtain
$$ G'(x) = 2\alpha x^{-1} \Bigl( F(x) - \frac{1}{2} - \frac{1}{2}\, G(x) \Bigr). $$
We see that
$$ H(s,u) = u^{\alpha} \int_0^{1/u} \Bigl[ \frac{x^{\alpha}\, \frac{s}{2}\, G(x)}{1 - \frac{s}{2}\, G(x)} \Bigr]'\, dx = \frac{\frac{s}{2}\, G(1/u)}{1 - \frac{s}{2}\, G(1/u)}. $$ ♣

The same method as in the proof of the last theorem is used in the next lemma.

Lemma 3.6. $\mathbf{E}\, X_{\tau_0^{+}}^{\alpha} = 2\, \mathbf{E}\, |Y_1|^{\alpha}$.

Proof. In the following calculations we take $R > 0$:
$$ \mathbf{E}\, X_{\tau_0^{+}}^{\alpha} = \int_0^{\infty} x^{\alpha}\, d\Bigl( \frac{F(x) - \frac{1}{2} - \frac{1}{4} G(x)^2}{\bigl( 1 - \frac{1}{2} G(x) \bigr)^2} \Bigr)_x = \lim_{R\to\infty} \Bigl[ x^{\alpha}\, \frac{F(x) - \frac{1}{2} - \frac{1}{4} G(x)^2}{\bigl( 1 - \frac{1}{2} G(x) \bigr)^2} \Big|_0^R - \int_0^R \Bigl[ \frac{x^{\alpha}\, \frac{1}{2} G(x)}{1 - \frac{1}{2} G(x)} \Bigr]'\, dx \Bigr] $$
$$ = \lim_{R\to\infty} \Bigl[ x^{\alpha}\, \frac{F(x) - \frac{1}{2} - \frac{1}{2} G(x)}{\bigl( 1 - \frac{1}{2} G(x) \bigr)^2} \Big|_0^R \Bigr] = \lim_{R\to\infty} \Bigl[ \frac{x^{\alpha}}{\bigl( 1 - \frac{1}{2} G(x) \bigr)^2} \Bigl[ \int_0^x \nu(du) - \int_0^x \Bigl( 1 - \Bigl( \frac{u}{x} \Bigr)^{\alpha} \Bigr)\, \nu(du) \Bigr] \Big|_0^R \Bigr] $$
$$ = \lim_{R\to\infty} \Bigl[ \frac{1}{\bigl( 1 - \frac{1}{2} G(x) \bigr)^2} \int_0^x u^{\alpha}\, \nu(du) \Big|_0^R \Bigr] = 4 \int_0^{\infty} u^{\alpha}\, \nu(du), $$
which was to be shown. Notice that the assumption $\mathbf{E}\, |Y_1|^{\alpha} < \infty$ is not needed in these calculations. ♣
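A Monte Carlo check of Theorem 3.5 and Lemma 3.6, again with $\nu = \widetilde{\delta}_1$ (our illustration, not part of the paper): here $G(1/u) = 1 - u^{\alpha}$ for $0 < u \leq 1$ and $\mathbf{E}|Y_1|^{\alpha} = 1$, so both right-hand sides are explicit. Note that $X_{\tau_0^{+}}^{\alpha}$ is heavy-tailed (its variance is infinite), so the moment estimate converges slowly.

```python
import numpy as np

def kendall_step(x, y, alpha, rng):
    # One scalar transition of the Kendall random walk (Definition 1.1).
    big, small = max(abs(x), abs(y)), min(abs(x), abs(y))
    r = np.sign(x) if abs(x) >= abs(y) else np.sign(y)
    if rng.uniform() < (small / big) ** alpha:
        return big * r
    return big * r * rng.choice([-1.0, 1.0]) * rng.uniform() ** (-1.0 / (2 * alpha))

rng = np.random.default_rng(4)
alpha, s, u, n_runs = 0.5, 0.8, 0.4, 100_000
psi = lambda t: max(1.0 - abs(t) ** alpha, 0.0)
wh_sum, moment_sum = 0.0, 0.0
for _ in range(n_runs):
    x, k = rng.choice([-1.0, 1.0]), 1                  # nu = delta~_1
    while x <= 0:                                      # wait for tau_0^+
        x = kendall_step(x, rng.choice([-1.0, 1.0]), alpha, rng)
        k += 1
    wh_sum += s ** k * psi(u * x)                      # s^tau * Psi(u X_tau)
    moment_sum += x ** alpha                           # X_tau^alpha (heavy-tailed)
G = 1.0 - u ** alpha                                   # G(1/u) for delta~_1, u <= 1
print(wh_sum / n_runs, (s / 2) * G / (1 - (s / 2) * G))   # Theorem 3.5, both sides
print(moment_sum / n_runs)                                # Lemma 3.6: approx 2 E|Y_1|^alpha = 2
```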
References

[1] Borowiecka-Olszewska, M., Jasiulis-Gołdyn, B.H., Misiewicz, J.K., Rosiński, J. (2014). Lévy processes and stochastic integrals in the sense of generalized convolutions. Accepted for publication in Bernoulli, http://arxiv.org/abs/1312.4083.
[2] Jasiulis, B.H. (2010). Limit property for regular and weak generalized convolutions. Journ. of Theor. Probab., 23(1), 315–327.
[3] Jasiulis-Gołdyn, B.H. (2014). Kendall random walks. Submitted, http://arxiv.org/abs/1412.0220.
[4] Jasiulis-Gołdyn, B.H., Misiewicz, J.K. (2011). On the uniqueness of the Kendall generalized convolution. Journ. of Theor. Probab., 24(3), 746–755.
[5] Jasiulis-Gołdyn, B.H., Misiewicz, J.K. (2014). Weak Lévy-Khintchine representation for weak infinite divisibility. Accepted for publication in Theory of Probability and Its Applications, http://arxiv.org/abs/1407.4097.
[6] Kucharczak, J., Urbanik, K. (1986). Transformations preserving weak stability. Bull. Polish Acad. Sci. Math., 34(7-8), 475–486.
[7] Misiewicz, J.K. (2006). Weak stability and generalized weak convolution for random vectors and stochastic processes. IMS Lecture Notes - Monograph Series, Dynamics & Stochastics, 48, 109–118.
[8] Urbanik, K. (1993). Anti-irreducible probability measures. Probab. and Math. Statist., 14(1), 89–113.
