Estimation of ordinal pattern probabilities in fractional Brownian motion

Mathieu Sinn, Karsten Keller
Institute of Mathematics, University of Lübeck

February 2, 2008

Abstract

For equidistant discretizations of fractional Brownian motion (fBm), the probabilities of ordinal patterns of order $d = 2$ are monotonically related to the Hurst parameter $H$. By plugging the sample relative frequency of those patterns indicating changes between up and down into the monotonic relation to $H$, one obtains the Zero Crossing (ZC) estimator of the Hurst parameter, which has found considerable attention in mathematical and applied research.

In this paper, we generally discuss the estimation of ordinal pattern probabilities in fBm. As it turns out, according to the sufficiency principle, for ordinal patterns of order $d = 2$ any reasonable estimator is an affine functional of the sample relative frequency of changes. We establish strong consistency of the estimators and show them to be asymptotically normal for $H < \tfrac{3}{4}$. Further, we derive confidence intervals for the Hurst parameter. Simulation studies show that the ZC estimator has larger variance but less bias than the HEAF estimator of the Hurst parameter.

Keywords: Ordinal pattern, fractional Brownian motion, estimation, Hurst parameter.

1 Introduction

Probabilities of ordinal patterns in equidistant discretizations of fractional Brownian motion (fBm) have first been analyzed by Bandt and Shiha [3]. Ordinal patterns represent the rank order of successive equidistant values of a discrete time series. For example, for three successive values there are six different possible outcomes of the rank order, which we call ordinal patterns of order $d = 2$.

As Bandt and Shiha have shown, for equidistant discretizations of fBm the distribution of ordinal patterns is stationary and does not depend on the particular sampling interval length. Further, the probabilities of ordinal patterns of order $d = 2$ are all monotonically related to the Hurst parameter, with the probability of ordinal patterns indicating changes from up to down and from down to up, respectively, being strictly monotonically decreasing in $H$.

The estimator of the Hurst parameter obtained by plugging the sample relative frequency of changes between up and down into the monotonic functional relation to $H$ has been known for some time, running under the label 'Zero Crossing' (ZC) estimator, since changes between up and down correspond to zero crossings of the first order differences. Note that, more generally but not focussing on fBm, Kedem [11] has considered the estimation of parts of the autocorrelation structure of a stationary Gaussian process by counting (higher order) zero crossings, that is, zero crossings of the first (or higher) order differences. The ZC estimator is asymptotically normally distributed for $H < \tfrac{3}{4}$, a result essentially due to Ho and Sun [10], who proved that the sample relative frequency of changes is asymptotically normally distributed in this case. Coeurjolly [6] has reviewed properties of the ZC estimator including strong consistency. In the sequel, the applicability of the ZC estimator has been examined by Marković and Koch [17] and Shi et al. [20], including simulation studies as well as the application to hydrological and meteorological data.
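For orientation, the following is a minimal sketch of the ZC idea in Python. It counts sign changes of the first-order differences and inverts the relation between $H$ and the probability of a change stated in Corollary 11 below; the function names are our own, and the snippet is only illustrative, not the authors' implementation.

```python
import numpy as np

def change_probability(H):
    # P(two consecutive increments of standard fBm have opposite signs),
    # i.e. 1 - (2/pi) * arcsin(2^(H-1)); see Corollary 11 below
    return 1.0 - (2.0 / np.pi) * np.arcsin(2.0 ** (H - 1.0))

def zc_estimate(x):
    # Zero Crossing estimate of H from a sample path x: the sample relative
    # frequency of changes, plugged into the inverse of the relation above
    y = np.diff(np.asarray(x, dtype=float))                 # first-order differences
    q_hat = np.mean(np.signbit(y[1:]) != np.signbit(y[:-1]))  # frequency of changes
    return 1.0 + np.log2(np.sin(np.pi * (1.0 - q_hat) / 2.0))
```

For ordinary Brownian motion ($H = \tfrac{1}{2}$) the change probability is exactly $\tfrac{1}{2}$, and the estimator returns $\tfrac{1}{2}$ whenever exactly half of the successive differences change sign.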
The distribution of ordinal patterns is the base of ordinal time series analysis, a new fast, robust and flexible approach to the investigation of large and complex time series (see Bandt [2] and Keller et al. [13, 12]). From the viewpoint of ordinal time series analysis, the estimation of ordinal pattern probabilities in stochastic processes is of special interest.

This paper is structured as follows: Sec. 2 is devoted to a general discussion of the estimation of ordinal pattern probabilities in fBm, where we consider estimators based on counting the occurrence of ordinal patterns in realizations. Obviously, the sample relative frequency of ordinal patterns is an unbiased estimate of the probability of the patterns. As Theorem 4 shows, by averaging the sample relative frequencies of ordinal patterns and of their 'time' and 'spatial' reversals, one obtains estimates which are strictly more concentrated in convex order. Hence, we restrict our following considerations to such 'reasonable' estimators. Notice that Theorem 4 has important consequences for the estimation of functionals of ordinal pattern distributions such as the permutation entropy (see [2], [12]). For ordinal patterns of order $d = 2$ one obtains two different estimators which can both be expressed as affine functionals of the sample relative frequency of changes. By Lemma 5 we establish strong consistency and asymptotic unbiasedness for the estimators of bounded continuous functionals of ordinal pattern probabilities. Theorem 8 states that estimators of ordinal pattern probabilities are asymptotically normal for $H < \tfrac{3}{4}$.

In Sec. 3, we consider the estimation of the probability of a change by the sample relative frequency of changes in some detail. We give a formula for the precise numerical evaluation of the variance of the sample relative frequency of changes as well as asymptotically equivalent expressions. Based on these results, confidence intervals for the Hurst parameter are provided in Sec. 4.

In Sec. 5, we compare the previous results to findings for simulations of fBm. As it turns out, the confidence intervals obtained from the ZC estimator in Sec. 4 cover the true unknown value of the Hurst parameter in about 95 per cent of the cases, even for small sample sizes and for values of the Hurst parameter larger than $\tfrac{3}{4}$, where asymptotic normality of the ZC estimator is not necessarily expected to hold. Compared to the HEAF estimator, which estimates the Hurst parameter by plugging the sample autocovariance into a monotonic functional relation to $H$, the ZC estimator has larger variance but much less bias, in particular for small sample sizes.

Notice that the results given in this work for the increments of fBm similarly apply to FARIMA(0,d,0) processes. Furthermore, the statistical properties of the estimates of ordinal pattern probabilities do not only apply to fBm, but also to monotonic transformations of fBm. In particular, the estimates of ordinal pattern probabilities are invariant with respect to (unknown) non-linear monotonic transformations of processes.

2 Estimating ordinal pattern probabilities

2.1 General aspects

Let $(\Omega, \mathcal{A})$ be a measurable space equipped with a family $(P_\vartheta)_{\vartheta \in \Theta}$ of probability measures with $\Theta$ non-empty. The subscript $\vartheta$ indicates that a corresponding quantity ($E_\vartheta$, $\mathrm{Var}_\vartheta$, etc.) is taken with respect to $P_\vartheta$. For the special case of fractional Brownian motion where $\Theta = \,]0,1]$, we write $H$ instead of $\vartheta$. Let $(X_k)_{k \in \mathbb{N}_0}$ be a given real-valued stochastic process defined on $(\Omega, \mathcal{A})$.
Define the process of increments $(Y_k)_{k \in \mathbb{N}}$ by $Y_k := X_k - X_{k-1}$ for $k \in \mathbb{N}$. For $d \in \mathbb{N}$ let $S_d$ denote the set of permutations of $\{0, 1, \ldots, d\}$.

Definition 1. For $d \in \mathbb{N}$ let the mapping $\pi : \mathbb{R}^{d+1} \to S_d$ be defined by
$$\pi((x_0, x_1, \ldots, x_d)) = \begin{pmatrix} 0 & 1 & 2 & \ldots & d \\ r_0 & r_1 & r_2 & \ldots & r_d \end{pmatrix} =: (r_0, r_1, \ldots, r_d)$$
for $(x_0, x_1, \ldots, x_d) \in \mathbb{R}^{d+1}$ with the permutation $(r_0, r_1, \ldots, r_d)$ of $\{0, 1, \ldots, d\}$ satisfying
$$x_{d-r_0} \geq x_{d-r_1} \geq \ldots \geq x_{d-r_d},$$
and $r_{l-1} > r_l$ if $x_{d-r_{l-1}} = x_{d-r_l}$ for $l = 1, 2, \ldots, d$. For $k \in \mathbb{N}_0$ we call
$$\Pi_d(k) := \pi((X_k, X_{k+1}, \ldots, X_{k+d}))$$
the (random) ordinal pattern of order $d$ at time $k$. $\Box$

The permutation $\pi((x_0, x_1, \ldots, x_d))$ describes the rank order of the values $x_0, x_1, \ldots, x_d$, where in case $x_{d-i} = x_{d-j}$ for $0 \leq i < j \leq d$ the 'earlier' $x_{d-j}$ is ranked higher than $x_{d-i}$. Notice that if $(X_k)_{k \in \mathbb{N}_0}$ is pairwise distinct, that is, $X_i \neq X_j$ $P_\vartheta$-a.s. for all $i, j \in \mathbb{N}_0$ with $i \neq j$ and $\vartheta \in \Theta$, then, apart from sets with probability zero for all $\vartheta \in \Theta$, $\Pi_d(k)$ generates the same $\sigma$-algebra as the rank vector $(R_0, R_1, \ldots, R_d)$ of $(X_k, X_{k+1}, \ldots, X_{k+d})$ given by $R_j = \sum_{i=0}^{d} 1_{\{X_{k+j} \geq X_{k+i}\}}$ for $j = 0, 1, \ldots, d$ (see [15], p. 286).

Stationarity. For $\vartheta \in \Theta$ let $\stackrel{\vartheta}{=}$ denote equality in distribution with respect to $P_\vartheta$. A stochastic process $(Z_k)_{k \in T}$ defined on $(\Omega, \mathcal{A})$ for $T = \mathbb{N}$ or $T = \mathbb{N}_0 := \mathbb{N} \cup \{0\}$ is called stationary iff with respect to each $\vartheta \in \Theta$
$$(Z_{k_1}, Z_{k_2}, \ldots, Z_{k_n}) \stackrel{\vartheta}{=} (Z_{k_1+l}, Z_{k_2+l}, \ldots, Z_{k_n+l})$$
for all $k_1, k_2, \ldots, k_n \in T$ with $n \in \mathbb{N}$ and for all $l \in \mathbb{N}$.

In fact, $\Pi_d(k)$ only depends on $(Y_{k+1}, Y_{k+2}, \ldots, Y_{k+d})$ for all $d \in \mathbb{N}$ and $k \in \mathbb{N}_0$. In particular, let
$$\tilde{\pi}((y_1, y_2, \ldots, y_d)) = (r_0, r_1, \ldots, r_d)$$
be the unique permutation of $\{0, 1, \ldots, d\}$ for $(y_1, y_2, \ldots, y_d) \in \mathbb{R}^d$ such that
$$\sum_{j=1}^{d-r_0} y_j \;\geq\; \sum_{j=1}^{d-r_1} y_j \;\geq\; \ldots \;\geq\; \sum_{j=1}^{d-r_{d-1}} y_j \;\geq\; \sum_{j=1}^{d-r_d} y_j, \qquad (1)$$
and $r_{l-1} > r_l$ if $\sum_{j=1}^{d-r_{l-1}} y_j = \sum_{j=1}^{d-r_l} y_j$ for $l = 1, 2, \ldots, d$. Obviously,
$$\pi((x_0, x_1, \ldots, x_d)) = \tilde{\pi}((x_1 - x_0, x_2 - x_1, \ldots, x_d - x_{d-1}))$$
for all $(x_0, x_1, \ldots, x_d) \in \mathbb{R}^{d+1}$, and hence $\Pi_d(k) = \tilde{\pi}((Y_{k+1}, Y_{k+2}, \ldots, Y_{k+d}))$ for all $k \in \mathbb{N}_0$. This immediately yields the following statement.

Corollary 2. If $(Y_k)_{k \in \mathbb{N}}$ is stationary then $(\Pi_d(k))_{k \in \mathbb{N}_0}$ is stationary.

Space and time symmetry. For $d \in \mathbb{N}$ let the mappings $\alpha, \beta$ from $S_d$ onto $S_d$ be defined by
$$\alpha(r) := (r_d, r_{d-1}, \ldots, r_0), \qquad \beta(r) := (d - r_0, d - r_1, \ldots, d - r_d) \qquad (2)$$
for $r = (r_0, r_1, \ldots, r_d) \in S_d$. Geometrically, $\alpha(r)$ and $\beta(r)$ can be seen as the spatial and time reversal of $r$, respectively (see Figure 1). Let the set $\bar{r}$ be defined by
$$\bar{r} := \{r, \alpha(r), \beta(r), \beta \circ \alpha(r)\} \qquad (3)$$
with $\circ$ denoting the usual composition of functions. As $\alpha \circ \beta(r) = \beta \circ \alpha(r)$ and $\alpha \circ \alpha(r) = \beta \circ \beta(r) = r$, one has $\overline{\alpha(r)} = \overline{\beta(r)} = \bar{r}$. Clearly, if $s \in \bar{r}$ for $r, s \in S_d$, then $\bar{s} = \bar{r}$. This provides a division of each $S_d$ into classes, consisting of 2 or 4 elements. For $d = 1$ the only class is $S_1 = \{(0,1), (1,0)\}$, for $d = 2$ there are the two classes $\{(0,1,2), (2,1,0)\}$ and $\{(0,2,1), (2,0,1), (1,2,0), (1,0,2)\}$, and for $d = 3$ there are 8 classes. Note that for $d \geq 3$ classes of both 2 and 4 elements are possible.

[Figure 1: The ordinal pattern $r = (0,2,1)$, its spatial reversal $\alpha(r) = (1,2,0)$, its time reversal $\beta(r) = (2,0,1)$, and its spatial and time reversal $\beta \circ \alpha(r) = (1,0,2)$.]
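To make Definition 1 and the reversal maps in (2) concrete, here is a small Python sketch (our own illustrative code, not part of the paper): `ordinal_pattern` returns the permutation $(r_0, \ldots, r_d)$ with the tie-breaking rule of Definition 1, and `alpha`/`beta` are the spatial and time reversals.

```python
def ordinal_pattern(values):
    """Ordinal pattern pi((x_0, ..., x_d)) of order d = len(values) - 1.

    Returns the permutation (r_0, ..., r_d) of {0, ..., d} with
    x_{d-r_0} >= x_{d-r_1} >= ... >= x_{d-r_d}; among equal values the
    earlier one is ranked higher (Definition 1)."""
    d = len(values) - 1
    # sort the time indices by value, largest first; among ties the earlier
    # index comes first, which yields r_{l-1} > r_l exactly as required
    order = sorted(range(d + 1), key=lambda j: (-values[j], j))
    return tuple(d - j for j in order)

def alpha(r):
    """Spatial reversal alpha(r) = (r_d, ..., r_0)."""
    return tuple(reversed(r))

def beta(r):
    """Time reversal beta(r) = (d - r_0, ..., d - r_d)."""
    d = len(r) - 1
    return tuple(d - ri for ri in r)

# Example matching Figure 1: the pattern (0, 2, 1) and its reversals
r = ordinal_pattern([1.0, 0.0, 2.0])         # -> (0, 2, 1)
print(r, alpha(r), beta(r), beta(alpha(r)))  # (0,2,1) (1,2,0) (2,0,1) (1,0,2)
```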
For $d \in \mathbb{N}$ and $n \in \mathbb{N}$ let $(S_d)^n$ denote the $n$-fold Cartesian product of $S_d$. Define the mappings $A, B$ from $(S_d)^n$ onto $(S_d)^n$ by
$$A((r^{(1)}, r^{(2)}, \ldots, r^{(n)})) := (\alpha(r^{(1)}), \alpha(r^{(2)}), \ldots, \alpha(r^{(n)})),$$
$$B((r^{(1)}, r^{(2)}, \ldots, r^{(n)})) := (\beta(r^{(n)}), \beta(r^{(n-1)}), \ldots, \beta(r^{(1)}))$$
for $(r^{(1)}, r^{(2)}, \ldots, r^{(n)}) \in (S_d)^n$. Further, let $\mathrm{id}$ denote the identity map on $(S_d)^n$. Note that $\{\mathrm{id}, A, B, B \circ A\}$ together with $\circ$ forms an Abelian group.

We say that $(Y_k)_{k \in \mathbb{N}}$ is symmetric in space and time iff with respect to each $\vartheta \in \Theta$
$$(Y_{k_1}, Y_{k_2}, \ldots, Y_{k_n}) \stackrel{\vartheta}{=} (-Y_{k_1}, -Y_{k_2}, \ldots, -Y_{k_n})$$
and
$$(Y_{k_1}, Y_{k_2}, \ldots, Y_{k_n}) \stackrel{\vartheta}{=} (Y_{k_n}, Y_{k_{n-1}}, \ldots, Y_{k_1})$$
for all $k_1, k_2, \ldots, k_n \in \mathbb{N}$ with $n \in \mathbb{N}$.

Lemma 3. If $(X_k)_{k \in \mathbb{N}_0}$ is pairwise distinct and $(Y_k)_{k \in \mathbb{N}}$ is symmetric in space and time, then the distribution of $\Pi = (\Pi_d(0), \Pi_d(1), \ldots, \Pi_d(n-1))$ is invariant under the group of transformations $\{\mathrm{id}, A, B, B \circ A\}$ on $(S_d)^n$ with respect to $P_\vartheta$ for each $\vartheta \in \Theta$, i.e.
$$\Pi \stackrel{\vartheta}{=} A(\Pi) \stackrel{\vartheta}{=} B(\Pi) \stackrel{\vartheta}{=} B \circ A(\Pi).$$

Proof. If $(X_k)_{k \in \mathbb{N}_0}$ is pairwise distinct then, according to the definition of $\tilde{\pi}$,
$$\big( \tilde{\pi}((Y_1, Y_2, \ldots, Y_d)), \ldots, \tilde{\pi}((Y_n, Y_{n+1}, \ldots, Y_{d+n-1})) \big)$$
$$= A\big( \big( \tilde{\pi}((-Y_1, -Y_2, \ldots, -Y_d)), \ldots, \tilde{\pi}((-Y_n, -Y_{n+1}, \ldots, -Y_{d+n-1})) \big) \big)$$
$$= B\big( \big( \tilde{\pi}((Y_{d+n-1}, \ldots, Y_{n+1}, Y_n)), \ldots, \tilde{\pi}((Y_d, \ldots, Y_2, Y_1)) \big) \big)$$
$$= B \circ A\big( \big( \tilde{\pi}((-Y_{d+n-1}, \ldots, -Y_{n+1}, -Y_n)), \ldots, \tilde{\pi}((-Y_d, \ldots, -Y_2, -Y_1)) \big) \big)$$
$P_\vartheta$-a.s. with respect to each $\vartheta \in \Theta$ (see (1)). Further, symmetry in space and time of $(Y_k)_{k \in \mathbb{N}}$ implies that $(Y_1, Y_2, \ldots, Y_{d+n-1})$, $(-Y_1, -Y_2, \ldots, -Y_{d+n-1})$, $(Y_{d+n-1}, \ldots, Y_2, Y_1)$ and $(-Y_{d+n-1}, \ldots, -Y_2, -Y_1)$ have the same distribution with respect to $P_\vartheta$ for each $\vartheta \in \Theta$, and hence the statement follows.

A Rao-Blackwellization. For fixed $r \in S_d$ with $d \in \mathbb{N}$ consider the functional $p_r(\cdot)$ defined by $p_r(\vartheta) := P_\vartheta(\Pi_d(0) = r)$ for $\vartheta \in \Theta$. Obviously, if $(Y_k)_{k \in \mathbb{N}}$ is stationary then the statistic
$$\hat{p}_{r,n} = \hat{p}_{r,n}(\Pi) := \frac{1}{n} \sum_{k=0}^{n-1} 1_{\{\Pi_d(k) = r\}} \qquad (4)$$
of $\Pi = (\Pi_d(0), \Pi_d(1), \ldots, \Pi_d(n-1))$ is an unbiased estimate of $p_r(\cdot)$, that is, $E_\vartheta(\hat{p}_{r,n}) = p_r(\vartheta)$ for all $\vartheta \in \Theta$ and $n \in \mathbb{N}$. If, additionally, $(X_k)_{k \in \mathbb{N}_0}$ is pairwise distinct and $(Y_k)_{k \in \mathbb{N}}$ is symmetric in space and time then, according to Lemma 3, the statistic
$$\bar{p}_{r,n} = \bar{p}_{r,n}(\Pi) := \frac{1}{4} \Big( \hat{p}_{r,n}(\Pi) + \hat{p}_{r,n}(A(\Pi)) + \hat{p}_{r,n}(B(\Pi)) + \hat{p}_{r,n}(B \circ A(\Pi)) \Big) \qquad (5)$$
of $\Pi$ is a Rao-Blackwellization of $\hat{p}_{r,n}$ (see Theorem 3.2.1 in [18]). This proves the following Theorem.

Theorem 4. Let $r \in \bigcup_{d \in \mathbb{N}} S_d$. If $(X_k)_{k \in \mathbb{N}_0}$ is pairwise distinct and $(Y_k)_{k \in \mathbb{N}}$ is stationary and symmetric in space and time then the estimate $\bar{p}_{r,n}$ of $p_r(\cdot)$ is unbiased and more concentrated in convex order than $\hat{p}_{r,n}$, that is
$$E_\vartheta\big(\varphi(\bar{p}_{r,n}, p_r(\vartheta))\big) \leq E_\vartheta\big(\varphi(\hat{p}_{r,n}, p_r(\vartheta))\big) \qquad (6)$$
for all $\vartheta \in \Theta$ with respect to each function $\varphi : [0,1] \times [0,1] \to [0, \infty[$ with $\varphi(p,p) = 0$ and $\varphi(\cdot, p)$ being convex for every $p \in [0,1]$.

According to the strictness of the Jensen inequality for strictly convex functions, if $(X_k)_{k \in \mathbb{N}_0}$ is pairwise distinct and $(Y_k)_{k \in \mathbb{N}}$ is stationary and symmetric in space and time, one has strict inequality in (6) whenever $\varphi(\cdot, p)$ is strictly convex for every $p \in [0,1]$ and, additionally, $P_\vartheta(\bar{p}_{r,n} \neq \hat{p}_{r,n}) > 0$ (see [18], Theorem 3.2.1). Since $(\cdot - p)^2$ is strictly convex for every $p \in [0,1]$, in particular
$$\mathrm{Var}_\vartheta(\bar{p}_{r,n}) < \mathrm{Var}_\vartheta(\hat{p}_{r,n})$$
in this case. Note that for $r \in S_d$ with $d = 1$, the estimator $\bar{p}_{r,n}$ simply estimates the constant functional $p_r(\cdot) = \tfrac{1}{2}$.
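A minimal sketch of the symmetrized estimate (5), again our own illustration: because $\alpha$ and $\beta$ are involutions, $\bar{p}_{r,n}$ coincides with the average relative frequency over the class $\bar{r}$, which is what the snippet below computes for an observed pattern sequence.

```python
def symmetrized_frequency(patterns, r):
    """Estimate (5): average of the sample relative frequencies of the
    pattern r and of its spatial/time reversals, i.e. over the class of r.

    `patterns` is the observed sequence of ordinal patterns
    Pi_d(0), ..., Pi_d(n-1), each given as a tuple."""
    d = len(r) - 1
    cls = {tuple(r),                                # r itself
           tuple(reversed(r)),                      # spatial reversal alpha(r)
           tuple(d - ri for ri in r),               # time reversal beta(r)
           tuple(d - ri for ri in reversed(r))}     # beta(alpha(r))
    n = len(patterns)
    return sum(patterns.count(s) for s in cls) / (len(cls) * n)
```

Dividing by `len(cls)` handles both class sizes: for a class of two elements the average is taken over two relative frequencies, for a class of four elements over four, in accordance with (5).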
Ergodicity. Consider the measurable space $(\Omega', \mathcal{A}') := (\mathbb{R}^{\mathbb{N}}, \mathcal{B}(\mathbb{R}^{\mathbb{N}}))$ of infinite sequences of real numbers and let the mapping $T : \Omega' \to \Omega'$ be defined by $T(\omega') = (\omega'_2, \omega'_3, \ldots)$ for $\omega' = (\omega'_1, \omega'_2, \ldots) \in \Omega'$. The process $(Y_k)_{k \in \mathbb{N}}$ is called ergodic iff $(Y_k)_{k \in \mathbb{N}}$ is stationary and, additionally, for every $A \in \mathcal{A}'$ such that $T^{-1}(A) = A$ one has $P_\vartheta((Y_k)_{k \in \mathbb{N}} \in A) = 0$ or $P_\vartheta((Y_k)_{k \in \mathbb{N}} \in A) = 1$ for each $\vartheta \in \Theta$.

As the next Lemma shows, if $(Y_k)_{k \in \mathbb{N}}$ is ergodic then the estimators of continuous and bounded functionals of ordinal pattern probabilities are strongly consistent and asymptotically unbiased.

Lemma 5. Let $r \in \bigcup_{d \in \mathbb{N}} S_d$. If $(Y_k)_{k \in \mathbb{N}}$ is ergodic and $h : [0,1] \to \mathbb{R}$ is continuous then
$$\lim_{n \to \infty} h(\bar{p}_{r,n}) = h(p_r(\vartheta))$$
$P_\vartheta$-a.s. for all $\vartheta \in \Theta$. If $h$ is continuous and bounded, then with respect to each $\vartheta \in \Theta$
$$\lim_{n \to \infty} E_\vartheta\big(h(\bar{p}_{r,n})\big) = h(p_r(\vartheta)).$$

Proof. For fixed $\vartheta \in \Theta$ let the probability measure $\mu$ on $(\Omega', \mathcal{A}') = (\mathbb{R}^{\mathbb{N}}, \mathcal{B}(\mathbb{R}^{\mathbb{N}}))$ be defined by $\mu(A) := P_\vartheta((Y_k)_{k \in \mathbb{N}} \in A)$ for $A \in \mathcal{A}'$. Further, let the mapping $T : \Omega' \to \Omega'$ be given as above and define $f : \Omega' \to \mathbb{R}$ by
$$f(\omega') := \begin{cases} \frac{1}{\#\bar{r}} & \text{for } \tilde{\pi}((\omega'_1, \omega'_2, \ldots, \omega'_d)) \in \bar{r} \\ 0 & \text{else} \end{cases}$$
for $\omega' = (\omega'_1, \omega'_2, \ldots) \in \Omega'$. Obviously, $f$ is Borel-measurable and $\int_{\Omega'} |f| \, d\mu < \infty$, hence, according to Birkhoff's Theorem, if $(Y_k)_{k \in \mathbb{N}}$ is ergodic then
$$\lim_{n \to \infty} \bar{p}_{r,n} = \lim_{n \to \infty} \frac{1}{n} \sum_{j=0}^{n-1} f\big(T^j((Y_k)_{k \in \mathbb{N}})\big) = E_\vartheta\big(f((Y_k)_{k \in \mathbb{N}})\big) = p_r(\vartheta)$$
$P_\vartheta$-a.s. (see [7], Theorem 1.2.1), and the first statement follows since $h$ is continuous. If additionally $h$ is bounded then
$$\lim_{n \to \infty} E_\vartheta\big(h(\bar{p}_{r,n})\big) = E_\vartheta\big(\lim_{n \to \infty} h(\bar{p}_{r,n})\big) = h(p_r(\vartheta))$$
according to the dominated convergence Theorem.

2.2 Specialization to fBm

In the following we specialize the considerations of Subsection 2.1 to equidistant discretizations of fractional Brownian motion (fBm). We start with the definition of fBm.

Definition 6. For $H \in \,]0,1]$ let $(B(t))_{t \in [0,\infty[}$ be a mean-zero Gaussian process on a probability space $(\Omega, \mathcal{A}, P)$ with the covariance function
$$\mathrm{Cov}(B(t), B(s)) = \frac{1}{2} \big( t^{2H} + s^{2H} - |t - s|^{2H} \big) \, \mathrm{Var}(B(1))$$
for $s, t \in [0, \infty[$. Then $(B(t))_{t \in [0,\infty[}$ is called fractional Brownian motion (fBm) with the Hurst parameter $H$. $\Box$

It is well-known that each fBm possesses a modification with $P$-a.s. continuous paths (see [9]). We are only interested in the distribution of fBm, so let for the rest of the paper $(P_H)_{H \in \,]0,1]}$ be a family of probability measures on a measurable space $(\Omega, \mathcal{A})$, and $(B(t))_{t \in [0,\infty[}$ be a family of real-valued random variables on $(\Omega, \mathcal{A})$ such that $(B(t))_{t \in [0,\infty[}$ measured with respect to $P_H$ is fBm with the Hurst parameter $H$. E.g., $(B(t))_{t \in [0,\infty[}$ can be defined as the identity on the set of continuous functions on $[0, \infty[$.

For the sampling interval length $\delta > 0$ consider the equidistant discretization
$$(X_k^\delta)_{k \in \mathbb{N}_0} := (B(k\delta))_{k \in \mathbb{N}_0}$$
of fBm. Further, let $Y_k^\delta := X_k^\delta - X_{k-1}^\delta$ for $k \in \mathbb{N}$. Note that fBm is $H$-self-similar (see [9]), that is, for every $H \in \,]0,1]$ and for all $a \in \,]0, \infty[$ it holds
$$(B(at))_{t \in [0,\infty[} \stackrel{H}{=} (a^H B(t))_{t \in [0,\infty[}$$
with $\stackrel{H}{=}$ denoting equality of all finite-dimensional distributions with respect to $P_H$ here. Since it holds $\tilde{\pi}((y_1, y_2, \ldots, y_d)) = \tilde{\pi}((a^H y_1, a^H y_2, \ldots, a^H y_d))$ for all $a > 0$, $H \in \,]0,1]$ and $(y_1, y_2, \ldots, y_d) \in \mathbb{R}^d$, one has
$$\big( \tilde{\pi}((Y_{k+1}^\delta, \ldots, Y_{k+d}^\delta)) \big)_{k \in \mathbb{N}_0} \stackrel{H}{=} \big( \tilde{\pi}((Y_{k+1}^1, \ldots, Y_{k+d}^1)) \big)_{k \in \mathbb{N}_0}$$
with respect to each $H \in \,]0,1]$ for all $\delta > 0$. This means that the distribution of ordinal patterns for equidistant discretizations of fBm does not depend on the particular sampling interval length.
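For readers who want to experiment numerically, a simple (admittedly $O(n^3)$) way to sample such a discretization is to apply a Cholesky factorization to the covariance matrix from Definition 6. The following sketch, with our own function name and the convention $\mathrm{Var}(B(1)) = 1$, is only meant as a reference implementation, not the authors' simulation method.

```python
import numpy as np

def fbm_discretization(n, H, rng=None):
    """Sample (B(1), B(2), ..., B(n)) of fBm with Var(B(1)) = 1 at integer
    times, using a Cholesky factorization of the covariance in Definition 6.
    Intended for moderate n only; H = 1 is excluded (degenerate covariance)."""
    rng = np.random.default_rng(rng)
    t = np.arange(1, n + 1, dtype=float)
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

# Example: a path of length 1000 for H = 0.7 (prepend B(0) = 0 if needed)
x = fbm_discretization(1000, 0.7, rng=0)
```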
Therefore, in the following we only consider the discretization $(X_k)_{k \in \mathbb{N}_0} = (X_k^1)_{k \in \mathbb{N}_0}$ with the increment process $(Y_k)_{k \in \mathbb{N}} = (X_k^1 - X_{k-1}^1)_{k \in \mathbb{N}}$. Note that, similarly to the sampling interval length, one can show that the particular scaling $\mathrm{Var}_H(B(1))$ of fBm has no effect on the distribution of ordinal patterns for equidistant discretizations. Hence, we always assume the case of standard fBm where $\mathrm{Var}_H(B(1)) = 1$ for all $H \in \,]0,1]$.

According to Definition 6, $\mathrm{Var}_H(X_i - X_j) > 0$ for all $i, j \in \mathbb{N}_0$ with $i \neq j$ and $H \in \,]0,1]$, and since $X_i - X_j$ is Gaussian with respect to $P_H$ for all $H \in \,]0,1]$, the stochastic process $(X_k)_{k \in \mathbb{N}_0}$ is pairwise distinct. Further, obviously $(Y_k)_{k \in \mathbb{N}}$ is mean-zero Gaussian with respect to $P_H$ for each $H \in \,]0,1]$, and by Definition 6 one obtains
$$\rho_H(k) := \mathrm{Cov}_H(Y_1, Y_{1+k}) = \frac{1}{2} \big( |k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H} \big) \qquad (7)$$
for all $H \in \,]0,1]$ and $k \in \mathbb{N}_0$, that is, the stochastic process $(Y_k)_{k \in \mathbb{N}}$ is stationary. Furthermore, for all $n \in \mathbb{N}$ and $k_1, k_2, \ldots, k_n \in \mathbb{N}$ the random vectors $(Y_{k_1}, Y_{k_2}, \ldots, Y_{k_n})$, $(-Y_{k_1}, -Y_{k_2}, \ldots, -Y_{k_n})$ and $(Y_{k_n}, \ldots, Y_{k_2}, Y_{k_1})$ have the same covariance structure, hence $(Y_k)_{k \in \mathbb{N}}$ is symmetric in space and time. Consequently, the conclusion of Theorem 4 applies to the estimation of ordinal pattern probabilities for equidistant discretizations of fBm.

As the following Lemma shows, one has strict inequality in (6) whenever $\varphi(\cdot, p)$ is strictly convex for every $p \in [0,1]$ and $H \in \,]0,1[$ (compare to the remark after Theorem 4).

Lemma 7. If $H \in \,]0,1[$ then $P_H(\bar{p}_{r,n} \neq \hat{p}_{r,n}) > 0$ for all $r \in \bigcup_{d \in \mathbb{N}} S_d$ and $n \in \mathbb{N}$.

Proof. Let $r \in S_d$ with $d \in \mathbb{N}$ and $n \in \mathbb{N}$ be fixed. It is easy to construct a sequence of permutations $(r^{(1)}, r^{(2)}, \ldots, r^{(n)}) \in (S_d)^n$ such that the $(n+d-1)$-dimensional Lebesgue measure of
$$\bigcap_{k=1}^{n} \big\{ (y_1, y_2, \ldots, y_{n+d-1}) \in \mathbb{R}^{n+d-1} \;\big|\; \tilde{\pi}((y_k, y_{k+1}, \ldots, y_{k+d-1})) = r^{(k)} \big\}$$
is strictly positive and
$$\#\bar{r} \cdot \#\big\{ k \in \{1, 2, \ldots, n\} \;\big|\; r^{(k)} = r \big\} \;\neq\; \#\big\{ k \in \{1, 2, \ldots, n\} \;\big|\; r^{(k)} \in \bar{r} \big\}.$$
According to (7), if $H \in \,]0,1[$ then $(Y_1, Y_2, \ldots, Y_{n+d-1})$ is non-degenerate Gaussian with respect to $P_H$ and, consequently, $P_H\big((Y_1, Y_2, \ldots, Y_{n+d-1}) \in A\big) > 0$ for each Borel set $A \subset \mathbb{R}^{d+n-1}$ with strictly positive $(n+d-1)$-dimensional Lebesgue measure. Since $\bar{p}_{r,n} = \frac{1}{\#\bar{r}} \sum_{s \in \bar{r}} \hat{p}_{s,n}$ (see (4) and (5)), the statement follows.

Note that if $H = 1$ then $P_H(\bar{p}_{r,n} \neq \hat{p}_{r,n}) > 0$ if and only if $r \in \{(0,1,\ldots,d), (d,\ldots,1,0)\}$.

It is well-known that for each $H \in \,]0,1[$ the spectral distribution function of $(Y_k)_{k \in \mathbb{N}}$ is absolutely continuous (see [4]), which is a sufficient condition for $(Y_k)_{k \in \mathbb{N}}$ to be ergodic with respect to $P_H$ (see [7], Theorem 14.2.1). In the case $H = 1$ one has $X_k = kX_1$ $P_H$-a.s. for every $k \in \mathbb{N}_0$ (see [9]), hence
$$\bar{p}_{r,n} = p_r(H) = \begin{cases} 0 & \text{if } r \in \bigcup_{d \in \mathbb{N}} S_d \setminus \{(0,1,\ldots,d), (d,\ldots,1,0)\} \\ \frac{1}{2} & \text{if } r \in \{(0,1,\ldots,d), (d,\ldots,1,0)\} \end{cases} \qquad (8)$$
$P_H$-a.s. for all $n \in \mathbb{N}$. Consequently, the conclusions of Lemma 5 apply to the estimation of ordinal pattern probabilities for equidistant discretizations of fBm.
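The increment autocovariance (7) is easy to tabulate; the following small helper (our own code, not from the paper) can be used, for instance, to build the Toeplitz covariance matrix of $(Y_1, \ldots, Y_n)$ needed in simulations.

```python
import numpy as np

def rho(H, k):
    """Autocovariance rho_H(k) = Cov_H(Y_1, Y_{1+k}) of the increments of
    standard fBm, Eq. (7); rho(H, 0) = 1 for every H."""
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

# Toeplitz covariance matrix of (Y_1, ..., Y_n), e.g. for n = 5 and H = 0.7
n, H = 5, 0.7
cov = rho(H, np.subtract.outer(np.arange(n), np.arange(n)))
```

For $H = \tfrac{1}{2}$ the function returns $0$ for every lag $k \geq 1$, recovering the independent increments of ordinary Brownian motion.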
Asymptotic normality. Next, we discuss asymptotic normality of the estimators of ordinal pattern probabilities for equidistant discretizations of fBm. We restrict our consideration to the 'reasonable' estimators as provided by Theorem 4. Let $\stackrel{P_H}{\longrightarrow}$ denote convergence in distribution with respect to $P_H$, and $N(\mu, \sigma^2)$ be the normal distribution with mean $\mu \in \mathbb{R}$ and variance $\sigma^2 > 0$.

Given a mean-zero Gaussian vector $Z = (Z_1, Z_2, \ldots, Z_d)$ on a probability space $(\Omega', \mathcal{A}', P)$, and a Borel-measurable function $f : \mathbb{R}^d \to \mathbb{R}$ such that $\mathrm{Var}(f(Z)) < \infty$, define the rank of $f$ with respect to $Z$ by
$$\mathrm{rank}(f) := \min \big\{ \kappa \in \mathbb{N} : \text{there exists a real polynomial } q : \mathbb{R}^d \to \mathbb{R} \text{ of degree } \kappa \text{ with } E\big([f(Z) - E(f(Z))] \, q(Z)\big) \neq 0 \big\},$$
where the minimum of the empty set is infinity. We write $f(k) \sim g(k)$ for mappings $f, g$ from $\mathbb{N}_0$ to $\mathbb{R}$ and say that $f$ is asymptotically equivalent to $g$ iff $\lim_{k \to \infty} f(k)/g(k) = 1$, where $\frac{0}{0} := 1$. Note that for $\rho_H(k)$ as defined in (7), one has the asymptotic equivalence
$$\rho_H(k) \sim H(2H-1)\,k^{2H-2} \qquad (9)$$
for each $H \in \,]0,1]$ (see [9]). Further, we write $f(k) = O(g(k))$ iff $\sup_{k \in \mathbb{N}_0} |f(k)/g(k)| < \infty$, and $f(k) = o(g(k))$ iff $\lim_{k \to \infty} f(k)/g(k) = 0$.

Theorem 8. If $H < \tfrac{3}{4}$ then
$$\mathrm{Var}_H(\bar{p}_{r,n})^{-\frac{1}{2}} \big( \bar{p}_{r,n} - p_r(H) \big) \stackrel{P_H}{\longrightarrow} N(0,1)$$
for all $r \in S_d$ with $d \in \mathbb{N} \setminus \{1\}$.

Proof. Let $r \in S_d$ with $d \in \mathbb{N} \setminus \{1\}$ be fixed and define $f : \mathbb{R}^d \to \mathbb{R}$ by
$$f((y_1, y_2, \ldots, y_d)) := \begin{cases} 1 & \text{for } \tilde{\pi}((y_1, y_2, \ldots, y_d)) \in \bar{r} \\ 0 & \text{else} \end{cases}$$
for $(y_1, y_2, \ldots, y_d) \in \mathbb{R}^d$. Write $Y = (Y_1, Y_2, \ldots, Y_d)$. We first show that $\mathrm{rank}(f) > 1$ with respect to $Y$ and $P_H$ for all $H \in \,]0,1]$. Obviously, $f$ is Borel-measurable, and $\mathrm{Var}_H(f(Y)) < \infty$ for all $H \in \,]0,1]$. Now, let $i \in \{1, 2, \ldots, d\}$ be fixed. Because $(Y_k)_{k \in \mathbb{N}}$ is symmetric in space and time, the random vectors $(Y_1, Y_2, \ldots, Y_d)$ and $(-Y_1, -Y_2, \ldots, -Y_d)$ have the same distribution with respect to $P_H$ for all $H \in \,]0,1]$, and hence, according to the definition of $\alpha$ (see (2)),
$$E_H\big(1_{\{\tilde{\pi}(Y) = s\}} Y_i\big) + E_H\big(1_{\{\tilde{\pi}(Y) = \alpha(s)\}} Y_i\big) = E_H\big(1_{\{\tilde{\pi}(Y) = s\}} Y_i\big) - E_H\big(1_{\{\tilde{\pi}(-Y) = s\}} (-Y_i)\big) = 0 \qquad (10)$$
for each $s \in S_d$. Since $Y_i$ is mean-zero Gaussian with respect to $P_H$ for all $H \in \,]0,1]$, one has $E_H(f(Y)) \, E_H(Y_i) = 0$, and this together with (10) yields in case $\#\bar{r} = 2$
$$E_H\big([f(Y) - E_H(f(Y))] Y_i\big) = E_H\big(1_{\{\tilde{\pi}(Y) = r\}} Y_i\big) + E_H\big(1_{\{\tilde{\pi}(Y) = \alpha(r)\}} Y_i\big) = 0$$
with respect to each $H \in \,]0,1]$, and in case $\#\bar{r} = 4$
$$E_H\big([f(Y) - E_H(f(Y))] Y_i\big) = E_H\big(1_{\{\tilde{\pi}(Y) = r\}} Y_i\big) + E_H\big(1_{\{\tilde{\pi}(Y) = \alpha(r)\}} Y_i\big) + E_H\big(1_{\{\tilde{\pi}(Y) = \beta(r)\}} Y_i\big) + E_H\big(1_{\{\tilde{\pi}(Y) = \beta \circ \alpha(r)\}} Y_i\big) = 0$$
with respect to each $H \in \,]0,1]$. Consequently, $\mathrm{rank}(f) > 1$ with respect to $Y$ and $P_H$ for all $H \in \,]0,1]$. According to (9), $|\rho_H(k)|^{\mathrm{rank}(f)} = O\big(k^{4H-4}\big)$ for each $H \in \,]0,1]$, thus
$$\sum_{k=0}^{\infty} |\rho_H(k)|^{\mathrm{rank}(f)} < \infty$$
for $H < \tfrac{3}{4}$. Since, by definition, one has $\frac{1}{n} \sum_{k=0}^{n-1} f((Y_{k+1}, Y_{k+2}, \ldots, Y_{k+d})) = \#\bar{r}\, \bar{p}_{r,n}$ for every $n \in \mathbb{N}$, the statement follows from Theorem 4 of Arcones [1].

Note that Theorem 8 can also be proven by the Central Limit Theorem for non-instantaneous filters of a stationary Gaussian process given by Ho and Sun [10]. We leave it as an open question whether $H < \tfrac{3}{4}$ is also a necessary condition for $\bar{p}_{r,n}$ to be asymptotically normally distributed. Indeed, at least for $r \in S_d$ with $d = 2$, simulations suggest that $\bar{p}_{r,n}$ is not asymptotically normally distributed with respect to $P_H$ if $H \geq \tfrac{3}{4}$ (see Figure 3 below).
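Before turning to Section 3, here is a small Monte Carlo sketch of the behaviour described in Theorem 8 (our own illustrative code, with hypothetical parameter choices $H = 0.7$, path length 500, and 2000 replications): for $H < \tfrac{3}{4}$ the standardized frequencies of change patterns across independent paths should look approximately standard normal, which can be inspected further with a histogram or a normal quantile plot.

```python
import numpy as np

H, n, reps = 0.7, 500, 2000
k = np.arange(n + 1)
# Toeplitz covariance of the increments (Y_1, ..., Y_{n+1}) from Eq. (7)
r = 0.5 * ((k + 1.0) ** (2 * H) - 2.0 * k ** (2.0 * H) + np.abs(k - 1.0) ** (2 * H))
L = np.linalg.cholesky(r[np.abs(np.subtract.outer(k, k))])
Y = L @ np.random.default_rng(0).standard_normal((n + 1, reps))  # one path per column

# sample relative frequency of order-2 change patterns, per path
changes = np.signbit(Y[1:, :]) != np.signbit(Y[:-1, :])
freqs = changes.mean(axis=0)

z = (freqs - freqs.mean()) / freqs.std()
print(np.mean(np.abs(z) < 1.96))   # a value close to 0.95 is consistent with Theorem 8
```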
3 Estimating the probability of a change

3.1 Ordinal patterns of order d = 2

The results of this Section are mainly based on the analysis of normal orthant probabilities as given by the following definition.

Definition 9. Let $n \in \mathbb{N}$ be fixed. For a non-singular, symmetric and strictly positive definite matrix $\Sigma \in \mathbb{R}^{n \times n}$ let $\phi(\Sigma, \cdot)$ denote the Lebesgue density of the $n$-dimensional normal distribution with zero means and the covariance matrix $\Sigma$, that is
$$\phi(\Sigma, x) = \big( (2\pi)^n |\Sigma| \big)^{-\frac{1}{2}} \exp\big\{ -\tfrac{1}{2} x^T \Sigma^{-1} x \big\}$$
for $x \in \mathbb{R}^n$. We call
$$\Phi(\Sigma) := \int_{[0,\infty[^n} \phi(\Sigma, x) \, dx \qquad (11)$$
the $n$-dimensional normal orthant probability with respect to $\Sigma$. $\Box$

Clearly, if $(Z_1, Z_2, \ldots, Z_n)$ is a non-degenerate mean-zero Gaussian random vector on a probability space $(\Omega', \mathcal{A}', P)$, then
$$\Phi\big( (\mathrm{Cov}(Z_i, Z_j))_{i,j=1}^{n} \big) = P(Z_1 > 0, Z_2 > 0, \ldots, Z_n > 0).$$
The following result is well-known (see [19]).

Lemma 10. If $(Z_1, Z_2, \ldots, Z_n)$ is a non-degenerate mean-zero Gaussian random vector on a probability space $(\Omega', \mathcal{A}', P)$ such that $\mathrm{Var}(Z_k) > 0$ for all $k$, then
$$P(Z_1 > 0, Z_2 > 0) = \frac{1}{4} + \frac{1}{2\pi} \arcsin \rho_{12},$$
$$P(Z_1 > 0, Z_2 > 0, Z_3 > 0) = \frac{1}{8} + \frac{1}{4\pi} \arcsin \rho_{12} + \frac{1}{4\pi} \arcsin \rho_{13} + \frac{1}{4\pi} \arcsin \rho_{23},$$
where $\rho_{ij} = \mathrm{Corr}(Z_i, Z_j)$ for $i, j \in \{1, 2, 3\}$.

Note that, in general, no closed-form expressions are available for normal orthant probabilities of dimension $n \geq 4$.

Same as in Subsection 2.2, let $(X_k)_{k \in \mathbb{N}_0}$ be an equidistant discretization of fBm with the increment process $(Y_k)_{k \in \mathbb{N}} = (X_k - X_{k-1})_{k \in \mathbb{N}}$. The following statement is due to Bandt and Shiha [3]. We refer to parts of the proof below, and thus include it here.

Corollary 11. For $H \in \,]0,1]$ one has
$$p_r(H) = \begin{cases} \frac{1}{\pi} \arcsin 2^{H-1} & \text{if } r \in \{(0,1,2), (2,1,0)\} \\ \frac{1}{4} - \frac{1}{2\pi} \arcsin 2^{H-1} & \text{if } r \in \{(1,0,2), (1,2,0), (0,2,1), (2,0,1)\}. \end{cases}$$
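As a numerical companion to Lemma 10 and Corollary 11 (our own illustrative code), the bivariate orthant probability and the order-2 pattern probabilities can be evaluated directly; inverting the resulting relation between $H$ and the probability of a change gives the ZC estimator sketched after the Introduction.

```python
import numpy as np

def orthant2(rho):
    """P(Z1 > 0, Z2 > 0) for a mean-zero Gaussian pair with correlation rho (Lemma 10)."""
    return 0.25 + np.arcsin(rho) / (2 * np.pi)

def pattern_probability(r, H):
    """p_r(H) for ordinal patterns of order d = 2 (Corollary 11)."""
    a = np.arcsin(2.0 ** (H - 1.0))
    if tuple(r) in {(0, 1, 2), (2, 1, 0)}:     # monotone ('no change') patterns
        return a / np.pi
    return 0.25 - a / (2 * np.pi)              # patterns indicating a change

# consistency check: p_{(0,1,2)}(H) = P(Y_1 > 0, Y_2 > 0) with
# Corr(Y_1, Y_2) = rho_H(1) = 2^{2H-1} - 1, cf. Eq. (7)
H = 0.7
print(pattern_probability((0, 1, 2), H), orthant2(2.0 ** (2 * H - 1) - 1.0))
```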
