Iterated scaling limits for aggregation of random coefficient AR(1) and INAR(1) processes

Fanni Nedényi*, Gyula Pap*,⋄

arXiv:1601.04679v1 [math.PR] 18 Jan 2016

* Bolyai Institute, University of Szeged, Aradi vértanúk tere 1, H–6720 Szeged, Hungary.
e–mails: [email protected] (F. Nedényi), [email protected] (G. Pap).
⋄ Corresponding author.

Abstract

We discuss joint temporal and contemporaneous aggregation of $N$ independent copies of strictly stationary AR(1) and INteger-valued AutoRegressive processes of order 1 (INAR(1)) with random coefficient $\alpha \in (0,1)$ and idiosyncratic innovations. Assuming that $\alpha$ has a density function of the form $\psi(x)(1-x)^\beta$, $x \in (0,1)$, with $\lim_{x\uparrow 1}\psi(x) = \psi_1 \in (0,\infty)$, different Brownian limit processes of appropriately centered and scaled aggregated partial sums are shown to exist in case $\beta = 1$ when taking first the limit as $N \to \infty$ and then the time scale $n \to \infty$, or vice versa. This paper completes the one of Pilipauskaitė and Surgailis [4], and Barczy, Nedényi and Pap [1], where the iterated limits are given for every other possible value of the parameter $\beta$ for the two types of models.

1 Introduction

The aggregation problem is concerned with the relationship between individual (micro) behavior and aggregate (macro) statistics. There exist different types of aggregation. The scheme of contemporaneous (also called cross-sectional) aggregation of random-coefficient AR(1) models was first proposed by Robinson [8] and Granger [2] in order to obtain the long memory phenomena in aggregated time series. Puplinskaitė and Surgailis [5, 6] discussed aggregation of random-coefficient AR(1) processes with infinite variance and innovations in the domain of attraction of a stable law. Related problems for some network traffic models, M/G/∞ queues with heavy-tailed activity periods, and renewal-reward processes have also been examined.
On page 512 in Jirak [3] one can find many references for papers dealing with the aggregation of continuous time stochastic processes, and the introduction of Barczy, Nedényi and Pap [1] contains a detailed overview on the topic.

2010 Mathematics Subject Classifications: 60F05, 60J80, 60G15.
Key words and phrases: random coefficient AR(1) processes, random coefficient INAR(1) processes, temporal aggregation, contemporaneous aggregation, idiosyncratic innovations.

The aim of the present paper is to complete the papers of Pilipauskaitė and Surgailis [4] and Barczy, Nedényi and Pap [1] by giving the appropriate iterated limit theorems for both the randomized AR(1) and INAR(1) models when the parameter $\beta = 1$, a case investigated in neither paper.

Let $\mathbb{Z}_+$, $\mathbb{N}$, $\mathbb{R}$ and $\mathbb{R}_+$ denote the set of non-negative integers, positive integers, real numbers and non-negative real numbers, respectively. The paper of Pilipauskaitė and Surgailis [4] discusses the limit behavior of the sums

(1.1)  $S_t^{(N,n)} := \sum_{j=1}^{N}\sum_{k=1}^{\lfloor nt\rfloor} X_k^{(j)}$,  $t \in \mathbb{R}_+$,  $N, n \in \mathbb{N}$,

where $(X_k^{(j)})_{k\in\mathbb{Z}_+}$, $j \in \mathbb{N}$, are independent copies of a stationary random-coefficient AR(1) process

(1.2)  $X_k = \alpha X_{k-1} + \varepsilon_k$,  $k \in \mathbb{N}$,

with standardized independent and identically distributed (i.i.d.) innovations $(\varepsilon_k)_{k\in\mathbb{N}}$ having $E(\varepsilon_1) = 0$ and $\operatorname{Var}(\varepsilon_1) = 1$, and a random coefficient $\alpha$ with values in $[0,1)$, being independent of $(\varepsilon_k)_{k\in\mathbb{N}}$ and admitting a probability density function of the form

(1.3)  $\psi(x)(1-x)^\beta$,  $x \in [0,1)$,

where $\beta \in (-1,\infty)$ and $\psi$ is an integrable function on $[0,1)$ having a limit $\lim_{x\uparrow 1}\psi(x) = \psi_1 > 0$. Here the distribution of $X_0$ is chosen as the unique stationary distribution of the model (1.2). Its existence was shown in Puplinskaitė and Surgailis [5, Proposition 1]. We point out that they considered so-called idiosyncratic innovations, i.e., the innovations $(\varepsilon_k^{(j)})_{k\in\mathbb{N}}$, $j \in \mathbb{N}$, belonging to $(X_k^{(j)})_{k\in\mathbb{Z}_+}$, $j \in \mathbb{N}$, are independent.
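As a quick illustration of this scheme (not part of the original paper), the following Python sketch simulates one realization of $S_t^{(N,n)}$ from (1.1); the concrete mixing density $2(1-x)$ (i.e., $\psi \equiv 2$ and $\beta = 1$ in (1.3)) and the standard normal innovations are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_alpha(size):
    # mixing density of (1.3) with beta = 1 and psi(x) = 2, i.e. 2*(1 - x)
    # on (0, 1); sampled by inverting the CDF F(x) = 1 - (1 - x)^2
    return 1.0 - np.sqrt(1.0 - rng.uniform(size=size))

def aggregated_sum(N, n, t=1.0):
    """One realization of S_t^{(N,n)} from (1.1): N independent stationary
    random-coefficient AR(1) paths with N(0,1) innovations, each summed up
    to time floor(n*t), then aggregated over the copies."""
    total = 0.0
    for a in sample_alpha(N):
        x = rng.normal(0.0, 1.0 / np.sqrt(1.0 - a ** 2))  # stationary X_0
        s = 0.0
        for _ in range(int(n * t)):
            x = a * x + rng.normal()  # the recursion (1.2)
            s += x
        total += s
    return total

print(aggregated_sum(N=50, n=200))
```

The stationary initial value $X_0 \sim N(0, 1/(1-\alpha^2))$ realizes the stationary distribution of (1.2) in the Gaussian case; repeating the simulation over many seeds would show the fluctuations whose scaling limits are the subject of the paper.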
In Pilipauskaitė and Surgailis [4] scaling limits of the finite dimensional distributions of $(A_{N,n}^{-1} S_t^{(N,n)})_{t\in\mathbb{R}_+}$ are derived, where $A_{N,n}$ are some scaling factors and first $N \to \infty$ and then $n \to \infty$, or vice versa, or both $N$ and $n$ increase to infinity, possibly with different rates. The iterated limit theorems for both orders of iteration are presented in Theorems 2.1 and 2.3 of Pilipauskaitė and Surgailis [4], along with results concerning simultaneous limit theorems in Theorems 2.2 and 2.3. We note that the theorems cover different ranges of the possible values of $\beta \in (-1,\infty)$, namely, $\beta \in (-1,0)$, $\beta = 0$, $\beta \in (0,1)$, and $\beta > 1$. Among the limit processes are a fractional Brownian motion, lines with random slopes where the slope is a stable random variable, a stable Lévy process, and a Wiener process. Our paper deals with the missing case $\beta = 1$, for both orders of iteration.

The paper of Barczy, Nedényi and Pap [1] discusses the limit behavior of the sums (1.1), where $(X_k^{(j)})_{k\in\mathbb{Z}_+}$, $j \in \mathbb{N}$, are independent copies of a stationary random-coefficient INAR(1) process. The usual INAR(1) process with non-random coefficient is defined as

(1.4)  $X_k = \sum_{j=1}^{X_{k-1}} \xi_{k,j} + \varepsilon_k$,  $k \in \mathbb{N}$,

where $(\varepsilon_k)_{k\in\mathbb{N}}$ are i.i.d. non-negative integer-valued random variables, $(\xi_{k,j})_{k,j\in\mathbb{N}}$ are i.i.d. Bernoulli random variables with mean $\alpha \in [0,1]$, and $X_0$ is a non-negative integer-valued random variable such that $X_0$, $(\xi_{k,j})_{k,j\in\mathbb{N}}$ and $(\varepsilon_k)_{k\in\mathbb{N}}$ are independent. By using the binomial thinning operator $\alpha\circ$ due to Steutel and van Harn [9], the INAR(1) model in (1.4) can be written as

(1.5)  $X_k = \alpha \circ X_{k-1} + \varepsilon_k$,  $k \in \mathbb{N}$,

a form that captures the resemblance with the AR(1) model. We note that an INAR(1) process can also be considered as a special branching process with immigration having Bernoulli offspring distribution.
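To make the thinning recursion (1.5) concrete, here is a minimal Python sketch (again not part of the paper); the Poisson innovations and the stationary Poisson initial law anticipate the randomized model of Section 2, and the parameter values are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def thin(x, alpha):
    # binomial thinning "alpha ∘ x": a sum of x i.i.d. Bernoulli(alpha)
    # counters, realized here as a single binomial draw
    return rng.binomial(x, alpha)

def inar1_path(alpha, lam, n):
    """Simulate X_1, ..., X_n of the INAR(1) recursion (1.5) with Poisson(lam)
    innovations, started from the stationary Poisson(lam / (1 - alpha)) law."""
    x = rng.poisson(lam / (1.0 - alpha))  # stationary initial value X_0
    path = np.empty(n, dtype=np.int64)
    for k in range(n):
        x = thin(x, alpha) + rng.poisson(lam)
        path[k] = x
    return path

path = inar1_path(alpha=0.7, lam=2.0, n=1000)
print(path.mean())  # should fluctuate around lam / (1 - alpha) ≈ 6.67
```

Unlike the AR(1) recursion, the state stays non-negative and integer-valued, which is why the multiplication $\alpha X_{k-1}$ is replaced by the random thinning $\alpha \circ X_{k-1}$.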
We will consider a certain randomized INAR(1) process with randomized thinning parameter $\alpha$, given formally by the recursive equation (1.5), where $\alpha$ is a random variable with values in $(0,1)$. This means that, conditionally on $\alpha$, the process $(X_k)_{k\in\mathbb{Z}_+}$ is an INAR(1) process with thinning parameter $\alpha$. Conditionally on $\alpha$, the i.i.d. innovations $(\varepsilon_k)_{k\in\mathbb{N}}$ are supposed to have a Poisson distribution with parameter $\lambda \in (0,\infty)$, and the conditional distribution of the initial value $X_0$ given $\alpha$ is supposed to be the unique stationary distribution, namely, a Poisson distribution with parameter $\lambda/(1-\alpha)$. For a rigorous construction of this process see Section 4 of Barczy, Nedényi and Pap [1]. The iterated limit theorems for both orders of iteration, analogous to the ones in case of the randomized AR(1) model, are presented in the latter paper, in Theorems 4.6–4.12. This paper deals with the missing case $\beta = 1$, for both orders of iteration.

2 Iterated aggregation of randomized INAR(1) processes with Poisson innovations

Let $\alpha^{(j)}$, $j \in \mathbb{N}$, be a sequence of independent copies of the random variable $\alpha$, and let $(X_k^{(j)})_{k\in\mathbb{Z}_+}$, $j \in \mathbb{N}$, be a sequence of independent copies of the process $(X_k)_{k\in\mathbb{Z}_+}$ with idiosyncratic innovations (i.e., the innovations $(\varepsilon_k^{(j)})_{k\in\mathbb{N}}$, $j \in \mathbb{N}$, belonging to $(X_k^{(j)})_{k\in\mathbb{Z}_+}$, $j \in \mathbb{N}$, are independent) such that $(X_k^{(j)})_{k\in\mathbb{Z}_+}$ conditionally on $\alpha^{(j)}$ is a strictly stationary INAR(1) process with Poisson innovations for all $j \in \mathbb{N}$.

First we examine a simple aggregation procedure. For each $N \in \mathbb{N}$, consider the stochastic process $\tilde S^{(N)} = (\tilde S_k^{(N)})_{k\in\mathbb{Z}_+}$ given by

$\tilde S_k^{(N)} := \sum_{j=1}^{N}\bigl(X_k^{(j)} - E(X_k^{(j)} \mid \alpha^{(j)})\bigr) = \sum_{j=1}^{N}\Bigl(X_k^{(j)} - \frac{\lambda}{1-\alpha^{(j)}}\Bigr)$,  $k \in \mathbb{Z}_+$.

The following two propositions are Propositions 4.1 and 4.2 of Barczy, Nedényi and Pap [1]. We will use $\xrightarrow{\mathcal{D}_f}$ or $\mathcal{D}_f\text{-}\lim$ for the weak convergence of the finite dimensional distributions.

2.1 Proposition.
If $E\bigl(\frac{1}{1-\alpha}\bigr) < \infty$, then

$N^{-\frac12}\tilde S^{(N)} \xrightarrow{\mathcal{D}_f} \mathcal{Y}$  as $N \to \infty$,

where $(\mathcal{Y}_k)_{k\in\mathbb{Z}_+}$ is a stationary Gaussian process with zero mean and covariances

(2.1)  $E(\mathcal{Y}_0\mathcal{Y}_k) = \operatorname{Cov}\Bigl(X_0 - \frac{\lambda}{1-\alpha},\, X_k - \frac{\lambda}{1-\alpha}\Bigr) = \lambda E\Bigl(\frac{\alpha^k}{1-\alpha}\Bigr)$,  $k \in \mathbb{Z}_+$.

2.2 Proposition. We have

$\Bigl(n^{-\frac12}\sum_{k=1}^{\lfloor nt\rfloor}\tilde S_k^{(1)}\Bigr)_{t\in\mathbb{R}_+} = \Bigl(n^{-\frac12}\sum_{k=1}^{\lfloor nt\rfloor}\bigl(X_k^{(1)} - E(X_k^{(1)} \mid \alpha^{(1)})\bigr)\Bigr)_{t\in\mathbb{R}_+} \xrightarrow{\mathcal{D}_f} \frac{\sqrt{\lambda(1+\alpha)}}{1-\alpha}\, B$

as $n \to \infty$, where $B = (B_t)_{t\in\mathbb{R}_+}$ is a standard Brownian motion, independent of $\alpha$.

In the forthcoming theorems we assume that the distribution of the random variable $\alpha$, i.e., the mixing distribution, has a probability density of the form (1.3). We note that the form of this density function requires $\beta > -1$. Furthermore, if $\alpha$ has such a density function, then for each $\ell \in \mathbb{N}$ the expectation $E((1-\alpha)^{-\ell})$ is finite if and only if $\beta > \ell - 1$.

For each $N, n \in \mathbb{N}$, consider the stochastic process $\tilde S^{(N,n)} = (\tilde S_t^{(N,n)})_{t\in\mathbb{R}_+}$ given by

$\tilde S_t^{(N,n)} := \sum_{j=1}^{N}\sum_{k=1}^{\lfloor nt\rfloor}\bigl(X_k^{(j)} - E(X_k^{(j)} \mid \alpha^{(j)})\bigr)$,  $t \in \mathbb{R}_+$.

2.3 Theorem. If $\beta = 1$, then

$\mathcal{D}_f\text{-}\lim_{n\to\infty}\,\mathcal{D}_f\text{-}\lim_{N\to\infty}\,(n\log n)^{-\frac12} N^{-\frac12}\,\tilde S^{(N,n)} = \sqrt{2\lambda\psi_1}\, B$,

where $B = (B_t)_{t\in\mathbb{R}_+}$ is a standard Wiener process.

Proof of Theorem 2.3. Since $E((1-\alpha)^{-1}) < \infty$, the condition of Proposition 2.1 is satisfied, meaning that

$N^{-\frac12}\tilde S^{(N)} \xrightarrow{\mathcal{D}_f} \mathcal{Y}$  as $N \to \infty$,

where $(\mathcal{Y}_k)_{k\in\mathbb{Z}_+}$ is a stationary Gaussian process with zero mean and covariances

$E(\mathcal{Y}_0\mathcal{Y}_k) = \operatorname{Cov}\Bigl(X_0 - \frac{\lambda}{1-\alpha},\, X_k - \frac{\lambda}{1-\alpha}\Bigr) = \lambda E\Bigl(\frac{\alpha^k}{1-\alpha}\Bigr)$,  $k \in \mathbb{Z}_+$.

Therefore, it suffices to show that

$\mathcal{D}_f\text{-}\lim_{n\to\infty}\frac{1}{\sqrt{n\log n}}\sum_{k=1}^{\lfloor nt\rfloor}\mathcal{Y}_k = \sqrt{2\lambda\psi_1}\, B$,

where $B = (B_t)_{t\in\mathbb{R}_+}$ is a standard Wiener process. This follows from the continuity theorem if for all $t_1, t_2 \in \mathbb{R}_+$ we have

(2.2)  $\operatorname{Cov}\Bigl(\frac{1}{\sqrt{n\log n}}\sum_{k=1}^{\lfloor nt_1\rfloor}\mathcal{Y}_k,\ \frac{1}{\sqrt{n\log n}}\sum_{k=1}^{\lfloor nt_2\rfloor}\mathcal{Y}_k\Bigr) \to 2\lambda\psi_1\min(t_1,t_2)$

as $n \to \infty$. By (2.1) we have

$\operatorname{Cov}\Bigl(\frac{1}{\sqrt{n\log n}}\sum_{k=1}^{\lfloor nt_1\rfloor}\mathcal{Y}_k,\ \frac{1}{\sqrt{n\log n}}\sum_{k=1}^{\lfloor nt_2\rfloor}\mathcal{Y}_k\Bigr) = \frac{\lambda}{n\log n}\sum_{k=1}^{\lfloor nt_1\rfloor}\sum_{\ell=1}^{\lfloor nt_2\rfloor} E\Bigl(\frac{\alpha^{|k-\ell|}}{1-\alpha}\Bigr) = \frac{\lambda}{n\log n}\int_0^1 \sum_{k=1}^{\lfloor nt_1\rfloor}\sum_{\ell=1}^{\lfloor nt_2\rfloor}\frac{a^{|k-\ell|}}{1-a}\,\psi(a)(1-a)\,da.$
First we derive

(2.3)  $\frac{1}{n\log n}\int_0^1 \sum_{k=1}^{\lfloor nt_1\rfloor}\sum_{\ell=1}^{\lfloor nt_2\rfloor} a^{|k-\ell|}\,da \to 2\min(t_1,t_2)$

as $n \to \infty$. Indeed, if we suppose that $t_2 > t_1$, then, writing $m := \lfloor nt_1\rfloor$ and $M := \lfloor nt_2\rfloor$,

$\int_0^1 \sum_{k=1}^{m}\sum_{\ell=1}^{M} a^{|k-\ell|}\,da = \sum_{k=1}^{m}\sum_{\ell=1}^{M}\frac{1}{|k-\ell|+1} = \sum_{k=1}^{m}\bigl(H(k) + H(M-k+1) - 1\bigr)$
$= (m+1)H(m) + (M+1)H(M) - (M-m+1)H(M-m) - 3m$
$= (m+1)(\log m + O(1)) + (M+1)(\log M + O(1)) - (M-m+1)(\log(M-m) + O(1)) - 3m,$

where $H(n)$ denotes the $n$-th harmonic number, and it is well known that $H(n) = \log n + O(1)$ for every $n \in \mathbb{N}$. Dividing by $n\log n$ and letting $n \to \infty$ gives $t_1 + t_2 - (t_2 - t_1) = 2t_1 = 2\min(t_1,t_2)$, hence convergence (2.3) holds. Consequently, (2.2) will follow from

$I_n := \frac{1}{n\log n}\int_0^1 \sum_{k=1}^{\lfloor nt_1\rfloor}\sum_{\ell=1}^{\lfloor nt_2\rfloor} a^{|k-\ell|}\,|\psi(a)-\psi_1|\,da \to 0$

as $n \to \infty$. Note that for every $\varepsilon > 0$ there is a $\delta_\varepsilon > 0$ such that for every $a \in (1-\delta_\varepsilon,1)$ it holds that $|\psi(a)-\psi_1| < \varepsilon$. Hence

$n\log n\, I_n \le \int_0^{1-\delta_\varepsilon}\sum_{k=1}^{\lfloor nt_1\rfloor}\sum_{\ell=1}^{\lfloor nt_2\rfloor} a^{|k-\ell|}\bigl(\psi(a)+\psi_1\bigr)\,da + \int_{1-\delta_\varepsilon}^{1}\sum_{k=1}^{\lfloor nt_1\rfloor}\sum_{\ell=1}^{\lfloor nt_2\rfloor} a^{|k-\ell|}\,|\psi(a)-\psi_1|\,da$
$\le \int_0^{1-\delta_\varepsilon}\frac{2\lfloor nt_1\rfloor}{\delta_\varepsilon}\bigl(\psi(a)+\psi_1\bigr)\,da + \varepsilon\int_{1-\delta_\varepsilon}^{1}\sum_{k=1}^{\lfloor nt_1\rfloor}\sum_{\ell=1}^{\lfloor nt_2\rfloor} a^{|k-\ell|}\,da,$

meaning that for every $\varepsilon > 0$, by (2.3), we have $\limsup_{n\to\infty}|I_n| \le 0 + 2\varepsilon\min(t_1,t_2)$, resulting in $\lim_{n\to\infty} I_n = 0$, which completes the proof. □

2.4 Theorem. If $\beta = 1$, then

$\mathcal{D}_f\text{-}\lim_{N\to\infty}\,\mathcal{D}_f\text{-}\lim_{n\to\infty}\,\frac{1}{\sqrt{nN\log N}}\,\tilde S^{(N,n)} = \sqrt{\lambda\psi_1}\, B$,

where $B = (B_t)_{t\in\mathbb{R}_+}$ is a standard Wiener process.

Proof of Theorem 2.4. By the second proof of Theorem 4.9 of Barczy, Nedényi and Pap [1] it suffices to show that

$\frac{1}{N\log N}\sum_{j=1}^{N}\frac{\lambda(1+\alpha^{(j)})}{(1-\alpha^{(j)})^2} \xrightarrow{\mathcal{D}} \lambda\psi_1$,  $N \to \infty$.

Let us apply Theorem 7.1 of Resnick [7] with

$X_{N,j} := \frac{1}{N}\,\frac{\lambda(1+\alpha^{(j)})}{(1-\alpha^{(j)})^2},$

meaning that

$N\, P(X_{N,1} > x) = N\, P\Bigl(\frac{\lambda(1+\alpha)}{(1-\alpha)^2} > Nx\Bigr) = N\int_{1-\tilde h(\lambda,Nx)}^{1}\psi(a)(1-a)\,da,$

where $\tilde h(\lambda,x) = \bigl(\tfrac14 + \sqrt{\tfrac{1}{16} + \tfrac{x}{2\lambda}}\bigr)^{-1}$. Note that for every $\varepsilon > 0$ there is a $\delta_\varepsilon > 0$ such that for every $a \in (1-\delta_\varepsilon,1)$ it holds that $|\psi(a)-\psi_1| < \varepsilon$.
Then,

$N\int_{1-\tilde h(\lambda,Nx)}^{1}|\psi(a)-\psi_1|\,(1-a)\,da \le N\varepsilon\,\frac{(\tilde h(\lambda,Nx))^2}{2} \le \frac{\varepsilon\lambda}{x}$

for every $x > 0$ and large enough $N$. Therefore, for every $x > 0$ we have

$\lim_{N\to\infty} N\, P(X_{N,1} > x) = \lim_{N\to\infty} N\int_{1-\tilde h(\lambda,Nx)}^{1}\psi_1(1-a)\,da = \lim_{N\to\infty} N\psi_1\,\frac{(\tilde h(\lambda,Nx))^2}{2} = \lim_{N\to\infty}\frac{N\psi_1}{2\bigl(\frac14+\sqrt{\frac{1}{16}+\frac{Nx}{2\lambda}}\bigr)^2} = \frac{\psi_1\lambda}{x} =: \nu([x,\infty)),$

where $\nu$ is obviously a Lévy measure. Consider the decomposition

$N\, E\bigl(X_{N,1}^2\mathbf{1}_{\{|X_{N,1}|\le\varepsilon\}}\bigr) = N\int_0^{1-\tilde h(\lambda,N\varepsilon)}\Bigl(\frac{\lambda(1+a)}{N(1-a)^2}\Bigr)^2\psi(a)(1-a)\,da = I_N^{(1)} + I_N^{(2)},$

where

$I_N^{(1)} := N\int_0^{1-\delta_\varepsilon}\Bigl(\frac{\lambda(1+a)}{N(1-a)^2}\Bigr)^2\psi(a)(1-a)\,da \le \lambda^2\,\frac{1}{N}\,\frac{2^2}{\delta_\varepsilon^4} \to 0$

as $N \to \infty$, and, using that $\psi(a) \le 2\psi_1$ on $(1-\delta_\varepsilon,1)$ for small enough $\delta_\varepsilon$,

$I_N^{(2)} := N\int_{1-\delta_\varepsilon}^{1-\tilde h(\lambda,N\varepsilon)}\Bigl(\frac{\lambda(1+a)}{N(1-a)^2}\Bigr)^2\psi(a)(1-a)\,da \le \frac{8\psi_1\lambda^2}{N}\int_{1-\delta_\varepsilon}^{1-\tilde h(\lambda,N\varepsilon)}\frac{da}{(1-a)^3} = \frac{4\psi_1\lambda^2}{N}\bigl[\tilde h(\lambda,N\varepsilon)^{-2} - \delta_\varepsilon^{-2}\bigr] \le 8\psi_1\lambda\varepsilon$

for large enough $N$ (since $\tilde h(\lambda,N\varepsilon)^{-2} \le 2N\varepsilon/\lambda$ eventually), so it follows that

$\lim_{\varepsilon\to 0}\limsup_{N\to\infty} N\, E\bigl(X_{N,1}^2\mathbf{1}_{\{|X_{N,1}|\le\varepsilon\}}\bigr) = 0.$

Therefore, by applying Theorem 7.1 of Resnick [7] with the choice $t = 1$, we get that

$\sum_{j=1}^{N}\Bigl[\frac{\lambda(1+\alpha^{(j)})}{N(1-\alpha^{(j)})^2} - E\Bigl(\frac{\lambda(1+\alpha)}{N(1-\alpha)^2}\,\mathbf{1}_{\{\frac{\lambda(1+\alpha)}{N(1-\alpha)^2}\le 1\}}\Bigr)\Bigr]$
$= \sum_{j=1}^{N}\Bigl[\frac{\lambda(1+\alpha^{(j)})}{N(1-\alpha^{(j)})^2} - \frac{\lambda\psi_1}{N}\int_0^{1-\sqrt{2\lambda/N}}\frac{2\,da}{1-a}\Bigr]$
$\quad + \lambda\psi_1\Bigl[\int_0^{1-\sqrt{2\lambda/N}}\frac{2\,da}{1-a} - \int_0^{1-\tilde h(\lambda,N)}\frac{2\,da}{1-a}\Bigr] + \lambda\psi_1\int_0^{1-\tilde h(\lambda,N)}\frac{2-(1+a)}{1-a}\,da + \lambda\int_0^{1-\tilde h(\lambda,N)}\frac{1+a}{1-a}\bigl(\psi_1-\psi(a)\bigr)\,da$
$=: \sum_{j=1}^{N} J_{j,N}^{(0)} + \lambda J_N^{(1)} + \lambda J_N^{(2)} + \lambda J_N^{(3)} \xrightarrow{\mathcal{D}} X_0,$

where by (5.37) of Resnick [7]

$E(e^{i\theta X_0}) = \exp\Bigl\{\int_1^{\infty}(e^{i\theta x}-1)\,\frac{\psi_1\lambda\,dx}{x^2} + \int_0^{1}(e^{i\theta x}-1-i\theta x)\,\frac{\psi_1\lambda\,dx}{x^2}\Bigr\}$,  $\theta \in \mathbb{R}$.

We show that

$\frac{|J_N^{(1)}| + |J_N^{(2)}| + |J_N^{(3)}|}{\log N} \to 0$,  $N \to \infty$,

resulting in

$\frac{1}{\log N}\sum_{j=1}^{N}\frac{\lambda(1+\alpha^{(j)})}{N(1-\alpha^{(j)})^2} = \frac{1}{\log N}\sum_{j=1}^{N}\Bigl[\frac{\lambda(1+\alpha^{(j)})}{N(1-\alpha^{(j)})^2} - \frac{\lambda\psi_1}{N}\int_0^{1-\sqrt{2\lambda/N}}\frac{2\,da}{1-a}\Bigr] + \frac{2\lambda\psi_1}{\log N}\Bigl(-\log\sqrt{\frac{2\lambda}{N}}\Bigr) \xrightarrow{\mathcal{D}} 0\cdot X_0 + \lambda\psi_1 = \lambda\psi_1$,  $N \to \infty$.
Indeed,

$\frac{J_N^{(1)}}{\log N} = -\frac{\psi_1}{\log N}\int_{1-\sqrt{2\lambda/N}}^{1-\tilde h(\lambda,N)}\frac{2\,da}{1-a} = -\frac{2\psi_1}{\log N}\log\Bigl(\sqrt{\frac{2\lambda}{N}}\Bigl(\frac14+\sqrt{\frac{1}{16}+\frac{N}{2\lambda}}\Bigr)\Bigr)$

converges to $0$ as $N \to \infty$, since the argument of the logarithm tends to $1$. Moreover,

$\frac{J_N^{(2)}}{\log N} = \frac{\psi_1}{\log N}\int_0^{1-\tilde h(\lambda,N)}\frac{1-a}{1-a}\,da = \frac{\psi_1}{\log N}\Bigl(1 - \frac{1}{\frac14+\sqrt{\frac{1}{16}+\frac{N}{2\lambda}}}\Bigr)$

converges to $0$ as $N \to \infty$. Finally,

$\frac{|J_N^{(3)}|}{\log N} = \frac{1}{\log N}\Bigl|\int_0^{1-\tilde h(\lambda,N)}\frac{1+a}{1-a}\bigl(\psi_1-\psi(a)\bigr)\,da\Bigr|$
$\le \frac{1}{\log N}\int_0^{1-\delta_\varepsilon}\frac{2}{\delta_\varepsilon}\bigl(\psi_1+\psi(a)\bigr)\,da + \frac{1}{\log N}\int_{1-\delta_\varepsilon}^{1-\tilde h(\lambda,N)}\frac{2}{1-a}\,\varepsilon\,da$
$\le \frac{1}{\log N}\,\frac{2}{\delta_\varepsilon}\bigl(\psi_1+\delta_\varepsilon^{-1}\bigr) + \frac{2\varepsilon}{\log N}\Bigl[\log\delta_\varepsilon + \log\Bigl(\frac14+\sqrt{\frac{1}{16}+\frac{N}{2\lambda}}\Bigr)\Bigr].$

One can easily see that for all $\varepsilon > 0$ we get $\limsup_{N\to\infty}|J_N^{(3)}|/\log N \le 0 + \varepsilon$, resulting in $\lim_{N\to\infty} J_N^{(3)}/\log N = 0$, which completes the proof. □

3 Iterated aggregation of randomized AR(1) processes with Gaussian innovations

Let $\alpha^{(j)}$, $j \in \mathbb{N}$, be a sequence of independent copies of the random variable $\alpha$, and let $(X_k^{(j)})_{k\in\mathbb{Z}_+}$, $j \in \mathbb{N}$, be a sequence of independent copies of the process $(X_k)_{k\in\mathbb{Z}_+}$ with idiosyncratic Gaussian innovations (i.e., the innovations $(\varepsilon_k^{(j)})_{k\in\mathbb{Z}_+}$, $j \in \mathbb{N}$, belonging to $(X_k^{(j)})_{k\in\mathbb{Z}_+}$, $j \in \mathbb{N}$, are independent) having zero mean and variance $\sigma^2 \in \mathbb{R}_+$ such that $(X_k^{(j)})_{k\in\mathbb{Z}_+}$ conditionally on $\alpha^{(j)}$ is a strictly stationary AR(1) process for all $j \in \mathbb{N}$. A rigorous construction of this random-coefficient process can be given similarly as in case of the randomized INAR(1) process detailed in Section 4 of Barczy, Nedényi and Pap [1].

First we examine a simple aggregation procedure. For each $N \in \mathbb{N}$, consider the stochastic process $\tilde S^{(N)} = (\tilde S_k^{(N)})_{k\in\mathbb{Z}_+}$ given by

$\tilde S_k^{(N)} := \sum_{j=1}^{N} X_k^{(j)}$,  $k \in \mathbb{Z}_+$.

The following two propositions are the counterparts of Propositions 2.1 and 2.2, and can be proven similarly as the two concerning the randomized INAR(1) process.

3.1 Proposition.
If $E\bigl(\frac{1}{1-\alpha^2}\bigr) < \infty$, then

$N^{-\frac12}\tilde S^{(N)} \xrightarrow{\mathcal{D}_f} \mathcal{Y}$  as $N \to \infty$,

where $(\mathcal{Y}_k)_{k\in\mathbb{Z}_+}$ is a stationary Gaussian process with zero mean and covariances

$E(\mathcal{Y}_0\mathcal{Y}_k) = \operatorname{Cov}(X_0, X_k) = \sigma^2 E\Bigl(\frac{\alpha^k}{1-\alpha^2}\Bigr)$,  $k \in \mathbb{Z}_+$.

3.2 Proposition. We have

$\Bigl(n^{-\frac12}\sum_{k=1}^{\lfloor nt\rfloor}\tilde S_k^{(1)}\Bigr)_{t\in\mathbb{R}_+} = \Bigl(n^{-\frac12}\sum_{k=1}^{\lfloor nt\rfloor} X_k^{(1)}\Bigr)_{t\in\mathbb{R}_+} \xrightarrow{\mathcal{D}_f} \frac{\sigma}{1-\alpha}\, B$

as $n \to \infty$, where $B = (B_t)_{t\in\mathbb{R}_+}$ is a standard Brownian motion, independent of $\alpha$.

Again, we assume that the distribution of the random variable $\alpha$ has a probability density of the form (1.3). Note that for each $\ell \in \mathbb{N}$ the expectation $E((1-\alpha^2)^{-\ell})$ is finite if and only if $\beta > \ell - 1$.

For each $N, n \in \mathbb{N}$, consider the stochastic process $\tilde S^{(N,n)} = (\tilde S_t^{(N,n)})_{t\in\mathbb{R}_+}$ given by

$\tilde S_t^{(N,n)} := \sum_{j=1}^{N}\sum_{k=1}^{\lfloor nt\rfloor} X_k^{(j)}$,  $t \in \mathbb{R}_+$.

3.3 Theorem. If $\beta = 1$, then

$\mathcal{D}_f\text{-}\lim_{n\to\infty}\,\mathcal{D}_f\text{-}\lim_{N\to\infty}\,(n\log n)^{-\frac12} N^{-\frac12}\,\tilde S^{(N,n)} = \sqrt{\sigma^2\psi_1}\, B$,

where $B = (B_t)_{t\in\mathbb{R}_+}$ is a standard Wiener process.

Proof of Theorem 3.3. Since $E((1-\alpha^2)^{-1}) < \infty$, the condition of Proposition 3.1 is satisfied, meaning that

$N^{-\frac12}\tilde S^{(N)} \xrightarrow{\mathcal{D}_f} \mathcal{Y}$  as $N \to \infty$,

where $(\mathcal{Y}_k)_{k\in\mathbb{Z}_+}$ is a stationary Gaussian process with zero mean and covariances

$E(\mathcal{Y}_0\mathcal{Y}_k) = \operatorname{Cov}(X_0, X_k) = \sigma^2 E\Bigl(\frac{\alpha^k}{1-\alpha^2}\Bigr)$,  $k \in \mathbb{Z}_+$.

Therefore, it suffices to show that

$\mathcal{D}_f\text{-}\lim_{n\to\infty}\frac{1}{\sqrt{n\log n}}\sum_{k=1}^{\lfloor nt\rfloor}\mathcal{Y}_k = \sqrt{\sigma^2\psi_1}\, B$,

where $B = (B_t)_{t\in\mathbb{R}_+}$ is a standard Wiener process. This follows from the continuity theorem if for all $t_1, t_2 \in \mathbb{R}_+$ we have

$\operatorname{Cov}\Bigl(\frac{1}{\sqrt{n\log n}}\sum_{k=1}^{\lfloor nt_1\rfloor}\mathcal{Y}_k,\ \frac{1}{\sqrt{n\log n}}\sum_{k=1}^{\lfloor nt_2\rfloor}\mathcal{Y}_k\Bigr) \to \sigma^2\psi_1\min(t_1,t_2)$,  $n \to \infty$.

It is known that

$\operatorname{Cov}\Bigl(\frac{1}{\sqrt{n\log n}}\sum_{k=1}^{\lfloor nt_1\rfloor}\mathcal{Y}_k,\ \frac{1}{\sqrt{n\log n}}\sum_{k=1}^{\lfloor nt_2\rfloor}\mathcal{Y}_k\Bigr) = \frac{\sigma^2}{n\log n}\sum_{k=1}^{\lfloor nt_1\rfloor}\sum_{\ell=1}^{\lfloor nt_2\rfloor} E\Bigl(\frac{\alpha^{|k-\ell|}}{1-\alpha^2}\Bigr) = \frac{\sigma^2}{n\log n}\int_0^1\sum_{k=1}^{\lfloor nt_1\rfloor}\sum_{\ell=1}^{\lfloor nt_2\rfloor}\frac{a^{|k-\ell|}}{1-a^2}\,\psi(a)(1-a)\,da$
$= \frac{\sigma^2}{n\log n}\int_0^1\sum_{k=1}^{\lfloor nt_1\rfloor}\sum_{\ell=1}^{\lfloor nt_2\rfloor} a^{|k-\ell|}\,\psi(a)\,da - \frac{\sigma^2}{n\log n}\int_0^1\sum_{k=1}^{\lfloor nt_1\rfloor}\sum_{\ell=1}^{\lfloor nt_2\rfloor}\frac{a^{|k-\ell|+1}}{1+a}\,\psi(a)\,da.$

It was shown in the proof of Theorem 2.3 that

$\frac{\sigma^2}{n\log n}\int_0^1\sum_{k=1}^{\lfloor nt_1\rfloor}\sum_{\ell=1}^{\lfloor nt_2\rfloor} a^{|k-\ell|}\,\psi(a)\,da \to 2\sigma^2\psi_1\min(t_1,t_2)$,  $n \to \infty$.
We are going to prove that

$\frac{\sigma^2}{n\log n}\int_0^1\sum_{k=1}^{\lfloor nt_1\rfloor}\sum_{\ell=1}^{\lfloor nt_2\rfloor}\frac{a^{|k-\ell|+1}}{1+a}\,\psi(a)\,da - \frac{\sigma^2}{n\log n}\int_0^1\sum_{k=1}^{\lfloor nt_1\rfloor}\sum_{\ell=1}^{\lfloor nt_2\rfloor}\frac{a^{|k-\ell|}}{1+a}\,\psi(a)\,da$

converges to $0$ as $n \to \infty$, which proves our theorem. Indeed, if $t_2 > t_1$, then

$\Bigl|\sum_{k=1}^{\lfloor nt_1\rfloor}\sum_{\ell=1}^{\lfloor nt_2\rfloor}\Bigl(\frac{a^{|k-\ell|+1}}{1+a} - \frac{a^{|k-\ell|}}{1+a}\Bigr)\Bigr| = \frac{1}{1+a}\Bigl|\sum_{k=1}^{\lfloor nt_1\rfloor}\bigl(a^k - (a+1) + a^{\lfloor nt_2\rfloor-k+1}\bigr)\Bigr|$
$= \frac{1}{1+a}\Bigl|\frac{a(a^{\lfloor nt_1\rfloor}-1)}{a-1} - (a+1)\lfloor nt_1\rfloor + \frac{a^{\lfloor nt_2\rfloor+1} - a^{\lfloor nt_2\rfloor-\lfloor nt_1\rfloor+1}}{a-1}\Bigr| \le 4\lfloor nt_1\rfloor,$

and hence the absolute value of the difference above is at most $\frac{4\sigma^2\lfloor nt_1\rfloor}{n\log n}\int_0^1\psi(a)\,da \to 0$ as $n \to \infty$, since $\psi$ is integrable on $[0,1)$.