Long Memory Through Marginalization of Large Systems and Hidden Cross-Section Dependence

RM/15/014, This Version: May 7, 2015

Guillaume Chevillon (a), Alain Hecq (b), Sébastien Laurent (c)

(a) ESSEC Business School & CREST, France
(b) Department of Quantitative Economics, Maastricht University, The Netherlands
(c) Aix-Marseille University (Aix-Marseille School of Economics), CNRS & EHESS, Aix-Marseille Graduate School of Management – IAE, France

Abstract

This paper shows that large dimensional vector autoregressive (VAR) models of finite order can generate long memory in the marginalized univariate series. We derive high-level assumptions under which the final equation representation of a VAR(1) leads to univariate fractional white noises and verify the validity of these assumptions for two specific models. We consider the implications of our findings for the variances of asset returns, where the so-called golden rule of realized variances states that they always tend to exhibit fractional integration of a degree close to 0.4.

Keywords: Long memory, Vector Autoregressive Model, Marginalization, Final Equation Representation, Volatility. JEL: C10, C32, C58.

We would like to thank participants at the 6th French Econometrics Conference celebrating Christian Gouriéroux's contribution to econometrics, the 2014 Econometrics Conference at the Oxford Martin School, CFE 2013, the 15th OxMetrics User Conference at CASS Business School, the 2015 Symposium of the SNDE, Aix-Marseille University and CREATES seminars, and in particular Karim Abadir, Richard Baillie, David Hendry, Jurgen Doornik, Sophocles Mavroeidis, Bent Nielsen, Anders Rahbek and Paolo Zaffaroni. Email addresses: [email protected] (Guillaume Chevillon), [email protected] (Alain Hecq), [email protected] (Sébastien Laurent). Preprint submitted to Elsevier, May 7, 2015.

1. Introduction and motivations

Long memory is commonly observed in economics and finance, dating back at least to Smith (1938), Cox and Townsend (1947) and Granger (1966), but its origin is unclear, as argued by Cox (2014). Müller and Watson (2008) show that this is probably due to the fact that very large samples are needed to discriminate between the various models generating strong dependence at low frequencies. Hence several competing models of long range dependence have been proposed in the literature. For a covariance stationary process $z_t$, long memory of degree $d$ is often defined, as in Beran (1994) or Baillie (1996), through the behavior of its spectral density $f_z(\omega)$ about the origin: $f_z(\omega) \sim c_f \omega^{-2d}$ as $\omega \to 0^+$, for some positive $c_f$. Since Granger and Joyeux (1980), fractional integration of order $d$, denoted $I(d)$, has proved the most pervasive example of long memory processes in econometrics. When $d < 1$, the process is mean reverting (in the sense of Campbell and Mankiw, 1987, that the impulse response function to fundamental innovations converges to zero; see Cheung and Lai, 1993). Moreover, $I(d)$ processes admit a covariance stationary representation when $d \in (-1/2, 1/2)$ and are non-stationary if $d \ge 1/2$. Long range dependence, or long memory, arises when the degree of fractional integration is positive, $d > 0$. When $d \ge 1/2$ the process is nonstationary, yet the spectral density characterization can still be used as the limit of the sample periodogram; see Solo (1992). The prototypical example of an $I(d)$ process is the fractional white noise $z_t = (1-L)^{-d}\epsilon_t$, where $L$ denotes the lag operator and $\epsilon_t$ is a white noise sequence. The class of fractionally integrated processes extends to cases where $\epsilon_t$ admits a covariance stationary ARMA representation.
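As a concrete illustration (our sketch, not part of the paper), one can simulate a fractional white noise with $d = 0.4$ by truncating the MA($\infty$) expansion of $(1-L)^{-d}$ and verify that a log-periodogram regression near the origin recovers a slope close to $-2d$; the sample size, seed, and bandwidth below are arbitrary choices of ours.

```python
# Sketch: simulate z_t = (1-L)^{-d} eps_t and recover d from the
# log-periodogram slope near the origin (which is approximately -2d).
import numpy as np

rng = np.random.default_rng(0)
d, T = 0.4, 2**14

# MA(inf) weights of (1-L)^{-d}: psi_0 = 1, psi_k = psi_{k-1} * (k-1+d) / k
psi = np.ones(T)
for k in range(1, T):
    psi[k] = psi[k - 1] * (k - 1 + d) / k

eps = rng.standard_normal(2 * T)
z = np.convolve(eps, psi)[T:2 * T]        # keep T points after a burn-in of T

# log-periodogram (GPH-style) regression on the first m Fourier frequencies
omega = 2 * np.pi * np.arange(1, T // 2) / T
I_z = np.abs(np.fft.fft(z)[1:T // 2]) ** 2 / (2 * np.pi * T)
m = int(T ** 0.5)
slope = np.polyfit(np.log(omega[:m]), np.log(I_z[:m]), 1)[0]
print(f"log-periodogram slope: {slope:.2f}, implied d: {-slope / 2:.2f}")
```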
To the best of our knowledge, five reasons have been put forward in the literature so far to explain the presence of long range dependence: (i) aggregation across heterogeneous series, frequencies or economic agents (Granger, 1980, Chambers, 1998, and inter alia Abadir and Talmain, 2002, Zaffaroni, 2004, Lieberman and Phillips, 2008, and Altissimo, Mojon and Zaffaroni, 2009); (ii) linear modeling of a nonlinear underlying process (e.g., Davidson and Sibbertsen, 2005, Miller and Park, 2010); (iii) structural change (Parke, 1999, Diebold and Inoue, 2001, Gouriéroux and Jasiak, 2001, Perron and Qu, 2007); (iv) learning (bounded rationality) by economic agents in forward looking models of expectations (Chevillon and Mavroeidis, 2013); and (v) network effects (Schennach, 2013).

The contribution of this paper is to show that long memory can also be the result of the marginalization of a large dimensional multivariate system, hence caused by hidden dependence across variables within a system. More specifically, we provide an asymptotic parametric framework under which the variables entering an $n$-dimensional vector autoregressive process of finite order (here a VAR(1)) can be modelled individually as fractional white noises as $n \to \infty$. Long memory may therefore be a feature of univariate or low dimensional models that vanishes when considering larger systems. The source of long memory identified here differs distinctly from the five sources listed above. In particular, it does not rely on any aggregation mechanism à la Granger (1980).

The intuition behind our theoretical result is the following. We consider a simple VAR(1) model $x_t = A_n x_{t-1} + \epsilon_t$, where $(A_n)$ denotes a sequence of $n$-dimensional square Toeplitz matrices.¹ We use the final equation representation of this model (as proposed by Zellner and Palm, 1974, 2004) to derive the spectral density $f_{n,x_j}(\omega)$ of any of the marginalized processes $x_{jt}$ belonging to $x_t$ (for $j = 1,\dots,n$). To prove that $f_{n,x_j}(\omega)$ converges to the spectral density of a long memory process of order $\delta \in (0,1)$, we introduce three high-level assumptions concerning $(A_n)$. Under these assumptions, $f_{n,x_j}(\omega)$ is asymptotically equivalent to the ratio of $\left|\det\left(I_{n-1} - A_{n-1}e^{-i\omega}\right)\right|^2$ over $\left|\det\left(I_n - A_n e^{-i\omega}\right)\right|^2$. We parameterize $A_n$ by defining a scalar sequence $(\delta_n)$ with $\delta_n \in (0,1)$ such that $\lim_{n\to\infty}\delta_n = \delta$, and a circulant matrix $C_n$ such that $\det\left(I_n - A_n e^{-i\omega}\right) \sim \det\left(I_n - C_n e^{-i\omega}\right)$ as $n\to\infty$. $C_n$ is assumed to possess about a fraction $\lfloor n\delta_n \rfloor$ of unit eigenvalues ($\lfloor\cdot\rfloor$ denotes the integer part) and $n - \lfloor n\delta_n \rfloor$ zero eigenvalues. Hence, as $n\to\infty$, $\det\left(I_n - C_n e^{-i\omega}\right) \sim \left(1 - e^{-i\omega}\right)^{\lfloor n\delta_n\rfloor}$. We then use the First Theorem of Szegő (1915) to prove that, under our three high-level assumptions, $f_{n,x_j}(\omega) \to \left|1 - e^{-i\omega}\right|^{-2\delta}\sigma^2_{\epsilon_j}$ for $\omega \ne 0$.

¹The class of Toeplitz matrices is chosen for the technical reason that we use in our proofs the well-established First Theorem of Szegő (1915). We show in the section presenting our analytical results how this assumption can be relaxed.
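This eigenvalue mechanism is easy to probe numerically. The sketch below is ours and uses a stylized stand-in for $C_n$, not the paper's exact construction: a circulant matrix whose eigenvalues are 1 at a fraction $\delta$ of the Fourier ordinates and 0 elsewhere. The $n$th root of $|\det(I_n - C_n e^{-i\omega})|$ then approaches $|1-e^{-i\omega}|^{\delta}$, the geometric growth rate underlying the determinant ratio.

```python
# Sketch: circulant matrix with floor(n*delta) unit eigenvalues and the
# rest zero; |det(I_n - C_n z)|^(1/n) should approach |1 - z|^delta.
import numpy as np
from scipy.linalg import circulant

delta, omega = 0.4, 0.5
z = np.exp(-1j * omega)

def root_det(n):
    eig = (np.arange(n) < int(n * delta)).astype(float)  # 0/1 eigenvalues
    col = np.fft.ifft(eig)            # first column whose DFT equals eig
    C = circulant(col)
    return np.abs(np.linalg.det(np.eye(n) - C * z)) ** (1.0 / n)

for n in [20, 100, 400]:
    print(n, root_det(n), "limit:", np.abs(1 - z) ** delta)
```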
We then show that these high-level assumptions are satisfied for at least two specific examples of VAR(1) models. In the first parameterization, $A_n$ denotes a Toeplitz matrix with diagonal elements converging to $\delta = 1/2$ as $n\to\infty$, and with vanishing off-diagonal elements. Importantly, the off-diagonal elements decrease at an $O\left(n^{-1}\right)$ rate and the sum of each row equals 1 for all $n$. We show that as $n\to\infty$, each series $x_{jt}$ of this system behaves as an ARFIMA(0,1/2,0). In the second example, we consider a similar setting but with limiting value $\delta \in (0,1)$ on the main diagonal of $A_n$ and with the additional assumption that one innovation (say $\epsilon_{jt}$) dominates the others in terms of magnitude. In this case, we prove that the dominant series $j$ follows an ARFIMA(0,$\delta$,0) for $\delta \in (0,1)$. Our results exemplify that vanishing interaction coefficients in a multivariate system can give rise to long memory in individual series.

The reason why we refer to this phenomenon as "hidden cross-section dependence" is twofold. First, long memory appears through the marginalization mechanism and therefore in the univariate series or, by extension, when estimating the model on a small subpart of a large system. The cross-section dependence appearing in the large system is therefore hidden in the univariate models. Second, the off-diagonal elements of the VAR(1) are so small that, in finite samples, the model is likely to be indistinguishable from a diagonal VAR(1) on the sole basis of the parameter estimates. The hidden dependencies induce modeling issues that were pointed out, inter alia, in Hendry (2009).

Our paper sheds some new light on the reasons why asset return variances exhibit long memory, and in particular why the estimated degree of fractional integration of univariate realized variance series is generally about 0.4 (the so-called golden rule of realized volatility; see Andersen et al., 2001, and Lieberman and Phillips, 2008). The presence of long memory in realized variances and its homogeneity across series is therefore likely due to the marginalization of a large system. We illustrate this finding by considering the log(MedRV) of 49 US stocks, where MedRV is a nonparametric, jump-robust estimator of the integrated variance (computed in our case on 5-minute returns) recently proposed by Andersen, Dobrev, and Schaumburg (2012). A sketch of this estimator follows.
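The following is a minimal sketch of MedRV as we read Andersen, Dobrev, and Schaumburg (2012): a scaled sum of squared medians of three adjacent absolute returns, which is robust to jumps. The function name and the toy usage are ours, not the paper's.

```python
# Sketch of the MedRV estimator: median of three adjacent absolute returns,
# squared, summed, and scaled to estimate the daily integrated variance.
import numpy as np

def medrv(intraday_returns):
    r = np.abs(np.asarray(intraday_returns, dtype=float))
    M = r.size
    med = np.median(np.column_stack([r[:-2], r[1:-1], r[2:]]), axis=1)
    scale = np.pi / (6 - 4 * np.sqrt(3) + np.pi)
    return scale * (M / (M - 2)) * np.sum(med ** 2)

# e.g., with `days` a list of arrays of 5-minute returns (hypothetical data):
# log_medrv = np.log([medrv(day) for day in days])
```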
The rest of this paper is organized as follows. Section 2 provides our main theoretical results. Section 3 presents some Monte Carlo simulations and compares them with some empirical evidence on log(MedRV). Finally, Section 4 concludes. The appendix contains all the proofs.

In the paper, we use the following notation. $\mathbb{R}$ and $\mathbb{C}$ denote the sets of, respectively, real and complex scalars, and $\mathbb{R}^* = \{x \in \mathbb{R},\ x \ne 0\}$. For any $x \in \mathbb{R}$, $\lfloor x\rfloor$ and $\lceil x\rceil$ denote the floor and ceiling of $x$. For $z \in \mathbb{C}$, $|z|$ is the modulus of $z$, $\bar{z}$ its conjugate, and $\mathrm{Re}(z)$ and $\mathrm{Im}(z)$ its real and imaginary parts; we often use the notation $i = \sqrt{-1}$. For any sequences $a_n$, $b_n$ and $c_n$ of real-valued scalars, $a_n = O(b_n)$, $b_n = o(c_n)$ and $a_n \sim b_n$ imply, respectively, that as $n\to\infty$, $|a_n/b_n|$ is bounded, $b_n/c_n \to 0$, and $a_n/b_n \to 1$. For any complex-valued square matrix $A$, $\det(A)$ is the determinant of $A$, $\mathrm{tr}(A)$ its trace, $\widetilde{A}$ its adjugate matrix, $A'$ its conjugate transpose and $|A| = \left(\mathrm{tr}\,A A'\right)^{1/2}$ its weak norm. For two sequences $(A_n)$ and $(B_n)$ of square matrices with bounded maximal eigenvalues, $A_n \sim B_n$ means that $|A_n - B_n| \to 0$ as $n\to\infty$. $\mathbb{1}_{\{\cdot\}}$ is the indicator function, which takes value one if $\{\cdot\}$ is true and zero otherwise.

2. Theory

In this section, we provide an analytical argument showing that long memory can arise through the marginalization of a multivariate process. We first present a set-up that introduces high-level assumptions and delineates the analysis leading to our results. Our theoretical argument draws upon three existing literatures: those of long memory time series processes, the Final Equation Representations (FER) of Zellner and Palm (1974), and large dimensional Toeplitz matrices (see, e.g., Gray, 2006). In a second part, the section presents two parametric representations where the high-level assumptions are satisfied and long memory arises in the marginalized representation.

2.1. Set-up and main results

We first consider a simple model where the $n$-vector $x_t = (x_{1t},\dots,x_{nt})'$ admits a vector autoregressive, VAR(1), representation:

$$x_t = A_n x_{t-1} + \epsilon_t, \qquad t = 1,\dots,T, \tag{1a}$$

$$\epsilon_t = (\epsilon_{1t},\dots,\epsilon_{nt})' \sim \mathrm{IID}(0, \Omega_\epsilon), \tag{1b}$$

where $\Omega_\epsilon$ is a diagonal matrix with diagonal $\sigma^2_\epsilon = \left(\sigma^2_{\epsilon_1},\dots,\sigma^2_{\epsilon_n}\right)$ such that $\sigma_{\epsilon_j} \ne 0$ for $j = 1,\dots,n$. Hence the shocks $\epsilon_{jt}$ and $\epsilon_{j't}$ are independent for $j \ne j'$, $j,j' = 1,\dots,n$. Independence is not necessary for our results and could be relaxed, but it simplifies the exposition. We provide an extension at the end of the section.

Final equation representations (FER) were studied by Zellner and Palm (1974, 2004) to show how the elements of vector processes can be marginalized to yield univariate ARMA representations; see also Cubadda, Hecq and Palm (2009) in the context of factor models and Hecq, Laurent and Palm (2012) for an application to multivariate volatility processes. The FER of model (1) is

$$\det(B_n(L))\, x_t = \widetilde{B}_n(L)\, \epsilon_t, \tag{2}$$

where $B_n(L) = I_n - A_n L$, with $L$ the lag operator. If $A_n$ admits unitary eigenvalues, we implicitly assume that $\epsilon_t = 0$ for $t < 0$ and $x_0 = 0$.² Expression (2) shows that element $x_{jt}$, obtained by marginalizing the $n$-dimensional VAR(1), admits a finite ARMA($n$, $n-1$) representation with a common AR lag polynomial. Hence, as $n\to\infty$, the univariate process $x_{jt}$ without roots cancellation follows an ARMA($\infty$,$\infty$). For clarity of the exposition, consider the following trivariate example.

²In our results below, we avoid dwelling on the issue of finite vs. infinite history, in relation to type I and type II fractional Brownian motions; see Marinucci and Robinson (1999) and Davidson and Hashimzade (2009). Hence, we implicitly consider that the date of interest $t$ is always larger than $n$, so the spectral density is defined.

Example. $x_t$ is a trivariate VAR(1) specified as follows:

$$\begin{pmatrix} x_{1t} \\ x_{2t} \\ x_{3t} \end{pmatrix} = \begin{pmatrix} a & b & 0 \\ b & a & b \\ 0 & b & a \end{pmatrix} \begin{pmatrix} x_{1t-1} \\ x_{2t-1} \\ x_{3t-1} \end{pmatrix} + \begin{pmatrix} \epsilon_{1t} \\ \epsilon_{2t} \\ \epsilon_{3t} \end{pmatrix},$$

where $\mathrm{E}[\epsilon_{jt}\epsilon_{kt}] = 0$ for $j \ne k$. The FER of $x_t$ is $\det(B_n(L))x_t = \widetilde{B}_n(L)\epsilon_t$, where

$$\det(B_n(L)) = (1-aL)\left(1-\left(a-\sqrt{2}\,b\right)L\right)\left(1-\left(a+\sqrt{2}\,b\right)L\right),$$

$$\widetilde{B}_n(L) = \begin{pmatrix} (1-aL)^2 - b^2L^2 & bL(1-aL) & b^2L^2 \\ bL(1-aL) & (1-aL)^2 & bL(1-aL) \\ b^2L^2 & bL(1-aL) & (1-aL)^2 - b^2L^2 \end{pmatrix}.$$

Hence $x_{jt} \sim$ ARMA(3,2) for $j = 1,2,3$ when $a \ne 0$ and $b \ne 0$, while $x_{jt} \sim$ ARMA(2,1) when $a = 0$ and $b \ne 0$, and $x_{jt} \sim$ AR(1) when $a \ne 0$ and $b = 0$.
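The trivariate FER can be verified symbolically; the short sympy check below is our own (it is not in the paper) and reproduces $\det(B_n(L))$ and the adjugate entries displayed above.

```python
# Sketch: symbolic check of the trivariate example's FER components.
import sympy as sp

a, b, L = sp.symbols('a b L')
A = sp.Matrix([[a, b, 0], [b, a, b], [0, b, a]])
B = sp.eye(3) - A * L

# det(B(L)) factors as (1 - a*L) * ((1 - a*L)**2 - 2*b**2*L**2)
print(sp.factor(B.det()))

adj = B.adjugate()
print(sp.expand(adj[0, 0] - ((1 - a*L)**2 - b**2*L**2)))  # 0
print(sp.expand(adj[0, 1] - b*L*(1 - a*L)))               # 0
```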
Denote the $j$th, $1 \le j \le n$, row of $\widetilde{B}_n(L)$ by $\widetilde{B}_n(L)_{j\cdot}$, such that

$$\widetilde{B}_n(L)_{j\cdot} = \left[\ \widetilde{B}_n(L)_{j1} \;\; \widetilde{B}_n(L)_{j2} \;\; \dots \;\; \widetilde{B}_n(L)_{jn}\ \right],$$

hence $x_{jt}$ admits the ARMA representation

$$\det(B_n(L))\, x_{jt} = \widetilde{B}_n(L)_{j\cdot}\, \epsilon_t = \sum_{k=1}^{n} \widetilde{B}_n(L)_{jk}\, \epsilon_{kt}.$$

Therefore, the spectral density $f_{x_j}$ of $x_{jt}$ satisfies

$$f_{n,x_j}(\omega) = \sum_{k=1}^{n} \left| \frac{\widetilde{B}_n(e^{-i\omega})_{jk}}{\det(B_n(e^{-i\omega}))} \right|^2 \sigma^2_{\epsilon_k} \tag{3}$$

for all $\omega \in \mathbb{R}$ such that $\det\left(B_n\left(e^{-i\omega}\right)\right) \ne 0$.

Expression (3) constitutes the basis of our theoretical argument. In the remainder of the section, we delineate three high-level assumptions which together imply that expression (3) tends, as $n\to\infty$, to the spectral density of a fractional white noise. The first assumption concerns the summation on the right-hand side of (3) defining the spectral density of $x_{jt}$.³

³We denote this assumption by B to emphasize that it concerns the sequence $B_n$. We follow the same logic for the next two assumptions.

Assumption B. There exists $j \in \mathbb{N}$ for which the parameters of the VAR(1) model (1) satisfy, $\forall \omega \in \mathbb{R}^*$ and as $n\to\infty$,

(i) $\displaystyle\sum_{k=1}^{n} \left| \frac{\widetilde{B}_n(e^{-i\omega})_{jk}}{\det(B_n(e^{-i\omega}))} \right|^2 \sigma^2_{\epsilon_k} = \left| \frac{\widetilde{B}_n(e^{-i\omega})_{jj}}{\det(B_n(e^{-i\omega}))} \right|^2 \sigma^2_{\epsilon_j} + o(1)$;

(ii) $\widetilde{B}_n(e^{-i\omega})_{jj} = \det\left(B_{n-1}\left(e^{-i\omega}\right)\right) + o(1)$, where $\sigma^2_{\epsilon_j} > 0$.

Under Assumption B(i), the summation on the right-hand side of (3) reduces to its $j$th element. We consider in the following how this can arise for some specific parametric assumptions on the sequence of matrices $A_n$. In Section 2.2.2, we also consider the situation where one innovation $\epsilon_{jt}$ dominates the others in terms of magnitude (i.e., variance). Assumption B(ii) implies in particular that, for $\omega \ne 0$ and as $n\to\infty$,

$$f_{n,x_j}(\omega) = \left| \frac{\det\left(I_{n-1} - A_{n-1}e^{-i\omega}\right)}{\det\left(I_n - A_n e^{-i\omega}\right)} \right|^2 \sigma^2_{\epsilon_j} + o(1). \tag{4}$$

In (4), the spectral density of one element $x_{jt}$ within a vector of dimension $n$ is asymptotically expressed as a ratio of two determinants. Our main argument lies in assuming parametric expressions for $A_n$ allowing the use of a theorem concerning ratios of determinants. For this reason, we assume that $A_n$ can be expressed as functions of a sequence of Toeplitz matrices $T_n$:

$$T_n = \begin{pmatrix} t_0^{(n)} & t_1^{(n)} & \cdots & t_{n-1}^{(n)} \\ t_{-1}^{(n)} & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & t_1^{(n)} \\ t_{-(n-1)}^{(n)} & \cdots & t_{-1}^{(n)} & t_0^{(n)} \end{pmatrix}.$$

We make the following assumption about $T_n$.

Assumption T. (i) There exists a bounded function $g(\cdot,\cdot)$, defined on $(0,1)\times\{\zeta \in \mathbb{C},\ |\zeta| \le 1\}$, which is continuous with respect to its first argument and piecewise continuous with respect to its second argument, and such that $\forall (d,z) \in (0,1)\times\{\zeta \in \mathbb{C},\ |\zeta| \le 1\}$,

$$\frac{1}{2\pi}\int_0^{2\pi} \log\left(1 - g\left(d, e^{i\omega}\right) z\right) d\omega = d\log(1-z);$$

(ii) $\forall d \in (0,1)$, $t_{d,k} = \lim_{n\to\infty} \frac{1}{n}\sum_{j=0}^{n-1} g\left(d, e^{i\frac{2\pi j}{n}}\right) e^{-2i\pi jk/n}$ satisfies $\sum_{k=-\infty}^{\infty} |t_{d,k}| < \infty$, so $T_{d,n}$, the $n$-dimensional Toeplitz matrix with entries $t_{d,j-i}$, belongs to the Wiener class;

(iii) There exists a convergent sequence $(\delta_n) \in (0,1)^{\mathbb{N}}$ with $\delta_n \to \delta$ as $n\to\infty$, such that

$$t_k^{(n)} = \frac{1}{n}\sum_{j=0}^{n-1} g\left(\delta_n, e^{i\frac{2\pi j}{n}}\right) e^{-2i\pi jk/n}.$$
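Operationally, the formula in Assumption T(iii) is an inverse discrete Fourier transform of symbol samples at the Fourier ordinates. Below is a minimal sketch of this construction (ours; `g` is a placeholder for any admissible symbol, and the constant toy symbol is only there to make the output checkable).

```python
# Sketch: t_k^{(n)} = (1/n) * sum_j g(d, e^{2i pi j/n}) e^{-2i pi j k/n},
# i.e., the FFT of the symbol samples divided by n.
import numpy as np

def toeplitz_coeffs(g, d, n):
    zs = np.exp(2j * np.pi * np.arange(n) / n)
    return np.fft.fft(g(d, zs)) / n       # index k = 0, ..., n-1

# Toy symbol g(d, z) = d: the only nonzero coefficient is t_0 = d,
# so T_{d,n} is d times the identity matrix.
t = toeplitz_coeffs(lambda d, z: np.full(z.shape, d), 0.5, 8)
print(np.round(t.real, 12))               # [0.5, 0, 0, 0, 0, 0, 0, 0]
```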
For all $d \in (0,1)$, the partial function $g_{T_d}(\cdot) = g(d,\cdot)$, such that $g_{T_d}(z) = \sum_{k=-\infty}^{\infty} t_{d,k} z^k$, is called the symbol of the matrices $(T_{d,n})$. The function $\omega \in \mathbb{R}: \omega \mapsto g_{T_d}(e^{i\omega})$ is generally referred to as the "spectral density" of $T_{d,n}$, but to avoid confusion with the spectral density of the processes $x_{jt}$ we do not use this terminology. Yet, with a slight abuse of terminology, we refer to $g_{T_d}(e^{i\omega})$ as the function $\omega \mapsto g(d, e^{i\omega})$. We discuss the details of this assumption in the proof. Notice though that, under Assumption T, the entries of the sequence $(T_{d,n})$ do not depend on $n$; only the dimension of $T_{d,n}$ does. The identity matrix $I_n$ is also Toeplitz, with symbol $g_I(\cdot) = 1$; hence, for $|z| < 1$, the matrix $I_n - T_n z$ is Toeplitz with symbol $1 - g_{T_d}(\cdot)z$.

Assumption T ensures that the First Theorem of Szegő (1915) holds for $I_n - T_{d,n}z$. It states that as $n\to\infty$,

$$\frac{\det(I_{n-1} - T_{d,n-1}z)}{\det(I_n - T_{d,n}z)} \to \exp\left\{ -\frac{1}{2\pi}\int_0^{2\pi} \log\left(1 - g_{T_d}\left(e^{i\omega}\right)z\right) d\omega \right\}. \tag{5}$$

Under Assumption T(i), the limit above equals $(1-z)^{-d}$.

We make one last high-level assumption, concerning now the parameters of the VAR(1) and how they relate to the Toeplitz structure defined above.

Assumption A. There exists a sequence of real matrices $(A_n)$ such that, for $\omega \in \mathbb{R}^*$ and as $n\to\infty$,

$$\frac{\det\left(I_{n-1} - A_{n-1}e^{-i\omega}\right)}{\det\left(I_n - A_n e^{-i\omega}\right)} \sim \frac{\det\left(I_{n-1} - T_{n-1}e^{-i\omega}\right)}{\det\left(I_n - T_n e^{-i\omega}\right)},$$

where $T_n$ is defined in Assumption T.

The three high-level assumptions (i.e., Assumptions B, T and A) allow us to prove the following theorem.

Theorem 1. Let the real vector $x_t$ of dimension $n$ be defined by the VAR(1) model (1). Under Assumptions B, T and A, there exist $j \in \mathbb{N}$ and $\delta \in (0,1)$ such that the spectral density of element $x_{jt}$ satisfies, for $\omega \in \mathbb{R}^*$ and as $n\to\infty$,

$$f_{n,x_j}(\omega) \to \left|1 - e^{-i\omega}\right|^{-2\delta} \sigma^2_{\epsilon_j}.$$

The convergence is uniform when $\omega$ is restricted to closed subsets of $\mathbb{R}^*$.

In Theorem 1, the spectral density of the marginalized univariate process $x_{jt}$ tends to that of an $I(\delta)$ fractional white noise as the dimension of the system $n\to\infty$. Hence individual series may asymptotically (as the cross-section dimension increases) exhibit long memory although the vector process itself does not. In the theorem, the asymptotic spectral density only depends on one innovation through its variance $\sigma^2_{\epsilon_j}$; the others do not matter (and we show below an example where they disappear, i.e., $\sigma^2_{\epsilon_k} \to 0$ for $k \ne j$). Hence this is distinctly different from the example of Granger (1980, Section 4), where long memory arises in a vector process from the aggregation of moving averages; in fact, our Assumption B(i) precludes it.

The set-up above can be generalized easily to transformations $A_n \to V_n A_n V_n^{-1}$, where $V_n$ is a non-singular $n$-dimensional matrix. Indeed,

$$\det\left(I_n - V_n A_n V_n^{-1} z\right) = \det\left[V_n (I_n - A_n z) V_n^{-1}\right] = \det(I_n - A_n z),$$

and the adjugate of $I_n - V_n A_n V_n^{-1} z$ is $V_n \widetilde{(I_n - A_n z)}\, V_n^{-1}$. Hence Assumptions A and B apply to $V_n A_n V_n^{-1}$ if they hold for $A_n$. It follows that expression (1b) can be relaxed to cases where $\Omega_\epsilon$ is non-diagonal yet positive definite, by choosing for $V_n$ the matrix containing its orthonormal eigenvectors. Also, $V_n A_n V_n^{-1}$ is not necessarily Toeplitz, so our result is quite general.

In the following subsection, we provide examples of primitive conditions to impose on the parameters of the VAR(1) model for Theorem 1 to hold.

2.2. Two examples

Here we provide two parametric examples of models satisfying Assumptions B, T and A. The first example is symmetric in the sense that all processes entering the VAR are defined by the same dynamics (i.e., the results are invariant by rotations of $A_n$). This example stresses the fact that asymmetry, or heterogeneity, is not necessary for long memory to arise.
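Before turning to the examples, the ratio in (5) can be checked numerically. The sketch below is ours: it uses a simple Wiener-class symbol with coefficients $t_k = (1/2)^{|k|+1}$ and a real $z$ with $|g_{T}(e^{i\omega})z| < 1$; both choices are arbitrary and only meant to illustrate the convergence, not to reproduce the paper's $T_{d,n}$.

```python
# Sketch: numerical check of the Szego ratio (5) for the symbol with
# coefficients t_k = (1/2)^{|k|+1} and z = 0.3.
import numpy as np
from scipy.linalg import toeplitz
from scipy.integrate import quad

z = 0.3

def g(w, K=200):                           # truncated symbol value at omega = w
    k = np.arange(-K, K + 1)
    return np.sum(0.5 ** (np.abs(k) + 1) * np.exp(1j * k * w)).real

def det_ratio(n):
    t = 0.5 ** (np.arange(n) + 1.0)        # t_k = t_{-k}: symmetric Toeplitz
    num = np.linalg.det(np.eye(n - 1) - toeplitz(t[:n - 1]) * z)
    return num / np.linalg.det(np.eye(n) - toeplitz(t) * z)

limit = np.exp(-quad(lambda w: np.log(1 - g(w) * z), 0, 2 * np.pi)[0] / (2 * np.pi))
for n in [10, 40, 160]:
    print(n, det_ratio(n), "limit:", limit)
```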
The second example presents a heterogeneous case where the results are not symmetric for all $x_{jt}$.

2.2.1. A symmetric example

The sequence of $n$-dimensional matrices $A_n$ is defined as

$$A_n = T_n^* + \eta_n D_n, \tag{6}$$

where $T_n^*$ is specified below, $\eta_n$ is a real scalar sequence that satisfies $\eta_n = o\left(n^{-1}\right)$, and $D_n$ is a real antisymmetric Toeplitz matrix with absolutely summable entries and zero diagonal elements.

To define $T_n^*$, we first consider $T_n$ defined as in Assumption T. We choose the function $g(\cdot,\cdot)$ such that, for $\omega \ge 0$,

$$g\left(d, e^{i\omega}\right) = \mathbb{1}_{\left\{0 \le u < \pi d\right\}} + \mathbb{1}_{\left\{\pi\left(\frac{3}{2} - d\right) < u \le \frac{3\pi}{2}\right\}}, \qquad u = \omega \bmod 2\pi, \tag{7}$$

and $\omega \mapsto g\left(d, e^{i\omega}\right)$ is even. In the proof of Theorem 1, we show that $T_n$ is asymptotically equivalent to a circulant matrix $C_n$ defined as

$$C_n = \begin{pmatrix} c_0^{(n)} & c_1^{(n)} & \cdots & c_{n-1}^{(n)} \\ c_{n-1}^{(n)} & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & c_1^{(n)} \\ c_1^{(n)} & \cdots & c_{n-1}^{(n)} & c_0^{(n)} \end{pmatrix},$$

where $c_k^{(n)} = t_{-k}^{(n)} + t_{n-k}^{(n)}$ for $k \ne 0$ and $c_0^{(n)} = t_0^{(n)}$. Eigenvalues of circulant matrices can be expressed in terms of the associated symbol evaluated at the Fourier ordinates. Here $C_n$ is chosen so that, as $n\to\infty$, $c_k^{(n)} \sim t_{\delta_n,-k} + t_{\delta_n,n-k}$, which defines a circulant matrix with eigenvalues $g\left(\delta_n, e^{2i\pi j/n}\right)$ for $j = 0,\dots,n-1$, i.e., with about $\lfloor n\delta_n \rfloor$ unit eigenvalues (the exact number is $n t_0^{(n)}$) and $n - \lfloor n\delta_n \rfloor$ zero eigenvalues.

Expression (7) defines an even and real-valued function $\omega \mapsto g\left(d, e^{i\omega}\right)$, so $t_{-k}^{(n)} + t_{n-k}^{(n)} = t_k^{(n)} + t_{-k}^{(n)}$ and the entries $c_k^{(n)}$ are real. Hence $C_n$ is also asymptotically equivalent to the matrix $T_n^* \equiv \mathrm{Re}(T_n)$, with entries $t_k^{*(n)} = \mathrm{Re}\left(t_k^{(n)}\right)$. Hence

$$\frac{\det\left(I_{n-1} - T_{n-1}^* z\right)}{\det\left(I_n - T_n^* z\right)} \sim \frac{\det\left(I_{n-1} - C_{n-1} z\right)}{\det\left(I_n - C_n z\right)}. \tag{8}$$

The matrix $T_n^*$ satisfies the following properties:

$$t_0^{*(n)} = \delta_n + O\left(n^{-1}\right), \qquad t_k^{*(n)} = O\left(n^{-1}\right),\ k \ne 0,$$

where we specify

$$\delta_n = \frac{1}{2} + o\left(n^{-2}\right). \tag{9}$$

As $n\to\infty$, the off-diagonal entries of $T_n^*$ individually tend to zero. Yet the convergence is slow enough to ensure that, for all $n$, $T_n^*$ is different enough from a diagonal matrix. Indeed, the off-diagonal elements of each row admit a nonzero sum:

$$\sum_{k=1}^{n-1} t_k^{*(n)} = 1 - g(\delta_n, 0) = 1 - \delta_n + O\left(n^{-1}\right). \tag{10}$$

To understand further the behavior of the sequence $(T_n^*)$, consider the limiting coefficients $t_{d,k}^* = \mathrm{Re}(t_{d,k})$, where $t_{d,k}$ is defined in Assumption T:

$$t_{d,k}^* = d\,\mathrm{sinc}\!\left(\frac{\pi k d}{2}\right)\left[\mathbb{1}_{\{k\ \mathrm{odd}\}}\,\sin\frac{\pi k}{4}\,\sin\frac{\pi k\left(\frac{1}{2} - d\right)}{2} + \mathbb{1}_{\{k\ \mathrm{even}\}}\,\cos\frac{\pi k}{4}\,\cos\frac{\pi k\left(\frac{1}{2} - d\right)}{2}\right], \tag{11}$$
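To make the construction concrete, the sketch below (ours) builds $t_k^{*(n)}$ from our reading of the indicator set in (7) at $\delta_n = 1/2$ and checks the three displayed properties: the diagonal is close to $1/2$, the off-diagonal entries are $O\left(n^{-1}\right)$, and their row sum is close to $1 - \delta_n$. If our reading of (7) is off, only the specific indicator set would change, not the recipe.

```python
# Sketch: Toeplitz coefficients t*_k = Re t_k^{(n)} implied by (7) at d = 1/2,
# with the set {0 <= u < pi*d} union {pi*(3/2 - d) < u <= 3*pi/2}.
import numpy as np

def tstar(n, d=0.5):
    u = 2 * np.pi * np.arange(n) / n
    g = ((u < np.pi * d) |
         ((np.pi * (1.5 - d) < u) & (u <= 1.5 * np.pi))).astype(float)
    return np.real(np.fft.fft(g)) / n      # t*_k^{(n)}, k = 0, ..., n-1

for n in [100, 400, 1600]:
    t = tstar(n)
    # diagonal ~ 1/2; n * max off-diagonal stays bounded; row sum ~ 1 - 1/2
    print(n, t[0], n * np.max(np.abs(t[1:])), t[1:].sum())
```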
