Lecture 2: ARMA Models*

*Copyright 2002-2006 by Ling Hu.

1 ARMA Process

As we have remarked, dependence is very common in time series observations. To model this time series dependence, we start with univariate ARMA models. To motivate the model, we can follow two lines of thinking.

First, for a series $x_t$, we can model the level of its current observation as depending on the level of its lagged observations. For example, if we observe a high GDP realization this quarter, we would expect GDP in the next few quarters to be high as well. This line of thinking is captured by an AR model. The AR(1) (autoregressive of order one) model can be written as

$$x_t = \phi x_{t-1} + \epsilon_t,$$

where $\epsilon_t \sim WN(0, \sigma_\epsilon^2)$; we maintain this assumption throughout the lecture. Similarly, the AR(p) (autoregressive of order p) model can be written as

$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \ldots + \phi_p x_{t-p} + \epsilon_t.$$

In the second line of thinking, the observation of a random variable at time $t$ is affected not only by the shock at time $t$, but also by shocks that took place before time $t$. For example, if we observe a negative shock to the economy, say a catastrophic earthquake, we would expect the negative effect to persist beyond the period in which it occurs. This line of thinking is captured by an MA model. The MA(1) (moving average of order one) and MA(q) (moving average of order q) models can be written as

$$x_t = \epsilon_t + \theta \epsilon_{t-1}$$

and

$$x_t = \epsilon_t + \theta_1 \epsilon_{t-1} + \ldots + \theta_q \epsilon_{t-q}.$$

Combining these two models, we get the general ARMA(p, q) model,

$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \ldots + \phi_p x_{t-p} + \epsilon_t + \theta_1 \epsilon_{t-1} + \ldots + \theta_q \epsilon_{t-q}.$$

The ARMA model provides one of the basic tools of time series modeling. In the next few sections, we discuss how to draw inferences using a univariate ARMA model.

2 Lag Operators

Lag operators let us write an ARMA model much more concisely. Applying the lag operator (denoted $L$) once moves the index back one time unit; applying it $k$ times moves the index back $k$ units:

$$L x_t = x_{t-1}, \quad L^2 x_t = x_{t-2}, \quad \ldots, \quad L^k x_t = x_{t-k}.$$

The lag operator is distributive over addition, i.e.

$$L(x_t + y_t) = x_{t-1} + y_{t-1}.$$

Using lag operators, we can rewrite the ARMA models as

$$\text{AR(1)}: \quad (1 - \phi L) x_t = \epsilon_t$$
$$\text{AR(p)}: \quad (1 - \phi_1 L - \phi_2 L^2 - \ldots - \phi_p L^p) x_t = \epsilon_t$$
$$\text{MA(1)}: \quad x_t = (1 + \theta L)\epsilon_t$$
$$\text{MA(q)}: \quad x_t = (1 + \theta_1 L + \theta_2 L^2 + \ldots + \theta_q L^q)\epsilon_t$$

Let $\phi_0 = 1$, $\theta_0 = 1$ and define the lag polynomials

$$\phi(L) = 1 - \phi_1 L - \phi_2 L^2 - \ldots - \phi_p L^p$$
$$\theta(L) = 1 + \theta_1 L + \theta_2 L^2 + \ldots + \theta_q L^q$$

With lag polynomials, we can write an ARMA process more compactly:

$$\text{AR}: \quad \phi(L) x_t = \epsilon_t, \qquad \text{MA}: \quad x_t = \theta(L)\epsilon_t, \qquad \text{ARMA}: \quad \phi(L) x_t = \theta(L)\epsilon_t.$$
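Since everything that follows is built from these defining difference equations, it may help to see them in code. The following is a minimal simulation sketch, assuming NumPy; the helper name `simulate_arma` and the parameter values at the bottom are illustrative choices, not taken from the lecture.

```python
import numpy as np

def simulate_arma(phi, theta, n, sigma=1.0, seed=0):
    """Simulate x_t = phi_1 x_{t-1} + ... + phi_p x_{t-p}
                     + eps_t + theta_1 eps_{t-1} + ... + theta_q eps_{t-q},
    with eps_t drawn i.i.d. N(0, sigma**2) and zero initial conditions."""
    rng = np.random.default_rng(seed)
    p, q, burn = len(phi), len(theta), 200        # burn-in to wash out the zero start
    eps = rng.normal(0.0, sigma, size=n + burn)
    x = np.zeros(n + burn)
    for t in range(n + burn):
        ar = sum(phi[i] * x[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
        ma = sum(theta[j] * eps[t - 1 - j] for j in range(q) if t - 1 - j >= 0)
        x[t] = ar + eps[t] + ma
    return x[burn:]

# Illustrative values: an ARMA(1, 1) with phi = 0.5, theta = 0.3
path = simulate_arma(phi=[0.5], theta=[0.3], n=500)
print(path[:5], path.mean(), path.var())
```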
3 Invertibility

Given a time series probability model, we can usually find multiple ways to represent it. Which representation to choose depends on the problem at hand. For example, to study impulse-response functions (Section 4), an MA representation may be more convenient, while to estimate an ARMA model, an AR representation may be more convenient, since $x_t$ is usually observable while $\epsilon_t$ is not. However, not all ARMA processes can be inverted. In this section, we consider the conditions under which we can invert an AR model to an MA model and an MA model to an AR model. It turns out that invertibility, which means that the process can be inverted, is an important property of the model.

If we let $1$ denote the identity operator, i.e. $1 y_t = y_t$, then the inversion operator $(1 - \phi L)^{-1}$ is defined to be the operator such that

$$(1 - \phi L)^{-1}(1 - \phi L) = 1.$$

For the AR(1) process, if we premultiply both sides of the equation by $(1 - \phi L)^{-1}$, we get

$$x_t = (1 - \phi L)^{-1} \epsilon_t.$$

Is there any explicit way to rewrite $(1 - \phi L)^{-1}$? Yes, and the answer turns out to be $\theta(L)$ with $\theta_k = \phi^k$, for $|\phi| < 1$. To show this,

$$(1 - \phi L)\theta(L) = (1 - \phi L)(1 + \theta_1 L + \theta_2 L^2 + \ldots)$$
$$= (1 - \phi L)(1 + \phi L + \phi^2 L^2 + \ldots)$$
$$= 1 - \phi L + \phi L - \phi^2 L^2 + \phi^2 L^2 - \phi^3 L^3 + \ldots$$
$$= 1 - \lim_{k \to \infty} \phi^k L^k$$
$$= 1 \quad \text{for } |\phi| < 1.$$

We can also verify this result by recursive substitution:

$$x_t = \phi x_{t-1} + \epsilon_t$$
$$= \phi^2 x_{t-2} + \epsilon_t + \phi \epsilon_{t-1}$$
$$\vdots$$
$$= \phi^k x_{t-k} + \epsilon_t + \phi \epsilon_{t-1} + \ldots + \phi^{k-1}\epsilon_{t-k+1}$$
$$= \phi^k x_{t-k} + \sum_{j=0}^{k-1} \phi^j \epsilon_{t-j}.$$

With $|\phi| < 1$, we have $\lim_{k \to \infty} \phi^k x_{t-k} = 0$, so again we obtain the moving average representation with MA coefficients equal to $\phi^k$. So the condition $|\phi| < 1$ enables us to invert an AR(1) process to an MA($\infty$) process:

$$\text{AR(1)}: \quad (1 - \phi L) x_t = \epsilon_t$$
$$\text{MA}(\infty): \quad x_t = \theta(L)\epsilon_t \quad \text{with } \theta_k = \phi^k.$$

We have obtained some nice results in inverting an AR(1) process to an MA($\infty$) process. How, then, do we invert a general AR(p) process? We factorize the lag polynomial and then use the result that $(1 - \phi L)^{-1} = \theta(L)$. For example, for $p = 2$ we have

$$(1 - \phi_1 L - \phi_2 L^2) x_t = \epsilon_t. \tag{1}$$

To factorize this polynomial, we need to find roots $\lambda_1$ and $\lambda_2$ such that

$$(1 - \phi_1 L - \phi_2 L^2) = (1 - \lambda_1 L)(1 - \lambda_2 L).$$

Given that both $|\lambda_1| < 1$ and $|\lambda_2| < 1$ (or, when they are complex numbers, that they lie within the unit circle; keep this in mind, as I may not mention it again in the remainder of the lecture), we can write

$$(1 - \lambda_1 L)^{-1} = \theta_1(L), \qquad (1 - \lambda_2 L)^{-1} = \theta_2(L),$$

and so, to invert (1), we have

$$x_t = (1 - \lambda_1 L)^{-1}(1 - \lambda_2 L)^{-1}\epsilon_t = \theta_1(L)\theta_2(L)\epsilon_t.$$

Solving $\theta_1(L)\theta_2(L)$ is straightforward:

$$\theta_1(L)\theta_2(L) = (1 + \lambda_1 L + \lambda_1^2 L^2 + \ldots)(1 + \lambda_2 L + \lambda_2^2 L^2 + \ldots)$$
$$= 1 + (\lambda_1 + \lambda_2) L + (\lambda_1^2 + \lambda_1\lambda_2 + \lambda_2^2) L^2 + \ldots$$
$$= \sum_{k=0}^{\infty}\Big(\sum_{j=0}^{k} \lambda_1^j \lambda_2^{k-j}\Big) L^k$$
$$= \psi(L), \text{ say,}$$

with $\psi_k = \sum_{j=0}^{k} \lambda_1^j \lambda_2^{k-j}$. Similarly, we can invert the general AR(p) process, given that all roots $\lambda_i$ have absolute value less than one.

An alternative way to represent this MA process (to express $\psi$) is to use partial fractions. Let $c_1, c_2$ be two constants whose values are determined by

$$\frac{1}{(1-\lambda_1 L)(1-\lambda_2 L)} = \frac{c_1}{1-\lambda_1 L} + \frac{c_2}{1-\lambda_2 L} = \frac{c_1(1-\lambda_2 L) + c_2(1-\lambda_1 L)}{(1-\lambda_1 L)(1-\lambda_2 L)}.$$

We must have

$$1 = c_1(1 - \lambda_2 L) + c_2(1 - \lambda_1 L) = (c_1 + c_2) - (c_1\lambda_2 + c_2\lambda_1)L,$$

which gives $c_1 + c_2 = 1$ and $c_1\lambda_2 + c_2\lambda_1 = 0$. Solving these two equations, we get

$$c_1 = \frac{\lambda_1}{\lambda_1 - \lambda_2}, \qquad c_2 = \frac{\lambda_2}{\lambda_2 - \lambda_1}.$$

Then we can express $x_t$ as

$$x_t = [(1-\lambda_1 L)(1-\lambda_2 L)]^{-1}\epsilon_t$$
$$= c_1(1-\lambda_1 L)^{-1}\epsilon_t + c_2(1-\lambda_2 L)^{-1}\epsilon_t$$
$$= c_1\sum_{k=0}^{\infty}\lambda_1^k\epsilon_{t-k} + c_2\sum_{k=0}^{\infty}\lambda_2^k\epsilon_{t-k}$$
$$= \sum_{k=0}^{\infty}\psi_k\epsilon_{t-k},$$

where $\psi_k = c_1\lambda_1^k + c_2\lambda_2^k$.
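The two expressions for $\psi_k$, the convolution $\sum_{j=0}^{k}\lambda_1^j\lambda_2^{k-j}$ and the partial-fraction form $c_1\lambda_1^k + c_2\lambda_2^k$, are easy to check against each other numerically. A minimal sketch; the values $\lambda_1 = 0.8$ and $\lambda_2 = -0.3$ are illustrative, not from the lecture:

```python
import numpy as np

lam1, lam2 = 0.8, -0.3          # illustrative roots, both inside the unit circle

# psi_k as the convolution sum_{j=0}^{k} lam1^j * lam2^(k-j)
def psi_conv(k):
    return sum(lam1**j * lam2**(k - j) for j in range(k + 1))

# psi_k via the partial-fraction form c1 * lam1^k + c2 * lam2^k
c1 = lam1 / (lam1 - lam2)
c2 = lam2 / (lam2 - lam1)
def psi_pf(k):
    return c1 * lam1**k + c2 * lam2**k

for k in range(6):
    print(k, psi_conv(k), psi_pf(k))   # the two columns agree up to rounding
```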
Similarly, an MA process, $x_t = \theta(L)\epsilon_t$, is invertible if $\theta(L)^{-1}$ exists. An MA(1) process is invertible if $|\theta| < 1$, and an MA(q) process is invertible if all roots of

$$1 + \theta_1 z + \theta_2 z^2 + \ldots + \theta_q z^q = 0$$

lie outside the unit circle. Note that for any invertible MA process, we can find a noninvertible MA process which is the same as the invertible process up to the second moment. The converse is also true. We give an example in Section 5.

Finally, given an invertible ARMA(p, q) process,

$$\phi(L) x_t = \theta(L)\epsilon_t$$
$$x_t = \phi^{-1}(L)\theta(L)\epsilon_t$$
$$x_t = \psi(L)\epsilon_t,$$

what is the series $\psi_k$? Note that since

$$\phi^{-1}(L)\theta(L)\epsilon_t = \psi(L)\epsilon_t,$$

we have

$$\theta(L) = \phi(L)\psi(L).$$

So the elements of $\psi$ can be computed recursively by equating the coefficients of $L^k$.

Example 1 For an ARMA(1,1) process, we have

$$1 + \theta L = (1 - \phi L)(\psi_0 + \psi_1 L + \psi_2 L^2 + \ldots)$$
$$= \psi_0 + (\psi_1 - \phi\psi_0)L + (\psi_2 - \phi\psi_1)L^2 + \ldots$$

Matching coefficients on $L^k$, we get

$$1 = \psi_0$$
$$\theta = \psi_1 - \phi\psi_0$$
$$0 = \psi_j - \phi\psi_{j-1} \quad \text{for } j \geq 2.$$

Solving these equations, we easily get

$$\psi_0 = 1, \qquad \psi_1 = \phi + \theta, \qquad \psi_j = \phi^{j-1}(\phi + \theta) \quad \text{for } j \geq 2.$$

4 Impulse-Response Functions

Given an ARMA model, $\phi(L)x_t = \theta(L)\epsilon_t$, it is natural to ask: what is the effect on $x_t$ of a unit shock at time $s$ (for $s < t$)?

4.1 MA process

For an MA(1) process, $x_t = \epsilon_t + \theta\epsilon_{t-1}$, the effects of $\epsilon$ on $x$ are:

$$\epsilon: \quad 0 \quad 1 \quad 0 \quad 0 \quad 0$$
$$x: \quad 0 \quad 1 \quad \theta \quad 0 \quad 0$$

For an MA(q) process, $x_t = \epsilon_t + \theta_1\epsilon_{t-1} + \theta_2\epsilon_{t-2} + \ldots + \theta_q\epsilon_{t-q}$, the effects of $\epsilon$ on $x$ are:

$$\epsilon: \quad 0 \quad 1 \quad 0 \quad 0 \quad \ldots \quad 0 \quad 0$$
$$x: \quad 0 \quad 1 \quad \theta_1 \quad \theta_2 \quad \ldots \quad \theta_q \quad 0$$

The left panel of Figure 1 plots the impulse-response function of an MA(3) process. Similarly, we can write down the effects for an MA($\infty$) process. As you can see, we can read the impulse-response function immediately off an MA representation.

4.2 AR process

For an AR(1) process $x_t = \phi x_{t-1} + \epsilon_t$ with $|\phi| < 1$, we can invert it to an MA process, and the effects of $\epsilon$ on $x$ are:

$$\epsilon: \quad 0 \quad 1 \quad 0 \quad 0 \quad \ldots$$
$$x: \quad 0 \quad 1 \quad \phi \quad \phi^2 \quad \ldots$$

As can be seen from the above, the impulse-response dynamics are quite clear from an MA representation. For example, let $t > s > 0$; given a one-unit increase in $\epsilon_s$, the effect on $x_t$ would be $\phi^{t-s}$ if there were no other shocks. If there are shocks that take place at times other than $s$ and have a nonzero effect on $x_t$, we can simply add these effects, since this is a linear model.

The dynamics are a bit more complicated for a higher-order AR process, but after applying our old trick of inverting it to an MA process, the analysis is straightforward. Take an AR(2) process as an example.

Example 2

$$x_t = 0.6 x_{t-1} + 0.2 x_{t-2} + \epsilon_t$$

or

$$(1 - 0.6L - 0.2L^2)x_t = \epsilon_t.$$

We first solve the polynomial $y^2 + 3y - 5 = 0$ and get the two roots¹ $y_1 = 1.1926$ and $y_2 = -4.1926$. Recall that $\lambda_1 = 1/y_1 = 0.84$ and $\lambda_2 = 1/y_2 = -0.24$. So we can factorize the lag polynomial as

$$(1 - 0.6L - 0.2L^2)x_t = (1 - 0.84L)(1 + 0.24L)x_t$$
$$x_t = (1 - 0.84L)^{-1}(1 + 0.24L)^{-1}\epsilon_t = \psi(L)\epsilon_t,$$

where $\psi_k = \sum_{j=0}^{k}\lambda_1^j\lambda_2^{k-j}$. In this example, the series $\psi_k$ is $\{1, 0.6, 0.5616, 0.4579, 0.3880, \ldots\}$. So the effects of $\epsilon$ on $x$ can be described as:

$$\epsilon: \quad 0 \quad 1 \quad 0 \quad 0 \quad 0 \quad \ldots$$
$$x: \quad 0 \quad 1 \quad 0.6 \quad 0.5616 \quad 0.4579 \quad \ldots$$

The right panel of Figure 1 plots this impulse-response function. So after we invert an AR(p) process to an MA process, given $t > s > 0$, the effect of a one-unit increase in $\epsilon_s$ on $x_t$ is just $\psi_{t-s}$.

We can see that, given a linear process, AR or ARMA, if we can represent it as an MA process, we obtain the impulse-response dynamics immediately. In fact, the MA representation is the same thing as the impulse-response function.

¹Recall that the roots of the polynomial $ay^2 + by + c = 0$ are $\frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$.

[Figure 1: The impulse-response functions of an MA(3) process ($\theta_1 = 0.6$, $\theta_2 = -0.5$, $\theta_3 = 0.4$) and an AR(2) process ($\phi_1 = 0.6$, $\phi_2 = 0.2$), with a unit shock at time zero; each panel plots the response against time.]
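The two impulse-response functions in Figure 1 can be reproduced with a few lines of code. A minimal sketch, assuming NumPy and using the lecture's (rounded) roots $\lambda_1 = 0.84$, $\lambda_2 = -0.24$ for the AR(2) part:

```python
import numpy as np

# MA(3) from Figure 1: the impulse response is just (1, theta_1, theta_2, theta_3, 0, 0, ...)
theta = [0.6, -0.5, 0.4]
irf_ma3 = np.r_[1.0, theta, np.zeros(26)]

# AR(2) from Example 2, using the rounded roots lambda_1 = 0.84, lambda_2 = -0.24:
# psi_k = sum_{j=0}^{k} lambda_1^j * lambda_2^(k-j)
lam1, lam2 = 0.84, -0.24
irf_ar2 = np.array([sum(lam1**j * lam2**(k - j) for j in range(k + 1))
                    for k in range(30)])

print(irf_ar2[:5])   # approximately 1, 0.6, 0.5616, 0.4579, 0.3880, as in Example 2
```

Plotting `irf_ma3` and `irf_ar2` against time (for example with matplotlib) gives the two panels of Figure 1.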
5 Autocovariance Functions and Stationarity of ARMA Models

5.1 MA(1)

$$x_t = \epsilon_t + \theta\epsilon_{t-1},$$

where $\epsilon_t \sim WN(0, \sigma_\epsilon^2)$. It is easy to calculate the first two moments of $x_t$:

$$E(x_t) = E(\epsilon_t + \theta\epsilon_{t-1}) = 0$$
$$E(x_t^2) = (1 + \theta^2)\sigma_\epsilon^2$$

and

$$\gamma_x(t, t+h) = E[(\epsilon_t + \theta\epsilon_{t-1})(\epsilon_{t+h} + \theta\epsilon_{t+h-1})] = \begin{cases}\theta\sigma_\epsilon^2 & \text{for } h = 1\\ 0 & \text{for } h > 1.\end{cases}$$

So, for an MA(1) process, we have a fixed mean and a covariance function which does not depend on time $t$: $\gamma(0) = (1+\theta^2)\sigma_\epsilon^2$, $\gamma(1) = \theta\sigma_\epsilon^2$, and $\gamma(h) = 0$ for $h > 1$. So we know that an MA(1) process is stationary for any finite value of $\theta$. The autocorrelation can be computed as $\rho_x(h) = \gamma_x(h)/\gamma_x(0)$, so

$$\rho_x(0) = 1, \qquad \rho_x(1) = \frac{\theta}{1+\theta^2}, \qquad \rho_x(h) = 0 \ \text{for } h > 1.$$

We noted in the section on invertibility that for an invertible (noninvertible) MA process, there always exists a noninvertible (invertible) process which is the same as the original process up to the second moment. We use the following MA(1) process as an example.

Example 3 The process

$$x_t = \epsilon_t + \theta\epsilon_{t-1}, \quad \epsilon_t \sim WN(0, \sigma^2), \quad |\theta| > 1,$$

is noninvertible. Consider an invertible MA process defined as

$$\tilde{x}_t = \tilde\epsilon_t + (1/\theta)\tilde\epsilon_{t-1}, \quad \tilde\epsilon_t \sim WN(0, \theta^2\sigma^2).$$

Then we can compute that $E(x_t) = E(\tilde x_t) = 0$, $E(x_t^2) = E(\tilde x_t^2) = (1+\theta^2)\sigma^2$, $\gamma_x(1) = \gamma_{\tilde x}(1) = \theta\sigma^2$, and $\gamma_x(h) = \gamma_{\tilde x}(h) = 0$ for $h > 1$. Therefore, these two processes are equivalent up to their second moments.

To be more concrete, we plug in some numbers. Let $\theta = 2$; then the process

$$x_t = \epsilon_t + 2\epsilon_{t-1}, \quad \epsilon_t \sim WN(0, 1)$$

is noninvertible. Consider the invertible process

$$\tilde x_t = \tilde\epsilon_t + (1/2)\tilde\epsilon_{t-1}, \quad \tilde\epsilon_t \sim WN(0, 4).$$

Note that $E(x_t) = E(\tilde x_t) = 0$, $E(x_t^2) = E(\tilde x_t^2) = 5$, $\gamma_x(1) = \gamma_{\tilde x}(1) = 2$, and $\gamma_x(h) = \gamma_{\tilde x}(h) = 0$ for $h > 1$.

Although these two representations, the noninvertible MA and the invertible MA, generate the same process up to the second moment, in practice we prefer the invertible representation, because if we can invert an MA process to an AR process, we can recover the value of $\epsilon_t$ (which is not observable) from all past values of $x$ (which are observable). If a process is noninvertible, then, in order to find the value of $\epsilon_t$, we would have to know all future values of $x$.
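The equivalence in Example 3 is easy to see in simulation: the two parameterizations produce series with essentially the same sample mean, variance, and autocovariances. A minimal sketch, assuming NumPy; the sample size and seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def ma1(theta, sigma):
    """Simulate an MA(1) path x_t = eps_t + theta * eps_{t-1}, eps_t ~ N(0, sigma**2)."""
    eps = rng.normal(0.0, sigma, size=n + 1)
    return eps[1:] + theta * eps[:-1]

def acov(x, h):
    """Sample autocovariance at lag h."""
    x = x - x.mean()
    return np.mean(x[h:] * x[:len(x) - h])

x     = ma1(theta=2.0, sigma=1.0)   # noninvertible: theta = 2,   Var(eps) = 1
x_til = ma1(theta=0.5, sigma=2.0)   # invertible:    theta = 1/2, Var(eps) = 4

for h in (0, 1, 2):
    print(h, acov(x, h), acov(x_til, h))   # both columns close to 5, 2, 0
```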
5.2 MA(q)

$$x_t = \theta(L)\epsilon_t = \sum_{k=0}^{q}\theta_k L^k \epsilon_t$$

The first two moments are

$$E(x_t) = 0, \qquad E(x_t^2) = \sum_{k=0}^{q}\theta_k^2\sigma_\epsilon^2,$$

and

$$\gamma_x(h) = \begin{cases}\sum_{k=0}^{q-h}\theta_k\theta_{k+h}\sigma_\epsilon^2 & \text{for } h = 1, 2, \ldots, q\\ 0 & \text{for } h > q.\end{cases}$$

Again, an MA(q) process is stationary for any finite values of $\theta_1, \ldots, \theta_q$.

5.3 MA(∞)

$$x_t = \theta(L)\epsilon_t = \sum_{k=0}^{\infty}\theta_k L^k\epsilon_t$$

Before we compute moments and discuss the stationarity of $x_t$, we should first make sure that $\{x_t\}$ converges.

Proposition 1 If $\{\epsilon_t\}$ is a sequence of white noise with $\sigma_\epsilon^2 < \infty$, and if $\sum_{k=0}^{\infty}\theta_k^2 < \infty$, then the series

$$x_t = \theta(L)\epsilon_t = \sum_{k=0}^{\infty}\theta_k\epsilon_{t-k}$$

converges in mean square.

Proof (see Appendix 3.A in Hamilton): Recall the Cauchy criterion: a sequence $\{y_n\}$ converges in mean square if and only if $\|y_n - y_m\| \to 0$ as $n, m \to \infty$. In this problem, for $n > m > 0$, we want to show that

$$E\left[\sum_{k=1}^{n}\theta_k\epsilon_{t-k} - \sum_{k=1}^{m}\theta_k\epsilon_{t-k}\right]^2 = \sum_{m < k \le n}\theta_k^2\sigma_\epsilon^2 = \left[\sum_{k=0}^{n}\theta_k^2 - \sum_{k=0}^{m}\theta_k^2\right]\sigma_\epsilon^2 \to 0 \quad \text{as } m, n \to \infty.$$

The result holds since $\{\theta_k\}$ is square summable. It is often more convenient to work with a slightly stronger condition, absolute summability:

$$\sum_{k=0}^{\infty}|\theta_k| < \infty.$$

It is easy to show that absolute summability implies square summability. An MA($\infty$) process with absolutely summable coefficients is stationary, with moments

$$E(x_t) = 0, \qquad E(x_t^2) = \sum_{k=0}^{\infty}\theta_k^2\sigma_\epsilon^2, \qquad \gamma_x(h) = \sum_{k=0}^{\infty}\theta_k\theta_{k+h}\sigma_\epsilon^2.$$

5.4 AR(1)

$$(1 - \phi L)x_t = \epsilon_t \tag{2}$$

Recall that an AR(1) process with $|\phi| < 1$ can be inverted to an MA($\infty$) process, $x_t = \theta(L)\epsilon_t$ with $\theta_k = \phi^k$. With $|\phi| < 1$, it is easy to check that absolute summability holds:

$$\sum_{k=0}^{\infty}|\theta_k| = \sum_{k=0}^{\infty}|\phi^k| < \infty.$$

Using the results for MA($\infty$), the moments of $x_t$ in (2) can be computed:

$$E(x_t) = 0$$
$$E(x_t^2) = \sum_{k=0}^{\infty}\phi^{2k}\sigma_\epsilon^2 = \sigma_\epsilon^2/(1-\phi^2)$$
$$\gamma_x(h) = \sum_{k=0}^{\infty}\phi^{2k+h}\sigma_\epsilon^2 = \phi^h\sigma_\epsilon^2/(1-\phi^2)$$

So, an AR(1) process with $|\phi| < 1$ is stationary.

5.5 AR(p)

Recall that an AR(p) process

$$(1 - \phi_1 L - \phi_2 L^2 - \ldots - \phi_p L^p)x_t = \epsilon_t$$

can be inverted to an MA process $x_t = \theta(L)\epsilon_t$ if all $\lambda_i$ in

$$(1 - \phi_1 L - \phi_2 L^2 - \ldots - \phi_p L^p) = (1-\lambda_1 L)(1-\lambda_2 L)\cdots(1-\lambda_p L) \tag{3}$$

have absolute value less than one. It also turns out that with $|\lambda_i| < 1$, the absolute summability condition $\sum_{k=0}^{\infty}|\psi_k| < \infty$ is also satisfied. (The proof can be found on page 770 of Hamilton; it uses the result that $\psi_k = c_1\lambda_1^k + c_2\lambda_2^k$.)
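The moment formulas in Sections 5.3 and 5.4 lend themselves to a quick numerical sanity check: truncate the MA($\infty$) sums and compare them with the closed-form AR(1) expressions. A minimal sketch, assuming NumPy; the values $\phi = 0.7$ and $\sigma_\epsilon^2 = 1$ are illustrative, not from the lecture:

```python
import numpy as np

phi, sigma2 = 0.7, 1.0          # illustrative AR(1) coefficient and innovation variance
K = 500                         # truncation point for the MA(infinity) sums

theta = phi ** np.arange(K)     # theta_k = phi^k, the MA(infinity) coefficients

# gamma_x(h) = sum_k theta_k * theta_{k+h} * sigma2, truncated at K terms
def gamma_trunc(h):
    return np.sum(theta[: K - h] * theta[h:K]) * sigma2

for h in range(4):
    closed_form = phi**h * sigma2 / (1 - phi**2)
    print(h, gamma_trunc(h), closed_form)   # the two columns agree to high precision
```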
