
Time Series Analysis and Its Applications, 4th Edition: Instructor's Manual
© 2017, R. H. Shumway and D. S. Stoffer

We assume that the package for the text, astsa, has been loaded. See the Package Notes at the web page for the text: http://www.stat.pitt.edu/stoffer/tsa4/

Please Do Not Reproduce

Chapter 1 Solutions

1.1 The major differences are how quickly the signal dies out in the explosion versus the earthquake, and the larger amplitude of the signals in the explosion.

plot(EQ5, ylab="Earthquake/Explosion")
lines(EXP6, col=2)
legend("topright", c("EQ","EXP"), col=1:2, lty=1)

1.2 Figure 1.1 shows contrived data simulated according to this model. The modulating functions are also plotted. The code is given for part (b); part (a) is given in the text. For part (c), basically remove the cos() part.

s = c(rep(0,100), 10*exp(-(1:100)/200)*cos(2*pi*1:100/4))  # part (b)
x = ts(s + rnorm(200, 0, 1))
plot(x)
lines(c(rep(0,100), 10*exp(-(1:100)/200)))  # modulator on same plot as the series

The first signal bears a striking resemblance to the two arrival phases in the explosion. The second signal decays more slowly and looks more like the earthquake. The periodic behavior is emulated by the cosine function, which makes one cycle every four points. If we assume that the data are sampled at 4 points per second, the data will make 1 cycle per second, which is about the same rate as the seismic series.

[Fig. 1.1. Simulated series with exponential modulations. Panels: Series (a) with Modulator (a); Series (b) with Modulator (b).]

1.3 Below is R code for parts (a)-(c). In all cases the moving average nearly annihilates the signal (completely in the 2nd case). The signals in parts (a) and (c) are similar.

w = rnorm(150,0,1)  # 50 extra to avoid startup problems
x = filter(w, filter=c(0,-.9), method="recursive")[-(1:50)]  # AR
x2 = 2*cos(2*pi*(1:100)/4)      # sinusoid
x3 = x2 + rnorm(100,0,1)        # sinusoid + noise
v  = filter(x,  rep(1,4)/4)     # moving average
v2 = filter(x2, rep(1,4)/4)     # moving average
v3 = filter(x3, rep(1,4)/4)     # moving average
par(mfrow=c(3,1))
plot.ts(x, main="autoregression")
lines(v, lty="dashed")
plot.ts(x2, main="sinusoid")
lines(v2, lty="dashed")
plot.ts(x3, main="sinusoid + noise")
lines(v3, lty="dashed")

1.4 Simply expand the binomial product inside the expectation and use the fact that $\mu_t$ is a nonrandom constant, i.e.,
$$\gamma(s,t) = E[x_s x_t - \mu_s x_t - x_s \mu_t + \mu_s \mu_t] = E(x_s x_t) - \mu_s E(x_t) - E(x_s)\mu_t + \mu_s\mu_t = E(x_s x_t) - \mu_s\mu_t - \mu_s\mu_t + \mu_s\mu_t = E(x_s x_t) - \mu_s\mu_t.$$

1.5 (a) In each case the signals are fixed, so $E(x_t) = E(s_t + w_t) = s_t + E(w_t) = s_t$. To plot the means, repeat Problem 1.2 and just plot the signals (s) without the noise.

(b) The autocovariance function is $\gamma(t,u) = E[(x_t - s_t)(x_u - s_u)] = E(w_t w_u)$, which is one (1) when $t = u$ and zero (0) otherwise.

1.6 (a) Since $E x_t = \beta_1 + \beta_2 t$, the mean is not constant, so the process is not stationary. Note that
$$x_t - x_{t-1} = \beta_1 + \beta_2 t + w_t - \beta_1 - \beta_2(t-1) - w_{t-1} = \beta_2 + w_t - w_{t-1},$$
which is clearly stationary. Verify that the mean is $\beta_2$ and that the autocovariance is $2\sigma_w^2$ for $s = t$, $-\sigma_w^2$ for $|s-t| = 1$, and zero for $|s-t| > 1$; a simulation check is sketched below.
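A quick numerical check of part (a) (a sketch, not part of the original solution; the values $\beta_1 = 1$, $\beta_2 = .5$, $\sigma_w = 1$, and $n = 500$ are arbitrary choices):

set.seed(1)
t = 1:500
x = 1 + .5*t + rnorm(500)   # x_t = beta1 + beta2*t + w_t, with beta1=1, beta2=.5
dx = diff(x)                # behaves like beta2 + w_t - w_{t-1}
mean(dx)                    # should be near beta2 = .5
acf(dx, 5, plot=FALSE)      # lag-1 autocorrelation near -sigma_w^2/(2*sigma_w^2) = -.5, near 0 beyond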
(b) First, write
$$E(y_t) = \frac{1}{2q+1}\sum_{j=-q}^{q}\big[\beta_1 + \beta_2(t-j)\big] = \frac{1}{2q+1}\Big[(2q+1)(\beta_1 + \beta_2 t) - \beta_2\sum_{j=-q}^{q} j\Big] = \beta_1 + \beta_2 t$$
because the positive and negative terms in the last sum cancel out. To get the covariance, write the process as
$$y_t = \frac{1}{2q+1}\sum_{j=-\infty}^{\infty} a_j w_{t-j},$$
where $a_j = 1$ for $j = -q, \dots, 0, \dots, q$ and is zero otherwise. To get the covariance, note that we need
$$\gamma_y(h) = E[(y_{t+h} - Ey_{t+h})(y_t - Ey_t)] = (2q+1)^{-2}\sum_j\sum_k a_j a_k\, E w_{t+h-j} w_{t-k} = \frac{\sigma_w^2}{(2q+1)^2}\sum_{j,k} a_j a_k\,\delta_{h+k-j} = \frac{\sigma_w^2}{(2q+1)^2}\sum_{j=-\infty}^{\infty} a_{j+h} a_j,$$
where $\delta_{h+k-j} = 1$ when $j = k+h$ and is zero otherwise. Writing out the terms in $\gamma_y(h)$, for $h = 0, \pm 1, \pm 2, \dots$, we obtain
$$\gamma_y(h) = \frac{\sigma_w^2\,(2q+1-|h|)}{(2q+1)^2}$$
for $h = 0, \pm 1, \pm 2, \dots, \pm 2q$, and zero for $|h| > 2q$.

1.7 By a computation analogous to that appearing in Example 1.17, we may obtain
$$\gamma(h) = \begin{cases} 6\sigma_w^2 & h = 0,\\ 4\sigma_w^2 & h = \pm 1,\\ \sigma_w^2 & h = \pm 2,\\ 0 & |h| > 2.\end{cases}$$
The autocorrelation is obtained by dividing the autocovariances by $\gamma(0) = 6\sigma_w^2$.

1.8 (a) Simply substitute $\delta s + \sum_{k=1}^{s} w_k$ for $x_s$ to see that
$$\underbrace{\delta t + \sum_{k=1}^{t} w_k}_{x_t} = \delta + \underbrace{\Big(\delta(t-1) + \sum_{k=1}^{t-1} w_k\Big)}_{x_{t-1}} + w_t.$$
Alternately, the result can be shown by induction.

(b) For the mean,
$$E x_t = E\Big(\delta t + \sum_{k=1}^{t} w_k\Big) = \delta t + \sum_{k=1}^{t} E w_k = \delta t.$$
For the covariance, without loss of generality, consider the case $s \le t$:
$$\gamma(s,t) = \mathrm{cov}(x_s, x_t) = E\{(x_s - \delta s)(x_t - \delta t)\} = E\Big\{\sum_{j=1}^{s} w_j \sum_{k=1}^{t} w_k\Big\} = E\big\{(w_1 + \cdots + w_s)(w_1 + \cdots + w_s + w_{s+1} + \cdots + w_t)\big\} = \sum_{j=1}^{s} E(w_j^2) = s\sigma_w^2 \quad [\text{or } \min(s,t)\,\sigma_w^2].$$

(c) The series is nonstationary because both the mean function and the autocovariance function depend on time, $t$.

(d) From (b), $\rho_x(t-1,t) = (t-1)\sigma_w^2 \big/ \sqrt{(t-1)\sigma_w^2}\sqrt{t\sigma_w^2} = \sqrt{(t-1)/t}$, which yields the result. The implication is that the series tends to change slowly.

(e) One possibility is to note that $\nabla x_t = x_t - x_{t-1} = \delta + w_t$, which is stationary because $\mu_{x,t} = \delta$ and $\gamma_x(t+h,t) = \sigma_w^2\,\delta_0(h)$ are both independent of time $t$, where $\delta_0(h)$ is the delta measure (one if $h = 0$ and zero otherwise).

1.9 Because $E(U_1) = E(U_2) = 0$, we have $E(x_t) = 0$. Then,
$$\gamma(h) = E(x_{t+h} x_t) = E\big\{\big(U_1\sin[2\pi\omega_0(t+h)] + U_2\cos[2\pi\omega_0(t+h)]\big)\big(U_1\sin[2\pi\omega_0 t] + U_2\cos[2\pi\omega_0 t]\big)\big\} = \sigma_w^2\big(\sin[2\pi\omega_0(t+h)]\sin[2\pi\omega_0 t] + \cos[2\pi\omega_0(t+h)]\cos[2\pi\omega_0 t]\big) = \sigma_w^2\cos[2\pi\omega_0(t+h) - 2\pi\omega_0 t] = \sigma_w^2\cos[2\pi\omega_0 h]$$
by the standard trigonometric identity, $\cos(A-B) = \sin A\sin B + \cos A\cos B$.

1.10 (a)
$$\mathrm{MSE}(A) = E(x_{t+\ell}^2) - 2A\,E(x_{t+\ell}x_t) + A^2 E(x_t^2) = \gamma(0) - 2A\gamma(\ell) + A^2\gamma(0).$$
Setting the derivative with respect to $A$ to zero yields $-2\gamma(\ell) + 2A\gamma(0) = 0$, and solving gives the required value, $A = \gamma(\ell)/\gamma(0) = \rho(\ell)$.

(b)
$$\mathrm{MSE}(A) = \gamma(0)\Big[1 - 2\frac{\rho(\ell)\gamma(\ell)}{\gamma(0)} + \rho^2(\ell)\Big] = \gamma(0)\big[1 - 2\rho^2(\ell) + \rho^2(\ell)\big] = \gamma(0)\big[1 - \rho^2(\ell)\big].$$

(c) If $x_{t+\ell} = A x_t$ with probability one, then
$$E(x_{t+\ell} - A x_t)^2 = \gamma(0)\big[1 - \rho^2(\ell)\big] = 0,$$
implying that $\rho(\ell) = \pm 1$. Since $A = \rho(\ell)$, the conclusion follows.

1.11 (a) Since $x_t = \sum_{j=-\infty}^{\infty}\psi_j w_{t-j}$,
$$\gamma(h) = E\sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty}\psi_j w_{t+h-j} w_{t-k}\psi_k = \sigma_w^2\sum_{j,k}\psi_j\psi_k\,\delta_{h-j+k} = \sigma_w^2\sum_{k=-\infty}^{\infty}\psi_{k+h}\psi_k,$$
where $\delta_t = 1$ for $t = 0$ and is zero otherwise.

(b) The proof is identical to the one given in Appendix A, Example A.2.

1.12
$$\gamma_{xy}(h) = E[(x_{t+h} - \mu_x)(y_t - \mu_y)] = E[(y_t - \mu_y)(x_{t+h} - \mu_x)] = \gamma_{yx}(-h).$$

1.13 (a)
$$\gamma_y(h) = \begin{cases} \sigma_w^2(1+\theta^2) + \sigma_u^2 & h = 0,\\ -\theta\sigma_w^2 & h = \pm 1,\\ 0 & |h| > 1.\end{cases}$$

(b)
$$\gamma_{xy}(h) = \begin{cases} \sigma_w^2 & h = 0,\\ -\theta\sigma_w^2 & h = -1,\\ 0 & \text{otherwise}.\end{cases} \qquad \gamma_x(h) = \begin{cases} \sigma_w^2 & h = 0,\\ 0 & \text{otherwise}.\end{cases} \qquad \rho_{xy}(h) = \frac{\gamma_{xy}(h)}{\sqrt{\gamma_x(0)\gamma_y(0)}}.$$
(A simulation check of parts (a) and (b) is sketched below.)
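A small simulation sketch of parts (a) and (b), not part of the original solution; it assumes the model of the problem is $x_t = w_t$ and $y_t = w_t - \theta w_{t-1} + u_t$ (consistent with the autocovariances above), with the arbitrary choices $\theta = .8$ and $\sigma_w = \sigma_u = 1$:

set.seed(1)
n = 1e5; theta = .8
w = rnorm(n+1); u = rnorm(n)
x = w[-1]                               # x_t = w_t
y = w[-1] - theta*w[-(n+1)] + u         # y_t = w_t - theta*w_{t-1} + u_t
acf(y, 2, type="covariance", plot=FALSE)    # lag 0 near (1+theta^2)+1 = 2.64, lag 1 near -theta = -.8
ccf(x, y, 2, type="covariance", plot=FALSE) # lag 0 near 1, lag -1 near -theta = -.8, else near 0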
(c) The processes are jointly stationary because the autocovariance and cross-covariance functions depend only on the lag $h$.

1.14 (a) For the mean, write
$$E(y_t) = E(\exp\{x_t\}) = \exp\big\{\mu_x + \tfrac{1}{2}\gamma_x(0)\big\},$$
using the given equation at $\lambda = 1$.

(b) For the autocovariance function, note that
$$E(y_{t+h} y_t) = E\big(\exp\{x_{t+h}\}\exp\{x_t\}\big) = E\big(\exp\{x_{t+h} + x_t\}\big) = \exp\{2\mu_x + \gamma_x(0) + \gamma_x(h)\},$$
since $x_{t+h} + x_t$ is the sum of two correlated normal random variables and will be normally distributed with mean $2\mu_x$ and variance
$$\gamma_x(0) + \gamma_x(0) + 2\gamma_x(h) = 2\big(\gamma_x(0) + \gamma_x(h)\big).$$
For the autocovariance of $y_t$,
$$\gamma_y(h) = E(y_{t+h} y_t) - E(y_{t+h})E(y_t) = \exp\{2\mu_x + \gamma_x(0) + \gamma_x(h)\} - \big(\exp\big\{\mu_x + \tfrac{1}{2}\gamma_x(0)\big\}\big)^2 = \exp\{2\mu_x + \gamma_x(0)\}\big(\exp\{\gamma_x(h)\} - 1\big).$$

1.15 The process is stationary because
$$\mu_{x,t} = E(x_t) = E(w_t w_{t-1}) = E(w_t)E(w_{t-1}) = 0;$$
$$\gamma_x(0) = E(w_t w_{t-1} w_t w_{t-1}) = E(w_t^2)E(w_{t-1}^2) = \sigma_w^2\sigma_w^2 = \sigma_w^4,$$
$$\gamma_x(1) = E(w_{t+1} w_t w_t w_{t-1}) = E(w_{t+1})E(w_t^2)E(w_{t-1}) = 0 = \gamma_x(-1),$$
and similar computations establish that $\gamma_x(h) = 0$ for $|h| \ge 1$. The series is white noise.

1.16 (a) For $t = 1, 2, \dots$,
$$E(x_t) = \int_0^1 \sin(2\pi u t)\,du = -\frac{1}{2\pi t}\cos(2\pi u t)\Big|_0^1 = -\frac{1}{2\pi t}\big[\cos(2\pi t) - 1\big] = 0;$$
$$\gamma(h) = \int_0^1 \sin[2\pi u(t+h)]\sin[2\pi u t]\,du.$$
Using the identity $2\sin(\alpha)\sin(\beta) = \cos(\alpha - \beta) - \cos(\alpha + \beta)$ gives $\gamma(0) = 1/2$ and $\gamma(h) = 0$ for $h \ne 0$.

(b) This part of the problem is harder than it seems at first, and it might be a good idea to omit it in more elementary presentations. The easiest way to tackle the problem is to calculate some probabilities (which can be given as a hint), e.g., because $U$ is uniform on $[0,1]$,
$$\Pr\{x_1 \le 0,\ x_3 \le 0\} = \Pr\{\sin(2\pi U) \le 0,\ \sin(2\pi 3U) \le 0\} = \Pr\big\{U \in [1/2,1] \cap \big([1/6,1/3] \cup [1/2,2/3] \cup [5/6,1]\big)\big\} = 0 + 1/6 + 1/6 = \tfrac{1}{3},$$
but, similarly,
$$\Pr\{x_2 \le 0,\ x_4 \le 0\} = \tfrac{1}{4}.$$
Because these two probabilities differ, the joint distribution of $(x_t, x_{t+2})$ depends on $t$, so the series cannot be strictly stationary.

1.17 (a) The essential part of the exponent of the characteristic [or moment generating] function is
$$\sum_{j=1}^{n}\lambda_j x_j = \sum_{j=1}^{n}\lambda_j(w_j - \theta w_{j-1}) = -\lambda_1\theta w_0 + \sum_{j=1}^{n-1}(\lambda_j - \theta\lambda_{j+1})w_j + \lambda_n w_n.$$
Because the $w_t$ are independent and identically distributed, the characteristic function can be written as
$$\phi(\lambda_1, \dots, \lambda_n) = \phi_w(-\lambda_1\theta)\prod_{j=1}^{n-1}\phi_w(\lambda_j - \theta\lambda_{j+1})\,\phi_w(\lambda_n).$$

(b) Because the joint distribution of the $w_j$ will not change simply by shifting $x_1, \dots, x_n$ to $x_{1+h}, \dots, x_{n+h}$, the characteristic function [or MGF] remains the same.

1.18 Letting $k = j + h$, holding $j$ fixed after substituting from (1.29) yields
$$\sum_{h=-\infty}^{\infty}|\gamma(h)| = \sigma_w^2\sum_{h=-\infty}^{\infty}\Big|\sum_{j=-\infty}^{\infty}\psi_{j+h}\psi_j\Big| \le \sigma_w^2\sum_{h=-\infty}^{\infty}\sum_{j=-\infty}^{\infty}|\psi_{j+h}||\psi_j| = \sigma_w^2\sum_{k=-\infty}^{\infty}|\psi_k|\sum_{j=-\infty}^{\infty}|\psi_j| < \infty.$$

1.19 (a) $E(x_t) = E(\mu + w_t + \theta w_{t-1}) = \mu$.

(b) $\gamma(h) = \mathrm{cov}(w_{t+h} + \theta w_{t+h-1},\ w_t + \theta w_{t-1})$, so $\gamma_x(0) = (1+\theta^2)\sigma_w^2$, $\gamma_x(\pm 1) = \theta\sigma_w^2$, and $\gamma_x(h) = 0$ otherwise.

(c) From (a) and (b) we see that, for any $\theta$, both the mean function and the autocovariance function are independent of time.

(d) From the given formula, and because $\gamma_x(h) = 0$ for $|h| > 1$, we have
$$\mathrm{var}(\bar{x}) = \frac{1}{n}\Big[\gamma_x(0) + \frac{2(n-1)}{n}\gamma_x(1)\Big].$$
• When $\theta = 0$, $\gamma_x(\pm 1) = 0$ and it's the classical case, $\mathrm{var}(\bar{x}) = \sigma_w^2/n$.
• When $\theta = 1$, $\gamma_x(0) = 2\sigma_w^2$ and $\gamma_x(\pm 1) = \sigma_w^2$, so $\mathrm{var}(\bar{x}) = \frac{\sigma_w^2}{n}\big[2 + \frac{2(n-1)}{n}\big] = \frac{\sigma_w^2}{n}\big[4 - \frac{2}{n}\big]$. In this case, the variance is about 4 times as large as in the uncorrelated case.
• When $\theta = -1$, $\gamma_x(0) = 2\sigma_w^2$ and $\gamma_x(\pm 1) = -\sigma_w^2$, so $\mathrm{var}(\bar{x}) = \frac{\sigma_w^2}{n}\big[2 - \frac{2(n-1)}{n}\big] = \frac{\sigma_w^2}{n}\big[\frac{2}{n}\big]$. In this case, the variance is smaller (for $n > 2$) than in the uncorrelated case, and the variance is nearly zero for moderate $n$. (All three cases are checked by simulation below.)
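The three cases in part (d) are easy to see by simulation (a sketch, not part of the original solution; $n = 100$, $\sigma_w = 1$, and 10,000 replicates are arbitrary choices):

set.seed(1)
n = 100
for (theta in c(0, 1, -1)) {
  # each replicate: simulate x_t = w_t + theta*w_{t-1} (mu = 0) and take its mean
  xbar = replicate(10000, { w = rnorm(n+1); mean(w[-1] + theta*w[-(n+1)]) })
  cat("theta =", theta, " var(xbar) =", var(xbar), "\n")
}
# theoretical values: 1/n = .01;  (4 - 2/n)/n = .0398;  (2/n)/n = .0002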
(e) It's easier to estimate the mean when $\theta$ is negative. Negatively correlated data vary around the mean more tightly than positively correlated data. In essence, you need fewer data to identify the mean if the data are negatively correlated.

1.20 Code for parts (a) and (b) is below. Students should have about 1 in 20 ACF values outside the bounds, but the values for part (b) will be larger in general than for part (a).

wa = rnorm(500,0,1)
wb = rnorm(50,0,1)
par(mfrow=c(2,1))
(acf(wa, 20))  # plot and print results
(acf(wb, 20))  # plot and print results

1.21 This is similar to the previous problem. Generate 2 extra observations due to the loss of the end points in making the MA.

wa = rnorm(502,0,1)
wb = rnorm(52,0,1)
va = filter(wa, sides=2, rep(1,3)/3)
vb = filter(wb, sides=2, rep(1,3)/3)
par(mfrow=c(2,1))
(acf(va, 20, na.action = na.pass))  # plot and print results
(acf(vb, 20, na.action = na.pass))  # plot and print results

1.22 Generate the data as in Problem 1.2 and then type acf(x). The sample ACF will exhibit significant correlations at one cycle every four lags, which is the same frequency as the signal. (The process is not stationary because the mean function is the signal, which depends on time $t$.)

1.23 The sample ACF should look sinusoidal, making one cycle every 50 lags.

x = 2*cos(2*pi*(1:500)/50 + .6*pi) + rnorm(500,0,1)
acf(x, 100)

1.24 $\gamma_y(h) = \mathrm{cov}(y_{t+h}, y_t) = \mathrm{cov}(x_{t+h} - .7x_{t+h-1},\ x_t - .7x_{t-1}) = 0$ if $|h| > 1$ because the $x_t$'s are independent. When $h = 0$, $\gamma_y(0) = \sigma_x^2(1 + .7^2)$, where $\sigma_x^2$ is the variance of $x_t$. When $h = 1$, $\gamma_y(1) = -.7\sigma_x^2$. Thus, $\rho_y(1) = -.7/(1 + .7^2) = -.47$.

1.25 (a) The variance is always non-negative, so for $x_t$ a stationary series,
$$\mathrm{var}\Big\{\sum_{s=1}^{n} a_s x_s\Big\} = \mathrm{cov}\Big\{\sum_{s=1}^{n} a_s x_s,\ \sum_{t=1}^{n} a_t x_t\Big\} = \sum_{s=1}^{n}\sum_{t=1}^{n} a_s\,\gamma(s-t)\,a_t = a'\Gamma a \ge 0,$$
thus $\Gamma = \{\gamma(s-t);\ s,t = 1, \dots, n\}$ is a non-negative definite matrix.

(b) Let $Y_t = x_t - \bar{x}$ for $t = 1, \dots, n$ and construct the $n \times 2n$ matrix
$$D = \begin{bmatrix} 0 & 0 & \cdots & 0 & Y_1 & Y_2 & \cdots & Y_n \\ 0 & \cdots & 0 & Y_1 & Y_2 & \cdots & Y_n & 0 \\ \vdots & & & & & & & \vdots \\ 0 & Y_1 & Y_2 & \cdots & Y_n & 0 & \cdots & 0 \end{bmatrix}.$$
With $\hat{\Gamma}_n = \{\hat{\gamma}(s-t)\}_{s,t=1}^{n}$, it is easy to show that
$$\hat{\Gamma}_n = \frac{1}{n}\,D D'.$$
Then, for any vector $a \in \mathbb{R}^n$,
$$a'\hat{\Gamma}_n a = \frac{1}{n}\,a' D D' a = \frac{1}{n}\,c'c = \frac{1}{n}\sum_i c_i^2 \ge 0$$
for $c = D'a$. Noting that $\hat{\Gamma}_n$ is symmetric, it will be positive definite (p.d.) if its eigenvalues are positive. Since the diagonal elements of $\hat{\Gamma}_n$ are $\hat{\gamma}(0)$, the sum of the eigenvalues of $\hat{\Gamma}_n$ is $n\hat{\gamma}(0)$. Consequently, $\hat{\Gamma}_n$ is p.d. as long as $\hat{\gamma}(0) > 0$. In other terms, if the sample variance of the data is not zero, $\hat{\Gamma}_n$ is p.d.

1.26 (a)
$$E\bar{x}_t = \frac{1}{N}\sum_{j=1}^{N} E x_{jt} = \frac{1}{N}\sum_{j=1}^{N}\mu_t = \frac{N\mu_t}{N} = \mu_t$$

(b)
$$E[(\bar{x}_t - \mu_t)^2] = \frac{1}{N^2}\sum_{j=1}^{N}\sum_{k=1}^{N} E(x_{jt} - \mu_t)(x_{kt} - \mu_t) = \frac{1}{N^2}\sum_{j=1}^{N} E\,e_{jt}^2 = \frac{1}{N}\gamma_e(t,t)$$

(c) As long as the separate series are observing the same signal, we may assume that the variance goes down proportionally to the number of series, as in the iid case. If normality is reasonable, pointwise $100(1-\alpha)\%$ intervals can be computed as
$$\bar{x}_t \pm z_{\alpha/2}\sqrt{\gamma_e(t,t)/N}.$$

1.27
$$V_x(h) = \frac{1}{2}E[(x_{s+h} - \mu) - (x_s - \mu)]^2 = \frac{1}{2}\big[\gamma(0) - \gamma(h) - \gamma(-h) + \gamma(0)\big] = \gamma(0) - \gamma(h).$$

1.28 The numerator and denominator of $\hat{\rho}(h)$ are
$$\hat{\gamma}(h) = \frac{1}{n}\sum_{t=1}^{n-h}\big[\beta_1(t-\bar{t}) + \beta_1 h\big]\big[\beta_1(t-\bar{t})\big] = \frac{\beta_1^2}{n}\Big[\sum_{t=1}^{n-h}(t-\bar{t})^2 + h\sum_{t=1}^{n-h}(t-\bar{t})\Big]$$
and
$$\hat{\gamma}(0) = \frac{\beta_1^2}{n}\sum_{t=1}^{n}(t-\bar{t})^2.$$
Now, write the numerator as
$$\hat{\gamma}(h) = \hat{\gamma}(0) + \frac{\beta_1^2}{n}\Big[-\sum_{t=n-h+1}^{n}(t-\bar{t})^2 - h\sum_{t=n-h+1}^{n}(t-\bar{t})\Big].$$
Hence, we can write $\hat{\rho}(h) = 1 + R$, where
$$R = \frac{\beta_1^2}{n\hat{\gamma}(0)}\Big[-\sum_{t=n-h+1}^{n}(t-\bar{t})^2 - h\sum_{t=n-h+1}^{n}(t-\bar{t})\Big]$$
is a remainder term that needs to converge to zero. We can evaluate the terms in the remainder using
$$\sum_{t=1}^{m} t = \frac{m(m+1)}{2} \qquad\text{and}\qquad \sum_{t=1}^{m} t^2 = \frac{m(2m+1)(m+1)}{6}.$$
The denominator reduces to
$$n\hat{\gamma}(0) = \beta_1^2\Big[\sum_{t=1}^{n} t^2 - n\bar{t}^2\Big] = \beta_1^2\Big[\frac{n(n+1)(2n+1)}{6} - \frac{n(n+1)^2}{4}\Big] = \beta_1^2\,\frac{n(n+1)(n-1)}{12},$$
whereas the numerator can be simplified by letting $s = t - n + h$, so that
$$R = \frac{\beta_1^2}{n\hat{\gamma}(0)}\Big[-\sum_{s=1}^{h}(s+n-h-\bar{t})^2 - h\sum_{s=1}^{h}(s+n-h-\bar{t})\Big].$$
The terms in the numerator of $R$ are $O(n^2)$, whereas the denominator is $O(n^3)$, so the remainder term converges to zero.

1.29 (a)
$$\Pr\{\sqrt{n}\,|\bar{x}| > \epsilon\} \le n\,\frac{E[\bar{x}^2]}{\epsilon^2}.$$
Note that
$$n E[\bar{x}^2] \to \sum_{h=-\infty}^{\infty}\gamma(h) = 0,$$
where the last step employs the summability condition. The variance of $\bar{x}$ is derived in (1.33).

(b) An example of such a process is $x_t = \nabla w_t = w_t - w_{t-1}$, where $w_t$ is white noise. This situation arises when a stationary process is over-differenced (i.e., $w_t$ is already stationary, so $\nabla w_t$ would be considered over-differencing).

1.30 Let $y_t = x_t - \mu_x$ and write the difference as
$$n^{1/2}\big(\tilde{\gamma}(h) - \hat{\gamma}(h)\big) = n^{-1/2}\sum_{t=1}^{n} y_{t+h} y_t - n^{-1/2}\sum_{t=1}^{n-h}(y_{t+h} - \bar{y})(y_t - \bar{y}) = n^{-1/2}\Big[\sum_{t=n-h+1}^{n} y_{t+h} y_t + \bar{y}\sum_{t=1}^{n-h} y_t + \bar{y}\sum_{t=1}^{n-h} y_{t+h} - (n-h)\bar{y}^2\Big].$$
For the first term,
$$E\Big[n^{-1/2}\Big|\sum_{t=n-h+1}^{n} y_{t+h} y_t\Big|\Big] \le n^{-1/2}\sum_{t=n-h+1}^{n} E|y_{t+h} y_t| \le n^{-1/2}\sum_{t=n-h+1}^{n} E^{1/2}[y_{t+h}^2]\,E^{1/2}[y_t^2] = n^{-1/2}\,h\,\gamma_x(0) \to 0,$$
as $n \to \infty$. Applying the Markov inequality in the hint then shows that the first term is $o_p(1)$. In order to handle the other terms, which differ trivially from $n^{-1/2} n\bar{y}^2$, note that, from Theorem A.5, $n^{1/2}\bar{y}$ converging in distribution to a standard normal implies that $n\bar{y}^2$ converges in distribution to a chi-squared random variable with 1 degree of freedom, and hence $n\bar{y}^2 = O_p(1)$. Hence, $n^{-1/2} n\bar{y}^2 = n^{-1/2} O_p(1) = o_p(1)$ and the result is proved. (A numerical illustration is sketched below.)
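As a numerical illustration of this asymptotic equivalence (a sketch, not part of the original solution; the AR(1) model with coefficient .5 and known mean $\mu_x = 0$ is an arbitrary choice, and both estimators are computed over $t = 1, \dots, n-h$):

set.seed(1)
h = 1
for (n in c(100, 1000, 10000)) {
  x = arima.sim(list(ar = .5), n)             # stationary series with mu_x = 0
  gtilde = sum(x[(1+h):n] * x[1:(n-h)]) / n   # lag-h autocovariance using the known mean 0
  xc = x - mean(x)
  ghat = sum(xc[(1+h):n] * xc[1:(n-h)]) / n   # the usual mean-centered estimator
  cat("n =", n, "  sqrt(n)*(gtilde - ghat) =", sqrt(n)*(gtilde - ghat), "\n")
}
# the scaled difference should be small and shrink (in probability) as n grows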
