Strongly Consistent Model Order Selection for Estimating 2-D Sinusoids in Colored Noise

Mark Kliger and Joseph M. Francos*

arXiv:0801.2790v1 [stat.ME] 18 Jan 2008

Abstract

We consider the problem of jointly estimating the number as well as the parameters of two-dimensional sinusoidal signals, observed in the presence of an additive colored noise field. We begin by elaborating on the least squares estimation of 2-D sinusoidal signals when the assumed number of sinusoids is incorrect. In the case where the number of sinusoidal signals is under-estimated, we show the almost sure convergence of the least squares estimates to the parameters of the dominant sinusoids. In the case where this number is over-estimated, the estimated parameter vector obtained by the least squares estimator contains a sub-vector that converges almost surely to the correct parameters of the sinusoids. Based on these results, we prove the strong consistency of a new model order selection rule.

Keywords: Two-dimensional random fields; model order selection; least squares estimation; strong consistency.

* M. Kliger is with the Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109-2122, USA. Tel: (734) 647-8389, FAX: (734) 763-8041, email: [email protected]. J. M. Francos is with the Department of Electrical and Computer Engineering, Ben-Gurion University, Beer-Sheva 84105, Israel. Tel: +972 8 6461842, FAX: +972 8 6472949, email: [email protected].

1 Introduction

We consider the problem of jointly estimating the number as well as the parameters of two-dimensional sinusoidal signals, observed in the presence of an additive noise field.
This problem is, in fact, a special case of a much more general problem [5]: from the 2-D Wold-like decomposition we have that any 2-D regular and homogeneous discrete random field can be represented as a sum of two mutually orthogonal components: a purely-indeterministic field and a deterministic one. In this paper we consider the special case where the deterministic component consists of a finite (unknown) number of sinusoidal components, while the purely-indeterministic component is an infinite order non-symmetrical half-plane (or quarter-plane) moving average field. This modeling and estimation problem has fundamental theoretical importance, as well as various applications in texture estimation of images (see, e.g., [4] and the references therein) and in wave propagation problems (see, e.g., [14] and the references therein).

Many algorithms have been devised to estimate the parameters of sinusoids observed in white noise, and only a small fraction of the derived methods has been extended to the case where the noise field is colored (see, e.g., Francos et al. [3], He [8], Kundu and Nandi [11], Li and Stoica [12], Zhang and Mandrekar [13], and the references therein). Most of these assume the number of sinusoids is a priori known. However, this assumption does not always hold in practice. In the past three decades the problem of model order selection for 1-D signals has received considerable attention. In general, model order selection rules are based (directly or indirectly) on three popular criteria: the Akaike information criterion (AIC), the minimum description length (MDL), and the maximum a posteriori probability criterion (MAP). All these criteria have a common form composed of two terms: a data term and a penalty term, where the data term is the log-likelihood function evaluated for the assumed model. The problem of modelling multidimensional fields has received much less attention.
In [9], a MAP model order selection criterion for jointly estimating the number and the parameters of two-dimensional sinusoids observed in the presence of an additive white Gaussian noise field is derived. In [10], we proved the strong consistency of a large family of model order selection rules, which includes the MAP-based rule of [9] as a special case. In this paper we derive a strongly consistent model order selection rule for jointly estimating the number of sinusoidal components and their parameters in the presence of colored noise. This derivation extends the results of [10] to the case where the additive noise is colored, modeled by an infinite order non-symmetrical half-plane or quarter-plane moving average representation, such that the noise field is not necessarily Gaussian. To the best of our knowledge, this is the most general result available in the area of model order selection rules for 2-D random fields with mixed spectrum.

The proposed criterion has the usual form of a data term and a penalty term, where the first is the least squares estimator evaluated for the assumed model order and the latter is proportional to the logarithm of the data size. Since we evaluate the data term for any assumed model order, including incorrect ones, we should consider the problem of least squares estimation of the parameters of 2-D sinusoidal signals when the assumed number of sinusoids is incorrect. Let P denote the number of sinusoidal signals in the observed field and let k denote their assumed number. In the case where the number of sinusoidal signals is under-estimated, i.e., k < P, we prove the almost sure convergence of the least squares estimates to the parameters of the k dominant sinusoids. In the case where the number of sinusoidal signals is over-estimated, i.e., k > P, we prove the almost sure convergence of the estimates obtained by the least squares estimator to the parameters of the P sinusoids in the observed field.
The additional k − P components assumed to exist are assigned by the least squares estimator to the dominant components of the periodogram of the noise field. Finally, using this result, we prove the strong consistency of a new model order selection criterion and show how different assumptions regarding the noise field parameters affect the penalty term of the criterion. The proposed criterion completely generalizes the previous results [9], [10], and provides a strongly consistent estimator of the number as well as of the parameters of the sinusoidal components.

2 Notations, Definitions and Assumptions

Let {y(n,m)} be a real valued field,

  y(n,m) = \sum_{i=1}^{P} \rho_i^0 \cos(\omega_i^0 n + \upsilon_i^0 m + \varphi_i^0) + w(n,m),   (1)

where 0 ≤ n ≤ N − 1, 0 ≤ m ≤ M − 1, and for each i, ρ_i^0 is non-zero. Due to physical considerations it is further assumed that for each i, |ρ_i^0| is bounded.

Recall that the non-symmetrical half-plane total-order is defined by

  (i,j) \succeq (s,t) \iff (i,j) \in \{(k,l) \mid k = s,\ l \geq t\} \cup \{(k,l) \mid k > s,\ -\infty \leq l \leq \infty\}.   (2)

Let D be an infinite order non-symmetrical half-plane support, defined by

  D = \{(i,j) \in Z^2 : i = 0,\ 0 \leq j \leq \infty\} \cup \{(i,j) \in Z^2 : 0 < i \leq \infty,\ -\infty \leq j \leq \infty\}.   (3)

Hence the notations (r,s) ∈ D and (r,s) ⪰ (0,0) are equivalent.

We assume that {w(n,m)} is an infinite order non-symmetrical half-plane MA noise field, i.e.,

  w(n,m) = \sum_{(r,s) \in D} a(r,s)\, u(n-r, m-s),   (4)

such that the following assumptions are satisfied:

Assumption 1: The field {u(n,m)} is an i.i.d. real valued zero-mean random field with finite variance σ², such that E[|u(n,m)|^α] < ∞ for some α > 3.

Assumption 2: The sequence {a(i,j)} is an absolutely summable deterministic sequence, i.e.,

  \sum_{(r,s) \in D} |a(r,s)| < \infty.   (5)

Let f_w(ω,υ) denote the spectral density function of the noise field {w(n,m)}. Hence,

  f_w(\omega,\upsilon) = \sigma^2 \Big| \sum_{(r,s) \in D} a(r,s)\, e^{j(\omega r + \upsilon s)} \Big|^2.   (6)
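As a concrete illustration of the model (1) and the MA noise (4), the following minimal sketch synthesizes such a field numerically. The parameter values, the finite quarter-plane truncation of the support D, and the geometrically decaying coefficients a(r,s) (absolutely summable, as Assumption 2 requires) are all illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 64, 64
n, m = np.meshgrid(np.arange(N), np.arange(M), indexing="ij")

# Assumed true parameters theta^0: one (rho, omega, upsilon, phi) per sinusoid.
theta0 = [(2.0, 0.9, 1.3, 0.4), (1.0, 2.1, 0.7, 1.1)]
signal = sum(r * np.cos(om * n + up * m + ph) for r, om, up, ph in theta0)

# Colored MA noise (4), truncated to a Q x Q quarter-plane support; the
# geometrically decaying a(r, s) is absolutely summable (Assumption 2).
Q = 5
a = 0.5 ** np.add.outer(np.arange(Q), np.arange(Q))
u = rng.standard_normal((N + Q - 1, M + Q - 1))
w_field = sum(a[r, s] * u[Q - 1 - r:Q - 1 - r + N, Q - 1 - s:Q - 1 - s + M]
              for r in range(Q) for s in range(Q))

y = signal + w_field  # the observed field of (1)
```

Any other absolutely summable coefficient sequence would serve equally well here; the truncation merely approximates the infinite sum in (4) for simulation purposes.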
Assumption 3: The spatial frequencies (ω_i^0, υ_i^0) ∈ (0,2π) × (0,2π), 1 ≤ i ≤ P, are pairwise different. In other words, ω_i^0 ≠ ω_j^0 or υ_i^0 ≠ υ_j^0 when i ≠ j.

Let {Ψ_i} be a sequence of rectangles such that Ψ_i = {(n,m) ∈ Z² | 0 ≤ n ≤ N_i − 1, 0 ≤ m ≤ M_i − 1}.

Definition 1: The sequence of subsets {Ψ_i} is said to tend to infinity (we adopt the notation Ψ_i → ∞) as i → ∞ if

  \lim_{i \to \infty} \min(N_i, M_i) = \infty, \quad \text{and} \quad 0 < \lim_{i \to \infty} (N_i / M_i) < \infty.

To simplify notations, we shall omit the subscript i in the following. Thus, the notation Ψ(N,M) → ∞ implies that both N and M tend to infinity as functions of i, and at roughly the same rate.

Definition 2: Let Θ_k be a bounded and closed subset of the 4k-dimensional space R^k × ((0,2π) × (0,2π))^k × [0,2π)^k, where for any vector θ_k = (ρ_1, ω_1, υ_1, φ_1, ..., ρ_k, ω_k, υ_k, φ_k) ∈ Θ_k the coordinate ρ_i is non-zero and bounded for every 1 ≤ i ≤ k, while the pairs (ω_i, υ_i) are pairwise different, so that no two regressors coincide. We shall refer to Θ_k as the parameter space. From the model definition (1) and the above assumptions it is clear that θ_k^0 = (ρ_1^0, ω_1^0, υ_1^0, φ_1^0, ..., ρ_k^0, ω_k^0, υ_k^0, φ_k^0) ∈ Θ_k.

Define the loss function due to the error of the k-th order regression model

  \mathcal{L}_k(\theta_k) = \frac{1}{NM} \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} \Big( y(n,m) - \sum_{i=1}^{k} \rho_i \cos(\omega_i n + \upsilon_i m + \varphi_i) \Big)^2.   (7)

A vector θ̂_k ∈ Θ_k that minimizes L_k(θ_k) is called the Least Squares Estimate (LSE). In the case where k = P, the LSE is a strongly consistent estimator of θ_P^0 (see, e.g., [11] and the references therein).

3 Strong Consistency of the Over- and Under-Determined LSE

In the following subsections we establish the strong consistency of this LSE when the number of sinusoids is under-estimated, or over-estimated.
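Both cases concern minimizers of the loss (7), which is straightforward to evaluate numerically. A minimal sketch, assuming a flat parameter layout (ρ_1, ω_1, υ_1, φ_1, ..., ρ_k, ω_k, υ_k, φ_k):

```python
import numpy as np

def loss_k(theta_k, y):
    # Loss (7): mean squared residual of the k-regressor sinusoidal model.
    N, M = y.shape
    n, m = np.meshgrid(np.arange(N), np.arange(M), indexing="ij")
    model = np.zeros((N, M))
    for rho, om, up, ph in np.asarray(theta_k, dtype=float).reshape(-1, 4):
        model += rho * np.cos(om * n + up * m + ph)
    return np.mean((y - model) ** 2)
```

At the true parameters of a noiseless field this loss vanishes; the LSE is any minimizer of this function over the parameter space Θ_k.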
The first theorem establishes the strong consistency of the least squares estimator in the case where the number of regressors is lower than the actual number of sinusoids. The second theorem establishes the strong consistency of the least squares estimator in the case where the number of regressors is higher than the actual number of sinusoids.

3.1 Consistency of the LSE for an Under-Estimated Model Order

Let k denote the assumed number of observed 2-D sinusoids, where k < P. For any δ > 0, define the set Δ_δ to be a subset of the parameter space Θ_k such that each vector θ_k ∈ Δ_δ differs from the vector θ_k^0 by at least δ in at least one of its coordinates, i.e.,

  \Delta_\delta = \Big[ \bigcup_{i=1}^{k} \mathcal{R}_{i\delta} \Big] \cup \Big[ \bigcup_{i=1}^{k} \Phi_{i\delta} \Big] \cup \Big[ \bigcup_{i=1}^{k} W_{i\delta} \Big] \cup \Big[ \bigcup_{i=1}^{k} V_{i\delta} \Big],   (8)

where

  \mathcal{R}_{i\delta} = \{ \theta_k \in \Theta_k : |\rho_i - \rho_i^0| \geq \delta;\ \delta > 0 \},
  \Phi_{i\delta} = \{ \theta_k \in \Theta_k : |\varphi_i - \varphi_i^0| \geq \delta;\ \delta > 0 \},
  W_{i\delta} = \{ \theta_k \in \Theta_k : |\omega_i - \omega_i^0| \geq \delta;\ \delta > 0 \},
  V_{i\delta} = \{ \theta_k \in \Theta_k : |\upsilon_i - \upsilon_i^0| \geq \delta;\ \delta > 0 \}.   (9)

To prove the main result of this section we shall need an additional assumption and the following lemmas.

Assumption 4: For convenience, and without loss of generality, we assume that the sinusoids are indexed according to a descending order of their amplitudes, i.e.,

  \rho_1^0 \geq \rho_2^0 \geq \ldots \geq \rho_k^0 > \rho_{k+1}^0 \geq \ldots \geq \rho_P^0 > 0,   (10)

where we assume that for a given k, ρ_k^0 > ρ_{k+1}^0, to avoid trivial ambiguities resulting from the case where the k-th dominant component is not unique.

Lemma 1.

  \liminf_{\Psi(N,M) \to \infty}\ \inf_{\theta_k \in \Delta_\delta} \big( \mathcal{L}_k(\theta_k) - \mathcal{L}_k(\theta_k^0) \big) > 0 \quad a.s.   (11)

Proof: See Appendix A for the proof.

Lemma 2. Let {x_n, n ≥ 1} be a sequence of random variables. Then

  \Pr\{ x_n \leq 0 \ i.o. \} \leq \Pr\{ \liminf_{n \to \infty} x_n \leq 0 \},   (12)

where the abbreviation i.o. stands for infinitely often.

Proof: See Appendix B for the proof.

The next theorem establishes the strong consistency of the least squares estimator in the case where the number of regressors is lower than the actual number of sinusoids.
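Before stating it, note that membership in the deviation set Δ_δ of (8)-(9) is a simple coordinate-wise test. An illustrative sketch, again assuming a flat parameter layout:

```python
import numpy as np

def in_delta_set(theta_k, theta0_k, delta):
    # theta_k lies in Delta_delta of (8)-(9) iff at least one amplitude,
    # frequency, or phase coordinate deviates from theta_k^0 by >= delta.
    t = np.asarray(theta_k, dtype=float).reshape(-1, 4)
    t0 = np.asarray(theta0_k, dtype=float).reshape(-1, 4)
    return bool(np.any(np.abs(t - t0) >= delta))
```

Strong consistency then amounts to the LSE eventually staying outside every such Δ_δ, almost surely.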
Theorem 1. Let Assumptions 1-4 be satisfied. Then, the k-regressor parameter vector θ̂_k = (ρ̂_1, ω̂_1, υ̂_1, φ̂_1, ..., ρ̂_k, ω̂_k, υ̂_k, φ̂_k) that minimizes (7) is a strongly consistent estimator of θ_k^0 = (ρ_1^0, ω_1^0, υ_1^0, φ_1^0, ..., ρ_k^0, ω_k^0, υ_k^0, φ_k^0) as Ψ(N,M) → ∞. That is,

  \hat\theta_k \to \theta_k^0 \quad a.s.\ \text{as}\ \Psi(N,M) \to \infty.   (13)

Proof: The proof follows an argument proposed by Wu [15], Lemma 1. Let θ̂_k = (ρ̂_1, ω̂_1, υ̂_1, φ̂_1, ..., ρ̂_k, ω̂_k, υ̂_k, φ̂_k) be a parameter vector that minimizes (7). Assume that the proposition θ̂_k → θ_k^0 a.s. as Ψ(N,M) → ∞ is not true. Then, there exists some δ > 0 such that ([1], Theorem 4.2.2, p. 69)

  \Pr( \hat\theta_k \in \Delta_\delta \ i.o. ) > 0.   (14)

This inequality, together with the definition of θ̂_k as a vector that minimizes L_k, implies

  \Pr\Big( \inf_{\theta_k \in \Delta_\delta} \big( \mathcal{L}_k(\theta_k) - \mathcal{L}_k(\theta_k^0) \big) \leq 0 \ i.o. \Big) > 0.   (15)

Using Lemma 2 we obtain

  \Pr\Big( \liminf_{\Psi(N,M) \to \infty}\ \inf_{\theta_k \in \Delta_\delta} \big( \mathcal{L}_k(\theta_k) - \mathcal{L}_k(\theta_k^0) \big) \leq 0 \Big) \geq \Pr\Big( \inf_{\theta_k \in \Delta_\delta} \big( \mathcal{L}_k(\theta_k) - \mathcal{L}_k(\theta_k^0) \big) \leq 0 \ i.o. \Big) > 0,   (16)

which contradicts (11). Hence,

  \hat\theta_k \to \theta_k^0 \quad a.s.\ \text{as}\ \Psi(N,M) \to \infty.   (17)

Remark: Lemma 1 and Theorem 1 remain valid even under less restrictive assumptions regarding the noise field {w(n,m)}. If the field {u(n,m)} is an i.i.d. real valued zero-mean random field with finite variance σ², and the sequence {a(i,j)} is a square summable deterministic sequence, i.e., \sum_{(r,s) \in D} a^2(r,s) < \infty, then Lemma 1 and Theorem 1 hold.

3.2 Consistency of the LSE for an Over-Estimated Model Order

Let k denote the assumed number of observed 2-D sinusoids, where k > P. Without loss of generality, we can assume that k = P + 1 (as the proof for k ≥ P + 1 follows immediately by repeating the same arguments). Let the periodogram (scaled by a factor of 2) of the field {w(n,m)} be given by

  I_w(\omega,\upsilon) = \frac{2}{NM} \Big| \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} w(n,m)\, e^{-j(n\omega + m\upsilon)} \Big|^2.   (18)

The parameter spaces Θ_P and Θ_{P+1} are defined as in Definition 2.
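The scaled periodogram (18), and the peak extraction that Theorem 2 below attributes to the spurious (P+1)-th regressor, can be sketched on the FFT grid. The grid discretization ω = 2πk/N, υ = 2πl/M is an assumption of this illustration; the theorem maximizes over the continuum (0,2π)².

```python
import numpy as np

def periodogram(w):
    # The periodogram (18), scaled by a factor of 2, on the FFT grid.
    N, M = w.shape
    return (2.0 / (N * M)) * np.abs(np.fft.fft2(w)) ** 2

def dominant_component(w):
    # Frequency pair maximizing I_w, and the amplitude rho satisfying
    # rho^2 = (2/NM) * I_w at that pair (cf. (19)-(20)).
    N, M = w.shape
    I = periodogram(w)
    I[0, 0] = 0.0  # exclude the DC bin; (0, 2*pi)^2 excludes omega = 0
    k, l = np.unravel_index(np.argmax(I), I.shape)
    return 2 * np.pi * k / N, 2 * np.pi * l / M, np.sqrt(2.0 * I[k, l] / (N * M))
```

For an on-grid sinusoid the peak of the scaled periodogram equals ρ²NM/2, so the amplitude is recovered to floating-point accuracy.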
Theorem 2. Let Assumptions 1-4 be satisfied. Then, the parameter vector θ̂_{P+1} = (ρ̂_1, ω̂_1, υ̂_1, φ̂_1, ..., ρ̂_P, ω̂_P, υ̂_P, φ̂_P, ρ̂_{P+1}, ω̂_{P+1}, υ̂_{P+1}, φ̂_{P+1}) ∈ Θ_{P+1} that minimizes (7) with k = P + 1 regressors as Ψ(N,M) → ∞ is composed of the vector θ̂_P = (ρ̂_1, ω̂_1, υ̂_1, φ̂_1, ..., ρ̂_P, ω̂_P, υ̂_P, φ̂_P), which is a strongly consistent estimator of θ_P^0 = (ρ_1^0, ω_1^0, υ_1^0, φ_1^0, ..., ρ_P^0, ω_P^0, υ_P^0, φ_P^0) as Ψ(N,M) → ∞; of the pair of spatial frequencies (ω̂_{P+1}, υ̂_{P+1}) that maximizes the periodogram of the observed realization of the field {w(n,m)}, i.e.,

  (\hat\omega_{P+1}, \hat\upsilon_{P+1}) = \arg\max_{(\omega,\upsilon) \in (0,2\pi)^2} I_w(\omega,\upsilon);   (19)

and of the element ρ̂_{P+1} that satisfies

  \hat\rho_{P+1}^2 = \frac{2}{NM}\, I_w(\hat\omega_{P+1}, \hat\upsilon_{P+1}).   (20)

Proof: Let θ_{P+1} = (ρ_1, ω_1, υ_1, φ_1, ..., ρ_P, ω_P, υ_P, φ_P, ρ_{P+1}, ω_{P+1}, υ_{P+1}, φ_{P+1}) be some vector in the parameter space Θ_{P+1}. We have

  \mathcal{L}_{P+1}(\theta_{P+1}) = \frac{1}{NM} \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} \Big( y(n,m) - \sum_{i=1}^{P+1} \rho_i \cos(\omega_i n + \upsilon_i m + \varphi_i) \Big)^2
  = \frac{1}{NM} \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} \Big( y(n,m) - \sum_{i=1}^{P} \rho_i \cos(\omega_i n + \upsilon_i m + \varphi_i) \Big)^2
    + \frac{1}{NM} \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} \rho_{P+1}^2 \cos^2(\omega_{P+1} n + \upsilon_{P+1} m + \varphi_{P+1})
    - \frac{2}{NM} \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} \Big( y(n,m) - \sum_{i=1}^{P} \rho_i \cos(\omega_i n + \upsilon_i m + \varphi_i) \Big) \rho_{P+1} \cos(\omega_{P+1} n + \upsilon_{P+1} m + \varphi_{P+1})
  = \mathcal{L}_P(\theta_P) + \frac{\rho_{P+1}^2}{2} + \frac{1}{2NM} \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} \rho_{P+1}^2 \cos(2\omega_{P+1} n + 2\upsilon_{P+1} m + 2\varphi_{P+1})
    - \frac{2}{NM} \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} w(n,m)\, \rho_{P+1} \cos(\omega_{P+1} n + \upsilon_{P+1} m + \varphi_{P+1})
    - \frac{2}{NM} \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} \Big( \sum_{i=1}^{P} \rho_i^0 \cos(\omega_i^0 n + \upsilon_i^0 m + \varphi_i^0) - \sum_{i=1}^{P} \rho_i \cos(\omega_i n + \upsilon_i m + \varphi_i) \Big) \rho_{P+1} \cos(\omega_{P+1} n + \upsilon_{P+1} m + \varphi_{P+1})
  = H_1(\theta_{P+1}) + H_2(\theta_{P+1}) + H_3(\theta_{P+1}),   (21)

where θ_P = (ρ_1, ω_1, υ_1, φ_1, ..., ρ_P, ω_P, υ_P, φ_P) ∈ Θ_P and

  H_1(\theta_{P+1}) = \mathcal{L}_P(\rho_1, \omega_1, \upsilon_1, \varphi_1, \ldots, \rho_P, \omega_P, \upsilon_P, \varphi_P) = \mathcal{L}_P(\theta_P),   (22)

  H_2(\theta_{P+1}) = \frac{\rho_{P+1}^2}{2} - \frac{2}{NM} \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} w(n,m)\, \rho_{P+1} \cos(\omega_{P+1} n + \upsilon_{P+1} m + \varphi_{P+1}),   (23)

  H_3(\theta_{P+1}) = \frac{1}{2NM} \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} \rho_{P+1}^2 \cos(2\omega_{P+1} n + 2\upsilon_{P+1} m + 2\varphi_{P+1})
    - \frac{2}{NM} \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} \Big( \sum_{i=1}^{P} \rho_i^0 \cos(\omega_i^0 n + \upsilon_i^0 m + \varphi_i^0) - \sum_{i=1}^{P} \rho_i \cos(\omega_i n + \upsilon_i m + \varphi_i) \Big) \rho_{P+1} \cos(\omega_{P+1} n + \upsilon_{P+1} m + \varphi_{P+1}).   (24)

Let θ̂_P = (ρ̂_1, ω̂_1, υ̂_1, φ̂_1, ..., ρ̂_P, ω̂_P, υ̂_P, φ̂_P) be a vector in Θ_P that minimizes H_1(θ_{P+1}) = L_P(θ_P). From [11] (or using Theorem 1 of the previous section),

  \hat\theta_P \to \theta_P^0 \quad a.s.\ \text{as}\ \Psi(N,M) \to \infty.   (25)

The function H_2 is a function of ρ_{P+1}, ω_{P+1}, υ_{P+1}, φ_{P+1} only. Evaluating the partial derivatives of H_2 with respect to these variables, it is easy to verify that the extremum points of H_2 are also the extremum points of the periodogram of the realization of the noise field. Moreover, let ρ^e, ω^e, υ^e, φ^e denote an extremum point of H_2. Then at this point

  H_2(\rho^e, \omega^e, \upsilon^e, \varphi^e) = -\frac{I_w(\omega^e, \upsilon^e)}{NM}.   (26)

Hence, the minimal value of H_2 is obtained at the coordinates ρ_{P+1}, ω_{P+1}, υ_{P+1}, φ_{P+1} where the periodogram of {w(n,m)} is maximal. Let ρ̂_{P+1}, ω̂_{P+1}, υ̂_{P+1}, φ̂_{P+1} denote the coordinates that minimize H_2. Then we have

  (\hat\omega_{P+1}, \hat\upsilon_{P+1}) = \arg\min_{(\omega,\upsilon) \in (0,2\pi)^2} H_2(\rho_{P+1}, \omega_{P+1}, \upsilon_{P+1}, \varphi_{P+1}) = \arg\max_{(\omega,\upsilon) \in (0,2\pi)^2} I_w(\omega,\upsilon),   (27)

and

  \hat\rho_{P+1}^2 = \frac{2}{NM}\, I_w(\hat\omega_{P+1}, \hat\upsilon_{P+1}).   (28)

By Assumptions 1, 2 and Theorem 1 of [13], we have

  \sup_{\omega,\upsilon} I_w(\omega,\upsilon) = O(\log NM).   (29)

Therefore,

  H_2(\hat\rho_{P+1}, \hat\omega_{P+1}, \hat\upsilon_{P+1}, \hat\varphi_{P+1}) = O\Big( \frac{\log NM}{NM} \Big).   (30)

Let θ̂_{P+1} ∈ Θ_{P+1} be the vector composed of the elements of the vector θ̂_P ∈ Θ_P and of ρ̂_{P+1}, ω̂_{P+1}, υ̂_{P+1}, φ̂_{P+1} defined above, i.e.,

  \hat\theta_{P+1} = (\hat\rho_1, \hat\omega_1, \hat\upsilon_1, \hat\varphi_1, \ldots, \hat\rho_P, \hat\omega_P, \hat\upsilon_P, \hat\varphi_P, \hat\rho_{P+1}, \hat\omega_{P+1}, \hat\upsilon_{P+1}, \hat\varphi_{P+1}).

We need to verify that this vector minimizes L_{P+1}(θ_{P+1}) on Θ_{P+1} as Ψ(N,M) → ∞.

Recall that for ω ∈ (0,2π) and φ ∈ [0,2π),

  \sum_{n=0}^{N-1} \cos(\omega n + \varphi) = \frac{ \sin\big( (N - \tfrac{1}{2})\omega + \varphi \big) + \sin\big( \tfrac{\omega}{2} - \varphi \big) }{ 2 \sin\tfrac{\omega}{2} } = O(1).   (31)

Hence, as N → ∞,

  \frac{1}{\log N} \sum_{n=0}^{N-1} \cos(\omega n + \varphi) = o(1),   (32)

and consequently

  \frac{1}{N} \sum_{n=0}^{N-1} \cos(\omega n + \varphi) = o\Big( \frac{\log N}{N} \Big).   (33)

Next, we evaluate H_3.
Consider the first term in (24). By (33) we have

  \frac{1}{2NM} \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} \rho_{P+1}^2 \cos(2\omega_{P+1} n + 2\upsilon_{P+1} m + 2\varphi_{P+1}) = o\Big( \frac{\log NM}{NM} \Big),   (34)

for any set of values ρ_{P+1}, ω_{P+1}, υ_{P+1}, φ_{P+1} may assume.

Consider the second term in (24). By (33), and unless there exists some i, 1 ≤ i ≤ P, such that (ω_{P+1}, υ_{P+1}) = (ω_i^0, υ_i^0), we have as Ψ(N,M) → ∞

  \frac{1}{NM} \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} \sum_{i=1}^{P} \rho_i^0 \rho_{P+1} \cos(\omega_i^0 n + \upsilon_i^0 m + \varphi_i^0) \cos(\omega_{P+1} n + \upsilon_{P+1} m + \varphi_{P+1}) = o\Big( \frac{\log NM}{NM} \Big),   (35)

for any set of values ρ_{P+1}, ω_{P+1}, υ_{P+1}, φ_{P+1} may assume.

Assume now that there exists some i, 1 ≤ i ≤ P, such that (ω_{P+1}, υ_{P+1}) = (ω_i^0, υ_i^0). Since by assumption there are no two different regressors with identical spatial frequencies, it follows that one of the estimated frequencies (ω_i, υ_i) is due to noise contribution. Hence, by interchanging the roles of (ω_{P+1}, υ_{P+1}) and (ω_i, υ_i), and repeating the above argument, we conclude that this term has the same order as in (35).

Similarly, for the third term in (24): by (33), and unless there exists some i, 1 ≤ i ≤ P, such that (ω_{P+1}, υ_{P+1}) = (ω_i, υ_i), we have as Ψ(N,M) → ∞

  \frac{1}{NM} \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} \sum_{i=1}^{P} \rho_i \rho_{P+1} \cos(\omega_i n + \upsilon_i m + \varphi_i) \cos(\omega_{P+1} n + \upsilon_{P+1} m + \varphi_{P+1}) = o\Big( \frac{\log NM}{NM} \Big).   (36)

However, such i for which (ω_{P+1}, υ_{P+1}) = (ω_i, υ_i) cannot exist, as this amounts to reducing the number of regressors from P + 1 to P, as two of them coincide. Hence, for any θ_{P+1} ∈ Θ_{P+1}, as Ψ(N,M) → ∞,

  H_3(\theta_{P+1}) = o\Big( \frac{\log NM}{NM} \Big).   (37)

On the other hand, the strong consistency (25) of the LSE under the correct model order assumption implies that as Ψ(N,M) → ∞ the minimal value of L_P(θ_P) is σ² \sum_{(r,s) \in D} a^2(r,s) a.s., while from (30) we have for the minimal value of H_2 that H_2(θ̂_{P+1}) = O(log NM / NM). Hence, the value of H_3(θ_{P+1}) at any point in Θ_{P+1} is negligible even relative to the values L_P(θ_P) and H_2(θ_{P+1}) assume at their respective minimum points.
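The Dirichlet-kernel bound (31), which drives the o(log NM / NM) estimates (34)-(36), is easy to check numerically; a small sketch:

```python
import numpy as np

def cos_sum_closed_form(N, omega, phi):
    # Closed form (31): sum_{n=0}^{N-1} cos(omega*n + phi)
    #   = [sin((N - 1/2)*omega + phi) + sin(omega/2 - phi)] / (2*sin(omega/2)),
    # bounded in N for fixed omega in (0, 2*pi), hence O(1).
    return (np.sin((N - 0.5) * omega + phi) + np.sin(omega / 2 - phi)) \
           / (2 * np.sin(omega / 2))

N, omega, phi = 1000, 1.234, 0.5
direct = np.cos(omega * np.arange(N) + phi).sum()
```

The direct sum stays bounded as N grows (by 1/|sin(ω/2)| up to a constant factor), which is exactly the O(1) behavior used above.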
Therefore, evaluating (21) as Ψ(N,M) → ∞ we have

  \mathcal{L}_{P+1}(\theta_{P+1}) = \mathcal{L}_P(\theta_P) + H_2(\rho_{P+1}, \omega_{P+1}, \upsilon_{P+1}, \varphi_{P+1}) + H_3(\theta_{P+1})
  = \mathcal{L}_P(\theta_P) + H_2(\rho_{P+1}, \omega_{P+1}, \upsilon_{P+1}, \varphi_{P+1}) + o\Big( \frac{\log NM}{NM} \Big).   (38)

Since L_P(θ_P) is a function of the parameter vector θ_P and is independent of ρ_{P+1}, ω_{P+1}, υ_{P+1}, φ_{P+1}, while H_2 is a function of ρ_{P+1}, ω_{P+1}, υ_{P+1}, φ_{P+1} and is independent of θ_P, the problem of minimizing L_{P+1}(θ_{P+1}) becomes separable as Ψ(N,M) → ∞. Thus minimizing (38) is equivalent
