Root-Exponential Accuracy for Coarse Quantization of Finite Frame Expansions

Felix Krahmer, Rayan Saab, and Rachel Ward

Abstract. In this note, we show that by quantizing the N-dimensional frame coefficients of signals in R^d using r-th-order Sigma-Delta quantization schemes, it is possible to achieve root-exponential accuracy in the oversampling rate λ := N/d. In particular, we construct a family of finite frames tailored specifically for coarse Sigma-Delta quantization that admit themselves as both canonical duals and Sobolev duals. Our construction allows for error guarantees that behave as e^{−c√λ}, where, under a mild restriction on the oversampling rate, the constants are absolute. Moreover, we show that harmonic frames can be used to achieve the same guarantees, but with the constants now depending on d.

I. INTRODUCTION

Signal quantization is a fundamental problem in signal processing. Viewing a signal as a vector in R^d, quantization involves replacing the vector with coefficients that are each chosen from a finite alphabet A. In particular, one can represent a vector x in R^d by a vector q in A^N, where N > d, in the following way. First, one computes a finite-frame expansion y := Ex, where E is an appropriately chosen full-rank matrix in R^{N×d} (see Section II for a precise definition). Next, one applies a quantization scheme to replace y with q. This approach will be referred to as frame quantization in the sequel. More specifically, the quantization schemes we study in this paper are designed to allow for good linear reconstruction of x, i.e., we focus on approximation formulas of the form x̃ := Fq, where F is one of the infinitely many left-inverses of E.

Clearly, the goal of a good quantization scheme is to allow for an accurate reconstruction of x from q. Thus, for reasonable frame quantization schemes, one expects that q ∈ A^N should allow for increasingly accurate and robust approximation of x as N increases. In the following paragraphs, we will introduce two frame quantization schemes, the second of which, Σ∆ quantization, will be the main focus of this paper.

Felix Krahmer is with the Hausdorff Center for Mathematics, Universität Bonn, Bonn, Germany, email: [email protected]. Rayan Saab is with the Department of Mathematics, Duke University, Durham, NC, USA, email: [email protected]. Rachel Ward is with the Department of Mathematics, University of Texas at Austin, Austin, TX, USA, email: [email protected]. This paper was presented in part at the 9th International Conference on Sampling Theory and Applications (SAMPTA 2011).

A. Memoryless scalar quantization

In the context of quantization using finite-frame representations, the most intuitive approach is memoryless scalar quantization (MSQ), which requires replacing each coefficient of y = Ex with its nearest element from A. That is, y is replaced by q, where q_i = argmin_{v∈A} |y_i − v|. On the other hand, this naive approach treats each of the coefficients of y independently, and does not exploit the correlations between coefficients of y resulting from the lower-dimensional representation y = Ex. Goyal et al. [1] show that, even when using an optimal reconstruction scheme to approximate x from its MSQ-quantized frame coefficients, the expected value of the error cannot be better than O(λ^{−1}). Here, the expectation is with respect to some probability measure on x that is, for example, absolutely continuous. One can do much better with other quantization schemes. In particular, Sigma-Delta (Σ∆) quantization schemes are more complex, but can achieve better error rates than MSQ by exploiting the redundancy inherent in y.
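To make the MSQ baseline concrete, the following Python sketch (not part of the paper) quantizes the frame coefficients of a signal entrywise and reconstructs linearly with the canonical dual. The random tight frame, the midrise alphabet parameters, and the signal scaling are illustrative assumptions only.

```python
import numpy as np

def msq(y, delta=0.25, K=8):
    """Memoryless scalar quantization: map each frame coefficient to the nearest
    element of the midrise alphabet {±(m - 1/2)*delta : 1 <= m <= K}."""
    levels = delta * (np.arange(-K, K) + 0.5)            # the 2K midrise levels
    return levels[np.argmin(np.abs(y[:, None] - levels), axis=1)]

d, N = 2, 64                                             # oversampling rate lambda = N/d = 32
rng = np.random.default_rng(0)
E = np.linalg.qr(rng.standard_normal((N, d)))[0]         # frame matrix with orthonormal columns (tight, frame bound 1)
x = 0.3 * rng.standard_normal(d)                         # signal to be quantized
q = msq(E @ x)                                           # quantize each frame coefficient independently
x_msq = np.linalg.pinv(E) @ q                            # linear reconstruction with the canonical dual
print(np.linalg.norm(x - x_msq))                         # on average, the error decays only like 1/lambda (cf. [1])
```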
B. Σ∆ quantization of oversampled bandlimited functions

Σ∆ schemes were introduced for the quantization of oversampled bandlimited functions [2], and have since been studied extensively. In the setting of bandlimited functions, the oversampling rate λ is the ratio of the actual sampling rate to the Nyquist rate, and the signal is reconstructed from the samples via a low-pass filter. Since the time-shifted versions of the low-pass filter, as they are used in the reconstruction formula, form an infinite-dimensional frame, this setup can be seen as analogous to the finite-frame case discussed in this paper. In particular, the oversampling rate in the framework of bandlimited functions corresponds to the oversampling rate for finite frame expansions as above.

Daubechies and DeVore [3] showed that if the samples of a bandlimited function f are quantized according to a stable r-th-order Σ∆ scheme, the L∞ approximation error ‖f − f̃‖_∞ is O(λ^{−r}). Subsequently, Güntürk [4] showed that certain 1-bit Σ∆ schemes (that is, A = {−1, 1}) can achieve exponential precision, i.e., an L∞ error decay of order e^{−C₁λ}, by choosing the order r as a function of λ. Here C₁ < 1 is a small constant (in what follows, C_i will denote a constant, indexed by order of appearance). This work was improved on by Deift et al. [5], who showed that the above constant can be pushed to C₁ ≈ 0.102. In order to achieve exponential precision, these works use stable families of r-th-order Σ∆ schemes with approximation errors bounded by C₂(r)λ^{−r}. For well-behaved C₂(r), the optimal choice r^#(λ) achieves exponential precision.

C. Σ∆ quantization of finite frame expansions

The use of Σ∆ quantization in the setting of finite frames was first explored by Benedetto et al. [6]. In contrast to the setting of bandlimited functions, where the error is most naturally measured with respect to the L∞-norm, in the finite-dimensional setting it is more natural to measure the error with respect to the Euclidean, i.e., ℓ₂(R^d), metric. In [6], it was shown that with linear reconstruction, even first-order Σ∆ schemes outperform MSQ when the frames are sufficiently redundant and chosen from appropriate families. Subsequent work showed that it is possible to achieve error bounds with respect to the Euclidean metric that decay like O(λ^{−r}). For example, in [7], Bodmann et al. proved that with tight frames of special design, r-th-order schemes achieve an error decay rate of O(λ^{−r}) when the left-inverse of the matrix E used in linear reconstruction is the Moore-Penrose inverse. Using a different approach, Blum et al. [8] showed that such an error rate can be achieved by using alternative left-inverses, called Sobolev duals, for any frame that arises via uniform sampling from piecewise smooth frame-paths. Recently, Güntürk et al. [9] showed that for randomly generated frames, error bounds of O(λ^{−(r−1/2)α}), for α ∈ (0,1), are attainable via the use of Sobolev duals. In particular, the parameter α controls the probability (on the draw of the frame) with which the result holds. This allowed [9] to apply Σ∆ quantization in the context of compressed sensing [10], [11].

In this note, we combine the techniques of Blum et al. [8] and Güntürk [4]/Deift et al. [5] to show that it is possible to achieve root-exponential accuracy in the finite frame setting. In particular, we show that for a family of tight frames of special design that admit themselves as Sobolev duals, and for harmonic frames, root-exponential error rates of O(e^{−C√λ}) are achievable.
Remark 1. In [7], Bodmann et al. study r-th-order Σ∆ schemes that employ scalar quantizers operating on [−L, L]. Their schemes require the input sequence, i.e., the frame expansion of x, to be bounded by L − (2^r − 1)δ/2, where δ is the quantization step size. Consequently, there is an upper bound on admissible values of r for these schemes to work, and this does not allow one to optimize the value of r freely as a function of λ. A similar issue arises in [9], where the frames are random. On the other hand, in the bandlimited setting, [4] and [5] proposed Σ∆ schemes that do not suffer from an r-dependent constraint on the input sequence. However, the involved constants grow in r. By freely optimizing r as a function of λ, [4] and [5] balance these effects, obtaining exponential precision in λ (measured in the L∞ norm). In this paper, we use the Σ∆ schemes of [4] and [5] for frame quantization and Sobolev duals as in [8] for linear reconstruction. Consequently, we can freely optimize r as a function of λ. This allows us to obtain root-exponential precision in the ℓ₂(R^d) norm.

D. Organization of the paper

The remainder of the paper is organized as follows. In Section II, we introduce the relevant basic concepts from frame theory and we describe Σ∆ quantization. In Section III, we construct a family of frames that admit themselves as both canonical and Sobolev duals, and we show that they allow root-exponential approximation errors. We derive explicit bounds on the constants; in particular, we show that the error is bounded by C₃e^{−C₄√λ}, where, except for very small oversampling rates λ := N/d ≲ (log d)², the constants C₃ and C₄ do not depend on the dimension d. In Section IV, we study the performance of harmonic frames, showing that they too allow root-exponential bounds on the reconstruction error, albeit without an explicit analysis of the dimension dependence of the error. Finally, in Section V, we include the results of numerical experiments showing that the effective decay rate of the error as a function of λ, when using the proposed schemes, is indeed root-exponential. This highlights the fact that our mathematical analysis (for the proposed frames and reconstruction method) is not sub-optimal but matches the empirically observed error decay.

II. PRELIMINARIES

A. Finite frames

We say that a finite collection of vectors {e_n}_{n=1}^N is a frame for R^d with frame bounds 0 < A ≤ B < ∞ if

    ∀x ∈ R^d:  A‖x‖₂² ≤ Σ_{n=1}^{N} |⟨x, e_n⟩|² ≤ B‖x‖₂²,   (1)

where ‖·‖₂ denotes the Euclidean norm, and A and B are the largest and smallest numbers, respectively, such that (1) holds. If A = B, we say that the frame is tight. If ‖e_n‖₂ = 1 for each n ∈ {1,...,N}, then we say that the frame is unit-norm. Given the frame vectors {e_n}_{n=1}^N, for convenience, we define the frame matrix E ∈ R^{N×d} with e_k as its k-th row. A matrix E ∈ R^{N×d} is thus a frame matrix if and only if it has rank d. Let x be a vector in R^d. Then we say that y = Ex is the frame expansion of x with respect to E. Equivalently, we say that y_n, n ∈ {1,...,N}, are the frame coefficients of x.

Consider a frame {f_n}_{n=1}^N and let F be the matrix whose k-th column is f_k. F is called a dual (or synthesis) frame associated with {e_n}_{n=1}^N if the frame matrix F ∈ R^{d×N} satisfies FE = I_d, where I_d ∈ R^{d×d} is the identity matrix. In other words, a dual frame matrix F is a left inverse of E. As N > d, there are infinitely many such left-inverses. In particular, the canonical dual frame (the Moore-Penrose inverse) of E is given by E^† := (E^*E)^{−1}E^*.
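As a quick numerical illustration of these definitions (a sketch, not from the paper; the Gaussian random frame is an arbitrary choice), the optimal frame bounds are the extreme squared singular values of E, and the canonical dual is one particular left inverse:

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 3, 30
E = rng.standard_normal((N, d))                  # rank-d frame matrix; its rows are the frame vectors e_n

s = np.linalg.svd(E, compute_uv=False)
A, B = s[-1] ** 2, s[0] ** 2                     # optimal frame bounds in (1): A*||x||^2 <= sum |<x,e_n>|^2 <= B*||x||^2
E_dagger = np.linalg.inv(E.T @ E) @ E.T          # canonical dual (Moore-Penrose inverse) E† = (E*E)^{-1} E*
assert np.allclose(E_dagger @ E, np.eye(d))      # every dual frame F is a left inverse: FE = I_d
print(A, B)
```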
B. Sigma-Delta Quantization of Finite Frame Expansions

A midrise quantization alphabet is a set of the type

    A = A_K^δ = {±(m − 1/2)δ : 1 ≤ m ≤ K, m ∈ Z}.

For such an alphabet, we define the associated scalar quantizer Q : R → A by

    Q(h) := argmin_{q∈A} |h − q|.

A quantization scheme is a procedure that employs such a quantizer to represent multi- or even infinite-dimensional signals by a sequence of symbols from the alphabet A. In the context of redundant representations, MSQ is the most basic form of quantization; here, x in R^d is encoded by quantizing the entries of its frame expansion y = Ex independently to obtain a vector q of quantized coefficients, i.e., q_n = Q(y_n). Subsequently, decoding is achieved by using a dual frame F to obtain the approximation x̃ = Fq. However, as mentioned previously, MSQ is suboptimal since it makes no use of the fact that the frame E maps R^d to a d-dimensional subspace of R^N, spanned by the columns of E. On the other hand, Σ∆ schemes, a class of recursive algorithms first applied to the setting of finite frame expansions in [6], explicitly make use of the dependencies in the vectors of the reconstruction frame F to achieve robust, high-precision quantization (see, e.g., [8]).

Adopting the notation generally more common in the framework of bandlimited functions ([4], [5]), a general r-th-order Σ∆ scheme with alphabet A runs the following iteration for n = 1, 2, ..., N:

    q_n = Q(ρ(u_{n−1}, u_{n−2}, ..., u_{n−r}, y_n)),
    (Δ^r u)_n = y_n − q_n.   (2)

Here the operator Δ^r results from r subsequent concatenations of the finite difference operator (Δw)_n = w_n − w_{n−1}, ρ : R^{r+1} → R is a fixed function known as the quantization rule, and Q is the scalar quantizer associated with A as above. We refer to the sequence u_n as the state sequence. In vector form, (2) can be restated as

    D^r u = y − q,   (3)

where D is the first-order N×N difference matrix defined by

    D_{ij} :=  1  if i = j,
              −1  if i = j + 1,
               0  otherwise.   (4)

In this formulation, the iterative nature of (2) is reflected in the invertibility of D. Suppose that F ∈ R^{d×N} is the dual frame to E used for linear reconstruction, and suppose that x̃ = Fq is the reconstructed approximation to x. Using that FD^r u = F(y − q) = FEx − x̃ = x − x̃, it was shown in [12] that the linear reconstruction error of a stable r-th-order Σ∆ scheme with state variables u can be bounded by

    ‖x − x̃‖₂ = ‖FD^r u‖₂ ≤ ‖FD^r‖_{2→2} ‖u‖₂ ≤ N^{1/2} ‖FD^r‖_{2→2} ‖u‖_∞.   (5)

Here ‖·‖_{2→2} denotes the matrix norm ‖M‖_{2→2} := max_{‖v‖₂=1} ‖Mv‖₂. In the absence of further information about the vector u (which is typically the case), a reasonable quantization procedure should yield good bounds for both norm estimates on the right-hand side of (5). To control the factor ‖u‖_∞, we concentrate on schemes that are stable, that is, there exist constants C₅ > 0 and C₆ > 0 such that for any N > 0 and y ∈ R^N one has

    ‖y‖_∞ ≤ C₅  ⟹  ‖u‖_∞ ≤ C₆.   (6)

The constants C₅ and C₆ may depend on the order r, the quantization rule ρ, and the alphabet A, but they should not depend on N (and hence not on the oversampling rate λ either). Stability is a crucial concept in the theory of Σ∆ quantization, both for bandlimited signals (compare [13]) and for frames (see for example [6]). The construction of stable schemes that allow for good bounds on C₆ will be discussed in the next section. Now we can bound ‖y‖_∞ = max_{n∈{1,...,N}} |⟨e_n, x⟩| ≤ max_{n∈{1,...,N}} ‖e_n‖₂ ‖x‖₂. Thus, in order to ensure that ‖y‖_∞ ≤ C₅ uniformly for all x with ‖x‖₂ ≤ C₇, we need that max_{n∈{1,...,N}} ‖e_n‖₂ ≤ C₅/C₇.
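The following Python sketch (not part of the paper) implements iteration (2) for a generic order r, using the standard greedy quantization rule ρ(u_{n−1}, ..., u_{n−r}, y_n) = y_n + Σ_{j=1}^{r} (−1)^{j+1} C(r,j) u_{n−j}. The greedy rule is only one common choice of ρ, not the specific rule analyzed in Section II-C below, and the alphabet parameters are illustrative.

```python
import numpy as np
from math import comb

def midrise_quantizer(t, delta, K):
    """Scalar quantizer Q associated with the midrise alphabet A_K^delta."""
    levels = delta * (np.arange(-K, K) + 0.5)
    return levels[np.argmin(np.abs(levels - t))]

def sigma_delta(y, r, delta=0.25, K=8):
    """r-th-order Sigma-Delta quantization of the sequence y, cf. iteration (2).
    The greedy rule below is one standard stable choice of the quantization rule rho."""
    N = len(y)
    u = np.zeros(N + r)                           # state sequence with zero initial conditions u_0 = ... = u_{1-r} = 0
    q = np.zeros(N)
    for n in range(N):
        rho = y[n] + sum((-1) ** (j + 1) * comb(r, j) * u[n + r - j] for j in range(1, r + 1))
        q[n] = midrise_quantizer(rho, delta, K)
        u[n + r] = rho - q[n]                     # equivalent to (Delta^r u)_n = y_n - q_n
    return q, u[r:]                               # quantized sequence q and state sequence u_1, ..., u_N
```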
For such a frame E, we then seek to find a dual frame F such that ‖FD^r‖_{2→2} is minimized. This is achieved by the Sobolev dual introduced in [8]. The r-th-order Sobolev dual frame of a given frame E is given by

    F_r := (D^{−r}E)^† D^{−r}.

As desired, F_r is the left-inverse of E that minimizes the norm ‖FD^r‖_{2→2} over all left inverses F, FE = I (see [8]). Now two approaches are conceivable: On the one hand, one can attempt to design E to yield particularly good bounds for this minimum. We will follow this approach in Section III, introducing a class of frames where the canonical dual and the Sobolev dual coincide. On the other hand, one can work with a given frame. We will follow this approach in Section IV, analyzing the bounds for the harmonic frame, as it has been discussed for example in [6].
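A small numerical sketch (not from the paper; the random tight frame and the parameters are arbitrary) of the Sobolev dual F_r := (D^{−r}E)^† D^{−r}: it is a left inverse of E, and its value of ‖FD^r‖_{2→2} is no larger than, and typically much smaller than, that of the canonical dual.

```python
import numpy as np

def difference_matrix(N):
    """First-order difference matrix D of (4): 1 on the diagonal, -1 on the subdiagonal."""
    return np.eye(N) - np.eye(N, k=-1)

def sobolev_dual(E, r):
    """r-th-order Sobolev dual F_r := (D^{-r} E)† D^{-r} of the frame matrix E."""
    N = E.shape[0]
    D_inv_r = np.linalg.matrix_power(np.linalg.inv(difference_matrix(N)), r)
    return np.linalg.pinv(D_inv_r @ E) @ D_inv_r

rng = np.random.default_rng(2)
d, N, r = 2, 50, 2
E = np.linalg.qr(rng.standard_normal((N, d)))[0]             # a tight frame with frame bound 1
F_r = sobolev_dual(E, r)
Dr = np.linalg.matrix_power(difference_matrix(N), r)
assert np.allclose(F_r @ E, np.eye(d))                       # F_r is a left inverse of E
print(np.linalg.norm(F_r @ Dr, 2),                           # ||F_r D^r||_{2->2}: minimal over all duals of E
      np.linalg.norm(np.linalg.pinv(E) @ Dr, 2))             # ||E† D^r||_{2->2}: the canonical dual is typically worse
```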
C. Superpolynomial Sigma-Delta Quantization

Note that the constants C₅ and C₆ in (6) depend on r, so a sharper analysis will require taking these dependencies into account. The first deduction of superpolynomial decay from explicitly r-dependent bounds for the solution of system (2) was provided in [3] in the context of Σ∆ quantization for bandlimited functions. In [3], the core idea is to choose the order r of the Σ∆ modulator adaptively as a function of the oversampling rate and to choose the underlying quantization rule to be a non-linear function that involves a concatenation of sign functions.

In [4], the author derives a framework that allows for stronger error decay rates (exponential in the context of bandlimited functions). The approach is based on an auxiliary sequence v_n that is defined recursively in terms of r of its non-subsequent previous values and an associated linear quantization rule. The optimal error decay in this framework is provided in [5]. More specifically, one formally substitutes u = g ∗ v for a given g ∈ R^{{0,...,m}}, for some m ≥ r, with g₀ = 1, and chooses the quantization rule in terms of the new variables to be ρ(v_{n−1}, v_{n−2}, ..., y_n, y_{n−1}, ...) = (h ∗ v)_n + y_n, where h = δ^{(0)} − Δ^r g with δ^{(0)} the Kronecker delta. Then (2) reads as follows:

    q_n = Q((h ∗ v)_n + y_n),   (7)
    v_n = (h ∗ v)_n + y_n − q_n.   (8)

Note that as (Δ^r g)₀ = g₀ = 1 and hence h₀ = 0, this formula again describes how v_n is computed recursively from the v_j, j < n. Now by definition of the midrise quantization alphabet A_K^δ and its scalar quantizer Q, one has

    |v_n| ≤ max( δ/2, ‖h‖₁ ‖(v_j)_{j=1}^{n−1}‖_∞ + ‖y‖_∞ − (K − 1/2)δ ),

which inductively shows that ‖v‖_∞ ≤ δ/2, i.e., stability, for all input sequences y with ‖y‖_∞ ≤ µ, provided that ‖h‖₁ δ/2 + µ ≤ Kδ. Here ‖·‖₁ denotes the ℓ₁ norm, given by ‖h‖₁ = Σ_j |h_j|.

Stability of this auxiliary scheme automatically implies that the scheme in the original variables u = g ∗ v is also stable, as long as the quantized bits are computed using the v_n's. One estimates

    ‖u‖_∞ ≤ ‖g‖₁ ‖v‖_∞ ≤ (δ/2) ‖g‖₁.   (9)

These estimates motivate the study of the following optimization problem, first posed in [4]:

    Minimize ‖g‖₁ over all g ∈ ℓ₁ subject to ‖h‖₁ = ‖δ^{(0)} − Δ^r g‖₁ ≤ 2K − 2µ/δ.   (10)

To make this problem more tractable, the author restricts the problem to minimally sparse h, i.e., with only r non-zero entries (albeit distributed over a longer interval). This idea allows for the construction of admissible pairs (g, h) that yield the bound

    ‖g‖₁ ≤ C₈ C₉^r r^r   (11)

for some constants C₈, C₉ that depend on µ. With the currently best-known constants, resulting from the optimized constructions derived in [5], we can summarize these considerations as follows.

Proposition 2. There exists a universal constant C₈ > 0 such that for any midrise quantization alphabet A = A_K^δ, for any order r ∈ N, and for all µ < δ(K − 1/2), there exists g ∈ R^m, for some m > r, such that the Σ∆ scheme given in (7) is stable for all input signals y with ‖y‖_∞ ≤ µ and

    ‖u‖_∞ ≤ C₈ C₉^r r^r (δ/2),   (12)

where u = g ∗ v as above and C₉ = ⌈π²/(cosh^{−1}(γ))²⌉ · e/π with γ := 2K − 2µ/δ.

III. SOBOLEV SELF-DUAL FRAMES

In this section we construct a family of frames F^{(r)}_{d,N} for R^d, parametrized explicitly by an order r ∈ Z, r ≥ 1. In particular, for any d, N, and r, we construct frames that admit themselves as both canonical and Sobolev duals of order r. We show that the optimal choice of frames from this family allows for a root-exponential error decay rate (by linear reconstruction) when used for the redundant Σ∆ quantization of signals in R^d. Constructing such frames for r = 1 and r > 1 will be the focus of the next two subsections, respectively.

To that end, we now focus on some useful properties of D, defined in (4). Recall that for any matrix M in R^{m×n} of rank k, there exists a singular value decomposition (SVD) of the form M = U_M S_M V_M^*, where U_M ∈ R^{m×k} is a matrix with orthonormal columns, S_M ∈ R^{k×k} is a diagonal matrix with strictly non-negative entries, and V_M ∈ R^{n×k} is a matrix with orthonormal columns. We will use an equivalent (full) form of the above factorization, in which U_M ∈ R^{m×m} is orthogonal, S_M ∈ R^{m×k} is "diagonal" (that is, it contains a k×k diagonal submatrix, with the remaining entries being zero), and V_M ∈ R^{n×k} is a matrix with orthonormal columns as before.

In particular, the difference matrix D admits a singular value decomposition D = U_D S_D V_D^*, where U_D and V_D are orthonormal matrices and S_D is a diagonal matrix, given respectively (see [14], [15]) by

    U_D(k,l) = √(2/(N+1/2)) cos( 2(k−1/2)(N−l+1/2)π / (2N+1) ),   (13)
    V_D(k,l) = (−1)^{k+1} √(2/(N+1/2)) sin( 2klπ / (2N+1) ),   (14)
    S_D(k,l) = 2 δ(k,l) cos( lπ / (2N+1) ).   (15)

Above, k, l ∈ {1,...,N}, δ(k,l) is the Kronecker delta, and M(k,l) indicates the entry in the k-th row and l-th column of M.

We now briefly summarize how Sobolev self-dual frames arise. Let E and F be dual frames, i.e., FE = I, and note that in this section we will design both E and F. Recall that in the context of Σ∆ quantization of redundant frame expansions, we aim to control the error associated with linear reconstruction. Since the above error satisfies ‖x − F Q_{Σ∆}(Ex)‖₂ ≤ ‖FD^r‖_{2→2} ‖u‖₂ (where Q_{Σ∆} denotes r-th-order Σ∆ quantization), we seek E and F such that ‖FD^r‖_{2→2} is nicely bounded. In particular, it is natural to consider only the Sobolev duals, which minimize ‖FD^r‖_{2→2} over all duals of E. With this choice of F, we have ‖FD^r‖_{2→2} = 1/σ_min(D^{−r}E). On the other hand, for stability considerations we seek E so that ‖Ex‖₂ is bounded, and thus it is reasonable to restrict our attention to tight frames with frame bound 1. With this choice, the expression 1/σ_min(D^{−r}E) is minimized when E consists of the right singular vectors of D^{−r} corresponding to its largest singular values. As a result, the Sobolev dual and the canonical dual of E agree; the frame is Sobolev self-dual. This argument is made precise in Lemma 3 and Theorem 8.
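The closed-form factors (13)-(15) and the choice of E just described can be checked numerically. The sketch below (not part of the paper; N and d are arbitrary small values) verifies the SVD of D and that taking the last d columns of U_D gives 1/σ_min(D^{−1}E) = 2cos((N−d+1)π/(2N+1)), the quantity that reappears in Lemma 3(iii) below.

```python
import numpy as np

N, d = 12, 3
D = np.eye(N) - np.eye(N, k=-1)                        # difference matrix (4)
k = np.arange(1, N + 1)[:, None]                       # row index k
l = np.arange(1, N + 1)[None, :]                       # column index l

# Closed-form factors of D = U_D S_D V_D^* from (13)-(15)
U = np.sqrt(2 / (N + 0.5)) * np.cos(2 * (k - 0.5) * (N - l + 0.5) * np.pi / (2 * N + 1))
V = (-1.0) ** (k + 1) * np.sqrt(2 / (N + 0.5)) * np.sin(2 * k * l * np.pi / (2 * N + 1))
S = np.diag(2 * np.cos(np.arange(1, N + 1) * np.pi / (2 * N + 1)))

assert np.allclose(U @ S @ V.T, D)                     # (13)-(15) indeed form an SVD of D
assert np.allclose(U.T @ U, np.eye(N)) and np.allclose(V.T @ V, np.eye(N))

E = U[:, -d:]                                          # right singular vectors of D^{-1} with the largest singular values
sigma_min = np.linalg.svd(np.linalg.inv(D) @ E, compute_uv=False)[-1]
assert np.isclose(1 / sigma_min, 2 * np.cos((N - d + 1) * np.pi / (2 * N + 1)))
```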
A. First-order Sobolev self-dual frames

We begin with the construction for the case r = 1 and some of its useful properties.

Lemma 3. Suppose that E ∈ R^{N×d} is a frame matrix for R^d, with frame vectors {e_n}_{n=1}^N given by

    e_n(l) = √(2/(N+1/2)) cos( (n−1/2)(d−l+1/2)π / (N+1/2) ),  l ∈ {1,...,d}.   (16)

Let F and E^† be the first-order Sobolev dual and the canonical dual of E, respectively. Then
(i) E is a tight frame with frame bound 1,
(ii) F = E^† = E^*,
(iii) ‖FD‖_{2→2} = 2cos( (N−d+1)π / (2N+1) ).

Proof: By definition, E = [u_{N−d+1} | ··· | u_{N−1} | u_N], where the u_i are the columns of U_D as above. As U_D is unitary, the columns are orthonormal, which implies (i). Furthermore, R := U_D^* U_E ∈ R^{N×d} is of the form

    R(i,j) = δ(i − (N − d), j).   (17)

In other words, the entries of R are zero except on the diagonal of its lowermost square d×d submatrix, where they are 1. Thus, E^† = R^* U_D^* = E^*.

To finish the proof of (ii), recall that FD = (D^{−1}E)^†, which directly gives

    F = (V_D (S_D^{−1} R))^† V_D S_D^{−1} U_D^* = R^* U_D^* = E^*.

To prove (iii), we write FD using the SVDs of F and D to get

    FD = (R^* U_D^*)(U_D S_D V_D^*) = (R^* S_D) V_D^*,

which is itself an SVD of FD. Therefore,

    ‖FD‖_{2→2} = 2cos( (N−d+1)π / (2N+1) ).

B. Higher-order self-dual frames

To deal with the case r > 1, we examine the properties of D^r. To that end, let D^r = U_{D^r} S_{D^r} V_{D^r}^*, with r ≥ 1, be the singular value decomposition of D^r, and note that D^r is a Toeplitz matrix. In what follows, we will assume that U_{D^r} can be computed (numerically), but we do not provide an explicit expression for its elements. Our technique in generalizing the results of the previous section to the case r ≥ 1 will be very similar to the approach used in the proof of Lemma 3. The main difference is that rather than compute S_{D^r}, we will approximate it by (S_D)^r using Weyl's inequalities (see, e.g., [16, Thm 4.3.6]) as in [9].

Theorem 4 (Weyl). Let Σ and ∆ be N×N Hermitian matrices with eigenvalues λ₁(Σ) ≥ λ₂(Σ) ≥ ... ≥ λ_N(Σ) and λ₁(∆) ≥ λ₂(∆) ≥ ... ≥ λ_N(∆), and let λ₁(X) ≥ λ₂(X) ≥ ... ≥ λ_N(X) be the eigenvalues of X = Σ + ∆. Then,
1) λ_i(X) ≥ λ_{i+j}(Σ) + λ_{N−j}(∆)  ∀ j ∈ {0, 1, 2, ..., N−i},
2) λ_i(X) ≤ λ_{i−j}(Σ) + λ_{j+1}(∆)  ∀ j ∈ {0, 1, 2, ..., i−1}.

We will apply Weyl's theorem to Σ = (DD^*)^r, ∆ = −(DD^*)^r + D^r D^{*r}, and X = D^r D^{*r}. This will yield estimates of the eigenvalues of D^r D^{*r}, and hence estimates of the entries of S_{D^r} in terms of (S_D)^r (the r-th powers of the singular values of D). To that end, we require estimates of the singular values of D^r D^{*r} − (DD^*)^r.

Proposition 5. Let ∆ ∈ R^{N×N} be as above and let I := {(i,j) : (i,j) ∈ {1,...,r}×{1,...,r} ∪ {N−r+1,...,N}×{N−r+1,...,N}}. Then ∆_{i,j} = 0 except possibly when (i,j) ∈ I. We make no claims about the values of ∆_{i,j} when (i,j) ∈ I.

The proof of this proposition follows trivially from explicitly evaluating Σ_{i,j} and X_{i,j} on I^c = {(i,j) : i ∈ {1,...,N}, j ∈ {1,...,N}, (i,j) ∉ I} and noting that they are equal; the details are omitted. In fact, the middle N−2r rows of Σ and X form identical matrices. Specifically, the entries in the (r+1)-th row comprise the coefficients of the polynomial (−1)^r(1−z)^{2r}. The (r+1+j)-th rows, j ∈ {0,...,N−2r−1}, are formed by shifting the coefficients in the (r+1)-th row j times to the right. For a full proof, see [9].

Thus, ∆ has at most 2r non-zero eigenvalues. We make no assumptions about their signs (the ordering of eigenvalues matters in applying Weyl's inequalities). On the other hand, we are certain that the N−2r middle eigenvalues are zero.
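A quick numerical check of Proposition 5 (a sketch, not from the paper; N and r are arbitrary): the perturbation ∆ is supported on the two r×r corner blocks, and it has at most 2r non-zero eigenvalues.

```python
import numpy as np

N, r = 14, 2
D = np.eye(N) - np.eye(N, k=-1)
Sigma = np.linalg.matrix_power(D @ D.T, r)                           # (DD^*)^r
X = np.linalg.matrix_power(D, r) @ np.linalg.matrix_power(D.T, r)    # D^r D^{*r}
Delta = X - Sigma                                                    # the perturbation of Proposition 5

support = np.argwhere(~np.isclose(Delta, 0))                         # indices of the non-zero entries (0-based)
in_corner = (support < r).all(axis=1) | (support >= N - r).all(axis=1)
assert np.all(in_corner)                                             # non-zeros lie in the two r-by-r corner blocks
print(np.sum(~np.isclose(np.linalg.eigvalsh(Delta), 0)))             # number of non-zero eigenvalues: at most 2r
```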
Denoting by λ_j(M) the j-th largest eigenvalue of a Hermitian matrix M ∈ R^{N×N}, we are now ready to prove the following proposition.

Proposition 6. For D ∈ R^{N×N} as before and with N > 4r,

    λ_{min(j+2r, N)}((DD^*)^r) ≤ λ_j(D^r D^{*r}) ≤ λ_{max(j−2r, 1)}((DD^*)^r),  j = 1,...,N.

Proof: Noting that (DD^*)^r and D^r D^{*r} are Hermitian, using Weyl's inequalities we will first bound the middle eigenvalues of D^r D^{*r}. Specifically,

    λ_{j+2r}((DD^*)^r) ≤ λ_j(D^r D^{*r}) ≤ λ_{j−2r}((DD^*)^r)  ∀ j ∈ {2r+1, ..., N−2r}.

This leaves the largest 2r and smallest 2r eigenvalues. We start with the largest ones, noting that λ_{2r}(D^r D^{*r}) ≤ ... ≤ λ₁(D^r D^{*r}) = ‖D^r‖²_{2→2} by definition. But ‖D^r‖²_{2→2} ≤ ‖D‖^{2r}_{2→2} = λ₁((DD^*)^r), so we have a bound from above for the largest 2r eigenvalues. To bound them from below, just apply the relevant Weyl inequalities. This yields

    λ_{j+2r}((DD^*)^r) ≤ λ_j(D^r D^{*r}) ≤ λ₁((DD^*)^r)  ∀ j ∈ {1, ..., 2r}.

We now turn to the smallest eigenvalues. To that end, recall that for any invertible matrix M ∈ R^{N×N}, λ_j(MM^*) = (σ_j(M))², where σ_j(M) denotes the j-th largest singular value of M. Moreover, σ_j(M) = (σ_{N−j+1}(M^{−1}))^{−1}. Now note that (σ₁(D^{−r}))² = ‖D^{−r}‖²_{2→2} ≤ ‖D^{−1}‖^{2r}_{2→2} = (σ₁(D^{−1}))^{2r}. We can thus conclude that

    λ_{N−2r+1}(D^r D^{*r}) ≥ ... ≥ λ_N(D^r D^{*r}) ≥ λ_N((DD^*)^r).

We have thus bounded all the smallest eigenvalues from below. To obtain upper bounds, we again use Weyl's inequalities. This yields

    λ_N((DD^*)^r) ≤ λ_j(D^r D^{*r}) ≤ λ_{j−2r}((DD^*)^r)  ∀ j ∈ {N−2r+1, ..., N}.

This trivially yields the following bounds on the singular values of D^r, i.e., the diagonal entries of S_{D^r}, which we will refer to by σ_j(D^r), j ∈ {1,...,N}, where σ₁(D^r) ≥ σ₂(D^r) ≥ ··· ≥ σ_N(D^r).

Proposition 7. For D ∈ R^{N×N} as before and with N > 4r, one has

    σ_{min(j+2r, N)}(D)^r ≤ σ_j(D^r) ≤ σ_{max(j−2r, 1)}(D)^r,  j = 1,...,N.
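Finally, the sandwich bounds of Propositions 6 and 7 can be checked numerically; the sketch below (not from the paper; N and r are arbitrary, subject to N > 4r) compares the singular values of D^r with shifted r-th powers of the singular values of D.

```python
import numpy as np

N, r = 40, 3                                           # requires N > 4r
D = np.eye(N) - np.eye(N, k=-1)
sigma_D = np.linalg.svd(D, compute_uv=False)           # sigma_1(D) >= ... >= sigma_N(D)
sigma_Dr = np.linalg.svd(np.linalg.matrix_power(D, r), compute_uv=False)

j = np.arange(N)                                       # 0-based index j corresponds to sigma_{j+1}
lower = sigma_D[np.minimum(j + 2 * r, N - 1)] ** r     # sigma_{min(j+2r,N)}(D)^r
upper = sigma_D[np.maximum(j - 2 * r, 0)] ** r         # sigma_{max(j-2r,1)}(D)^r
assert np.all(lower <= sigma_Dr + 1e-12) and np.all(sigma_Dr <= upper + 1e-12)
```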
