Sparks and Deterministic Constructions of Binary Measurement Matrices from Finite Geometry

Shu-Tao Xia, Xin-Ji Liu, Yong Jiang, Hai-Tao Zheng

arXiv:1301.5952v1 [cs.IT] 25 Jan 2013

Abstract—For a measurement matrix in compressed sensing, its spark (or the smallest number of columns that are linearly dependent) is an important performance parameter. A matrix with spark greater than 2k guarantees the exact recovery of k-sparse signals under l0-optimization, and one with large spark may perform well under approximate algorithms for the l0-optimization. Recently, Dimakis, Smarandache and Vontobel revealed the close relation between LDPC codes and compressed sensing and showed that good parity-check matrices for LDPC codes are also good measurement matrices for compressed sensing. By drawing methods and results from LDPC codes, we study the performance evaluation and constructions of binary measurement matrices in this paper. Two lower bounds on the spark are obtained for general binary matrices, which improve the previously known results for real matrices in the binary case. Then, we propose two classes of deterministic binary measurement matrices based on finite geometry. Two further improved lower bounds on the spark of the proposed matrices are given to show their relatively large sparks. Simulation results show that in many cases the proposed matrices perform better than Gaussian random matrices under the OMP algorithm.

Index Terms—Compressed sensing (CS), measurement matrix, l0-optimization, spark, binary matrix, finite geometry, LDPC codes, deterministic construction.

(This research is supported in part by the Major State Basic Research Development Program of China (973 Program, 2012CB315803), the National Natural Science Foundation of China (60972011), and the Research Fund for the Doctoral Program of Higher Education of China (20100002110033). All the authors are with the Graduate School at Shenzhen, Tsinghua University, Shenzhen, Guangdong 518055, P. R. China. E-mail: [email protected], [email protected], [email protected], [email protected].)

I. INTRODUCTION

Compressed sensing (CS) [1–3] is an emerging sparse sampling theory which has received a large amount of attention in the area of signal processing recently. Consider a k-sparse signal x = (x1, x2, ..., xn)^T ∈ R^n, which has at most k nonzero entries. Let A ∈ R^{m×n} be a measurement matrix with m ≪ n and let y = Ax be a measurement vector. Compressed sensing deals with recovering the original signal x from the measurement vector y by finding the sparsest solution to the underdetermined linear system y = Ax, i.e., by solving the following l0-optimization problem

    min ||x||_0  s.t.  Ax = y,    (1)

where ||x||_0 ≜ |{i : xi ≠ 0}| denotes the l0-norm or (Hamming) weight of x. Unfortunately, it is well known that problem (1) is NP-hard in general [4]. In compressed sensing, there are essentially two methods to deal with it. The first pursues greedy algorithms for the l0-optimization, such as orthogonal matching pursuit (OMP) [5] and its modifications [6–8]. The second considers a convex relaxation of (1), the l1-optimization (basis pursuit) problem [2]

    min ||x||_1  s.t.  Ax = y,    (2)

where ||x||_1 ≜ Σ_{i=1}^{n} |xi| denotes the l1-norm of x. Note that problem (2) can be turned into a linear programming (LP) problem and is thus tractable.

The construction of the measurement matrix A is one of the main concerns in compressed sensing. In order to select an appropriate matrix, we need some criteria. In their early and fundamental work, Donoho and Elad [9] introduced the concept of spark. The spark of a measurement matrix A, denoted by spark(A), is defined to be the smallest number of columns of A that are linearly dependent, i.e.,

    spark(A) = min{ ||w||_0 : w ∈ Nullsp*_R(A) },    (3)

where

    Nullsp*_R(A) ≜ { w ∈ R^n : Aw = 0, w ≠ 0 }.    (4)

Furthermore, [9] obtained several lower bounds on spark(A) and showed that if

    spark(A) > 2k,    (5)

then any k-sparse signal x can be exactly recovered by the l0-optimization (1). In fact, we will see in the appendix that condition (5) is also necessary for both the l0-optimization (1) and the l1-optimization (2). Hence, the spark is an important performance parameter of a measurement matrix. Other useful criteria include the well-known restricted isometry property (RIP) [10] and the nullspace characterization [12, 13]. Although most known constructions of measurement matrices rely on RIP, we use the spark instead in this paper since the spark is simpler and easier to deal with in some cases.
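To make definition (3) concrete, the following minimal Python sketch computes spark(A) by exhaustive search over column subsets. It is exponential in n (computing the spark is NP-hard in general, see Section III) and is meant only for the small toy matrices used for illustration in this paper; the function name spark() is ours.

```python
# A brute-force sketch of (3): the smallest number of linearly dependent
# columns. Exponential in n; only for toy matrices.
import itertools
import numpy as np

def spark(A: np.ndarray) -> int:
    """Return spark(A), assuming A has no all-zero column."""
    n = A.shape[1]
    for s in range(1, n + 1):                       # candidate support sizes
        for cols in itertools.combinations(range(n), s):
            if np.linalg.matrix_rank(A[:, list(cols)]) < s:  # dependent set
                return s
    return n + 1                                    # A has full column rank
```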
Generally, there are two main kinds of construction methods for measurement matrices: random constructions and deterministic constructions. Many random matrices, e.g., Fourier matrices [1], Gaussian matrices, Rademacher matrices [11], etc., have been verified to satisfy RIP with overwhelming probability. Although random matrices perform quite well on average, there is no guarantee that a specific realization works. Moreover, storing a random matrix may require a lot of storage space. On the other hand, a deterministic matrix is often generated on the fly, and some of its properties, e.g., spark, girth and RIP, can be verified definitely. There are many works on deterministic constructions [14–23]. Among these, constructions from coding theory [19–23] have attracted much attention, e.g., Amini and Marvasti [20] used BCH codes to construct binary, bipolar and ternary measurement matrices, and Li et al. [23] employed algebraic curves to generalize the constructions based on Reed-Solomon codes. In this paper, we usually use A to denote a real measurement matrix and H a binary measurement matrix.

Recently, connections between LDPC codes [24] and CS have excited much interest. Dimakis, Smarandache, and Vontobel [21] pointed out that the LP decoding of LDPC codes is very similar to the LP reconstruction of CS, and further showed that parity-check matrices of good LDPC codes can be used as provably good measurement matrices under basis pursuit. LDPC codes are a class of linear block codes, each of which is defined by the nullspace over F_2 = {0,1} of a binary sparse m×n parity-check matrix H. Let I = {1,2,...,n} and J = {1,2,...,m} denote the sets of column indices and row indices of H, respectively. The Tanner graph G_H [25] corresponding to H is a bipartite graph comprising n variable nodes labeled by the elements of I, m check nodes labeled by the elements of J, and an edge set E ⊆ {(i,j) : i ∈ I, j ∈ J}, where there is an edge (i,j) ∈ E if and only if h_{ji} = 1. The girth g of G_H, or briefly the girth of H, is defined as the minimum length of the cycles in G_H. Obviously, g is always an even number and g ≥ 4. H is said to be (γ,ρ)-regular if H has uniform column weight γ and uniform row weight ρ. The performance of an LDPC code under iterative/LP decoding over a binary erasure channel is completely determined by certain combinatorial structures called stopping sets. A stopping set S of H is a subset of I such that the restriction of H to S, say H(S), does not contain a row of weight one. The smallest size of a nonempty stopping set, denoted by s(H), is called the stopping distance of H. Lu et al. [22] verified by a series of experiments that binary sparse measurement matrices constructed by the well-known PEG algorithm [26] significantly outperform Gaussian random matrices. Similar to the situation in constructing LDPC codes, matrices with girth 6 or higher are preferred in the above two works [21, 22].
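As an illustration of the stopping-set notions above, here is a small brute-force sketch (again exponential, for toy matrices only) that checks the defining property directly: a nonempty S ⊆ I is a stopping set if and only if H restricted to S has no row of weight one. The helper names are ours.

```python
# A brute-force sketch of the stopping distance s(H).
import itertools
import numpy as np

def is_stopping_set(H: np.ndarray, S) -> bool:
    """S is a stopping set iff no row of H restricted to S has weight 1."""
    return not np.any(H[:, list(S)].sum(axis=1) == 1)

def stopping_distance(H: np.ndarray) -> int:
    """Smallest size of a nonempty stopping set (exponential search)."""
    n = H.shape[1]
    for s in range(1, n + 1):
        for S in itertools.combinations(range(n), s):
            if is_stopping_set(H, S):
                return s
    return n + 1  # no nonempty stopping set found
```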
In this paper, we manage to establish more connections between LDPC codes and CS. Our main contributions focus on the following two aspects.

• Lower bounding the spark of a binary measurement matrix H. As an important performance parameter for LDPC codes, the stopping distance s(H) plays a role similar to that of the spark in CS. Firstly, we show that spark(H) ≥ s(H), which again verifies the fact that good parity-check matrices are good measurement matrices. A special case of this lower bound is the binary corollary of the lower bound for real matrices in [9]. Then, a new general lower bound on spark(H) is obtained, which improves the previous one in most cases. Furthermore, for a class of binary matrices from finite geometry, we give two further improved lower bounds to show their relatively large sparks.

• Constructing binary measurement matrices with relatively large spark. LDPC codes based on finite geometry can be found in [27, 28]. With similar methods, two classes of deterministic constructions based on finite geometry are given, where the girth equals 4 or 6. The above lower bounds on the spark ensure that the proposed matrices have relatively large sparks. Simulation results show that the proposed matrices perform well and in many cases significantly better than the corresponding Gaussian random matrices under the OMP algorithm. Even in the case of girth 4, some of the proposed constructions still manifest good performance. Moreover, most of the proposed matrices can be put in either cyclic or quasi-cyclic form, and thus the hardware realization of sampling becomes easier and simpler.

The rest of the paper is organized as follows. In Section II we give a brief introduction to finite geometries and their parallel and quasi-cyclic structures, which lead naturally to the two classes of deterministic constructions. Section III presents our main results, i.e., two lower bounds on the spark of general binary matrices and two further improved lower bounds for the proposed matrices from finite geometry. Simulation results and related remarks are given in Section IV. Finally, Section V concludes the paper with some discussions.

II. BINARY MATRICES FROM FINITE GEOMETRIES

Finite geometry has been used to construct several classes of parity-check matrices of LDPC codes which manifest excellent performance under iterative decoding [27], [28]. We will see in later sections that most of these structured matrices are also good measurement matrices in the sense that they often have considerably large sparks and may manifest better performance than the corresponding Gaussian random matrices under the OMP algorithm. In this section, we introduce some notations and results of finite geometry [28], [33, pp. 692–702].

Let F_q be a finite field of q elements and F_q^r be the r-dimensional vector space over F_q, where r ≥ 2. Let EG(r,q) be the r-dimensional Euclidean geometry over F_q. EG(r,q) has q^r points, which are the vectors of F_q^r. A µ-flat in EG(r,q) is a µ-dimensional subspace of F_q^r or one of its cosets. Let PG(r,q) be the r-dimensional projective geometry over F_q. PG(r,q) is defined on F_q^{r+1}\{0}. Two nonzero vectors p, p′ ∈ F_q^{r+1} are said to be equivalent if there is a λ ∈ F_q such that p = λp′. It is well known that the equivalence classes of F_q^{r+1}\{0} form the points of PG(r,q); PG(r,q) has (q^{r+1}−1)/(q−1) points. A µ-flat in PG(r,q) is simply the set of equivalence classes contained in a (µ+1)-dimensional subspace of F_q^{r+1}. In this paper, in order to present a unified approach, we use FG(r,q) to denote either EG(r,q) or PG(r,q). A point is a 0-flat, a line is a 1-flat, and an (r−1)-flat is called a hyperplane.
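For readers who want to experiment, the following sketch enumerates the points and the lines (1-flats) of EG(2,q) under the simplifying assumption that q is prime, so that F_q is arithmetic modulo q; this avoids general finite-field arithmetic and covers, e.g., EG(2,5). The helper name eg2_lines() is ours and is reused in the later sketches.

```python
# An illustrative sketch of EG(2, q) for prime q: points are the q^2 vectors
# of F_q^2, and the lines are the cosets {t*d + p0 : t in F_q} of the q+1
# one-dimensional subspaces, giving q^2 + q lines in total.
def eg2_lines(q: int):
    """Return (points, lines) of EG(2, q), each line as a frozenset."""
    points = [(x, y) for x in range(q) for y in range(q)]
    # One direction vector per 1-dim subspace: slopes 0..q-1, plus vertical.
    directions = [(1, a) for a in range(q)] + [(0, 1)]
    lines = set()
    for d in directions:
        for p0 in points:  # every coset of the subspace spanned by d
            lines.add(frozenset(((p0[0] + t * d[0]) % q,
                                 (p0[1] + t * d[1]) % q) for t in range(q)))
    return points, sorted(lines, key=sorted)

points, lines = eg2_lines(5)
assert len(points) == 25 and len(lines) == 30   # q^2 points, q^2 + q lines
```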
A. Incidence Matrices in Finite Geometry

For 0 ≤ µ1 < µ2 ≤ r, there are N(µ2,µ1) µ1-flats contained in a given µ2-flat and A(µ2,µ1) µ2-flats containing a given µ1-flat, where for EG(r,q) and PG(r,q) respectively [28]

    N_EG(µ2,µ1) = q^{µ2−µ1} ∏_{i=1}^{µ1} (q^{µ2−i+1}−1)/(q^{µ1−i+1}−1),    (6)

    N_PG(µ2,µ1) = ∏_{i=0}^{µ1} (q^{µ2−i+1}−1)/(q^{µ1−i+1}−1),    (7)

    A_EG(µ2,µ1) = A_PG(µ2,µ1) = ∏_{i=µ1+1}^{µ2} (q^{r−i+1}−1)/(q^{µ2−i+1}−1).    (8)

Let n = N(r,µ1) and J = N(r,µ2) be the numbers of µ1-flats and µ2-flats in FG(r,q), respectively. The µ1-flats and µ2-flats are indexed from 1 to n and from 1 to J, respectively. The incidence matrix H = (h_{ji}) of µ2-flats over µ1-flats is a binary J×n matrix, where h_{ji} = 1 for 1 ≤ j ≤ J and 1 ≤ i ≤ n if and only if the j-th µ2-flat contains the i-th µ1-flat. The rows of H correspond to all the µ2-flats in FG(r,q) and have the same weight N(µ2,µ1). The columns of H correspond to all the µ1-flats in FG(r,q) and have the same weight A(µ2,µ1). Hence, H is a (γ,ρ)-regular matrix, where

    γ = A(µ2,µ1),  ρ = N(µ2,µ1).    (9)

The incidence matrix H or its transpose H^T will be employed as a measurement matrix and called the type-I or type-II finite geometry measurement matrix, respectively. Moreover, by puncturing some rows or columns of H or H^T, we can construct a large number of measurement matrices of various sizes. To obtain submatrices of H or H^T with better performance, the parallel structure of Euclidean geometry is often employed, as follows.
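The construction of H is mechanical once the flats are enumerated. The sketch below builds the type-I incidence matrix of lines over points in EG(2,q) (i.e., µ2 = 1, µ1 = 0), reusing the eg2_lines() helper from the previous sketch, and checks the regularity (9) as well as the fact that two distinct points lie on exactly one common line (so the maximum column inner product is 1 and the girth is 6; cf. Section III).

```python
# A sketch of the type-I incidence matrix for EG(2, q), q prime:
# rows are lines, columns are points, H[j, i] = 1 iff line j contains point i.
import numpy as np

def incidence_matrix(q: int) -> np.ndarray:
    points, lines = eg2_lines(q)                  # assumed earlier helper
    index = {p: i for i, p in enumerate(points)}
    H = np.zeros((len(lines), len(points)), dtype=np.uint8)
    for j, line in enumerate(lines):
        for p in line:
            H[j, index[p]] = 1
    return H

H = incidence_matrix(5)                           # 30 x 25
assert (H.sum(axis=1) == 5).all()                 # rho   = N(1,0) = q
assert (H.sum(axis=0) == 6).all()                 # gamma = A(1,0) = q + 1
G = H.astype(int).T @ H.astype(int)
assert (G - np.diag(G.diagonal())).max() == 1     # lambda = 1, girth 6
```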
B. Parallel Structure in Euclidean Geometry

In this class of constructions, an important rule for puncturing the rows or columns of H or H^T is to make the remaining submatrix as regular as possible. A possible explanation may come from Theorem 2 in the next section. This rule can be applied since Euclidean geometry has the parallel structure and all µ2-flats (or µ1-flats) can be arranged in a suitable order. Since a projective geometry does not have the parallel structure, we concentrate on EG(r,q) only. Recall that a µ-flat in EG(r,q) is a µ-dimensional subspace of F_q^r or one of its cosets. A µ-flat contains q^µ points. Two µ-flats are either disjoint or they intersect in a flat of dimension at most µ−1. The µ-flats that correspond to the cosets of a µ-dimensional subspace of F_q^r (including the subspace itself) are said to be parallel to each other and form a parallel bundle. These parallel µ-flats are disjoint and together contain all the points of EG(r,q), with each point appearing once and only once. The number of parallel µ-flats in a parallel bundle is q^{r−µ}.

There are in total J = N_EG(r,µ2) µ2-flats, which form K = J/q^{r−µ2} parallel bundles in EG(r,q). We index these parallel bundles by 1, 2, ..., K. Consider the J×n incidence matrix H of µ2-flats over µ1-flats. All J rows of H can be divided into K bundles, each of which contains q^{r−µ2} rows, i.e., by a suitable row arrangement, H can be written as

    H = (H_1^T, H_2^T, ..., H_K^T)^T,    (10)

where H_i (1 ≤ i ≤ K) is a q^{r−µ2}×n submatrix of H and corresponds to the i-th parallel bundle of µ2-flats. Clearly, the row weight of H_i remains unchanged and its column weights are 1 or 0.

Similar to the ordering of rows, the columns of H can also be ordered according to the parallel bundles in EG(r,q). Hence, by deleting some row parallel bundles or column parallel bundles from H, and transposing the obtained submatrix if needed, we can construct a large number of measurement matrices of various sizes. This will be illustrated by several examples in Section IV.

In this paper, we call H and H^T the first class of binary measurement matrices from finite geometry, and their punctured versions the second class of binary measurement matrices from finite geometry.

C. Quasi-cyclic Structure of Hyperplanes

Apart from the parallel structure of Euclidean geometry, most of the incidence matrices in Euclidean geometry and projective geometry also have a cyclic or quasi-cyclic structure [28]. This is accomplished by grouping the flats of two different dimensions of a finite geometry into cyclic classes. For a Euclidean geometry, only the flats not passing through the origin are used for the matrix construction. Based on this grouping of rows and columns, the incidence matrix in finite geometry consists of square submatrices (or blocks), and each of these square submatrices is a circulant matrix in which each row is the cyclic shift of the row above it and the first row is the cyclic shift of the last row. Note that by puncturing the row blocks or column blocks of the incidence matrices, the remaining submatrices are often as regular as possible. In other words, this technique is compatible with the parallel structure of Euclidean geometry. Hence, the sampling process with these measurement matrices is easy and can be realized with linear shift registers. For detailed results and discussions, we refer the readers to [28, Appendix A].
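The row arrangement (10) is easy to realize in the toy setting of the earlier sketches: in EG(2,q) with q prime, the q^2 + q lines split into q + 1 parallel bundles of q disjoint lines each, one bundle per direction. A sketch, again assuming eg2_lines():

```python
# A sketch of the parallel structure (10) in EG(2, q), q prime: one bundle
# of q mutually parallel lines per direction; each bundle partitions the
# q^2 points, so within a bundle block every column weight is exactly 1.
def parallel_bundles(q: int):
    points, _ = eg2_lines(q)
    directions = [(1, a) for a in range(q)] + [(0, 1)]
    bundles = []
    for d in directions:
        bundle = {frozenset(((p0[0] + t * d[0]) % q, (p0[1] + t * d[1]) % q)
                            for t in range(q)) for p0 in points}
        bundles.append(sorted(bundle, key=sorted))
    return bundles

bundles = parallel_bundles(5)
assert len(bundles) == 6 and all(len(b) == 5 for b in bundles)
assert all(len(set().union(*b)) == 25 for b in bundles)  # each covers EG(2,5)
```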
III. MAIN RESULTS

The definition of spark was introduced by Donoho and Elad [9] to help build a theory of sparse representation that later gave birth to compressed sensing. As we see from (5), the spark of a measurement matrix can be used to guarantee the exact recovery of k-sparse signals. As a result, when choosing measurement matrices, those with large sparks are preferred. However, the computation of the spark is generally NP-hard [29]. In this section, we give several lower bounds on the spark for general binary matrices and for the binary matrices constructed from finite geometry in Section II. These theoretical results guarantee, to some extent, the good performance of the proposed measurement matrices under the OMP algorithm.

A. Lower Bounds for General Binary Matrices

Firstly, we give a relationship between the spark and the stopping distance of a general binary matrix. For a real vector x ∈ R^n, the support of x is defined as the set of its non-zero positions, i.e., supp(x) ≜ {i : xi ≠ 0}. Clearly, |supp(x)| = ||x||_0.

Traditionally, an easily computable property of a matrix, its coherence, is used to bound its spark. For a matrix A ∈ R^{m×n} with column vectors a_1, a_2, ..., a_n, the coherence µ(A) is defined by [9]

    µ(A) ≜ max_{1≤i≠j≤n} |⟨a_i, a_j⟩| / (||a_i||_2 ||a_j||_2),    (11)

where ⟨a_i, a_j⟩ ≜ a_i^T a_j denotes the inner product of vectors. (Throughout this paper, we exclude the situation that µ(A) = 0 or that the matrix A has an all-zero column.) Furthermore, it is shown in [9] that

    spark(A) ≥ 1 + 1/µ(A).    (12)

Note that this lower bound applies to general real matrices. For a general binary matrix H, the next theorem shows that the spark can be lower bounded by the stopping distance.

Theorem 1: Let H be a binary matrix. Then, for any w ∈ Nullsp*_R(H), the support of w must be a stopping set of H. Moreover,

    spark(H) ≥ s(H).    (13)

Proof: Assume to the contrary that supp(w) is not a stopping set. By the definition of a stopping set, there is a row of H containing only one '1' on supp(w). Then the inner product of w and this row is nonzero, which contradicts the fact that w ∈ Nullsp*_R(H). Hence, spark(H) ≥ s(H) according to the definitions of stopping distance and spark.

Remark 1: Let H be the m×(2^m−1) parity-check matrix of a binary [2^m−1, 2^m−1−m] Hamming code [33], which consists of all non-zero m-dimensional column vectors. It is easy to check that spark(H) = s(H) = 3, which implies that the lower bound (13) can be achieved. This lower bound again verifies the conclusion that good parity-check matrices are also good measurement matrices.

In particular, consider a binary m×n matrix H. Suppose the minimum column weight of H is γ > 0 and the maximum inner product of any two different columns of H is λ > 0. By (11), we have

    µ(H) ≤ λ/γ.

Thus the lower bound (12) from [9] implies

    spark(H) ≥ 1 + γ/λ.    (14)

On the other hand, it was proved that s(H) ≥ 1 + γ/λ [30, 31]. Hence, the bound (14) is a natural corollary of Theorem 1.
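The quantities in (11)–(14) are directly computable for small matrices. A minimal numpy sketch (assuming, as stated above, that no column is all-zero):

```python
# A sketch of the coherence (11), the real bound (12), and the binary
# corollary (14).
import numpy as np

def coherence(A: np.ndarray) -> float:
    cols = A / np.linalg.norm(A, axis=0)          # normalize columns
    G = np.abs(cols.T @ cols)
    np.fill_diagonal(G, 0.0)
    return float(G.max())                         # mu(A) as in (11)

def binary_bound(H: np.ndarray) -> float:
    """Bound (14): 1 + gamma/lambda for a binary H."""
    G = H.astype(int).T @ H.astype(int)
    gamma = int(G.diagonal().min())               # min column weight
    lam = int((G - np.diag(G.diagonal())).max())  # max pairwise inner product
    return 1 + gamma / lam

# For any binary H without all-zero columns, 1 + 1/coherence(H) <= binary_bound(H).
```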
As a matter of fact, for a general binary matrix H we often have a tighter lower bound on its spark.

Theorem 2: Let H be a binary m×n matrix. Suppose the minimum column weight of H is γ > 0 and the maximum inner product of any two different columns of H is λ > 0. Then

    spark(H) ≥ 2γ/λ.    (15)

Proof: For any w = (w1, w2, ..., wn)^T ∈ Nullsp*_R(H), we split the non-empty set supp(w) into two parts, supp(w+) and supp(w−),

    supp(w+) ≜ {i : wi > 0},    (16)
    supp(w−) ≜ {i : wi < 0}.    (17)

Without loss of generality, we assume that |supp(w+)| ≥ |supp(w−)|. For a fixed j ∈ supp(w+), by selecting the j-th column of H and all the columns of H in supp(w−), we get a submatrix H(j). Since the column weight of H is at least γ, we can select γ rows of H(j) to form a γ×(1+|supp(w−)|) submatrix of H, say H(γ,j), in which the column corresponding to j is the all-1 column. Now let us count the total number of 1's in H(γ,j) in two ways.

• From the view of columns: since the maximum inner product of any two different columns of H is λ, each column of H(γ,j) corresponding to supp(w−) has at most λ 1's. So the total number of 1's is at most γ + λ|supp(w−)|.

• From the view of rows: we claim that there are at least two 1's in each row of H(γ,j), which implies that the total number of 1's is at least 2γ. The claim is shown as follows. Let h(j) be a row of H(γ,j) and h = (h1, ..., hn) be its corresponding row in H. Note that h_j = 1. Since w ∈ Nullsp*_R(H),

    0 = Σ_{i∈supp(w)} wi hi = Σ_{i∈supp(w+)} wi hi + Σ_{i∈supp(w−)} wi hi,

which implies that

    −Σ_{i∈supp(w−)} wi hi = Σ_{i∈supp(w+)} wi hi ≥ wj > 0.

So there is at least one 1 in {hi : i ∈ supp(w−)}, and hence h(j) has at least two 1's.

Therefore, 2γ ≤ γ + λ|supp(w−)|, which implies that |supp(w−)| ≥ γ/λ. Since |supp(w+)| ≥ |supp(w−)| ≥ γ/λ,

    |supp(w)| = |supp(w+)| + |supp(w−)| ≥ 2γ/λ,

and the conclusion follows.

Remark 2: Note that the matrix in Theorem 2 has girth at least 6 if λ = 1. When γ ≥ λ, it is clear that the lower bound (15) of Theorem 2 is tighter than (14). Combining Theorem 2 with (5), we conclude that any k-sparse signal x can be exactly recovered by the l0-optimization (1) if k < γ/λ.

Remark 3: Consider the complete graph on 4 vertices. Its incidence matrix is

    H = ( 1 1 1 0 0 0
          1 0 0 1 1 0
          0 1 0 1 0 1
          0 0 1 0 1 1 ).

Clearly, γ = 2, λ = 1 and the lower bound (15) equals 4. Moreover, it is easy to check that (1,−1,0,0,−1,1) ∈ Nullsp*_R(H) and that {4,5,6} is a stopping set, which implies that spark(H) = 4 and s(H) = 3. This result can be generalized to the complete graph on n vertices: the spark and the stopping distance are again 4 and 3, respectively, which implies that the lower bound (15) of Theorem 2 can be achieved and may be tighter than the one in Theorem 1.
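The claims of Remark 3 can be verified numerically with the spark() and stopping_distance() sketches given earlier:

```python
# A check of Remark 3 on the incidence matrix of K4, assuming the brute-force
# spark() and stopping_distance() sketches above.
import numpy as np

H = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])
w = np.array([1, -1, 0, 0, -1, 1])
assert not np.any(H @ w)              # w lies in the nullspace of H
assert spark(H) == 4                  # bound (15) = 2*gamma/lambda = 4, attained
assert stopping_distance(H) == 3      # {4,5,6} is a stopping set
```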
B. Lower Bounds for Binary Matrices from Finite Geometry

Clearly, the lower bound of Theorem 2 for general binary matrices applies to all the matrices constructed from finite geometry in Section II. In this subsection, we show that for these structured matrices tighter lower bounds can be obtained.

Let H be the J×n incidence matrix of µ2-flats over µ1-flats in FG(r,q), where 0 ≤ µ1 < µ2 < r, n = N(r,µ1) and J = N(r,µ2). Recall that H and H^T are called the type-I and type-II finite geometry measurement matrices, respectively. The following lemma is needed to establish our results.

Lemma 1 ([32]): Let 0 ≤ µ1 < µ2 < r and 1 ≤ l ≤ A(µ2,µ2−1). Given any l different µ1-flats F_1, F_2, ..., F_l in FG(r,q) and any 1 ≤ j ≤ l, there exists a (µ2−1)-flat F such that F_j ⊆ F and F_i ⊄ F for all i = 1,...,j−1, j+1,...,l.

Theorem 3: Let r, µ1, µ2 be integers with 0 ≤ µ1 < µ2 < r and let H be the type-I finite geometry measurement matrix. Then

    spark(H) ≥ 2A(µ2,µ2−1),    (18)

where

    A(µ2,µ2−1) = (q^{r−µ2+1}−1)/(q−1).

Proof: Let u = A(µ2,µ2−1) and assume to the contrary that spark(H) < 2u. Select a w = (w1, w2, ..., wn)^T ∈ Nullsp*_R(H) such that |supp(w)| = spark(H). By (16) and (17), we split the non-empty set supp(w) into two parts, supp(w+) and supp(w−), and assume |supp(w+)| ≥ |supp(w−)| without loss of generality. Thus, by the assumption,

    |supp(w−)| < u, i.e., |supp(w−)| ≤ u−1.

For a fixed j ∈ supp(w+), by selecting the j-th column of H and all the columns of H in supp(w−), we get a submatrix H(j). The number of columns of H(j) is 1+|supp(w−)|, which is not greater than u. Let F_j and {F_i : i ∈ supp(w−)} be the µ1-flats corresponding to the columns of H(j). By Lemma 1, there exists a (µ2−1)-flat F such that F_j ⊆ F and F_i ⊄ F for all i ∈ supp(w−). There are exactly u µ2-flats containing F. Among these µ2-flats, any two distinct µ2-flats have no common points other than the points of F (see [28]). Hence, each of these u µ2-flats contains the µ1-flat F_j, and for any i ∈ supp(w−) there is at most one of these u µ2-flats containing the µ1-flat F_i. In other words, there exist u rows of H(j) such that each of these rows has component 1 at position j and, for any i ∈ supp(w−), at most one of these rows has component 1 at position i.

Let H(u,j) be the u×(1+|supp(w−)|) submatrix of H(j) obtained by choosing these rows, in which the column corresponding to j is the all-1 column. Now let us count the total number of 1's in H(u,j) in two ways. The column corresponding to j has u 1's, while each of the other columns has at most one 1. Thus, from the view of columns, the total number of 1's in H(u,j) is at most u + |supp(w−)|. On the other hand, suppose x is the number of rows of H(u,j) with weight one. Then there are u−x rows with weight at least two, so from the view of rows the total number of 1's in H(u,j) is at least x + 2(u−x). Hence, x + 2(u−x) ≤ u + |supp(w−)|, which implies that x ≥ u − |supp(w−)| ≥ 1 by the assumption. In other words, H(j) contains a row with value 1 at the position corresponding to j and 0 at all other positions. Denote this row by h(j) and let h = (h1, ..., hn) be its corresponding row in H. Note that h_j = 1 and h_i = 0 for i ∈ supp(w−). Since w ∈ Nullsp*_R(H),

    0 = Σ_{i∈supp(w)} wi hi = Σ_{i∈supp(w+)} wi hi + Σ_{i∈supp(w−)} wi hi = Σ_{i∈supp(w+)} wi hi ≥ wj > 0,

which is a contradiction. Therefore, the assumption is wrong and the theorem follows by (8).

Remark 4: Combining Theorem 3 with (5), we conclude that when the type-I finite geometry measurement matrix is used, any k-sparse signal x can be exactly recovered by the l0-optimization (1) if k < A(µ2,µ2−1).

Remark 5: For the type-I finite geometry measurement matrix H, it is known that s(H) ≥ 1 + A(µ2,µ2−1) [32]. Thus, by Theorem 1,

    spark(H) ≥ 1 + A(µ2,µ2−1).    (19)

Obviously, H has uniform column weight γ = A(µ2,µ1). The inner product of two different columns equals the number of µ2-flats containing the two corresponding µ1-flats, and it is easy to see that the maximum inner product is λ = A(µ2,µ1+1). Thus, by Theorem 2,

    spark(H) ≥ 2A(µ2,µ1)/A(µ2,µ1+1).    (20)

It is easy to verify by (8) that

    2A(µ2,µ1)/A(µ2,µ1+1) > 1 + A(µ2,µ2−1)
    ⇔ 2(q^{r−µ1}−1)/(q^{µ2−µ1}−1) > 1 + (q^{r−µ2+1}−1)/(q−1)
    ⇔ q^{µ2−µ1}(q−2)(q^{r−µ2}−1) + q^{r−µ2+1} − q > 0,

where the last inequality always holds because 1 ≤ µ1 < µ2 < r and q ≥ 2. This implies that the lower bound (20) is strictly tighter than the lower bound (19). On the other hand, the lower bound (18) of Theorem 3 is tighter than the lower bound (20), because

    2A(µ2,µ2−1) ≥ 2A(µ2,µ1)/A(µ2,µ1+1)
    ⇔ (q^{r−µ2+1}−1)/(q−1) ≥ (q^{r−µ1}−1)/(q^{µ2−µ1}−1)
    ⇔ (q^{r−µ2}−1)(q^{µ2−µ1}−q) ≥ 0.

In other words, for 1 ≤ µ1 < µ2 < r, the three lower bounds (18), (20), (19) satisfy

    2A(µ2,µ2−1) ≥ 2A(µ2,µ1)/A(µ2,µ1+1) > 1 + A(µ2,µ2−1),    (21)

where the first inequality becomes an equality if and only if µ2 = µ1+1.
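The flat-counting formula (8) and the chain (21) can be checked numerically. A sketch using exact rational arithmetic (the partial products in (6)–(8) need not be integers term by term, so plain integer division would be wrong); the helper name A_FG() is ours:

```python
# A sketch of formula (8) and a numerical check of the bound chain (21),
# here for r = 4, q = 2, mu1 = 1, mu2 = 3.
from fractions import Fraction

def A_FG(mu2: int, mu1: int, q: int, r: int) -> int:
    """A(mu2, mu1): number of mu2-flats containing a given mu1-flat, eq. (8)."""
    out = Fraction(1)
    for i in range(mu1 + 1, mu2 + 1):
        out *= Fraction(q**(r - i + 1) - 1, q**(mu2 - i + 1) - 1)
    return int(out)

r, q, mu1, mu2 = 4, 2, 1, 3
b18 = 2 * A_FG(mu2, mu2 - 1, q, r)                          # bound (18) = 6
b20 = 2 * A_FG(mu2, mu1, q, r) / A_FG(mu2, mu1 + 1, q, r)   # bound (20) = 14/3
b19 = 1 + A_FG(mu2, mu2 - 1, q, r)                          # bound (19) = 4
assert b18 >= b20 > b19                                     # the chain (21)
```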
Similarly, for the type-II finite geometry measurement matrix, we obtain the following results.

Lemma 2 ([32]): Let 0 ≤ µ1 < µ2 < r and 1 ≤ l ≤ N(µ1+1,µ1). Given any l different µ2-flats F_1, F_2, ..., F_l in FG(r,q) and any 1 ≤ j ≤ l, there exists a (µ1+1)-flat F such that F ⊆ F_j and F ⊄ F_i for all i = 1,...,j−1, j+1,...,l.

Theorem 4: Let r, µ1, µ2 be integers with 0 ≤ µ1 < µ2 < r and let H^T be the type-II finite geometry measurement matrix. Then

    spark(H^T) ≥ 2N(µ1+1,µ1),    (22)

where for Euclidean geometry (EG) and projective geometry (PG) respectively

    N_EG(µ1+1,µ1) = q(q^{µ1+1}−1)/(q−1),
    N_PG(µ1+1,µ1) = (q^{µ1+2}−1)/(q−1).

Proof: Note that the columns of H^T are the rows of H. Let u = N(µ1+1,µ1) and assume to the contrary that spark(H^T) < 2u. Select a w = (w1, w2, ..., wJ)^T ∈ Nullsp*_R(H^T) such that |supp(w)| = spark(H^T). We split supp(w) into supp(w+) and supp(w−), and assume |supp(w+)| ≥ |supp(w−)| without loss of generality. Thus

    |supp(w−)| < u, i.e., |supp(w−)| ≤ u−1.

For a fixed j ∈ supp(w+), by selecting the j-th column of H^T and all the columns of H^T in supp(w−), we get a submatrix H^T(j). The number of columns of H^T(j) is 1+|supp(w−)|, which is not greater than u. Let F_j and {F_i : i ∈ supp(w−)} be the µ2-flats corresponding to the columns of H^T(j). By Lemma 2, there exists a (µ1+1)-flat F such that F ⊆ F_j and F ⊄ F_i for all i ∈ supp(w−). There are exactly u µ1-flats contained in F. Now, we claim that F_j contains all of these u µ1-flats and that each F_i (i ∈ supp(w−)) contains at most one µ1-flat among them. Otherwise, if some F_i (i ∈ supp(w−)) contained at least two distinct µ1-flats among these u µ1-flats, then F_i would have to contain F, since F is the only (µ1+1)-flat containing these two distinct µ1-flats; this contradicts the fact that F is not contained in F_i. Hence, there exist u rows of H^T(j) such that each of these rows has component 1 at position j and, for any i ∈ supp(w−), at most one of these rows has component 1 at position i.

Using the same argument as in the proof of Theorem 3, this leads to a contradiction. Therefore, the assumption is wrong and the theorem follows by (6) and (7).

Remark 6: Combining Theorem 4 with (5), we conclude that when the type-II finite geometry measurement matrix is used, any k-sparse signal x can be exactly recovered by the l0-optimization (1) if k < N(µ1+1,µ1).

Remark 7: For the type-II finite geometry measurement matrix H^T, it is known that s(H^T) ≥ 1 + N(µ1+1,µ1) [32]. Thus, by Theorem 1,

    spark(H^T) ≥ 1 + N(µ1+1,µ1).    (23)

Obviously, H^T has uniform column weight γ = N(µ2,µ1). The inner product of two different columns equals the number of µ1-flats contained simultaneously in the two corresponding µ2-flats, and it is easy to see that the maximum inner product is λ = N(µ2−1,µ1). Thus, by Theorem 2,

    spark(H^T) ≥ 2N(µ2,µ1)/N(µ2−1,µ1).    (24)

Using the same argument as in Remark 5, we have that for 0 ≤ µ1 < µ2 < r, the three lower bounds (22), (24), (23) satisfy

    2N(µ1+1,µ1) ≥ 2N(µ2,µ1)/N(µ2−1,µ1) > 1 + N(µ1+1,µ1),    (25)

where the first inequality becomes an equality if and only if µ2 = µ1+1 or µ1 = 0 for Euclidean geometry, and µ2 = µ1+1 for projective geometry, respectively.

IV. SIMULATIONS AND ANALYSIS

In this section, we give some simulation results on the performance of the proposed two classes of binary measurement matrices from finite geometry. The theoretical results on the sparks of these matrices in the last section explain their good performance to some extent. Afterwards, we show by examples how to employ the parallel structure of Euclidean geometry to construct measurement matrices with flexible parameters.

All the simulations are performed under the same conditions as in [20, 23]. The upcoming figures show the percentage of perfect recovery (SNR_rec ≥ 100 dB) for different sparsity orders. For the generation of the k-sparse input signals, we first select the support uniformly at random and then generate the corresponding values independently from the standard normal distribution N(0,1). The OMP algorithm is used to reconstruct the k-sparse input signals from the compressed measurements, and the results are averaged over 5000 runs for each sparsity k. For the Gaussian random matrices, each entry is chosen i.i.d. from N(0,1). The percentages of perfect recovery of both the proposed matrix (red line) and the corresponding Gaussian random matrix (blue line) of the same size are shown in the figures for comparison.
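A minimal sketch of this simulation loop is given below. The OMP routine is a plain textbook implementation inlined for completeness and is not necessarily identical to the implementation used for the reported figures; for the regular binary matrices considered here, all columns have equal norm, so no column normalization is needed in the greedy selection.

```python
# A sketch of the experiment: draw k-sparse signals with N(0,1) nonzeros on a
# uniformly random support, measure with A, reconstruct by OMP, and count
# exact recoveries (reconstruction SNR >= 100 dB, as in the text).
import numpy as np

def omp(A, y, k):
    """Plain orthogonal matching pursuit with k greedy atom selections."""
    resid, support = y.astype(float), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ resid))))
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ sol
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = sol
    return x_hat

def recovery_rate(A, k, runs=5000, seed=0):
    rng = np.random.default_rng(seed)
    n, hits = A.shape[1], 0
    for _ in range(runs):
        x = np.zeros(n)
        idx = rng.choice(n, size=k, replace=False)
        x[idx] = rng.standard_normal(k)
        x_hat = omp(A, A @ x, k)
        err = np.linalg.norm(x - x_hat)
        snr = np.inf if err == 0 else 20 * np.log10(np.linalg.norm(x) / err)
        hits += snr >= 100
    return hits / runs
```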
A. Two Types of Incidence Matrices in Finite Geometry

By Theorem 3 and Theorem 4, the two types of finite geometry measurement matrices have relatively large sparks, and thus we expect them to perform well under OMP. For the type-I finite geometry measurement matrix, we expect to recover at least (A(µ2,µ2−1)−1)-sparse signals, while for the type-II finite geometry measurement matrix, we expect to recover at least (N(µ1+1,µ1)−1)-sparse signals.

Example 1: Let r = 4, q = 2, µ2 = 3 and µ1 = 1. EG(4,2) contains J = 30 3-flats and n = 120 1-flats. Let H be the incidence matrix of 3-flats over 1-flats in EG(4,2). Then H is a 30×120 type-I Euclidean geometry measurement matrix. H has girth 4 and is (γ,ρ)-regular, where γ = A_EG(3,1) = 7 and ρ = N_EG(3,1) = 28. Moreover, spark(H) ≥ 2A_EG(3,2) = 6 according to Theorem 3. From Fig. 1 it is easy to see that the performance of the proposed matrix is better than that of the Gaussian random matrix. In particular, for all signals with sparsity order k < A_EG(3,2) = 3 the recovery is perfect. This example shows that some girth-4 matrices from finite geometry can also perform very well.

Fig. 1. Perfect recovery percentages of a type-I Euclidean geometry measurement matrix in EG(4,2) with µ1 = 1, µ2 = 3 and the corresponding Gaussian random matrix.

Example 2: Let r = 3, q = 2^2, µ2 = 1 and µ1 = 0. PG(3,2^2) contains J = 357 lines and n = 85 points, and H is the 357×85 incidence matrix of lines over points in PG(3,2^2). Then H^T is an 85×357 type-II projective geometry measurement matrix. H^T has girth 6 and is (γ,ρ)-regular, where γ = N_PG(1,0) = 5 and ρ = A_PG(1,0) = 21. Moreover, spark(H^T) ≥ 2N_PG(1,0) = 10 by Theorem 4. It is observed from Fig. 2 that H^T performs better than the Gaussian random matrix, and the sparsity order with exact recovery may exceed the one ensured by the proposed lower bound. For k < 10, exact recovery is obtained and the corresponding points are not plotted for clearer comparison; the same convention is used in the following figures.

Fig. 2. Perfect recovery percentages of a type-II projective geometry measurement matrix in PG(3,2^2) with µ1 = 0, µ2 = 1 and the corresponding Gaussian random matrix.
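The parameters quoted in Examples 1 and 2 follow from (6)–(8); the sketch below rechecks them, assuming the A_FG() helper from the earlier sketch together with the analogous helpers for (6) and (7) defined here.

```python
# A numerical check of the parameters in Examples 1 and 2 via (6)-(8).
from fractions import Fraction

def N_EG(mu2, mu1, q):
    out = Fraction(q ** (mu2 - mu1))
    for i in range(1, mu1 + 1):
        out *= Fraction(q**(mu2 - i + 1) - 1, q**(mu1 - i + 1) - 1)
    return int(out)

def N_PG(mu2, mu1, q):
    out = Fraction(1)
    for i in range(0, mu1 + 1):
        out *= Fraction(q**(mu2 - i + 1) - 1, q**(mu1 - i + 1) - 1)
    return int(out)

# Example 1: EG(4,2), mu2 = 3, mu1 = 1.
assert (N_EG(4, 3, 2), N_EG(4, 1, 2)) == (30, 120)    # J x n = 30 x 120
assert (A_FG(3, 1, 2, 4), N_EG(3, 1, 2)) == (7, 28)   # gamma, rho of H
# Example 2: PG(3,4), mu2 = 1, mu1 = 0.
assert (N_PG(3, 1, 4), N_PG(3, 0, 4)) == (357, 85)    # J x n = 357 x 85
assert (N_PG(1, 0, 4), A_FG(1, 0, 4, 3)) == (5, 21)   # gamma, rho of H^T
```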
Example 3: Let r = 3, q = 7, µ2 = 1 and µ1 = 0. EG(3,7) contains J = 2793 lines and n = 343 points, and H is the 2793×343 incidence matrix of lines over points in EG(3,7). Then H^T is a 343×2793 type-II Euclidean geometry measurement matrix. H^T has girth 6 and is (γ,ρ)-regular, where γ = N_EG(1,0) = 7 and ρ = A_EG(1,0) = 57. Moreover, spark(H^T) ≥ 2N_EG(1,0) = 14 by Theorem 4.

Fig. 3. Perfect recovery percentages of a type-II finite geometry measurement matrix in EG(3,7) with µ1 = 0, µ2 = 1 and the corresponding Gaussian random matrix. The step size of k is 4.

Example 4: Let r = 3, q = 2^3, µ2 = 2 and µ1 = 1. EG(3,2^3) contains J = 584 2-flats and n = 4672 1-flats. Let H be the 584×4672 incidence matrix of 2-flats over 1-flats in EG(3,2^3). Then H is a type-I Euclidean geometry measurement matrix. H has girth 6 and is (γ,ρ)-regular, where γ = A_EG(2,1) = 9 and ρ = N_EG(2,1) = 72. Moreover, spark(H) ≥ 2A_EG(2,1) = 18 by Theorem 3. Fig. 4 shows that some matrices from finite geometry have very good performance for moderate lengths of input signals (about 5000).

Fig. 4. Perfect recovery percentages of a type-I finite geometry measurement matrix in EG(3,2^3) with µ1 = 1, µ2 = 2 and the corresponding Gaussian random matrix. The step size of k is 6.

B. Using the Parallel Structure to Obtain Measurement Matrices with Flexible Sizes

The parallel structure of Euclidean geometry is very useful for obtaining various measurement matrices. Next, we show by several examples how to puncture rows or columns of the incidence matrix H or H^T.

Example 5: Let r = 2, q = 2^4, µ2 = 1 and µ1 = 0. The Euclidean plane EG(2,2^4) consists of n = 256 points and J = 272 lines. Let H be the J×n incidence matrix. Since J is close to n, neither H nor H^T is suitable as a measurement matrix directly. However, according to the parallel structure of H described in Section II, all 272 lines can be divided into 272/16 = 17 parallel bundles, each consisting of 16 lines. By (10), H = (H_1^T, H_2^T, ..., H_17^T)^T, where for i = 1,...,17, H_i consists of the 16 lines in the i-th parallel bundle. By choosing the first γ submatrices H_i, we get an m×n measurement matrix with uniform column weight γ.

Fig. 5 shows the performance of the 64×256, 80×256, 96×256 and 112×256 submatrices of H, which correspond to the first 4, 5, 6 and 7 parallel bundles of lines in EG(2,2^4), respectively. From Fig. 5 we can see that all of the proposed submatrices perform better than their corresponding Gaussian random matrices; the more parallel bundles are chosen, the better the submatrix performs, and the larger its gain over the corresponding Gaussian random matrix becomes.

Fig. 5. Perfect recovery percentages of 4 submatrices of a type-I measurement matrix in EG(2,2^4) with µ1 = 0, µ2 = 1 and their corresponding Gaussian random matrices. The rows of the 4 submatrices from left to right are chosen according to the first 4, 5, 6 and 7 parallel bundles of lines in EG(2,2^4), respectively. The step size of k is 2.

Example 6: Let r = 2, q = 2^5, µ2 = 1 and µ1 = 0. The Euclidean plane EG(2,2^5) consists of n = 1024 points and J = 1056 lines. Let H be the J×n incidence matrix. All 1056 lines can be divided into 1056/32 = 33 parallel bundles, each consisting of 32 lines. By (10), H = (H_1^T, H_2^T, ..., H_33^T)^T, where H_i consists of the 32 lines in the i-th parallel bundle. By choosing the first γ parallel bundles, we get an m×n measurement matrix with uniform column weight γ. Fig. 6 shows the performance of the 192×1024, 256×1024, 320×1024 and 384×1024 submatrices of H, which correspond to the first 6, 8, 10 and 12 parallel bundles of lines in EG(2,2^5), respectively. From Fig. 6 it is observed that all of the submatrices perform better than their corresponding Gaussian random matrices; the more parallel bundles are chosen, the better the submatrix performs, and the larger its gain over the corresponding Gaussian random matrix becomes.

Fig. 6. Perfect recovery percentages of 4 submatrices of a type-I measurement matrix in EG(2,2^5) with µ1 = 0, µ2 = 1 and their corresponding Gaussian random matrices. The rows of the 4 submatrices from left to right are chosen according to the first 6, 8, 10 and 12 parallel bundles of lines in EG(2,2^5), respectively. The step size of k is 8.
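A sketch of the row puncturing used in Examples 5 and 6, in the toy EG(2,q) setting of the earlier sketches (q prime, assuming the eg2_lines() and parallel_bundles() helpers): keeping the first γ parallel bundles yields a (γq)×q^2 submatrix with uniform column weight γ, since each bundle covers every point exactly once.

```python
# A sketch of bundle-based row puncturing: keep the first `gamma` parallel
# bundles of lines as rows of the measurement matrix.
import numpy as np

def bundle_submatrix(q: int, gamma: int) -> np.ndarray:
    points, _ = eg2_lines(q)
    index = {p: i for i, p in enumerate(points)}
    rows = []
    for bundle in parallel_bundles(q)[:gamma]:   # first gamma bundles
        for line in bundle:
            row = np.zeros(len(points), dtype=np.uint8)
            row[[index[p] for p in line]] = 1
            rows.append(row)
    return np.array(rows)

M = bundle_submatrix(5, 4)                       # 20 x 25, analogous to Fig. 5
assert M.shape == (20, 25)
assert (M.sum(axis=0) == 4).all()                # uniform column weight gamma
```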
Example 7: Consider the 320×1024 submatrix in EG(2,2^5), say H_b, from the last example. We puncture its columns to obtain more measurement submatrices. Recall that H = (H_1^T, H_2^T, ..., H_33^T)^T and that the first 10 submatrices were chosen to obtain H_b = (H_1^T, H_2^T, ..., H_10^T)^T. For the fixed submatrix H_11, its corresponding 32 lines are parallel to each other and partition the geometry. Hence, when the first j lines of H_11 are selected, the points on these j lines are pairwise distinct, and the total number of such points is 32j since each line contains 32 points. By deleting the 32j columns corresponding to these points from H_b, we obtain a 320×(1024−32j) submatrix, which is still regular.

The 4 red lines from left to right in Fig. 7 show the performance of the 320×1024, 320×896, 320×768 and 320×640 submatrices of H_b, which correspond to j = 0, 4, 8 and 12, respectively. It is observed that all of the submatrices perform better than their corresponding Gaussian random matrices (the 4 blue lines from left to right), but the gain becomes slightly smaller as more columns are deleted.

Fig. 7. Perfect recovery percentages of 4 submatrices of H_b and their corresponding Gaussian random matrices, where H_b is the 320×1024 submatrix of H in Example 6. The 4 submatrices (the red lines from left to right) are obtained by deleting 0, 128, 256 and 384 columns of H_b, respectively. The step size of k is 4.
V. CONCLUSIONS

In this paper, by drawing methods and results from LDPC codes, we studied the performance evaluation and deterministic construction of binary measurement matrices. The spark criterion is used because of its similarity to the stopping distance of an LDPC code and because a matrix with large spark may perform well under the approximate algorithms for l0-optimization, e.g., the well-known OMP algorithm. Lower bounds on the spark were proposed for real matrices in [9] many years ago. When real matrices are specialized to binary matrices, better results may emerge. Firstly, two lower bounds on the spark are obtained for general binary matrices, which improve the one derived from [9] in most cases. Then, we propose two classes of deterministic binary measurement matrices based on finite geometry. One class consists of the incidence matrix H of µ2-flats over µ1-flats in a finite geometry FG(r,q) and its transpose H^T, called the type-I and type-II finite geometry measurement matrices, respectively. The other class consists of the submatrices of H or H^T, especially those obtained by deleting row parallel bundles or column parallel bundles from H or H^T in Euclidean geometry. In this way, we can construct a large number of measurement matrices of various sizes. Moreover, most of the proposed matrices have a cyclic or quasi-cyclic structure [28], which makes the hardware realization convenient and easy. For the type-I and type-II finite geometry measurement matrices, two further improved lower bounds on the spark are given to show their relatively large sparks. Finally, many simulations were performed according to standard and comparable procedures. The simulation results show that in many cases the proposed matrices perform better than Gaussian random matrices under the OMP algorithm.

Future work may include giving more lower or upper bounds on the spark of general binary measurement matrices, determining the exact value of the spark for some classes of measurement matrices, and constructing more measurement matrices with large spark.

APPENDIX
PROOFS OF NECESSITY

For convenience and clarity of statement, we formulate the results as the following propositions.

Proposition 1: For a positive integer k, any k-sparse signal x can be exactly recovered by the l0-optimization (1) if and only if spark(A) > 2k.

Proof: We only need to show the necessity. Clearly, the measurement matrix A does not have an all-zero column, which implies that spark(A) ≥ 2. Assume to the contrary that spark(A) ≤ 2k. Select a w = (w1, ..., wn) ∈ Nullsp*_R(A) such that ||w||_0 = spark(A). Let a = ⌊spark(A)/2⌋, where ⌊·⌋ is the floor function. Then

    1 ≤ a ≤ k,  1 ≤ spark(A) − a ≤ k.

Let b be the a-th non-zero position of w and set

    x′ = (w1, ..., wb, 0, ..., 0).

Let x = x′ − w. Clearly, ||x′||_0 = a, ||x||_0 = spark(A) − a, and ||x′||_0 ≤ ||x||_0. In other words, both x and x′ are k-sparse vectors, and x′ may be the sparser one. However, since Aw = 0, we have

    Ax′ = Ax′ − Aw = Ax,

which implies that x cannot be exactly recovered by the l0-optimization (1). This finishes the proof.

Proposition 2: For a positive integer k, any k-sparse signal x can be exactly recovered by the l1-optimization (2) only if spark(A) > 2k.

Proof: It is known [12, 13, 21] that any k-sparse signal x can be exactly recovered by the l1-optimization (2) if and only if A satisfies the so-called nullspace property: for any w ∈ Nullsp*_R(A) and any K ⊆ {1,2,...,n} with |K| = k,

    ||w_K||_1 < ||w_K̄||_1,    (26)

where K̄ = {1,2,...,n}\K. Assume to the contrary that spark(A) ≤ 2k. Selecting a w = (w1, ..., wn) ∈ Nullsp*_R(A) such that ||w||_0 = spark(A), it is easy to see that w does not satisfy (26) for some k-subset K, e.g., K being the set of positions of the k largest |wi|'s. This leads to a contradiction, which implies the conclusion.
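The construction in the proof of Proposition 1 is easy to see on a toy example: with spark(A) = 2 and k = 1, splitting a minimal nullspace vector w yields two 1-sparse signals with identical measurements. The matrix below is our own illustrative choice.

```python
# A toy illustration of the proof of Proposition 1: x' and x = x' - w are
# both k-sparse (k = 1) yet give the same measurements, so l0-recovery
# cannot distinguish them when spark(A) <= 2k.
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])      # columns 1 and 2 are dependent: spark(A) = 2
w = np.array([2.0, -1.0, 0.0])       # A @ w = 0, ||w||_0 = spark(A) = 2
x_prime = np.array([2.0, 0.0, 0.0])  # keep the first a = 1 nonzeros of w
x = x_prime - w                      # (0, 1, 0): also 1-sparse
assert np.allclose(A @ w, 0)
assert np.allclose(A @ x_prime, A @ x)   # identical measurements
```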
REFERENCES

[1] E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.
[2] E. J. Candès and T. Tao, "Near-optimal signal recovery from random projections: universal encoding strategies?" IEEE Trans. Inf. Theory, vol. 52, no. 12, pp. 5406–5425, Dec. 2006.
[3] D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.
[4] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. San Francisco, CA: W. H. Freeman and Company, 1979.
[5] J. Tropp and A. C. Gilbert, "Signal recovery from partial information via orthogonal matching pursuit," IEEE Trans. Inf. Theory, vol. 53, no. 12, pp. 4655–4666, Dec. 2007.
[6] D. Needell and J. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Appl. Comput. Harmon. Anal., vol. 26, no. 3, pp. 301–321, 2009.
[7] D. Needell and R. Vershynin, "Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit," Found. Comput. Math., vol. 9, no. 3, pp. 317–334, 2009.
[8] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Trans. Inf. Theory, vol. 55, no. 5, pp. 2230–2249, May 2009.
[9] D. L. Donoho and M. Elad, "Optimally sparse representation in general (nonorthogonal) dictionaries via l1 minimization," Proc. Nat. Acad. Sci., vol. 100, no. 5, pp. 2197–2202, 2003.
[10] E. J. Candès and T. Tao, "Decoding by linear programming," IEEE Trans. Inf. Theory, vol. 51, no. 12, pp. 4203–4215, Dec. 2005.
[11] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, "A simple proof of the restricted isometry property for random matrices," Constr. Approx., vol. 28, no. 3, pp. 253–263, 2008.
[12] W. Xu and B. Hassibi, "Compressed sensing over the Grassmann manifold: A unified analytical framework," in Proc. 46th Allerton Conf. Commun., Control, Comput., Monticello, IL, Sep. 2008, pp. 562–567.
[13] M. Stojnic, W. Xu, and B. Hassibi, "Compressed sensing - probabilistic analysis of a null-space characterization," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., Las Vegas, NV, Mar. 31–Apr. 4, 2008, pp. 3377–3380.
[14] L. Applebaum, S. D. Howard, S. Searle, and R. Calderbank, "Chirp sensing codes: Deterministic compressed sensing measurements for fast recovery," Appl. Comput. Harmon. Anal., vol. 26, no. 2, pp. 283–290, Mar. 2009.
[15] M. A. Iwen, "Simple deterministically constructible RIP matrices with sublinear Fourier sampling requirements," in Proc. 43rd Ann. Conf. Information Sciences and Systems, Baltimore, MD, USA, 2009, pp. 870–875.
[16] J. Bourgain, S. Dilworth, K. Ford, S. Konyagin, and D. Kutzarova, "Explicit constructions of RIP matrices and related problems," Duke Math. J., vol. 159, no. 1, pp. 145–185, 2011.
[17] R. A. DeVore, "Deterministic constructions of compressed sensing matrices," J. Complexity, vol. 23, pp. 918–925, 2007.
[18] P. Indyk, "Explicit constructions for compressed sensing of sparse signals," in Proc. ACM-SIAM Symp. Discrete Algorithms, 2008, pp. 30–33.
[19] S. D. Howard, A. R. Calderbank, and S. J. Searle, "A fast reconstruction algorithm for deterministic compressive sensing using second order Reed-Muller codes," in Proc. 42nd Ann. Conf. Information Sciences and Systems, Princeton, NJ, USA, 2008, pp. 11–15.
[20] A. Amini and F. Marvasti, "Deterministic construction of binary, bipolar, and ternary compressed sensing matrices," IEEE Trans. Inf. Theory, vol. 57, no. 4, pp. 2360–2370, Apr. 2011.
[21] A. G. Dimakis, R. Smarandache, and P. O. Vontobel, "LDPC codes for compressed sensing," IEEE Trans. Inf. Theory, vol. 58, no. 5, pp. 3093–3114, May 2012.
[22] W. Z. Lu, K. Kpalma, and J. Ronsin, "Sparse binary matrices of LDPC codes for compressed sensing," in Data Compression Conference, Snowbird, Utah, USA, Apr. 2012, pp. 405–405.
[23] S. Li, F. Gao, G. Ge, and S. Zhang, "Deterministic construction of compressed sensing matrices via algebraic curves," IEEE Trans. Inf. Theory, vol. 58, no. 8, pp. 5035–5041, Aug. 2012.
[24] R. G. Gallager, "Low density parity check codes," IRE Trans. Inf. Theory, vol. IT-8, pp. 21–28, Jan. 1962.
[25] R. M. Tanner, "A recursive approach to low complexity codes," IEEE Trans. Inf. Theory, vol. 27, no. 5, pp. 533–547, Sep. 1981.
[26] X. Y. Hu, E. Eleftheriou, and D. M. Arnold, "Regular and irregular progressive edge-growth Tanner graphs," IEEE Trans. Inf. Theory, vol. 51, no. 1, pp. 386–398, Jan. 2005.
[27] Y. Kou, S. Lin, and M. Fossorier, "Low-density parity-check codes based on finite geometries: A rediscovery and new results," IEEE Trans. Inf. Theory, vol. 47, no. 7, pp. 2711–2736, Nov. 2001.
[28] H. Tang, J. Xu, S. Lin, and K. A. S. Abdel-Ghaffar, "Codes on finite geometries," IEEE Trans. Inf. Theory, vol. 51, no. 2, pp. 572–596, Feb. 2005.
[29] M. F. Duarte and Y. C. Eldar, "Structured compressed sensing: From theory to applications," IEEE Trans. Signal Processing, vol. 59, no. 9, pp. 4053–4085, Sep. 2011.
[30] N. Kashyap and A. Vardy, "Stopping sets in codes from designs," in Proc. IEEE Int. Symp. Inf. Theory, Yokohama, Japan, June 29–July 4, 2003, p. 122. The full version is available online at http://www.mast.queensu.ca/~nkashyap/Papers/stopsets.pdf.
[31] S. T. Xia and F. W. Fu, "Minimum pseudoweight and minimum pseudocodewords of LDPC codes," IEEE Trans. Inf. Theory, vol. 54, no. 1, pp. 480–485, Jan. 2008.
[32] S. T. Xia and F. W. Fu, "On the stopping distance of finite geometry LDPC codes," IEEE Communications Letters, vol. 10, no. 5, pp. 381–383, May 2006.
[33] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes. Amsterdam, The Netherlands: North-Holland, 1981 (3rd printing).
