Greedy Sparse Signal Reconstruction Using Matching Pursuit Based on Hope-tree

Zhetao Li, Hongqing Zeng, Chengqing Li, and Jun Fang

Z. Li, H. Zeng, and C. Li are with the College of Information Engineering, Xiangtan University, Xiangtan 411105, Hunan, China (e-mail: [email protected]). J. Fang is with the National Key Laboratory of Science and Technology on Communications, University of Electronic Science and Technology of China, Chengdu 611731, China (e-mail: [email protected]). This work was supported by the National Natural Science Foundation of China (nos. 61532020, 61379115).

Abstract—The reconstruction of sparse signals requires the solution of an ℓ0-norm minimization problem in Compressed Sensing. Previous research has focused on the investigation of a single candidate to identify the support (the indices of the nonzero elements) of a sparse signal. To ensure that the optimal candidate can be obtained in each iteration, we propose here an iterative greedy reconstruction algorithm (GSRA). First, the intersection of the support sets estimated by Orthogonal Matching Pursuit (OMP) and Subspace Pursuit (SP) is set as the initial support set. Then, a hope-tree is built to expand the set. Finally, a developed decreasing subspace pursuit method is used to rectify the candidate set. Detailed simulation results demonstrate that GSRA is more accurate than other typical methods in recovering Gaussian signals, 0–1 sparse signals, and synthetic signals.

Index Terms—Compressed Sensing, greedy reconstruction algorithm, hope-tree, matching pursuit, sparse signal recovery.

I. INTRODUCTION

In the past decade, Compressed Sensing (CS) has received considerable attention as a means to reconstruct sparse signals in underdetermined systems. The basic premise of CS is that a K-sparse signal x ∈ R^n can be recovered perfectly from far fewer compressed measurements y = Φx ∈ R^m than traditional approaches require [1], [2], [3], [4]. The problem of reconstructing the original signal can be formulated as the ℓ0-minimization problem

    \min_x \|x\|_0 \quad \text{subject to} \quad y = \Phi x,    (1)

where \|\cdot\|_p denotes the ℓp-norm, Φ ∈ R^{m×n} is the sensing matrix, and m < n. Since the ℓ0-minimization problem is NP-hard [1], it is often transformed into an ℓ1-minimization problem to make it tractable. Concretely, ℓ1-relaxation methods such as Basis Pursuit (BP) and BP denoising (BPDN) can be used to solve it [5].

To greatly reduce the computational complexity of the ℓ1-relaxation methods, algorithms based on greedy pursuit have been adopted, which use the least-squares method to estimate the sparse signal x by finding the positions of its nonzero elements. The sparse signal can be represented as x = \sum_{j \in S} c_j e_j, where the e_j are the standard unit vectors and S is the support set of x. In Orthogonal Matching Pursuit (OMP), the index of the column possessing the strongest correlation with the modified measurements is chosen as the new element of the support in each iteration [6]. Note that if any index in the support set is wrong, the result of OMP will be wrong. To mitigate this weakness of OMP, improved index selection methods have been developed, such as Stagewise Orthogonal Matching Pursuit (StOMP) [7], which adopts a threshold mechanism. Subspace Pursuit (SP) and Compressive Sampling Matching Pursuit (CoSaMP) select more indices than the sparsity level and then prune them [8], [9]. Multipath Matching Pursuit (MMP) builds a tree to find the optimal support set [10]. Generalized Orthogonal Matching Pursuit (gOMP) selects multiple indices per iteration [11]. However, these algorithms cannot achieve an acceptable trade-off between computational complexity and reconstruction performance.
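For concreteness, the OMP step described above (pick the column most correlated with the current residual, then re-fit by least squares) can be written in a few lines. The following NumPy sketch is a textbook implementation given only for illustration; it is not the authors' code, and the function name and interface are ours.

```python
import numpy as np

def omp(Phi, y, K):
    """Orthogonal Matching Pursuit: greedily grow the support one index
    at a time, re-estimating the coefficients by least squares."""
    n = Phi.shape[1]
    support, r = [], y.copy()
    for _ in range(K):
        # column with the strongest correlation to the current residual
        j = int(np.argmax(np.abs(Phi.T @ r)))
        support.append(j)
        # least-squares fit restricted to the current support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat, np.sort(np.array(support))
```

A wrong index chosen here is never revisited, which is exactly the weakness this letter targets.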
In this letter, we propose an iterative greedy signal reconstruction algorithm (GSRA) using matching pursuit based on a hope-tree. First, the intersection of the support sets estimated by OMP and SP is taken as the initial support set. Then, a hope-tree is built to obtain a candidate support S by setting the search depth, and this support is rectified by a decreasing subspace pursuit method. Finally, we show that the complexity of GSRA is O(N_max · mn · iter), where iter is the number of iterations and N_max is the number of candidates in each iteration.

The rest of this letter is organized as follows. In Sec. II, we introduce the matching pursuit used in GSRA. In Sec. III, we provide an efficiency analysis of GSRA. Sec. IV presents detailed empirical experiments on its reconstruction performance. The last section concludes the letter.

II. MATCHING PURSUIT BASED ON HOPE-TREE

The proposed GSRA is composed of three stages: pre-selection, hope-tree search, and rectification. These are described in the following three subsections.

A. Pre-selection

The purpose of pre-selection is to obtain indices belonging to the target support set T with high probability. The initial support set Θ can be taken to be the intersection of the support sets estimated by different types of reconstruction algorithms, such as OMP, SP, and gOMP. If three or more such algorithms are used, the reconstruction performance cannot be improved further, but the overall complexity is greatly increased. As the index selection strategy of OMP differs from the others, we adopt only OMP and SP, so as to satisfy the computational efficiency requirement. A sketch of this stage is given below.
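The sketch below uses the same conventions as before: `sp` follows the standard Subspace Pursuit iteration of Dai and Milenkovic [8] (the stopping rule on the residual norm is our assumption), and the last line intersects its support with that of the `omp` sketch above to form the initial set Θ.

```python
import numpy as np

def sp(Phi, y, K, max_iter=20):
    """Subspace Pursuit [8]: maintain a size-K support, enlarge it with the
    K columns best correlated with the residual, then prune back to size K."""
    S = np.sort(np.argsort(np.abs(Phi.T @ y))[-K:])        # initial support
    coef, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
    r = y - Phi[:, S] @ coef
    for _ in range(max_iter):
        cand = np.union1d(S, np.argsort(np.abs(Phi.T @ r))[-K:])
        b, *_ = np.linalg.lstsq(Phi[:, cand], y, rcond=None)
        S_new = np.sort(cand[np.argsort(np.abs(b))[-K:]])  # keep the K largest
        coef, *_ = np.linalg.lstsq(Phi[:, S_new], y, rcond=None)
        r_new = y - Phi[:, S_new] @ coef
        if np.linalg.norm(r_new) >= np.linalg.norm(r):     # no improvement: stop
            break
        S, r = S_new, r_new
    return S

# Pre-selection: Theta is the set of indices on which OMP and SP agree.
# theta = np.intersect1d(omp(Phi, y, K)[1], sp(Phi, y, K))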
B. Hope-Tree Search

Once the pre-selection is finished, a hope-tree is built to search for the candidate support. As shown in Fig. 1, the search depth is set as N in the kth iteration, and each path generates L (≤ K) child paths with the indices π̃_1, π̃_2, ..., π̃_L of those columns possessing the strongest correlations with the residual, according to the condition

    \{\tilde{\pi}_1, \tilde{\pi}_2, \cdots, \tilde{\pi}_L\} = \arg\max_{|\pi| = L} \left\| (\Phi^T r^{k-1})_\pi \right\|_2^2.    (2)

When the search depth approaches N, a sub-tree of the hope-tree is built. The modulo strategy in [10] is used to compute the layer order and to execute a priority search for promising candidates. Then, the indices of the optimal paths with the minimum residuals are selected as the best elements of T and added to the set Θ. Next, the last optimal index of the kth iteration is chosen as the root index. The above procedures are repeated in the (k+1)th iteration. The tree search is stopped when the length of the support Θ is not smaller than K. In each sub-process, the number of candidates increases by the factor L, resulting in L^N candidates after N iterations. These operations are repeated until either the candidate support satisfies S = Θ or the ℓ2-norm of the residual falls below a preset threshold, i.e., \|r^k\|_2 \le \varsigma.

Fig. 1. Illustration of the hope-tree search, where each iteration adds N layers to the tree (N, 2N, ..., K·N layers in total) and the black dots denote the optimal candidates. [Figure not reproduced here.]

C. Rectification

Because searching the hope-tree may miss correct indices falling outside of the hope-tree, a developed decreasing subspace pursuit method is used to recover such indices and improve the accuracy of the candidate support set. In the first iteration, the size of the search space is T = K, and the decreasing subspace pursuit method expands the selected support with the indices of the T largest-magnitude entries of Φ^T r. Once the size of the extended support S^k reaches T + K, the orthogonal projection coefficients of y onto Φ_{S^k} are calculated, and the estimated support S^k is obtained by pruning it to contain only the K indices corresponding to the largest elements of Φ†_{S^k} y, where k is the iteration index. In the next iteration, the size of the search space decreases according to T_{k+1} = αT_k, where 0 < α < 1, while the execution remains the same as in the first iteration. When T becomes smaller than a certain threshold, the iteration is terminated. The final support set S and the estimated signal x̂ are then determined.
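The rectification stage can be sketched as follows, mirroring the description above and Algorithm 3 below. The default α = 0.5 and the integer flooring of T that terminates the loop are our assumptions; the letter only requires 0 < α < 1 and a termination threshold.

```python
import numpy as np

def rectify_support(Lam, y, Phi, K, alpha=0.5):
    """Decreasing subspace pursuit: expand the candidate support with the T
    columns most correlated with the residual, re-estimate, prune to the K
    largest coefficients, and shrink T by the factor alpha each round."""
    n = Phi.shape[1]
    S = np.asarray(Lam, dtype=int)
    coef, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
    r = y - Phi[:, S] @ coef
    T = K
    while T >= 1:
        ext = np.union1d(S, np.argsort(np.abs(Phi.T @ r))[-T:])  # expand by T
        b, *_ = np.linalg.lstsq(Phi[:, ext], y, rcond=None)
        S = np.sort(ext[np.argsort(np.abs(b))[-K:]])             # prune to K
        coef, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
        x_hat = np.zeros(n)
        x_hat[S] = coef
        r = y - Phi @ x_hat
        T = int(alpha * T)                 # decreasing search-space size
    return x_hat, S
```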
The pseudocode of GSRA is summarized as Algorithm 1, where the process of building the hope-tree and the sub-process of rectifying the candidate support set are summarized as Algorithm 2 and Algorithm 3, respectively. The main merit of GSRA is that it combines searching the hope-tree with rectifying the candidate support set, which balances computational complexity against reconstruction performance.

Algorithm 1: Matching Pursuit Based on Hope-Tree.
  Data: measurement vector y, sensing matrix Φ, sparsity level K, number of paths L, search depth N
  Result: x̂, S
  J_omp = OMP(Φ, y, K)                       // OMP support set
  J_sp = SP(Φ, y, K)                         // SP support set
  λ = J_omp ∩ J_sp                           // initial candidate support set
  Initialization: residual vector r^0 = y − Φ x̂_λ, candidate support Λ^0 = λ, backtracking subspace parameter α (0 < α < 1), iteration index k = 0
  while |Λ^k| < K do
      k = k + 1
      [supp, ~] = CreateHopeTree(y, Φ, r^k, K, N, L)
      Update candidate support: Λ^k = Λ^{k−1} ∪ supp
      Estimate x̂: x̂_{Λ^k} = Φ†_{Λ^k} y
      Update residual: r^k = y − Φ_{Λ^k} x̂_{Λ^k}
  end
  [x̂, S] = RectifySupport(Λ, y, Φ, r_new, α)

Algorithm 2: [supp, ~] = CreateHopeTree(y, Φ, r, K, N, L).
  Data: maximum number of search candidates N_max, stop threshold ς, candidate order ℓ = 0, minimal magnitude of residual ρ = ∞
  Result: supp
  while ℓ < N_max and ς < ρ do
      ℓ = ℓ + 1, r^0 = r, temp = ℓ − 1
      for k = 1 to N do
          c_k = temp mod L + 1               // compute layer order
          temp = ⌊temp / L⌋
          c_k = c_k ∪ c_{k−1}
      end
      for k = 1 to N do
          π̃ = arg max_{|π| = L} ‖(Φ^T r^{k−1})_π‖²_2   // choose the L best indices
          s^k_ℓ = s^{k−1}_ℓ ∪ π̃_{c_k}
          x̂^k = Φ†_{s^k_ℓ} y
          r^k = y − Φ_{s^k_ℓ} x̂^k
      end
      if ‖r^N‖ ≤ ρ then supp = s^N_ℓ, ρ = ‖r^N‖ end
  end

Algorithm 3: [x̂, S] = RectifySupport(Λ, y, Φ, r_new, α).
  Data: support set Λ, backtracking subspace parameter T (T ≤ K), residual r_new
  Result: x̂ (estimated signal), S (estimated support)
  while T ≠ 0 do
      S = indices of the T largest-magnitude components of Φ^T r_new; Λ = Λ ∪ S; x = Φ†_Λ y
      S = indices of the K largest-magnitude components of x; x̂ = Φ†_S y
      r_new = y − Φ x̂; T = αT
  end

III. EFFICIENCY ANALYSIS

A. Parameter Setting

A sensing matrix Φ is said to satisfy the restricted isometry property (RIP) of order K if there exists a constant δ ∈ [0, 1] such that

    (1 - \delta)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta)\|x\|_2^2    (3)

for any K-sparse vector x. In particular, the minimum of all constants δ ensuring that the correct indices can be obtained is referred to as the isometry constant δ_K. In the clean case, it is assumed that Φ satisfies the RIP of order NK, i.e., δ_NK ∈ (0, 1). Then 0 < 1 − δ_NK ≤ λ_min(Φ_D^T Φ_D) for any index set D satisfying |D| ≤ NK, which indicates that all eigenvalues of Φ_D^T Φ_D are positive. Thus, Φ_D has full column rank (i.e., K ≤ m/N) and N ≤ √m. Therefore, N < min(K, √m). In the noisy case, however, the noise disturbance makes GSRA improve its performance at the cost of increasing the value of the search depth. To make Φ satisfy the RIP and to make S include the correct indices of each optimal path as far as possible, the value of N can be set appropriately with the upper bound K.

B. Computational Complexity Analysis

The complexities of the three stages of GSRA can be estimated separately. The complexity of obtaining the initial support set is O_omp(mn · K) + O_sp(mn · K) ≈ O(mn · K), where O_omp(mn · K) and O_sp(mn · K) are derived in [8]. The number of candidates in the kth iteration is L^N. To decrease the computational complexity, the maximum number of candidates N_max is chosen to be less than L^N. Obtaining one index requires O(mn) operations according to the expression Φ^T r^k. Choosing N indices from N_max candidates in each iteration means that the complexity of searching the hope-tree is O(N_max · mn). As the number of iterations is set to be iter < K, the complexity of searching the hope-tree is bounded by O(N_max · mn · iter). In addition, the number of selected elements is reduced in each iteration, so the complexity of the decreasing subspace method is bounded by O(mn). Therefore, the total complexity of GSRA is O(N_max · mn · iter) + O(mn · K) + O(mn) ≈ O(N_max · mn · iter). When the value of K is low, the complexity of GSRA is close to that of OMP. Moreover, the complexity of GSRA increases monotonically with respect to K and is far less than that of the breadth-first search method.

IV. SIMULATION RESULTS

To check the real performance of GSRA for solving the sparse signal recovery problem, it was compared with OMP, gOMP, MMP-DFS, and MMP-BFS using the same setup as that adopted in [8].

A. Reconstruction of Sparse Signals Whose Nonzero Values Follow N(0,1)

In this experiment, the important parameters were set as follows (a sketch of the protocol is given after this list):
1) Generate a random matrix Φ of size m × n with i.i.d. N(0,1) entries. To facilitate comparison, we set m = 128 and n = 256.
2) To reduce the time complexity, set the maximum number of sub-tree candidates N_max = 30, the number of paths L = 2, and the search depth N = 7.
3) Choose a K-subset of {1, 2, ..., n}, K = 1, 2, ..., 60.
4) Set the values of x at the K chosen indices to random nonzero values obeying N(0,1) and those at the remaining indices to zero.
5) Conduct 1,000 simulations to calculate the frequency of exact reconstruction and the mean time needed by the different algorithms.
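This protocol can be reproduced with a short Monte Carlo driver. The sketch below plugs in the `omp` routine from the sketch in Sec. I where the paper would run GSRA and each reference algorithm, and uses 100 trials rather than 1,000 to keep it quick; both substitutions are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, K, trials = 128, 256, 20, 100

exact = 0
for _ in range(trials):
    Phi = rng.standard_normal((m, n))                # step 1): i.i.d. N(0,1)
    support = rng.choice(n, size=K, replace=False)   # step 3): random K-subset
    x = np.zeros(n)
    x[support] = rng.standard_normal(K)              # step 4): N(0,1) nonzeros
    y = Phi @ x
    _, est = omp(Phi, y, K)                          # stand-in for GSRA
    exact += np.array_equal(est, np.sort(support))
print("frequency of exact reconstruction:", exact / trials)
```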
Figure 2 depicts the results of the above experiments. Figure 2a) demonstrates that GSRA achieves exact reconstruction with a greater frequency than OMP, gOMP, MMP-DFS, and MMP-BFS for Gaussian signals of any sparsity level. Concretely, OMP, gOMP, MMP-DFS, and MMP-BFS can accurately reconstruct signals up to sparsity levels of 21, 40, 35, and 43, respectively, whereas GSRA can still do so when the level reaches 50. Figure 2b) shows that the time needed increases with the sparsity level, and that the mean time needed by GSRA is shorter than that of MMP-DFS and MMP-BFS, but longer than that of OMP and gOMP.

Fig. 2. Performance comparison for reconstructing typical sparse signals in terms of two metrics versus sparsity level: a) frequency of exact reconstruction; b) mean time needed (sec). [Plots not reproduced here.]

B. Reconstruction of 0–1 Sparse Signals

In this experiment, we choose a K-subset of {1, 2, ..., n}, K = 1, 2, ..., 50. For each value of K, we generate a signal x of sparsity level K, where the nonzero coefficients are set to one and those at the remaining indices to zero. The other configurations are the same as in the above subsection.

Figure 3a) shows that GSRA performs better than OMP, gOMP, MMP-DFS, and MMP-BFS in terms of the frequency of exact reconstruction for 0–1 sparse signals of any sparsity level. OMP, gOMP, MMP-DFS, and MMP-BFS can accurately reconstruct signals up to sparsity levels of 10, 28, 20, and 23, respectively, while GSRA still obtains satisfactory results when the sparsity level is increased to 30. As shown in Figure 3b), the comparison of GSRA with the other reference algorithms in terms of the mean time needed is the same as in the above subsection.

Fig. 3. Performance comparison for reconstructing 0–1 sparse signals in terms of two metrics versus sparsity level: a) frequency of exact reconstruction; b) mean time needed (sec). [Plots not reproduced here.]

C. Reconstruction of Synthetic Signals

To further verify the performance of GSRA in comparison with the four reference algorithms, experiments were done on the recovery of a synthetic signal with additive noise,

    y = \Phi x + w.    (4)

The noise level is measured by the Signal-to-Measurement-Noise Ratio

    \mathrm{SMNR} \triangleq 10\log_{10} \frac{E\|x\|_2^2}{E\|w\|_2^2} = 10\log_{10} \frac{K \sigma_s^2}{m \sigma_l^2},    (5)

where E(·) denotes the mathematical expectation operator, and σ_s² and σ_l² denote the power of each element of the signal and of the noise vector, respectively. To measure the recovery quality more accurately, the Signal-to-Reconstruction-Error Ratio

    \mathrm{SRER} \triangleq 10\log_{10} \frac{E\|x\|_2^2}{E\|x - \hat{x}\|_2^2}

is adopted here instead (a sketch of both metrics follows the step list below). The concrete steps of the experiment are as follows.
1) Set the sparsity level K = 20 and fix n = 500. Then adjust the value of the sampling rate φ = m/n so that the number of measurements m is an integer.
2) Generate the elements of Φ independently from a source following N(0,1) and normalize each column norm to unity.
3) For the noisy region of the signal x, set the additive noise w as a Gaussian random vector whose elements are independently drawn from N(0, σ_w²).
4) Select a measurement vector with SMNR = 20.
5) Apply the reconstruction methods in Algorithm 1.
6) Repeat steps 3)–5) T = 10 times.
7) Repeat steps 2)–6) Q = 100 times.
8) Calculate the average value of the SRER scores over the T · Q signals.
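Both metrics are straightforward to compute. In the per-realization sketch below, the expectations in Eq. (5) and in the SRER definition are replaced by empirical averages over the T · Q realized vectors, which is our reading of step 8).

```python
import numpy as np

def smnr_db(x, w):
    """Empirical Signal-to-Measurement-Noise Ratio of Eq. (5), in dB."""
    return 10 * np.log10(np.sum(x ** 2) / np.sum(w ** 2))

def srer_db(x, x_hat):
    """Empirical Signal-to-Reconstruction-Error Ratio (the Table I metric)."""
    return 10 * np.log10(np.sum(x ** 2) / np.sum((x - x_hat) ** 2))
```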
For GSRA, the other parameters are the same as in Sec. IV-A except the search depth: with an increase in depth, the performance of GSRA improves, and the search depth is typically set to 16. The above eight experimental steps were executed with the sampling rate φ ranging from 0.15 to 0.20, and the obtained results are listed in Table I, which demonstrates that GSRA performs better than every other reference scheme in each case.

TABLE I
PERFORMANCE COMPARISON OF THE RECOVERY ALGORITHMS IN TERMS OF SRER (dB) UNDER SEVERAL SAMPLING RATES.

  φ     | OMP   | gOMP  | MMP-DFS | MMP-BFS | GSRA
  0.15  | 8.11  | 7.75  | 15.74   | 15.38   | 16.05
  0.16  | 9.57  | 9.08  | 18.09   | 17.63   | 18.30
  0.17  | 12.24 | 11.37 | 18.47   | 17.51   | 19.07
  0.18  | 14.71 | 13.48 | 19.67   | 18.51   | 19.20
  0.19  | 17.49 | 15.08 | 20.28   | 19.43   | 20.39
  0.20  | 19.31 | 15.90 | 20.89   | 20.04   | 20.98

V. CONCLUSION

To improve the accuracy of sparse signal recovery, this letter proposed an iterative greedy reconstruction algorithm that examines multiple promising candidates with the help of greedy search. The key feature of the algorithm is the use of a hope-tree and the decreasing subspace pursuit method to obtain the final support set. Detailed experimental results demonstrated that the proposed algorithm performs well for Gaussian signals, 0–1 sparse signals, and synthetic signals. Making the search depth of GSRA adaptive deserves further exploration.

REFERENCES

[1] E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, 2006.
[2] E. J. Candès and T. Tao, "Decoding by linear programming," IEEE Trans. Inf. Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
[3] Y. Shen, J. Fang, and H. Li, "Exact reconstruction analysis of log-sum minimization for compressed sensing," IEEE Signal Process. Lett., vol. 20, no. 12, pp. 1223–1226, 2013.
[4] J.-F. Determe, J. Louveaux, L. Jacques, and F. Horlin, "On the exact recovery condition of simultaneous orthogonal matching pursuit," IEEE Signal Process. Lett., vol. 23, no. 1, pp. 164–168, 2016.
[5] S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM Rev., vol. 43, no. 1, pp. 129–159, 2001.
[6] J. Wang and B. Shim, "On the recovery limit of sparse signals using orthogonal matching pursuit," IEEE Trans. Signal Process., vol. 60, no. 9, pp. 4973–4976, 2012.
[7] D. L. Donoho, Y. Tsaig, I. Drori, and J.-L. Starck, "Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit," IEEE Trans. Inf. Theory, vol. 58, no. 2, pp. 1094–1121, 2012.
[8] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Trans. Inf. Theory, vol. 55, no. 5, pp. 2230–2249, 2009.
[9] D. Needell and J. A. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Commun. ACM, vol. 53, no. 12, pp. 93–100, 2010.
[10] S. Kwon, J. Wang, and B. Shim, "Multipath matching pursuit," IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2986–3001, 2014.
[11] J. Wang, S. Kwon, and B. Shim, "Generalized orthogonal matching pursuit," IEEE Trans. Signal Process., vol. 60, no. 12, pp. 6202–6216, 2012.
