Generic Cospark of a Matrix Can Be Computed in Polynomial Time

Sichen Zhong* and Yue Zhao†
*Department of Applied Mathematics and Statistics, †Department of Electrical and Computer Engineering
Stony Brook University, Stony Brook, NY, 11794, USA
Emails: {sichen.zhong, yue.zhao.2}@stonybrook.edu

Abstract—The cospark of a matrix is the cardinality of the sparsest vector in the column space of the matrix. Computing the cospark of a matrix is well known to be an NP-hard problem. Given the sparsity pattern (i.e., the locations of the non-zero entries) of a matrix, if the non-zero entries are drawn from independently distributed continuous probability distributions, we prove that the cospark of the matrix equals, with probability one, a particular number termed the generic cospark of the matrix. The generic cospark also equals the maximum cospark of matrices consistent with the given sparsity pattern. We prove that the generic cospark of a matrix can be computed in polynomial time, and offer an algorithm that achieves this.

I. INTRODUCTION

The cospark of a matrix A ∈ R^{m×n}, m > n,¹ denoted by cospark(A), is defined to be the cardinality of the sparsest vector in the column space of A [1]. In other words, cospark(A) is the optimum value of the following l_0-minimization problem:

    minimize_x ||Ax||_0,  subject to x ≠ 0,        (1)

where ||Ax||_0 is the number of nonzero elements in the vector Ax. It is well known that solving (1) is an NP-hard problem. Indeed, it is equivalent to computing the spark of an orthogonal complement of A [1], where the spark of a matrix is defined to be the smallest number of linearly dependent columns of it [2]. Specifically, for A with full column rank, we can find an orthogonal complement A⊥ of it, and (1) is equivalent to

    minimize_x ||x||_0,  subject to A⊥x = 0, x ≠ 0,    (2)

and the optimal value of (2) is the spark of A⊥, denoted by spark(A⊥). Computing spark is known to be NP-hard [3].

The role of cospark(A) has been studied in decoding under sparse measurement errors, where A is the coding matrix [1]. In particular, cospark(A) gives the maximum number of errors that an ideal decoder can tolerate for exact recovery. Closely related to this is the role of spark(A⊥) in characterizing the ability to perform compressed sensing [1] [2]. Spark is also related to notions such as mutual coherence [2] [4] and the Restricted Isometry Property (RIP) [1] [5], which provide conditions under which sparse recovery can be performed using l_1 relaxation. Last but not least, in addition to its role in the sparse recovery literature, cospark (1) also plays a central role in security problems in cyber-physical systems (see [6] among others).

In this paper, we study the problem of computing the cospark of a matrix. Although it is proven that (1) is an NP-hard problem, we show that the cospark that a matrix "generically" has can in fact be computed in polynomial time. Specifically, given the "sparsity pattern" (i.e., the locations of all the non-zero entries of A), cospark(A) equals, with probability one, a particular number which we term the generic cospark of A, if the non-zero entries of A are drawn from independent continuous probability distributions. Then, we develop an efficient algorithm that computes the generic cospark in polynomial time.

¹We note that the results in this paper can be straightforwardly generalized to complex numbers.

II. PRELIMINARIES

A. Generic Rank of a Matrix

For a matrix A ∈ R^{m×n}, we define its sparsity pattern S = {(i,j) | A_ij ≠ 0, 1 ≤ i ≤ m, 1 ≤ j ≤ n}. Given a sparsity pattern S, we denote by A^S the set of all matrices with sparsity pattern S over the field R. Since there is a one-to-one mapping between S and A^S, we use S and A^S interchangeably to denote a sparsity pattern in the remainder of the paper. The generic rank of a matrix with sparsity pattern S is defined as follows.

Definition 1 (Generic Rank). Given S, the generic rank of A^S is sprank(A^S) ≜ sup_{A∈A^S} rank(A).

Clearly, if sprank(A^S) < n, the optimal value of (1) is zero. We will thus focus on the case sprank(A^S) = n for the remainder of the paper.

The following lemma states that the generic rank indeed "generically" equals the rank of a matrix [7].

Lemma 1. Given S, rank(A) = sprank(A^S) with probability one, if the non-zero entries of A are drawn from independently distributed continuous probability distributions.

B. Matching Theory Basics

We now introduce some basics from classical matching theory [8] which are necessary for us to introduce the results in the remainder of the paper.

For a bipartite graph G(X,Y,E), a subset of edges N ⊆ E is a matching if all the edges in N are vertex disjoint. A max matching from X onto Y is a matching with the maximum cardinality. A perfect matching from X onto Y is a max matching where every vertex in Y is incident to an edge in the matching.

Consider a (not necessarily maximum) matching N. A vertex is called matched if it is incident to some edge in N, and unmatched otherwise. An alternating path with respect to N is a path which alternates between using edges in E\N and edges in N. An augmenting path w.r.t. N is an alternating path w.r.t. N which starts and ends at unmatched vertices. With an augmenting path P, it can be easily shown that the symmetric difference² N ⊕ P gives a matching with size |N| + 1.

²The symmetric difference of two sets S_1 and S_2 is defined as S_1 ⊕ S_2 = (S_1 ∪ S_2) \ (S_1 ∩ S_2).

C. Generic Rank as Max Matching

We now introduce an equivalent definition of generic rank via matching theory. A sparsity pattern A^S can be represented as a bipartite graph as follows [7]. Let G(X,Y,E) be a bipartite graph whose a) vertices X = {1,2,...,m} correspond to all the row indices of A^S, b) vertices Y = {1,2,...,n} correspond to all the column indices of A^S, and c) edges in E = S correspond to all the non-zero entries of A^S. Accordingly, we also denote the bipartite graph for a sparsity pattern S by G(X,Y,S).

The following lemma states the equality between sprank(A^S) and the max matching of G(X,Y,S) [7].

Lemma 2. Given G(X,Y,S), the generic rank sprank(A^S) equals the cardinality of the maximum bipartite matching on G.

Accordingly, finding a max matching on this graph using the Hopcroft-Karp algorithm allows us to find the generic rank with O(|S|√(m+n)) complexity [9].

III. GENERIC COSPARK

Similarly to the supremum definition of generic rank (cf. Definition 1), given the sparsity pattern of a matrix, we define the generic cospark as follows.

Definition 2 (Generic Cospark). Given S, the generic cospark of A^S is spcospark(A^S) ≜ sup_{A∈A^S} cospark(A).

In a spirit similar to the multiple interpretations of generic rank as in Section II, we provide a probabilistic view and a matching theory based view of the generic cospark as follows.

A. Cospark Equals Generic Cospark With Probability One

For any T ⊂ [m], let A_T and A^S_T represent the matrix A and the set of matrices A^S restricted to the rows T, respectively. A class of matrices whose cospark equals the generic cospark consists of those satisfying the following property.

Lemma 3. Given any sparsity pattern S such that sprank(A^S) = n for A^S ⊂ R^{m×n}, for any A ∈ A^S, if rank(A_T) = sprank(A^S_T), ∀T ⊆ [m], then cospark(A) = spcospark(A^S).

Proof. Let x* = argmin_{x≠0} ||Ax||_0, and suppose U = {i | a_i x* = 0}, where a_i is the ith row of A. Since A_U x* = 0, rank(A_U) < n. Now consider another matrix C ∈ R^{m×n} with sparsity pattern S. Since rank(C_U) ≤ sprank(A^S_U) = rank(A_U) < n, ker(C_U) is nonempty, meaning there exists a nonzero vector h ∈ R^n such that C_U h = 0. Because A_{U^c} x* has no zero entries, we also have ||C_{U^c} h||_0 ≤ ||A_{U^c} x*||_0 = ||Ax*||_0. This means ||Ch||_0 = ||C_U h||_0 + ||C_{U^c} h||_0 ≤ ||A_{U^c} x*||_0 = ||Ax*||_0. Hence, if x̂ = argmin_{x≠0} ||Cx||_0, it follows that cospark(C) = ||Cx̂||_0 ≤ ||Ch||_0 ≤ ||Ax*||_0 = cospark(A), which proves the lemma.

We note that the property rank(A_T) = sprank(A^S_T), ∀T ⊆ [m], is known as the matching property of matrix A [10]. Now, we have the following theorem showing that the generic cospark indeed "generically" equals the cospark.

Theorem 1. Given S, cospark(A) = spcospark(A^S) with probability one, if the non-zero entries of A are drawn from independently distributed continuous probability distributions.

Proof. If we have a matrix A with sparsity pattern S whose nonzeros are drawn from independent continuous distributions, then every submatrix of rows has rank equaling its generic rank w.p. 1 (cf. Lemma 1). This immediately implies cospark(A) = spcospark(A^S) w.p. 1 by Lemma 3.

B. A Matching Theory based Definition of Generic Cospark

Let G(X,Y,S) be the bipartite graph corresponding to A^S ⊆ R^{m×n}. For a subset of vertices Z ⊆ X, we define the induced subgraph G(Z) as the bipartite graph G(Z, N(Z), {(i,j) | i ∈ Z, j ∈ N(Z)}), where N(Z) denotes the vertices in Y adjacent to the set Z. G(Z) is essentially the bipartite graph corresponding to the submatrix A^S_Z. We then have the following.

Lemma 4. Given G(X,Y,S), let OPT ⊂ X be a largest subset such that the induced subgraph G(OPT) has a max matching of size n−1. We have that spcospark(A^S) = m − |OPT|.

The intuition behind this matching theory based definition of spcospark(A^S) is the following. Finding the sparsest vector in the image of A is equivalent to finding a largest set of rows of A, OPT, which span an (n−1)-dimensional subspace. With such a subset OPT, we can find a vector x* that satisfies A_OPT x* = 0, and it is clear that x* ∈ argmin_{x≠0} ||Ax||_0. Furthermore, based on the equivalence between generic rank and max matching from Lemma 2, we arrive at the matching theory based definition of generic cospark in Lemma 4.
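Lemma 2 makes the generic rank directly computable from the sparsity pattern alone. The sketch below is ours, not the authors' code, and the function name is illustrative: it computes sprank(A^S) by growing a bipartite matching one augmenting path at a time, an O(m|S|) variant of the matching search; the Hopcroft-Karp algorithm cited above achieves the better O(|S|√(m+n)) bound by finding many augmenting paths per pass.

```python
def sprank(pattern, m, n):
    """Generic rank of an m x n sparsity pattern, computed as the size of a
    maximum bipartite matching between rows and columns (cf. Lemma 2).
    `pattern` is a set of (i, j) pairs marking the non-zero entries."""
    adj = [[] for _ in range(m)]          # row i -> columns where row i is non-zero
    for i, j in pattern:
        adj[i].append(j)
    match_col = [-1] * n                  # match_col[j] = row matched to column j

    def augment(i, seen):
        # Depth-first search for an augmenting path starting at row i.
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                if match_col[j] == -1 or augment(match_col[j], seen):
                    match_col[j] = i
                    return True
        return False

    return sum(augment(i, set()) for i in range(m))

# A 3 x 2 pattern with non-zeros at (0,0), (1,0), (1,1), (2,1): two rows can
# be matched onto the two columns, so the generic rank is 2.
print(sprank({(0, 0), (1, 0), (1, 1), (2, 1)}, 3, 2))  # → 2
```

The same routine, restricted to a subset of rows, gives sprank(A^S_T) for row subsets T, which is how the algorithm of Section IV is driven.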
IV. EFFICIENT ALGORITHM FOR COMPUTING GENERIC COSPARK

In this section, we introduce an efficient algorithm that computes the generic cospark. This algorithm is based on a greedy approach motivated by Lemma 4.

Given G(X,Y,S), for any size-(n−1) subset of vertices W ⊂ Y, we define X_W = {x ∈ X | N(x) ⊆ W}. In other words, X_W is the index set of the rows of A^S with a zero entry in the remaining coordinate v = Y\W.

We use X_W as a basis to construct a candidate solution for OPT. The idea is to add a maximal subset of vertices B ⊂ X_W^c to X_W, such that X̄_W = X_W ∪ B has a matching of size n−1 onto Y. Specifically, we keep adding vertices t ∈ X_W^c to B as long as the submatrix corresponding to the index set X_W ∪ B has generic rank no greater than n−1. The following lemma shows that adding a vertex to B can only increase the generic rank of X_W ∪ B by at most one.

Lemma 5. Given G(X,Y,S), ∀Z ⊂ X and u ∈ X\Z, sprank(A^S_{Z∪{u}}) ≤ sprank(A^S_Z) + 1.

Remark 1. For a given W, depending on the order in which we visit the vertices in X_W^c, we could end up with different sets B, possibly of different sizes. However, we will prove that the optimal solution is recovered regardless.

The sets X̄_W, ∀W, are the candidate solutions for OPT, and we obtain the optimal solution by choosing the X̄_W with the largest cardinality, i.e., X_f = argmax_{W⊂Y, |W|=n−1} |X̄_W|. The generic cospark of A^S then equals m − |X_f|. The detailed algorithm is presented in Algorithm 1.

Fig. 1. (Figure omitted.) A sketch of the partition of the nodes. The black continuous line segments are unmatched edges in the bipartite graph. The red continuous line segments comprise M, which forms an (n−1)-matching from I to W∗. The pair of pink dotted line segments denotes the range of vertices that X_{W∗} is incident to. Finally, the pair of green dotted line segments denotes the range of vertices that J is incident to.

Algorithm 1 Computing Generic Cospark
1: procedure SPCOSPARK(A^S)
2:   Initialization: Set B = ∅, t = ∅, and X_f = ∅
3:   for all W ⊂ Y of cardinality n−1 do
4:     Scan through all m vertices in X to find X_W, and let T = X_W^c
5:     Calculate sprank(A^S_{X_W})
6:     while sprank(A^S_{X_W∪B∪{t}}) ≤ n−1 do
7:       Let B = B ∪ {t}
8:       Choose any element t from T, and set T = T\{t}
9:     end while, and let X̄_W = X_W ∪ B
10:    if |X_f| < |X̄_W| then
11:      Set X_f = X̄_W
12:    end if
13:    Set B = ∅
14:  end for
15:  Return X_f, and spcospark(A^S) = m − |X_f|
16: end procedure
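A direct, unoptimized transcription of Algorithm 1 into Python may help make the procedure concrete. This sketch is ours, not the authors' code: it recomputes the max matching of X_W ∪ B ∪ {t} from scratch for each candidate t, implementing the "maximal subset B" description above rather than the incremental augmenting-path update analyzed in Section VI, so it is slower than the stated O(nm(1+|S|)) bound but returns the same value.

```python
from itertools import combinations

def matching_size(rows, adj, n):
    # Max bipartite matching from `rows` onto the n columns, i.e. the
    # generic rank (sprank) of the corresponding submatrix by Lemma 2.
    match_col = [-1] * n
    def augment(i, seen):
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                if match_col[j] == -1 or augment(match_col[j], seen):
                    match_col[j] = i
                    return True
        return False
    return sum(augment(i, set()) for i in rows)

def spcospark(pattern, m, n):
    # Generic cospark of an m x n sparsity pattern via the greedy
    # candidate construction of Algorithm 1.
    adj = [[] for _ in range(m)]
    for i, j in pattern:
        adj[i].append(j)
    best = 0
    for W in combinations(range(n), n - 1):    # step 3: all (n-1)-subsets of Y
        Wset = set(W)
        XW = [i for i in range(m) if set(adj[i]) <= Wset]   # step 4: find X_W
        B = []
        for t in (i for i in range(m) if i not in XW):      # steps 6-9: grow B
            if matching_size(XW + B + [t], adj, n) <= n - 1:
                B.append(t)
        best = max(best, len(XW) + len(B))     # steps 10-12: keep largest X̄_W
    return m - best                            # step 15

pattern = {(0, 0), (1, 0), (1, 1), (2, 1), (3, 0), (3, 1)}
print(spcospark(pattern, 4, 2))  # → 3
```

In the printed example, every pair of rows already has generic rank 2, so the largest candidate X̄_W is a single row and the generic cospark is 4 − 1 = 3: a generic vector in the column space has at most one zero entry.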
V. PROOF OF OPTIMALITY OF ALGORITHM 1

In this section, we prove that Algorithm 1 indeed computes the generic cospark. It is sufficient to prove that the set X_f returned by the algorithm satisfies the definition of OPT in Lemma 4, i.e., X_f is a subset of vertices of the largest size such that the induced subgraph G(X_f) has a max matching of size n−1. Since G(X_f) by construction has a max matching of size n−1, it is sufficient to prove that X_f has the largest size, i.e., |X_f| = |OPT|.

To prove this, let us consider an optimal set OPT ⊂ X. We denote by M the set of n−1 edges of a max matching of G(OPT). We denote by W∗ ⊂ Y the set of n−1 vertices in Y corresponding to this max matching, and by v = Y\W∗ the remaining vertex in Y. We will show that, starting with W∗, Algorithm 1 will return an X_f such that |X_f| ≥ |OPT|, and hence |X_f| = |OPT|. As the notations for this section are quite involved, an illustrative diagram is plotted in Figure 1 to help clarify the proof procedure in the following.

We first partition OPT into OPT = I ∪ J, I ∩ J = ∅, where I is the set of n−1 vertices in OPT corresponding to the max matching M. Hence, I perfectly matches onto W∗ with M. J consists of the remaining vertices in OPT unmatched by M. WLOG, we assume J is nonempty. This is because, if J is empty, we then immediately have |OPT| = n−1 ≤ |X_f|. We then have the following lemma about I and J.

Lemma 6. For any such partition OPT = I ∪ J, we have that J ⊂ X_{W∗}, and I ∩ X_{W∗} is nonempty.

Proof. Let OPT be partitioned into I ∪ J. Suppose j ∈ J. If j ∉ X_{W∗}, then j is incident to v, which means I ∪ {j} has a perfect matching onto Y. This contradicts the fact that OPT has no perfect matching onto Y. Now suppose I ∩ X_{W∗} is empty. This means every vertex in I is incident to v. Since I has a perfect matching onto W∗ and vertices in J are incident to vertices in W∗, it follows that there exists an augmenting path from any vertex in J to v, which is a contradiction to sprank(A^S_OPT) = n−1.

Accordingly, we can partition X_{W∗} = C ∪ J, C ∩ J = ∅, with C ≜ X_{W∗}\J. Starting from here, the general idea of proving |X_f| ≥ |OPT| is to lower bound the terms in

    |X_f| = |X_{W∗} ∪ B| = |C ∪ J ∪ B| = |C| + |J| + |B|.    (3)

We immediately have the following lower bound on |B|:

    |B| ≥ (n−1) − sprank(A^S_{X_{W∗}}).    (4)

This is because a) Algorithm 1 guarantees that sprank(A^S_{X_{W∗}∪B}) = n−1, and b) every time we add a new vertex t into B (cf. Line 7 in Algorithm 1), sprank(A^S_{X_{W∗}∪B}) increases by at most one (cf. Lemma 5). Since the initial generic rank is sprank(A^S_{X_{W∗}}), we need at least (n−1) − sprank(A^S_{X_{W∗}}) vertices added into B to reach sprank(A^S_{X_{W∗}∪B}) = n−1.

We next devote the majority of this section to providing a lower bound on |C|.

A. Lower Bounding |C|

The key result we will rely on in this subsection is the following:

Theorem 2. For the induced bipartite graph G(X_{W∗}), there exists a max matching that does not touch any vertices in J.

To prove Theorem 2, we start with a partial matching M_p ⊂ M consisting of only the edges that touch I ∩ X_{W∗}. In other words, M_p = {(i,j) ∈ M | i ∈ I ∩ X_{W∗}}. The idea is that we will build a max matching starting from M_p, and this max matching will not touch any vertices in J, thus proving Theorem 2. We have the following two lemmas.

Lemma 7. For the induced bipartite graph G(X_{W∗}) with M_p as a (not necessarily max) matching, any vertex in N(J) is incident to some edge in M_p, i.e., already matched.

Proof. First, note that any j ∈ J is not incident to v, so any vertex k ∈ N(j) is in W∗. Now, for any j ∈ J and any k ∈ N(j), we want to prove that k is incident to some edge in M_p. Since k ∈ W∗, k is incident to some edge (i,k) ∈ M. M is the perfect matching from I to W∗, so certainly i is in I. On the other hand, i cannot be incident to v, or there would exist a length-3 augmenting path from j to k to i to v. Hence, i ∈ X_{W∗}, and the claim is proven.

Lemma 8. For the induced bipartite graph G(X_{W∗}) with M_p as a (not necessarily max) matching, there exists no augmenting path starting from any j ∈ J.

Proof. For any vertex u ∈ N(X_{W∗})\N(J) that is unmatched w.r.t. M_p, suppose there is an augmenting path from j to u using edges in M_p. If u is unmatched in the induced graph G(X_{W∗}) w.r.t. M_p and u ∈ W∗, then there exists an edge (i,u) ∈ M\M_p which is incident to u. Because (i,u) ∈ M\M_p, i must be incident to v. This means that if there exists an augmenting path from j to u w.r.t. M_p, then there must exist an augmenting path from j to v w.r.t. M, which contradicts the fact that vertices in J do not have augmenting paths to v.

Lemma 8 implies that all augmenting paths w.r.t. the partial matching M_p run from unmatched vertices in C\I to unmatched vertices in N(X_{W∗})\N(J). A corollary which will prove useful is the following:

Corollary 1. Suppose P is an augmenting path from c ∈ C\I to u ∈ N(X_{W∗})\N(J) w.r.t. the matching M_p. Then for any j ∈ J, there exists no alternating path w.r.t. M_p from j to any vertex in P.

Proof. Let P be an augmenting path from c to u w.r.t. M_p. Suppose there exists an alternating path P′_jp from j to a vertex p, where p is the first vertex in P encountered when traversing P′_jp. P′_jp must have an odd number of edges, since p is a matched vertex in P and j is unmatched. Since P′_jp is odd, p ∈ N(X_{W∗}). Hence, if P_cp ⊂ P is the restriction of P from c to p, then the alternating path P_cp must also have odd length. The total length of P must be odd since P is an augmenting path, which means the length of the alternating path from p to u in P must be even. Since P′_jp is an odd alternating path from j to p, and the alternating path from p to u in P is even, the alternating path from j through p to u is odd. Furthermore, j and u are unmatched, so this path is actually an augmenting path, which immediately contradicts Lemma 8.

From Corollary 1, any alternating path starting from j w.r.t. M_p is vertex disjoint from any augmenting path P. This implies that a) any alternating path from j w.r.t. M_p ⊕ P remains an alternating path, and b) there remains no augmenting path starting from j w.r.t. M_p ⊕ P, i.e., Lemma 8 continues to hold for G(X_{W∗}) with the new matching M_p ⊕ P.

We are now ready to prove Theorem 2.

Proof of Theorem 2. Take M_p to be an initial matching onto N(X_{W∗}). By Lemma 7, all vertices in N(J) are now matched, and Lemma 8 tells us we are left with augmenting paths starting from unmatched vertices in C\I to unmatched vertices in N(X_{W∗})\N(J). If P_1 is one such augmenting path, then M_p ⊕ P_1 is a matching with one greater cardinality. By Corollary 1, all alternating paths w.r.t. M_p starting from j are vertex disjoint from P_1, which implies that alternating paths starting from j remain unchanged. Furthermore, Corollary 1 tells us M_p ⊕ P_1 does not have augmenting paths starting from j. Hence, the only remaining augmenting paths are still from vertices in C\I to vertices in N(X_{W∗})\N(J). If P_2 is such an augmenting path, we can now repeat the above procedure and compute the matching M_p ⊕ P_1 ⊕ P_2. Again, alternating paths starting from j remain unchanged, and M_p ⊕ P_1 ⊕ P_2 contains no augmenting paths starting from j. We can repeat this procedure until all augmenting paths from C\I to N(X_{W∗})\N(J) are eliminated. Since the final matching obtained this way has no augmenting paths, this final matching is optimal, and its edges are incident to no vertices in J.
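The repeated M_p ⊕ P_1 ⊕ P_2 ⊕ ... step in the proof above is the standard augmenting operation from Section II-B: the symmetric difference of a matching with an augmenting path is again a matching, with exactly one more edge. A tiny illustration of this step (our own, with made-up vertex labels):

```python
def augment_along(matching, path):
    """Return the symmetric difference M ⊕ P of a matching and an augmenting
    path, both given as sets of edges; the result has |M| + 1 edges."""
    return matching ^ path  # Python's built-in set symmetric difference

# Matching M = {(x2, y1)}; the path x1 - y1 - x2 - y2 alternates between
# unmatched and matched edges and ends at unmatched vertices x1 and y2.
M = {("x2", "y1")}
P = {("x1", "y1"), ("x2", "y1"), ("x2", "y2")}
print(sorted(augment_along(M, P)))  # → [('x1', 'y1'), ('x2', 'y2')]
```

The shared edge ("x2", "y1") is dropped and both path endpoints become matched, which is exactly why each round of the proof's procedure grows the matching by one.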
As a result of Theorem 2, there exists a max matching of the bipartite graph G(X_{W∗}) that, on the "left hand side" of the graph, only touches vertices in C = X_{W∗}\J. Since the size of the max matching of G(X_{W∗}) equals sprank(A^S_{X_{W∗}}) (cf. Lemma 2), we arrive at the following lower bound on |C|:

    |C| ≥ sprank(A^S_{X_{W∗}}).    (5)

B. Proof of the Optimality of Algorithm 1

We now show that Algorithm 1 indeed returns the generic cospark, as in the following theorem.

Theorem 3. For the X_f that Algorithm 1 returns, we have that |X_f| = |OPT|.

Proof. By the definition of OPT, |X_f| ≤ |OPT|. To prove |X_f| ≥ |OPT|, starting from (3),

    |X_f| = |C| + |J| + |B|                                            (6)
          ≥ sprank(A^S_{X_{W∗}}) + |J| + |B|                           (7)
          ≥ sprank(A^S_{X_{W∗}}) + |J| + (n−1) − sprank(A^S_{X_{W∗}})  (8)
          = |J| + (n−1) = |I| + |J| = |OPT|,                            (9)

where (7) is from (5), and (8) is from (4).

VI. ALGORITHM COMPLEXITY

We now show that Algorithm 1 is efficient, and provide an upper bound on its computational complexity.

Theorem 4. Given any S, Algorithm 1 computes spcospark(A^S) in O(nm(1+|S|)) time.

Proof. Observe that in the pseudocode above, step 3 is over n iterations. For each iteration, steps 4 to 9 are the most computationally expensive. Step 4 requires an O(m) scan of the rows of A^S, and step 5 requires us to compute a max matching using the Hopcroft-Karp algorithm, which can be done in O(|S|√(m+n)) time.

For the loop in steps 6 to 9, we do not need to recalculate sprank(A^S_{X_W∪B}) every iteration. Given that we know the max matching from the previous iteration, we only need to check whether the new vertex t added to B has an augmenting path to an unmatched vertex in Y. Searching for this augmenting path requires a breadth first search (BFS) or depth first search (DFS), which can be computed in O(|S|) time. Since there are O(m) iterations in the while loop, the total cost of steps 6 to 9 is O(m|S|).

Hence, for every iteration of step 3, the total cost is O(m + |S|√(m+n) + m|S|) = O(m(1+|S|)) since n ≤ m. It follows immediately that our total running time is O(nm(1+|S|)).

From Theorem 4, if A^S is extremely sparse, the running time of Algorithm 1 is essentially quadratic.

Remark 2. The algorithm's bottleneck is in steps 6-9. For each row t to add, we need to use a BFS. Since we need to add O(m) such vertices, the total complexity for these steps is O(m|S|) as in the above proof. To improve this complexity, we would like to detect multiple candidate rows to add to B using a single BFS. Indeed, it can be shown further that steps 6-9 of Algorithm 1 can be improved to O(√m |S|) based on an idea similar to Hopcroft-Karp matching [9]. This will improve the total running time of Algorithm 1 to O(n√m |S|). Details are omitted here.

VII. EXPERIMENTAL RESULTS FOR VERIFICATION

We compare the results from our algorithm for finding the generic cospark to a brute force algorithm for finding the cospark. Because the brute force algorithm has a computational complexity of O(m^n), we limit the size of the test matrices to m = 20 and n = 5.

We run our comparison over 10 different sparsity levels spaced equally between zero and one. For each sparsity level, we generate 50 matrices, where the locations of the nonzero entries are chosen uniformly at random given the sparsity level, and the values of the non-zero entries are drawn from independent uniform distributions in [0,1]. For each of these 50 matrices, we compare the generic cospark given by Algorithm 1 versus that given by the brute force method. In every case, the solutions of both algorithms match. These results support the fact that our algorithm not only computes the generic cospark in polynomial time, but also obtains the actual cospark w.p. 1 if the non-zero entries are drawn from independent continuous probability distributions.

VIII. CONCLUSION

We have shown that, although computing the cospark of a matrix is an NP-hard problem, computing the generic cospark can be done in polynomial time. We have shown that, given any sparsity pattern of a matrix, the cospark is always upper bounded by the generic cospark, and is equal to the generic cospark with probability one if the nonzero entries of the matrix are drawn from independent continuous probability distributions. An efficient algorithm is developed that computes the generic cospark in polynomial time.

REFERENCES

[1] E. J. Candes and T. Tao, "Decoding by linear programming," IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
[2] D. L. Donoho and M. Elad, "Optimally sparse representation in general (nonorthogonal) dictionaries via l-1 minimization," Proceedings of the National Academy of Sciences, vol. 100, no. 5, pp. 2197–2202, 2003.
[3] A. M. Tillmann and M. E. Pfetsch, "The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing," IEEE Transactions on Information Theory, vol. 60, no. 2, pp. 1248–1259, 2014.
[4] R. Gribonval and M. Nielsen, "Sparse representations in unions of bases," IEEE Transactions on Information Theory, vol. 49, no. 12, pp. 3320–3325, 2003.
[5] E. J. Candes, J. K. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Communications on Pure and Applied Mathematics, vol. 59, no. 8, pp. 1207–1223, 2006.
[6] Y. Zhao, A. Goldsmith, and H. V. Poor, "Minimum sparsity of unobservable power network attacks," IEEE Transactions on Automatic Control, to appear.
[7] K. Reinschke, Multivariable Control - A Graph-Theoretic Approach. New York: Springer-Verlag, Lecture Notes in Control and Information Sciences, vol. 108, 1988.
[8] R. Diestel, D. Král, and P. Seymour, "Graph theory," Oberwolfach Reports, vol. 13, no. 1, pp. 51–86, 2016.
[9] J. E. Hopcroft and R. M. Karp, "An n^{5/2} algorithm for maximum matchings in bipartite graphs," SIAM Journal on Computing, vol. 2, no. 4, pp. 225–231, 1973.
[10] S. T. McCormick, "A combinatorial approach to some sparse matrix problems," DTIC Document, pp. 39–41, 1983.
