ROBUST GROUP LASSO OVER DECENTRALIZED NETWORKS

Manxi Wang*, Yongcheng Li*, Xiaohan Wei†, Qing Ling‡

* State Key Laboratory of Complex Electromagnetic Environment Effects on Electronics and Information System, Luoyang, China
† Department of Electrical Engineering, University of Southern California, Los Angeles, USA
‡ Department of Automation, University of Science and Technology of China, Hefei, China

ABSTRACT

This paper considers the recovery of group sparse signals over a multi-agent network, where the measurements are subject to sparse errors. We first investigate the robust group LASSO model and its centralized algorithm based on the alternating direction method of multipliers (ADMM), which requires a central fusion center to compute a global row-support detector. To implement it in a decentralized network environment, we then adopt dynamic average consensus strategies that enable dynamic tracking of the global row-support detector. Numerical experiments demonstrate the effectiveness of the proposed algorithms.

Index Terms— Decentralized optimization, dynamic average consensus, group sparsity, alternating direction method of multipliers (ADMM)

1. INTRODUCTION

Suppose that L distributed agents constitute a bidirectionally connected network and sense correlated signals under sparse measurement errors. The measurement equation of agent l is

    m_l = A_{(l)} y_l + s_l,    (1)

where m_l ∈ R^M is the measurement vector, A_{(l)} is the sensing matrix, y_l ∈ R^N is the unknown signal vector, and s_l ∈ R^M is the unknown sparse error vector. We are particularly interested in a certain correlation pattern of the signal vectors, where the signal matrix Y = [y_1, ..., y_L] ∈ R^{N×L} is group sparse, meaning that Y is sparse and its nonzero entries appear in a small number of common rows. Defining M ∈ R^{M×L} as the measurement matrix and S ∈ R^{M×L} as the sparse error matrix, the matrix form of the agents' measurement equations is

    M = [A_{(1)} y_1, ..., A_{(L)} y_L] + S.    (2)

Given M and the A_{(l)}'s, the goal of the network is to recover Y and S from the linear measurement equation (2).

1.1. Robust Group LASSO Model

The recovery of group sparse (also known as block sparse [1] or jointly sparse [2]) signals finds a variety of applications such as direction-of-arrival estimation [3, 4], collaborative spectrum sensing [5-7] and motion detection [8]. A well-known model to recover group sparse signals is group LASSO (least absolute shrinkage and selection operator) [9], which solves

    min_Y ||Y||_{2,1} + λ ||M − [A_{(1)} y_1, ..., A_{(L)} y_L]||_F^2.    (3)

Here λ is a nonnegative trade-off parameter. A key assumption leading to the success of such a model is the sub-Gaussianity of the errors. However, in many applications, the measurements of the agents may be seriously contaminated or even missing due to uncertainties such as sensor failure or transmission errors. This kind of measurement error is often sparse [10]. Hence, a natural extension of (3) is to exploit the structures of both the signal matrix Y and the sparse error matrix S by solving

    min_{Y,S} ||Y||_{2,1} + λ ||S||_1,    (4)
    s.t. M = [A_{(1)} y_1, ..., A_{(L)} y_L] + S.

This model is termed robust group LASSO, and its performance guarantee is given in [11]. Under mild conditions, the robust group LASSO model is able to simultaneously recover the true values of Y and S with high probability.

1.2. Our Contributions

This paper develops efficient algorithms to solve the robust group LASSO model (4). Our contributions are as follows.

(i) We propose a centralized algorithm based on the alternating direction method of multipliers (ADMM), a powerful operator-splitting technique. One subproblem of the centralized algorithm is the traditional group LASSO model, which is approximately solved by a block coordinate descent (BCD) approach through successively estimating the row-support of the signal matrix Y.

(ii) We develop decentralized versions of the above algorithm that are suitable for autonomous computation over large-scale networks. Since estimating the row-support of the signal matrix Y requires collaborative information fusion of all the agents, we propose to achieve inexact information fusion through dynamic average consensus techniques, which only require information exchange among neighboring agents.
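To make the measurement model (1)-(2) and the objective of (4) concrete, the following NumPy sketch generates a synthetic problem instance. The dimensions mirror the experimental setup of Section 4; the variable names, the random seed, and the `objective` helper are our own illustrative choices, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L, M, N = 30, 30, 200   # agents, measurements per agent, signal dimension

# Group sparse signal matrix Y: a few common nonzero rows shared by all agents.
Y = np.zeros((N, L))
rows = rng.choice(N, size=10, replace=False)
Y[rows, :] = rng.uniform(-50, 50, size=(10, L))

# Sparse error matrix S with a small number of nonzero entries.
S = np.zeros((M, L))
idx = rng.choice(M * L, size=90, replace=False)
S.flat[idx] = rng.uniform(-50, 50, size=90)

# Per-agent sensing matrices A_(l) and measurements m_l = A_(l) y_l + s_l, eq. (1).
A = rng.standard_normal((L, M, N))
Mmat = np.stack([A[l] @ Y[:, l] for l in range(L)], axis=1) + S   # eq. (2)

def objective(Y_hat, S_hat, lam=1.0):
    """Robust group LASSO objective of (4): ||Y||_{2,1} + lam * ||S||_1."""
    return np.linalg.norm(Y_hat, axis=1).sum() + lam * np.abs(S_hat).sum()

print(objective(Y, S))
```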
1.3. Notations

Matrices are denoted by bold uppercase letters and vectors are denoted by bold lowercase letters. For a matrix D, d^i denotes its i-th row, d_j denotes its j-th column, and d_{ij} denotes its (i, j)-th element. The ℓ_{2,1}-norm of D is ||D||_{2,1} := Σ_i (Σ_j d_{ij}^2)^{1/2}, the ℓ_1-norm is ||D||_1 := Σ_i Σ_j |d_{ij}|, and the Frobenius norm is ||D||_F := (Σ_i Σ_j d_{ij}^2)^{1/2}.

The multi-agent network is described as a bidirectional graph (L, E). If two agents r, l ∈ L are neighbors, then they can communicate with each other within one hop, and (r, l) ∈ E is a bidirectional communication edge.

2. CENTRALIZED ROBUST GROUP LASSO

Optimally solving (4) is nontrivial since the objective function is a weighted summation of the two nonsmooth functions ||Y||_{2,1} and ||S||_1, where Y and S are entangled in the constraint. Therefore, we resort to the alternating direction method of multipliers (ADMM) to split the two entangled variables Y and S such that the resulting subproblems are easier to solve.

2.1. Using ADMM to Solve (4)

The augmented Lagrangian function of (4) is

    ||Y||_{2,1} + λ||S||_1 − ⟨Z, [A_{(1)}y_1, ..., A_{(L)}y_L] + S − M⟩ + (β/2) ||[A_{(1)}y_1, ..., A_{(L)}y_L] + S − M||_F^2,

where Z ∈ R^{M×L} is the Lagrange multiplier and β is a positive penalty parameter. The ADMM alternatingly minimizes the augmented Lagrangian function with respect to Y and S, and then updates the Lagrange multiplier Z [12]. At time t, the ADMM works as follows.

First, fixing S = S(t) and Z = Z(t), we minimize the augmented Lagrangian function with respect to Y to get Y(t+1). Simple manipulation shows that this is equivalent to

    Y(t+1) = argmin_Y ||Y||_{2,1} + (β/2) ||[A_{(1)}y_1, ..., A_{(L)}y_L] + S(t) − M − Z(t)/β||_F^2.    (5)

Note that (5) is a standard group LASSO problem that generally does not have a closed-form solution. We will develop an efficient algorithm to solve (5) later in this section.

Second, fixing Y = Y(t+1) and Z = Z(t), we minimize the augmented Lagrangian function with respect to S to get S(t+1). Again, combining the linear term with the quadratic term of S yields

    S(t+1) = argmin_S λ||S||_1 + (β/2) ||[A_{(1)}y_1(t+1), ..., A_{(L)}y_L(t+1)] + S − M − Z(t)/β||_F^2.    (6)

Denoting W(t+1) = M − [A_{(1)}y_1(t+1), ..., A_{(L)}y_L(t+1)] + Z(t)/β, (6) has a closed-form solution given by

    s_{ml}(t+1) = sgn(w_{ml}(t+1)) max(0, |w_{ml}(t+1)| − λ/β),    (7)

where sgn(·) is the sign function; s_{ml}(t+1) and w_{ml}(t+1) denote the (m, l)-th entries of S(t+1) and W(t+1), respectively. Note that the term |w_{ml}(t+1)| can be viewed as the support detector of the (m, l)-th element of S: if |w_{ml}(t+1)| is smaller than the threshold λ/β, then s_{ml}(t+1) is set to zero.

Finally, given Y = Y(t+1) and S = S(t+1), the Lagrange multiplier Z is updated according to

    Z(t+1) = Z(t) − β ([A_{(1)}y_1(t+1), ..., A_{(L)}y_L(t+1)] + S(t+1) − M).    (8)

Since the update of S in (7) and the update of Z in (8) are both simple, we now focus on the update of Y in (5), which is the bottleneck of the ADMM. Observe that in (5) the ℓ_{2,1}-norm term is separable with respect to the y_l's but nonsmooth, while the Frobenius norm term is smooth but nonseparable with respect to the y_l's. Therefore, in this paper we solve (5) with the block coordinate descent (BCD) algorithm, which has been shown to be an efficient tool for handling this special problem structure [13-15].
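Both closed-form updates above are soft-thresholding operations. The following sketch implements the entry-wise operator of (7) and its row-wise (group) analogue, which will reappear as the closed-form solution of the BCD subproblem (11) in Section 2.2. The function names and the small constant guarding division by zero are our own.

```python
import numpy as np

def soft_threshold(W, thr):
    """Entry-wise soft-thresholding, the closed form of the S-update (7):
    s = sgn(w) * max(0, |w| - thr), with thr = lambda / beta."""
    return np.sign(W) * np.maximum(0.0, np.abs(W) - thr)

def group_soft_threshold(U, thr):
    """Row-wise (group) soft-thresholding. Each row u^n is shrunk toward
    zero; rows whose l2-norm ||u^n||_2 (the row-support detector) falls
    below thr vanish entirely."""
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - thr / np.maximum(norms, 1e-12))
    return scale * U
```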
2.2. Using BCD to Solve (5)

To set up the iterative BCD algorithm that solves (5) at time t, we divide time t into P slots. At time t slot p (p = 0, 1, ..., P−1), we linearize the Frobenius norm term in (5) with respect to Y(t + p/P) and add an extra quadratic regularization term, which gives

    min_Y ||Y||_{2,1} + β ⟨V(t + p/P), Y⟩ + (β/(2τ)) ||Y − Y(t + p/P)||_F^2,    (9)

where τ is a positive proximal parameter and the l-th column of V(t + p/P) ∈ R^{N×L} is defined as

    v_l(t + p/P) = A_{(l)}^T (A_{(l)} y_l(t + p/P) + s_l(t) − m_l − z_l(t)/β).    (10)

Note that (9) is equivalent to

    min_Y ||Y||_{2,1} + (β/(2τ)) ||Y − Y(t + p/P) + τ V(t + p/P)||_F^2,    (11)

which has a closed-form solution given by the soft-thresholding operator [16]. Denote U(t + p/P) = Y(t + p/P) − τ V(t + p/P) ∈ R^{N×L}, whose n-th row is u^n(t + p/P) = y^n(t + p/P) − τ v^n(t + p/P). Also denote Y(t + (p+1)/P) ∈ R^{N×L} as the solution of (11). The n-th row of Y(t + (p+1)/P) is

    y^n(t + (p+1)/P) = (u^n(t + p/P) / ||u^n(t + p/P)||_2) max(0, ||u^n(t + p/P)||_2 − τ/β).

Again, note that the term ||u^n(t + p/P)||_2 can be viewed as the row-support detector of the n-th row of Y. If ||u^n(t + p/P)||_2 is smaller than the threshold τ/β, then y^n(t + (p+1)/P) is set to zero.

2.3. Implementation of Centralized Robust Group LASSO

The centralized ADMM for solving the robust group LASSO model (4) is summarized in Table 1. Each iteration of the ADMM includes an inner-loop BCD subroutine that updates Y through solving (5), the update of S that has the closed-form solution (7), and the update of Z in (8). The ADMM parameter β can be any positive value, though its choice may influence the convergence rate. The BCD parameter τ is set to be the minimum of the largest eigenvalues of A_{(l)}^T A_{(l)}, l = 1, 2, ..., L, which guarantees the convergence of the BCD subroutine [13-15]. As long as τ is properly chosen and P is large enough, the BCD subroutine solves the subproblem (5) with enough accuracy that the ADMM converges to the global minimum of the convex program (4).

The algorithm outlined in Table 1 is centralized, which means that a fusion center is necessary to gather information from all the agents and conduct the optimization. This centralized scheme is sensitive to failure of the fusion center, requires multi-hop communication within the network, and is hence unscalable with respect to the network size. In view of the need for decentralized optimization over large-scale networks, the next section discusses how to implement it in a decentralized manner.
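Putting (5)-(11) together, here is a compact NumPy sketch of one possible implementation of Algorithm 1. It is a sketch under stated assumptions, not the authors' code: the sensing matrices are stacked as an (L, M, N) array, and τ is set to the reciprocal of the largest eigenvalue among the A_{(l)}^T A_{(l)}'s, a conservative choice that makes (9)-(11) a standard proximal-gradient step; substitute the τ rule of Section 2.3 as appropriate.

```python
import numpy as np

def soft_threshold(W, thr):
    # Entry-wise shrinkage, eq. (7).
    return np.sign(W) * np.maximum(0.0, np.abs(W) - thr)

def group_soft_threshold(U, thr):
    # Row-wise shrinkage, the closed-form solution of (11).
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    return np.maximum(0.0, 1.0 - thr / np.maximum(norms, 1e-12)) * U

def centralized_robust_group_lasso(Mmat, A, lam=1.0, beta=1.0, P=50, T=500):
    """Sketch of Algorithm 1 (Table 1). A is an (L, M, N) stack of sensing
    matrices A_(l); Mmat is the M x L measurement matrix."""
    Lag, Mdim, N = A.shape
    Y = np.zeros((N, Lag))
    S = np.zeros((Mdim, Lag))
    Z = np.zeros((Mdim, Lag))
    # Proximal parameter (assumption: reciprocal of the largest eigenvalue
    # among the A_(l)^T A_(l); cf. the tau rule stated in Section 2.3).
    tau = 1.0 / max(np.linalg.eigvalsh(A[l].T @ A[l])[-1] for l in range(Lag))
    for t in range(T):
        # Y-update: inner BCD loop of P linearized steps, eqs. (9)-(11).
        for p in range(P):
            AY = np.stack([A[l] @ Y[:, l] for l in range(Lag)], axis=1)
            R = AY + S - Mmat - Z / beta                                  # residual in (5)
            V = np.stack([A[l].T @ R[:, l] for l in range(Lag)], axis=1)  # eq. (10)
            Y = group_soft_threshold(Y - tau * V, tau / beta)
        AY = np.stack([A[l] @ Y[:, l] for l in range(Lag)], axis=1)
        S = soft_threshold(Mmat - AY + Z / beta, lam / beta)              # eq. (7)
        Z = Z - beta * (AY + S - Mmat)                                    # eq. (8)
    return Y, S
```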
Table 1. Algorithm 1: Centralized Robust Group LASSO

    Given: measurement M; sensing matrices A_{(l)}; parameters β and τ
    Initialize: signal Y(0) = 0; error S(0) = 0; multiplier Z(0) = 0
    while not converged (t = 0, 1, ...) for all l do
        for p = 0, 1, ..., P−1
            v_l(t + p/P) = A_{(l)}^T (A_{(l)} y_l(t + p/P) + s_l(t) − m_l − z_l(t)/β)
            u_l(t + p/P) = y_l(t + p/P) − τ v_l(t + p/P)
            y_{nl}(t + (p+1)/P) = (u_{nl}(t + p/P) / ||u^n(t + p/P)||_2) max(0, ||u^n(t + p/P)||_2 − τ/β), ∀n
        end for
        w_l(t+1) = m_l − A_{(l)} y_l(t+1) + z_l(t)/β
        s_{ml}(t+1) = sgn(w_{ml}(t+1)) max(0, |w_{ml}(t+1)| − λ/β), ∀m
        z_l(t+1) = z_l(t) − β (A_{(l)} y_l(t+1) + s_l(t+1) − m_l)
    end while

Table 2. Algorithm 2: Decentralized Robust Group LASSO

    Given: measurement M; sensing matrices A_{(l)}; parameters β and τ
    Initialize: signal Y(0) = 0; error S(0) = 0; multiplier Z(0) = 0
    while not converged (t = 0, 1, ...) agent l do
        for p = 0, 1, ..., P−1
            v_l(t + p/P) = A_{(l)}^T (A_{(l)} y_l(t + p/P) + s_l(t) − m_l − z_l(t)/β)
            u_l(t + p/P) = y_l(t + p/P) − τ v_l(t + p/P)
            h_{nl}(t + p/P) is updated through an average consensus strategy, ∀n
            y_{nl}(t + (p+1)/P) = (u_{nl}(t + p/P) / (L h_{nl}(t + p/P))^{1/2}) max(0, (L h_{nl}(t + p/P))^{1/2} − τ/β), ∀n
        end for
        w_l(t+1) = m_l − A_{(l)} y_l(t+1) + z_l(t)/β
        s_{ml}(t+1) = sgn(w_{ml}(t+1)) max(0, |w_{ml}(t+1)| − λ/β), ∀m
        z_l(t+1) = z_l(t) − β (A_{(l)} y_l(t+1) + s_l(t+1) − m_l)
    end while

3. DECENTRALIZED ROBUST GROUP LASSO

Observe that Algorithm 1 is naturally distributed, except for the update of y_{nl}(t + (p+1)/P), which involves calculating the global row-support detector ||u^n(t + p/P)||_2 across agents. Hence, given the vector u^n(t + p/P), the key to the decentralized implementation of Algorithm 1 is how to calculate its ℓ_2-norm ||u^n(t + p/P)||_2 in a decentralized manner. Recall that

    ||u^n(t + p/P)||_2 = L^{1/2} ((1/L) Σ_{l=1}^L u_{nl}^2(t + p/P))^{1/2} = (L h_n(t + p/P))^{1/2},

where

    h_n(t + p/P) := (1/L) Σ_{l=1}^L u_{nl}^2(t + p/P)

is the average of the squares. Therefore, the problem becomes: supposing each agent l holds the value of u_{nl}^2(t + p/P), how can we design efficient strategies to (exactly or inexactly) calculate their mean h_n(t + p/P) in a decentralized manner? Below we consider three approaches to obtain the average.

3.1. Static Average Consensus

The first strategy comes from the classic average consensus algorithm [17]. Calculate

    h^n(t + p/P) = (u^n(t + p/P))^2 Σ^K,

where h^n(t + p/P) ∈ R^{1×L} is a row vector containing all the h_{nl}(t + p/P), (u^n(t + p/P))^2 denotes the element-wise square of u^n(t + p/P), K is a large iteration number, and Σ is the mixing matrix. The mixing matrix Σ is doubly stochastic, and its (r, l)-th element σ_{rl} is nonzero only if (r, l) ∈ E or r = l. A typical choice of Σ follows the Metropolis-Hastings rule [17]:

    σ_{rl} = min{1/d_r, 1/d_l},                           if (r, l) ∈ E;
    σ_{rl} = Σ_{r': (r',l) ∈ E} max{0, 1/d_l − 1/d_{r'}}, if r = l;    (12)
    σ_{rl} = 0,                                           else.

Here d_l is the degree of agent l.

Obviously, the graph-sparse structure of the mixing matrix Σ enables decentralized computation of h^n(t + p/P). According to the theory of average consensus [17], if K goes to infinity, then all the elements of h^n(t + p/P) converge to the desired average (1/L) Σ_{l=1}^L u_{nl}^2(t + p/P), in which case the decentralized implementation is equivalent to its centralized counterpart. However, increasing K means introducing more rounds of communication and computation, implying that setting K large is inefficient. On the other hand, setting K small (say, K = 1) often leads to unsatisfactory results.
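As an illustration, the sketch below builds the Metropolis-Hastings mixing matrix of (12) from a 0/1 adjacency matrix and runs K rounds of static consensus. The function names and the dense-matrix representation are our own simplifications (a real deployment would store only each agent's neighbor weights), and the graph is assumed to have no isolated agents.

```python
import numpy as np

def metropolis_hastings_weights(adj):
    """Mixing matrix Sigma of (12) for a symmetric 0/1 adjacency matrix:
    sigma_rl = min(1/d_r, 1/d_l) on edges, zero off the graph, and the
    diagonal absorbs the remaining mass so Sigma is doubly stochastic."""
    L = adj.shape[0]
    deg = adj.sum(axis=1)          # agent degrees d_l (assumed nonzero)
    Sigma = np.zeros((L, L))
    for r in range(L):
        for l in range(L):
            if r != l and adj[r, l]:
                Sigma[r, l] = min(1.0 / deg[r], 1.0 / deg[l])
    Sigma[np.diag_indices(L)] = 1.0 - Sigma.sum(axis=1)
    return Sigma

def static_average_consensus(x, Sigma, K):
    """K rounds of neighborhood mixing x <- Sigma x; every entry of x
    approaches the network-wide average of the initial values."""
    for _ in range(K):
        x = Sigma @ x
    return x
```

Stacking u_{nl}^2 for a fixed row n into a length-L vector x, the quantity (L * static_average_consensus(x, Sigma, K)[l])**0.5 is agent l's estimate of ||u^n||_2.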
3.2. Dynamic Average Consensus

The above-mentioned dilemma motivates us to introduce a new scheme to dynamically calculate the row-support detector. To simplify the algorithmic protocol, we allow neighboring agents to exchange only one round of information. Under this setting, every agent holds a dynamic value u_{nl}^2(t + p/P), while all the agents manage to track the dynamic average with one round of communication. Apparently, if the values of u_{nl}^2(t + p/P) change irregularly, the agents have no chance to reach their exact dynamic average. Nevertheless, observe that if the values of u_{nl}^2(t + p/P) converge to their steady states, convergence of the dynamic average will be possible. We consider two dynamic average consensus strategies proposed in [18].

First-order dynamic average consensus. Calculate

    h_{nl}(t + p/P) = Σ_{r≠l} σ_{rl} (h_{nr}(t + (p−1)/P) − h_{nl}(t + (p−1)/P)) + h_{nl}(t + (p−1)/P) + u_{nl}^2(t + p/P) − u_{nl}^2(t + (p−1)/P).

Second-order dynamic average consensus. Calculate

    h̃_{nl}(t + p/P) = u_{nl}^2(t + p/P) − 2 u_{nl}^2(t + (p−1)/P) + u_{nl}^2(t + (p−2)/P) + h̃_{nl}(t + (p−1)/P) + Σ_{r≠l} σ_{rl} (h̃_{nr}(t + (p−1)/P) − h̃_{nl}(t + (p−1)/P)),

    h_{nl}(t + p/P) = h̃_{nl}(t + p/P) + h_{nl}(t + (p−1)/P) + Σ_{r≠l} σ_{rl} (h_{nr}(t + (p−1)/P) − h_{nl}(t + (p−1)/P)).

Observe that in each round of first-order dynamic average consensus, agent l requires h_{nr} from all of its neighbors r, whereas in each round of second-order dynamic average consensus, agent l requires both h_{nr} and h̃_{nr} from all of its neighbors r. Therefore, the second-order strategy doubles the communication cost per time slot compared to its first-order counterpart.

In addition, to avoid possible computational instability, we also set safeguards on the value of h_{nl}(t + p/P): if it goes beyond the region [h_min, h_max], its value is set to the nearest boundary.

3.3. Implementation of Decentralized Robust Group LASSO

The decentralized robust group LASSO algorithm is outlined in Table 2. It is very close to the centralized algorithm in Table 1, except that the row-support detector is successively approximated through the static or dynamic average consensus strategies.

If the static average consensus strategy is adopted, then at time t slot p, the network needs K rounds of information exchange; the number of rounds reduces to one in the two dynamic average consensus strategies. When K is set large enough in the static average consensus strategy, the average consensus is exact; therefore, the resulting decentralized algorithm enjoys the same convergence guarantee as the centralized one, at the cost of unaffordable communication. Embedding the two dynamic average consensus strategies saves remarkable communication cost, but makes convergence analysis a challenging task. We leave this analysis as future work.
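A minimal sketch of one round of each strategy follows, again with a dense doubly stochastic mixing matrix Sigma and our own function names. Since Σ has unit row sums, the neighbor sum Σ_{r≠l} σ_{rl}(h_r − h_l) collapses to the l-th entry of Σh − h, which the code exploits; np.clip implements the [h_min, h_max] safeguard.

```python
import numpy as np

def first_order_round(h, u2_now, u2_prev, Sigma, h_min=1.0, h_max=np.inf):
    """One slot of first-order dynamic average consensus: mix the current
    estimates with the neighbors, then add the local innovation."""
    h_new = h + (Sigma @ h - h) + (u2_now - u2_prev)
    return np.clip(h_new, h_min, h_max)   # safeguard against instability

def second_order_round(h, h_tilde, u2_now, u2_prev, u2_prev2, Sigma,
                       h_min=1.0, h_max=np.inf):
    """One slot of the second-order strategy; agents must exchange both h
    and the auxiliary h_tilde, doubling the per-slot communication cost."""
    h_tilde_new = (u2_now - 2 * u2_prev + u2_prev2) \
                  + h_tilde + (Sigma @ h_tilde - h_tilde)
    h_new = h_tilde_new + h + (Sigma @ h - h)
    return np.clip(h_new, h_min, h_max), h_tilde_new
```

Each agent l then plugs (L h_{nl})^{1/2} into the thresholding step of Table 2 in place of the exact ||u^n||_2.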
4. NUMERICAL EXPERIMENTS

In the numerical experiments, we consider a network of L = 30 agents. The dimension of every signal vector is N = 200, while the dimension of every measurement vector is M = 30. The group sparse signal matrix Y ∈ R^{200×30} has 10 nonzero rows (the row sparsity ratio is 5%), whose positions are uniformly randomly chosen. The amplitudes of the nonzero elements follow the i.i.d. uniform distribution on [−50, 50]. Elements of every sensing matrix A_{(l)} ∈ R^{30×200} follow the i.i.d. standard normal distribution. The sparse error matrix S ∈ R^{30×30} has 90 nonzero elements (the sparsity ratio is 10%), whose positions are uniformly randomly chosen and whose amplitudes follow the i.i.d. uniform distribution on [−50, 50].

In the robust group LASSO model, the weight parameter is λ = 1. The ADMM parameter is β = 1. The BCD parameter τ is set to be the minimum of the largest eigenvalues of A_{(l)}^T A_{(l)}, l = 1, 2, ..., L. Every iteration of the ADMM algorithm is divided into P = 50 slots so as to run the BCD subroutine. For the static average consensus strategy, we let K = 50, meaning that each slot requires 50 rounds of communication. For the dynamic average consensus strategies, we let the safeguards be h_min = 1 and h_max = ∞. The performance metric is the relative error, defined as the Frobenius distance between the true [Y^T S^T] solving (4) and the one estimated by the ADMM, normalized by the Frobenius norm of [Y^T S^T].

We first compare the centralized algorithm and the three decentralized ones, as depicted in Fig. 1. The connectivity ratio of the network (the percentage of randomly connected edges out of all possible ones) is 50%. The curve of the centralized algorithm coincides with that using static average consensus. Recall that static average consensus incurs 50 rounds of communication at every time slot, and is hence expensive. In contrast, the dynamic average consensus strategies demonstrate satisfactory convergence properties, though they yield slightly degraded estimates. In particular, the second-order dynamic average consensus is close to the centralized one in terms of the relative error.

[Fig. 1. Comparison between the centralized algorithm and the three decentralized ones (relative error versus number of iterations). The curve of the centralized algorithm coincides with that using static average consensus.]

In the second set of numerical experiments, we vary the connectivity ratio to observe its impact on the decentralized algorithms, as shown in Fig. 2. When the connectivity ratio decreases, the performance of the static average consensus degrades significantly. The reason is that a lower connectivity ratio reduces the speed of network information fusion, and hence makes the static average consensus less accurate under a given K. The two dynamic average consensus strategies, on the other hand, are not very sensitive to the variation of the connectivity ratio.

[Fig. 2. Impact of connectivity ratio (0.1, 0.2, 0.3) on the convergence of the decentralized algorithms: static average consensus (Left), second-order dynamic average consensus (Middle), and first-order dynamic average consensus (Right); relative error versus number of iterations.]

The numerical experiments validate the effectiveness of using dynamic average consensus to decentralize computation over networks. Though its theoretical properties in tracking problems have been investigated [18], its interplay with the overall optimization scheme is still unclear and shall be our future research focus.
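For reproduction purposes, here is a small sketch of two experiment utilities the text describes: drawing a random topology with a prescribed connectivity ratio, and computing the relative error metric. The paper does not specify its graph generator, so this uniform edge-sampling routine (which does not enforce connectedness) is an assumption.

```python
import numpy as np

def random_graph(L, ratio, rng):
    """Adjacency matrix with `ratio` of the L(L-1)/2 possible edges present."""
    adj = np.zeros((L, L), dtype=int)
    pairs = [(r, l) for r in range(L) for l in range(r + 1, L)]
    chosen = rng.choice(len(pairs), size=int(ratio * len(pairs)), replace=False)
    for k in chosen:
        r, l = pairs[k]
        adj[r, l] = adj[l, r] = 1
    return adj

def relative_error(Y_hat, S_hat, Y_star, S_star):
    """Frobenius distance between the stacked estimate [Y^T S^T] and the
    solution of (4), normalized by the Frobenius norm of the latter."""
    num = np.sqrt(np.linalg.norm(Y_hat - Y_star)**2
                  + np.linalg.norm(S_hat - S_star)**2)
    den = np.sqrt(np.linalg.norm(Y_star)**2 + np.linalg.norm(S_star)**2)
    return num / den
```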
Acknowledgement. Qing Ling is supported in part by NSF China grant 61573331 and NSF Anhui grant 1608085QF130.

5. REFERENCES

[1] Y. Eldar, P. Kuppinger, and H. Bölcskei, "Block-sparse signals: Uncertainty relations and efficient recovery," IEEE Transactions on Signal Processing, vol. 58, no. 6, pp. 3042-3054, 2010.
[2] M. E. Davies and Y. C. Eldar, "Rank awareness in joint sparse recovery," IEEE Transactions on Information Theory, vol. 58, no. 2, pp. 1135-1146, 2012.
[3] D. Malioutov, M. Çetin, and A. S. Willsky, "A sparse signal reconstruction perspective for source localization with sensor arrays," IEEE Transactions on Signal Processing, vol. 53, no. 8, pp. 3010-3022, 2005.
[4] X. Wei, Y. Yuan, and Q. Ling, "DOA estimation using a greedy block coordinate descent algorithm," IEEE Transactions on Signal Processing, vol. 60, no. 12, pp. 6382-6394, 2012.
[5] F. Zeng, C. Li, and Z. Tian, "Distributed compressive spectrum sensing in cooperative multihop cognitive networks," IEEE Journal of Selected Topics in Signal Processing, vol. 5, no. 2, pp. 37-48, 2011.
[6] J. Meng, W. Yin, H. Li, E. Hossain, and Z. Han, "Collaborative spectrum sensing from sparse observations in cognitive radio networks," IEEE Journal on Selected Areas in Communications, vol. 29, no. 2, pp. 327-337, 2011.
[7] J. A. Bazerque, G. Mateos, and G. B. Giannakis, "Group-lasso on splines for spectrum cartography," IEEE Transactions on Signal Processing, vol. 59, no. 10, pp. 4648-4663, 2011.
[8] Z. Gao, L. Cheong, and Y. Wang, "Block-sparse RPCA for salient motion detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 10, pp. 1975-1987, 2014.
[9] M. Yuan and Y. Lin, "Model selection and estimation in regression with grouped variables," Journal of the Royal Statistical Society, Series B, vol. 68, no. 1, pp. 49-67, 2006.
[10] E. Dall'Anese, J. A. Bazerque, and G. B. Giannakis, "Group sparse lasso for cognitive network sensing robust to model uncertainties and outliers," Physical Communication, vol. 5, no. 2, pp. 161-172, 2012.
[11] X. Wei, Q. Ling, and H. Zhu, "Recoverability of group sparse signals from corrupted measurements via robust group lasso," preprint at http://arxiv.org/abs/1509.08490v1, 2015.
[12] D. Bertsekas, Nonlinear Programming, Second Edition, Athena Scientific, 1999.
[13] P. Tseng and S. Yun, "A coordinate gradient descent method for nonsmooth separable minimization," Mathematical Programming, vol. 117, no. 1-2, pp. 387-423, 2009.
[14] J. Yang and Y. Zhang, "Alternating direction algorithms for ℓ1 problems in compressive sensing," SIAM Journal on Scientific Computing, vol. 33, no. 1, pp. 250-278, 2011.
[15] M. Razaviyayn, M. Hong, and Z. Luo, "A unified convergence analysis of block successive minimization methods for nonsmooth optimization," SIAM Journal on Optimization, vol. 23, no. 2, pp. 1126-1153, 2013.
[16] A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183-202, 2009.
[17] S. Boyd, P. Diaconis, and L. Xiao, "Fastest mixing Markov chain on a graph," SIAM Review, vol. 46, no. 4, pp. 667-689, 2004.
[18] M. Zhu and S. Martinez, "Discrete-time dynamic average consensus," Automatica, vol. 46, no. 2, pp. 322-329, 2010.
