ITERATIVE BLOCK TENSOR SINGULAR VALUE THRESHOLDING FOR EXTRACTION OF LOW RANK COMPONENT OF IMAGE DATA

Longxi Chen, Yipeng Liu, Ce Zhu

School of Electronic Engineering / Center for Robotics / Center for Information in Medicine,
University of Electronic Science and Technology of China (UESTC), Chengdu, 611731, China
emails: [email protected], {yipengliu, eczhu}@uestc.edu.cn

ABSTRACT

Tensor principal component analysis (TPCA) is a multi-linear extension of principal component analysis which converts a set of correlated measurements into several principal components. In this paper, we propose a new robust TPCA method to extract the principal components of multi-way data based on the tensor singular value decomposition. The tensor is split into a number of blocks of the same size. The low rank component of each block tensor is extracted using an iterative tensor singular value thresholding method. The principal components of the multi-way data are the concatenation of the low rank components of all the block tensors. We give block tensor incoherence conditions to guarantee the successful decomposition. This factorization has optimality properties similar to those of the low rank matrix approximation derived from the singular value decomposition. Experimentally, we demonstrate its effectiveness in two applications: motion separation for surveillance videos and illumination normalization for face images.

Index Terms— tensor principal component analysis, tensor singular value decomposition, low rank tensor approximation, block tensor

1. INTRODUCTION

High-dimensional data, also referred to as tensors, arise naturally in a number of scenarios, including image and video processing and data mining [1]. However, most current processing techniques are developed for two-dimensional data [2]. Principal component analysis (PCA) is one of the most widely used tools in two-dimensional data analysis [3].

Robust PCA (RPCA), as an extension of PCA, is an effective method for matrix decomposition problems [4]. Suppose we have a matrix $X \in \mathbb{R}^{n_1 \times n_2}$ which can be decomposed as $X = L_0 + S_0$, where $L_0$ is the low rank component of the matrix and $S_0$ is the sparse component. The RPCA method has been applied to image alignment [5], surveillance video processing [6], and illumination normalization for face images [7]. In most applications, the RPCA method has to flatten or vectorize the tensor data so as to solve the problem in matrix form. It does not exploit the structural features of the data effectively, since information is lost in the matricization operation.

Tensor robust principal component analysis (TRPCA) has been studied in [8, 9] based on the tensor singular value decomposition (t-SVD). The advantage of t-SVD over existing decompositions such as the canonical polyadic decomposition (CPD) [10] and the Tucker decomposition [11] is that the resulting analysis is very close to that of matrix analysis [12]. Similarly, suppose we are given a tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ that can be decomposed into a low rank component and a sparse component. We can write this as

$$\mathcal{X} = \mathcal{L}_0 + \mathcal{S}_0, \qquad (1)$$

where $\mathcal{L}_0$ denotes the low rank component and $\mathcal{S}_0$ the sparse component of the tensor. Fig. 1 illustrates TRPCA. In [8], problem (1) is transformed into the convex optimization model

$$\min_{\mathcal{L}_0, \mathcal{S}_0} \|\mathcal{L}_0\|_* + \lambda \|\mathcal{S}_0\|_1, \quad \text{s.t.} \quad \mathcal{X} = \mathcal{L}_0 + \mathcal{S}_0, \qquad (2)$$

where $\|\mathcal{L}_0\|_*$ is the tensor nuclear norm (see Section 2 for the definition) and $\|\mathcal{S}_0\|_1$ denotes the $\ell_1$-norm. In [9], problem (1) is transformed into another convex optimization model:

$$\min_{\mathcal{L}_0, \mathcal{S}_0} \|\mathcal{L}_0\|_* + \lambda \|\mathcal{S}_0\|_{1,1,2}, \quad \text{s.t.} \quad \mathcal{X} = \mathcal{L}_0 + \mathcal{S}_0, \qquad (3)$$

where $\|\mathcal{S}_0\|_{1,1,2}$ is defined as $\sum_{i,j} \|\mathcal{S}_0(i,j,:)\|_F$. Both methods solve the tensor decomposition problem based on the t-SVD.

Fig. 1: Illustration of TRPCA.

Fig. 2: Illustration of the concatenation of block tensors.
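For concreteness, the two sparsity measures used in models (2) and (3) are easy to state in code. The following minimal NumPy sketch (the function names are ours, purely for illustration) assumes $\mathcal{S}_0$ is stored as a real 3-way array:

```python
import numpy as np

def l1_norm(S):
    """Entrywise l1-norm used in model (2)."""
    return np.abs(S).sum()

def l112_norm(S):
    """l_{1,1,2}-norm used in model (3): the sum over (i, j) of the
    Frobenius norm of the tube S(i, j, :)."""
    return np.sqrt((S ** 2).sum(axis=2)).sum()
```

The $\ell_{1,1,2}$-norm therefore promotes sparsity at the level of whole tubes rather than individual entries.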
Fig. 3: Illustration of the block tensor decomposition model.

The low rank and sparse matrix decomposition has been improved in [13]. The main idea is to incorporate multi-scale structures into low rank methods; the additional multi-scale structure gives a more accurate representation than conventional low rank methods. Inspired by this work, we notice that the sparse component of the matrix is block-distributed in some applications, e.g., shadows and motion in videos. For such images we find it more effective to extract the low rank components at a smaller scale of the image data. Here we extract low rank components from block tensor data that is stacked from small-scale patches of the image data. When we decompose the tensor data into many small blocks, it is easy to extract the principal component in the blocks that contain few sparse entries. We therefore model our tensor data as a concatenation of block tensors instead of solving the RPCA problem on the whole tensor at once. Fig. 2 illustrates the concatenation of block tensors.

Based on the above motivation, we decompose the whole tensor into a concatenation of blocks of the same size, and then extract the low rank component of each block by minimizing the tubal rank of each block tensor. Fig. 3 illustrates our method. We obtain the low rank component of the whole tensor by concatenating the low rank components of all the block tensors. The proposed method can be applied to conventional image processing problems, including motion separation for surveillance videos (Section 4.1) and illumination normalization for face images (Section 4.2). The results of numerical experiments demonstrate that our method outperforms the existing methods in terms of accuracy.

2. NOTATIONS AND PRELIMINARIES

In this section, we briefly describe the notations and definitions used throughout the paper [14, 15, 16, 17].

A third-order tensor is represented as $\mathcal{A}$, and its $(i,j,k)$-th entry is represented as $\mathcal{A}_{i,j,k}$. $\mathcal{A}(i,j,:)$ denotes the $(i,j)$-th tubal scalar. $\mathcal{A}(i,:,:)$, $\mathcal{A}(:,j,:)$ and $\mathcal{A}(:,:,k)$ are the $i$-th horizontal, $j$-th lateral and $k$-th frontal slices, respectively. $\|\mathcal{A}\|_F = \sqrt{\sum_{i,j,k} |a_{ijk}|^2}$ and $\|\mathcal{A}\|_\infty = \max_{i,j,k} |a_{ijk}|$ are two kinds of tensor norms.

We can view a three-dimensional tensor of size $n_1 \times n_2 \times n_3$ as an $n_1 \times n_2$ matrix of tubes. $\hat{\mathcal{A}}$ is the tensor obtained by taking the fast Fourier transform (FFT) along the third mode of $\mathcal{A}$. For a compact notation we write $\hat{\mathcal{A}} = \mathrm{fft}(\mathcal{A}, [\,], 3)$ for the FFT along the third dimension. In the same way, we can recover $\mathcal{A}$ from $\hat{\mathcal{A}}$ using the inverse FFT (IFFT).

Definition 2.1 (t-product) [12] The t-product $\mathcal{E} = \mathcal{A} * \mathcal{B}$ of $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and $\mathcal{B} \in \mathbb{R}^{n_2 \times n_4 \times n_3}$ is an $n_1 \times n_4 \times n_3$ tensor. The $(i,j)$-th tube of $\mathcal{E}$ is given by

$$\mathcal{E}(i,j,:) = \sum_{k=1}^{n_2} \mathcal{A}(i,k,:) \bullet \mathcal{B}(k,j,:), \qquad (4)$$

where $\bullet$ denotes the circular convolution between two tubes of the same size.

Definition 2.2 (conjugate transpose) [14] The conjugate transpose of a tensor $\mathcal{A}$ of size $n_1 \times n_2 \times n_3$ is the $n_2 \times n_1 \times n_3$ tensor $\mathcal{A}^T$ obtained by conjugate transposing each of the frontal slices and then reversing the order of the transposed frontal slices from 2 to $n_3$.

Definition 2.3 (identity tensor) [14] The identity tensor $\mathcal{I} \in \mathbb{R}^{n \times n \times n_3}$ is the tensor whose first frontal slice is the $n \times n$ identity matrix and whose other frontal slices are all zero.

Definition 2.4 (orthogonal tensor) [14] A tensor $\mathcal{Q}$ is orthogonal if it satisfies

$$\mathcal{Q}^T * \mathcal{Q} = \mathcal{Q} * \mathcal{Q}^T = \mathcal{I}. \qquad (5)$$

Definition 2.5 (f-diagonal tensor) [14] A tensor is called f-diagonal if each of its frontal slices is a diagonal matrix.

Definition 2.6 (t-SVD) [14] For $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, the t-SVD of $\mathcal{A}$ is given by

$$\mathcal{A} = \mathcal{U} * \mathcal{S} * \mathcal{V}^T, \qquad (6)$$

where $\mathcal{U}$ and $\mathcal{V}$ are orthogonal tensors of size $n_1 \times n_1 \times n_3$ and $n_2 \times n_2 \times n_3$ respectively, and $\mathcal{S}$ is an f-diagonal tensor of size $n_1 \times n_2 \times n_3$.
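Because circular convolution becomes elementwise multiplication after the FFT, the t-product of Definition 2.1 reduces to ordinary matrix products between corresponding frontal slices in the Fourier domain. A minimal NumPy sketch (the function name `t_product` is ours; real-valued inputs of compatible sizes are assumed):

```python
import numpy as np

def t_product(A, B):
    """t-product E = A * B of A (n1 x n2 x n3) and B (n2 x n4 x n3),
    computed slice by slice in the Fourier domain (Definition 2.1)."""
    n3 = A.shape[2]
    A_hat = np.fft.fft(A, axis=2)
    B_hat = np.fft.fft(B, axis=2)
    E_hat = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):
        E_hat[:, :, k] = A_hat[:, :, k] @ B_hat[:, :, k]
    # For real inputs the inverse FFT is real up to round-off.
    return np.real(np.fft.ifft(E_hat, axis=2))
```

Algorithm 1 below exploits the same slice-wise Fourier-domain structure to compute the t-SVD of Definition 2.6.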
We can obtain the t-SVD by computing matrix singular value decompositions (SVD) of the frontal slices in the Fourier domain, as shown in Algorithm 1. Fig. 4 illustrates the decomposition for the three-dimensional case.

Algorithm 1: t-SVD for 3-way data
Input: $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$
$\mathcal{D} \leftarrow \mathrm{fft}(\mathcal{A}, [\,], 3)$
for $i = 1$ to $n_3$ do
    $[U, S, V] = \mathrm{svd}(\mathcal{D}(:,:,i))$
    $\hat{\mathcal{U}}(:,:,i) = U$, $\hat{\mathcal{S}}(:,:,i) = S$, $\hat{\mathcal{V}}(:,:,i) = V$
end for
$\mathcal{U} \leftarrow \mathrm{ifft}(\hat{\mathcal{U}}, [\,], 3)$, $\mathcal{S} \leftarrow \mathrm{ifft}(\hat{\mathcal{S}}, [\,], 3)$, $\mathcal{V} \leftarrow \mathrm{ifft}(\hat{\mathcal{V}}, [\,], 3)$
Output: $\mathcal{U}$, $\mathcal{S}$, $\mathcal{V}$

Fig. 4: Illustration of the t-SVD of an $n_1 \times n_2 \times n_3$ tensor.

Definition 2.7 (tensor multi-rank and tubal rank) [9] The tensor multi-rank of $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is a vector $r \in \mathbb{R}^{n_3}$ whose $i$-th entry is the rank of the $i$-th frontal slice of $\hat{\mathcal{A}}$, i.e., $r_i = \mathrm{rank}(\hat{\mathcal{A}}(:,:,i))$. The tensor tubal rank, denoted by $\mathrm{rank}_t(\mathcal{A})$, is defined as the number of nonzero singular tubes of $\mathcal{S}$, where $\mathcal{S}$ is from $\mathcal{A} = \mathcal{U} * \mathcal{S} * \mathcal{V}^T$, i.e.,

$$\mathrm{rank}_t(\mathcal{A}) = \#\{i : \mathcal{S}(i,i,:) \neq 0\} = \max_i r_i. \qquad (7)$$

Definition 2.8 (tensor nuclear norm, TNN) [9] The tensor nuclear norm of $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, denoted by $\|\mathcal{A}\|_*$, is defined as the sum of the singular values of all the frontal slices of $\hat{\mathcal{A}}$. The TNN of $\mathcal{A}$ is equal to the nuclear norm of $\mathrm{blkdiag}(\hat{\mathcal{A}})$. Here $\mathrm{blkdiag}(\hat{\mathcal{A}})$ is the block diagonal matrix defined as

$$\mathrm{blkdiag}(\hat{\mathcal{A}}) = \begin{bmatrix} \hat{A}^{(1)} & & & \\ & \hat{A}^{(2)} & & \\ & & \ddots & \\ & & & \hat{A}^{(n_3)} \end{bmatrix}, \qquad (8)$$

where $\hat{A}^{(i)}$ is the $i$-th frontal slice of $\hat{\mathcal{A}}$, $i = 1, 2, \ldots, n_3$.

Definition 2.9 (standard tensor basis) [12] The column basis, denoted as $\mathring{e}_i$, is a tensor of size $n \times 1 \times n_3$ whose $(i,1,1)$-th entry equals 1 and whose remaining entries equal 0. Naturally, its transpose $\mathring{e}_i^T$ is called the row basis.

3. ITERATIVE BLOCK TENSOR SINGULAR VALUE THRESHOLDING

We decompose the whole tensor, which satisfies the incoherence conditions, into many small blocks of the same size, where the third dimension of each block equals the third dimension of the tensor. That is to say, given an input tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and a corresponding block size, we propose a multi-block tensor model that represents the tensor data $\mathcal{X}$ as the concatenation of block tensors. Each block tensor can be decomposed into two components, i.e., $\mathcal{X}_p = \mathcal{L}_p + \mathcal{S}_p$, $p = 1, \ldots, P$, where $\mathcal{L}_p$ and $\mathcal{S}_p$ denote the low rank component and the sparse component of the block tensor $\mathcal{X}_p$, respectively.

As observed for RPCA, the low rank and sparse decomposition is impossible in some cases [4]. Similarly, we are not able to identify the low rank component and the sparse component if the tensor is both low rank and sparse. Analogously to the tensor incoherence conditions of [8], we assume the block tensor data $\mathcal{L}_p$ in each block satisfies the following block tensor incoherence conditions to guarantee successful low rank component extraction.

Definition 3.1 (block tensor incoherence conditions) For $\mathcal{L}_p \in \mathbb{R}^{n \times n \times n_3}$, assume that $\mathrm{rank}_t(\mathcal{L}_p) = r$ and that it has the t-SVD $\mathcal{L}_p = \mathcal{U}_p * \mathcal{S}_p * \mathcal{V}_p^T$, where $\mathcal{U}_p \in \mathbb{R}^{n \times r \times n_3}$ and $\mathcal{V}_p \in \mathbb{R}^{n \times r \times n_3}$ satisfy $\mathcal{U}_p^T * \mathcal{U}_p = \mathcal{I}$ and $\mathcal{V}_p^T * \mathcal{V}_p = \mathcal{I}$, and $\mathcal{S}_p \in \mathbb{R}^{r \times r \times n_3}$ is an f-diagonal tensor. Then $\mathcal{L}_p$ satisfies the tensor incoherence conditions with parameter $\mu$ if

$$\max_{i=1,\ldots,n} \|\mathcal{U}_p^T * \mathring{e}_i\|_F \le \sqrt{\frac{\mu r}{n n_3}}, \qquad (9)$$

$$\max_{j=1,\ldots,n} \|\mathcal{V}_p^T * \mathring{e}_j\|_F \le \sqrt{\frac{\mu r}{n n_3}}, \qquad (10)$$

and

$$\|\mathcal{U}_p * \mathcal{V}_p^T\|_\infty \le \sqrt{\frac{\mu r}{n^2 n_3^2}}. \qquad (11)$$

The incoherence conditions guarantee that for small values of $\mu$ the singular vectors are not sparse. The tensor $\mathcal{L}_p \in \mathbb{R}^{n \times n \times n_3}$ can then be decomposed into a low rank component and a sparse component.

To extract the low rank component from every block, we minimize the tensor nuclear norm of $\mathcal{L}_p$, i.e., $\|\mathcal{L}_p\|_{\mathrm{TNN}} = \|\mathrm{blkdiag}(\hat{\mathcal{L}}_p)\|_*$. Here we can use the singular value thresholding operator in the Fourier domain to extract the low rank component of the block tensor [18, 19].
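Before stating the thresholding operator, the following NumPy sketch makes Algorithm 1 and Definitions 2.7–2.8 concrete. The function names `t_svd`, `tubal_rank` and `tnn` are ours; a real third-order array is assumed, and the conjugate-symmetry handling of the FFT slices is an implementation detail we add (not spelled out in Algorithm 1) so that the inverse FFT stays real:

```python
import numpy as np

def t_svd(A):
    """t-SVD A = U * S * V^T via matrix SVDs of the frontal slices in the
    Fourier domain (Algorithm 1). For real A the FFT slices come in
    conjugate pairs, so we compute SVDs for the first half of the slices
    and mirror them, which keeps the inverse FFT real."""
    n1, n2, n3 = A.shape
    r = min(n1, n2)
    A_hat = np.fft.fft(A, axis=2)
    U_hat = np.zeros((n1, n1, n3), dtype=complex)
    S_hat = np.zeros((n1, n2, n3), dtype=complex)
    V_hat = np.zeros((n2, n2, n3), dtype=complex)
    for k in range(n3 // 2 + 1):
        u, s, vh = np.linalg.svd(A_hat[:, :, k])
        U_hat[:, :, k] = u
        S_hat[np.arange(r), np.arange(r), k] = s
        V_hat[:, :, k] = vh.conj().T
        if 0 < k < n3 - k:                      # mirror the conjugate slice
            U_hat[:, :, n3 - k] = u.conj()
            S_hat[np.arange(r), np.arange(r), n3 - k] = s
            V_hat[:, :, n3 - k] = vh.T
    U = np.real(np.fft.ifft(U_hat, axis=2))
    S = np.real(np.fft.ifft(S_hat, axis=2))
    V = np.real(np.fft.ifft(V_hat, axis=2))
    return U, S, V

def tubal_rank(A, tol=1e-10):
    """Tubal rank (Definition 2.7): the largest rank among the frontal
    slices of fft(A, [], 3), i.e. the number of nonzero singular tubes."""
    A_hat = np.fft.fft(A, axis=2)
    return max(np.linalg.matrix_rank(A_hat[:, :, k], tol=tol)
               for k in range(A.shape[2]))

def tnn(A):
    """Tensor nuclear norm (Definition 2.8): sum of the singular values
    of all frontal slices of fft(A, [], 3)."""
    A_hat = np.fft.fft(A, axis=2)
    return sum(np.linalg.svd(A_hat[:, :, k], compute_uv=False).sum()
               for k in range(A.shape[2]))
```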
The proposed method is called iterative block tensor singular value thresholding (IBTSVT). The thresholding operator used here is the soft one, $D_\tau$:

$$D_\tau(\mathcal{L}_p) = \mathrm{sign}(\mathrm{blkdiag}(\hat{\mathcal{L}}_p)) \, (|\mathrm{blkdiag}(\hat{\mathcal{L}}_p)| - \tau)_+, \qquad (12)$$

where $(\cdot)_+$ keeps the positive part.

After we extract the low rank component $\mathcal{L} = \mathcal{L}_1 \boxplus \mathcal{L}_2 \boxplus \cdots \boxplus \mathcal{L}_P$, where $\boxplus$ denotes the concatenation operation, we can get the sparse component of the tensor by computing $\mathcal{S} = \mathcal{X} - \mathcal{L}$. See Algorithm 2 for details.

Algorithm 2: IBTSVT
Input: tensor data $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$
Initialize: given $\mu$, $\eta$, $\epsilon$, $\tau$, and block tensors $\mathcal{X}_p$ of size $n \times n \times n_3$, $p = 1, \ldots, P$
while not converged do
    1. Update $\eta := \eta \times \mu$,
    2. Update $\tau := \tau / \eta$,
    3. Compute $\mathcal{X}_p := D_\tau(\mathcal{X}_p)$, $p = 1, \ldots, P$.
end while: converged when $\|\mathcal{X}_{k+1} - \mathcal{X}_k\|_F / \|\mathcal{X}_k\|_F \le \epsilon$ at the $(k+1)$-th step.
Output: $\mathcal{L} = \mathcal{X}_1 \boxplus \mathcal{X}_2 \boxplus \cdots \boxplus \mathcal{X}_P$

In our method, the block size cannot be too large: a large block makes the sparse part absorb some of the low rank component. On the other hand, if the block size is too small, the computational time becomes long because the number of t-SVDs is large. Generally, we choose a block size of $2 \times 2 \times n_3$.

In our algorithm, we choose $\mu = 1.8$, $\eta = 1$ and $\epsilon = 10^{-2}$. The thresholding parameter $\tau$ is more difficult to determine, and we set it empirically. As discussed in [8], the thresholding parameter can be chosen as $\tau = 1/\sqrt{n n_3}$ for every block. This value is suited to denoising problems in images or videos, where the noise is uniformly distributed. For other applications $\tau$ should differ from $1/\sqrt{n n_3}$, because the sparse component of the data is not uniformly distributed, such as shadows in face images and motion in surveillance videos.
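Putting the pieces together, a compact sketch of Algorithm 2 might look as follows. This is only one possible reading, not the authors' implementation: equation (12), taken together with the singular value thresholding operator of [18], is realized here by shrinking the singular values of every frontal slice of each block in the Fourier domain. The function and parameter names are ours, $n_1$ and $n_2$ are assumed divisible by the block size, and `max_iter` is a safeguard not present in Algorithm 2.

```python
import numpy as np

def svt_fourier(X_p, tau):
    """Shrink the singular values of each frontal slice of the block in
    the Fourier domain (one reading of equation (12) with the operator
    of [18]); real input stays real up to round-off because the
    shrinkage commutes with complex conjugation."""
    X_hat = np.fft.fft(X_p, axis=2)
    L_hat = np.empty_like(X_hat)
    for k in range(X_p.shape[2]):
        u, s, vh = np.linalg.svd(X_hat[:, :, k], full_matrices=False)
        L_hat[:, :, k] = (u * np.maximum(s - tau, 0.0)) @ vh
    return np.real(np.fft.ifft(L_hat, axis=2))

def ibtsvt(X, block=2, mu=1.8, eta=1.0, tau=None, eps=1e-2, max_iter=100):
    """Sketch of Algorithm 2 (IBTSVT): split X into block x block x n3
    tensors, iteratively shrink every block, and concatenate the results."""
    n1, n2, n3 = X.shape
    if tau is None:
        tau = 1.0 / np.sqrt(block * n3)   # denoising rule discussed above
    L = X.copy()
    for _ in range(max_iter):
        eta *= mu                         # step 1: eta := eta * mu
        tau /= eta                        # step 2: tau := tau / eta
        L_new = np.empty_like(L)
        for i in range(0, n1, block):     # step 3: threshold every block
            for j in range(0, n2, block):
                L_new[i:i + block, j:j + block, :] = svt_fourier(
                    L[i:i + block, j:j + block, :], tau)
        if np.linalg.norm(L_new - L) <= eps * np.linalg.norm(L):
            L = L_new
            break
        L = L_new
    return L, X - L                       # low rank and sparse components
```

For the surveillance tensor of Section 4.1, a call such as `L, S = ibtsvt(X, block=2, tau=20/np.sqrt(2*20))` would correspond to the setting described there, up to our reading of the algorithm. The blocks in step 3 are independent, so they can be processed in parallel, which is the source of the speed advantage reported in Table 1.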
4. EXPERIMENTAL RESULTS

In this section, we present numerical results to show the performance of the method. We apply the IBTSVT method to two different real datasets that are conventionally handled with low rank models: motion separation for surveillance videos (Section 4.1) and illumination normalization for face images (Section 4.2).

4.1. Motion Separation for Surveillance Videos

In a surveillance video, the background only changes its brightness over time and can be represented as the low rank component, while the foreground objects form the sparse component of the video. It is often desired to extract the foreground objects from the video. We use the proposed IBTSVT method to separate the foreground component from the background.

We use the surveillance video data from [6]. Each frame is of size 144×176 and we use 20 frames. The constructed tensor is $\mathcal{X} \in \mathbb{R}^{144 \times 176 \times 20}$ and the selected block size is $2 \times 2 \times 20$. The thresholding parameter is $\tau = 20/\sqrt{n n_3}$.

Fig. 5 shows one of the results. IBTSVT correctly recovers the background, while the sparse component correctly identifies the moving pedestrians. This shows that the proposed method can realize motion separation for surveillance videos.

Fig. 5: IBTSVT on a surveillance video: (a) original video; (b) low rank component, i.e., the video background; (c) sparse component, which represents the foreground objects of the video.

4.2. Illumination Normalization for Face Images

Face recognition algorithms are sensitive to shadows or occlusions on faces [7], so it is important to remove illumination variations and shadows from face images. The low rank model is often used for face images [20].

In our experiments, we use the Yale B face database [7]. Each face image is of size 192×168, with 64 different lighting conditions. We construct the tensor data $\mathcal{X} \in \mathbb{R}^{192 \times 168 \times 64}$ and choose the block size $2 \times 2 \times 64$. We set the thresholding parameter $\tau = 20/\sqrt{n n_3}$.

We compare the proposed method with the multi-scale low rank matrix decomposition method [13] and the low rank + sparse method [4]. Fig. 6 shows one of the comparison results. The IBTSVT method yields almost shadow-free faces. In contrast, the other two methods can only recover the faces with some shadow remaining.

To further illustrate the effect of shadow elimination in the recovered face images, we carry out face detection on the data recovered by the different methods. In our experiments, we employ the Viola-Jones algorithm [21] to detect the faces and the eyes. The Viola-Jones algorithm is a classical algorithm which can be used to detect people's faces, noses, eyes, mouths, and upper bodies. In the first experiment, we put all face images into one image in JPG format and use the algorithm to detect faces in the newly formed image. In the second experiment, we use the algorithm to detect the eyes in every face image. The second and third columns of Table 1 show the detection accuracy ratios of the Viola-Jones algorithm on the face images recovered by the different methods. The fourth column reports how long each of the three methods takes to process the 64 face images; IBTSVT can improve the efficiency by parallel processing of the block tensors. From the results in Table 1, we find that our method gives the best detection performance, because removing shadows from face images is helpful for face detection.

                        Face detection    Eye detection    Time (s)
Original image          0.297             0.58             NULL
Low rank + sparsity     0.375             0.70             10
Multi-scale low rank    0.359             1.00             4472
IBTSVT                  0.844             1.00             715

Table 1: The accuracy ratios of face and eye detection by the Viola-Jones algorithm, and the computational time to process the face images.

Fig. 6: Three methods for faces with uneven illumination: (a) original faces with shadows; (b) low rank + sparse method; (c) multi-scale low rank decomposition; (d) IBTSVT.

5. CONCLUSIONS

In this paper, we proposed a novel IBTSVT method to extract the low rank component of a tensor using the t-SVD. IBTSVT exploits the structural features of the tensor by solving the TPCA problem in block tensor form. We have given tensor incoherence conditions for block tensor data. For applications, we considered motion separation for surveillance videos and illumination normalization for face images, and numerical experiments showed performance gains compared with the existing methods.

6. ACKNOWLEDGMENT

This work is supported by the National High Technology Research and Development Program of China (863, No. 2015AA015903), the National Natural Science Foundation of China (NSFC, No. 61602091), the Fundamental Research Funds for the Central Universities (No. ZYGX2015KYQD004), and a grant from the Ph.D. Programs Foundation of the Ministry of Education of China (No. 20130185110010).

7. REFERENCES

[1] T. G. Kolda and J. Sun, "Scalable tensor decompositions for multi-aspect data mining," in IEEE International Conference on Data Mining. IEEE, 2008, pp. 363–372.

[2] J. Friedman, T. Hastie, and R. Tibshirani, The Elements of Statistical Learning, vol. 1, Springer Series in Statistics, Springer, Berlin, 2001.

[3] I. Jolliffe, Principal Component Analysis, New York: Springer, 2002.

[4] E. J. Candès, X. Li, Y. Ma, and J. Wright, "Robust principal component analysis," Journal of the ACM, vol. 58, no. 3, pp. 1–73, 2011.

[5] Y. Peng, A. Ganesh, J. Wright, W. Xu, and Y. Ma, "RASL: Robust alignment by sparse and low-rank decomposition for linearly correlated images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2233–2246, 2010.

[6] L. Li, W. Huang, I. Y. H. Gu, and Q. Tian, "Statistical modeling of complex backgrounds for foreground object detection," IEEE Transactions on Image Processing, vol. 13, no. 11, pp. 1459–1472, 2004.

[7] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, "From few to many: Illumination cone models for face recognition under variable lighting and pose," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643–660, 2001.

[8] C. Lu, Y. Chen, J. Feng, W. Liu, Z. Lin, and S. Yan, "Tensor robust principal component analysis: Exact recovery of corrupted low-rank tensors via convex optimization," in The IEEE Conference on Computer Vision and Pattern Recognition, June 2016.
[9] Z. Zhang, G. Ely, S. Aeron, N. Hao, and M. Kilmer, "Novel methods for multilinear data completion and de-noising based on tensor-SVD," Computer Science, vol. 44, no. 9, pp. 3842–3849, 2014.

[10] A. Cichocki, D. Mandic, L. De Lathauwer, G. Zhou, Q. Zhao, C. Caiafa, and H. A. Phan, "Tensor decompositions for signal processing applications: From two-way to multiway component analysis," IEEE Signal Processing Magazine, vol. 32, no. 2, pp. 145–163, 2014.

[11] L. De Lathauwer, B. De Moor, and J. Vandewalle, "A multilinear singular value decomposition," SIAM Journal on Matrix Analysis and Applications, vol. 21, no. 4, pp. 1253–1278, 2000.

[12] Z. Zhang and S. Aeron, "Exact tensor completion using t-SVD," arXiv preprint arXiv:1502.04689, 2015.

[13] F. Ong and M. Lustig, "Beyond low rank + sparse: Multiscale low rank matrix decomposition," IEEE Journal of Selected Topics in Signal Processing, vol. 10, no. 4, pp. 672–687, 2015.

[14] M. E. Kilmer and C. D. Martin, "Factorization strategies for third-order tensors," Linear Algebra and its Applications, vol. 435, no. 3, pp. 641–658, 2011.

[15] K. Braman, "Third-order tensors as linear operators on a space of matrices," Linear Algebra and its Applications, vol. 433, no. 7, pp. 1241–1253, 2010.

[16] M. E. Kilmer, K. Braman, N. Hao, and R. C. Hoover, "Third-order tensors as operators on matrices: A theoretical and computational framework with applications in imaging," SIAM Journal on Matrix Analysis and Applications, vol. 34, no. 1, pp. 148–172, 2013.

[17] T. G. Kolda and B. W. Bader, "Tensor decompositions and applications," SIAM Review, vol. 66, no. 4, pp. 294–310, 2005.

[18] J. F. Cai, E. J. Candès, and Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956–1982, 2008.

[19] G. A. Watson, "Characterization of the subdifferential of some matrix norms," Linear Algebra and its Applications, vol. 170, no. 6, pp. 33–45, 1992.

[20] R. Basri and D. W. Jacobs, "Lambertian reflectance and linear subspaces," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 2, pp. 218–233, 2003.

[21] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Computer Vision and Pattern Recognition. IEEE, 2001, vol. 1, pp. I–511.