Replica Symmetric Bound for Restricted Isometry Constant

Ayaka Sakata
The Institute of Statistical Mathematics, Tachikawa, Japan
Theoretical Biology Laboratory, RIKEN, Wako, Japan
Email: [email protected]

Yoshiyuki Kabashima
Dep. of Computational Intelligence & Systems Science, Tokyo Institute of Technology, Yokohama, Japan
Email: [email protected]

Abstract—We develop a method for evaluating restricted isometry constants (RICs). This evaluation is reduced to the identification of the zero-points of an entropy density defined for submatrices that are composed of columns selected from a given measurement matrix. Using the replica method developed in statistical mechanics, we assess RICs for Gaussian random matrices under the replica symmetric (RS) assumption. In order to numerically validate the adequacy of our analysis, we employ the exchange Monte Carlo (EMC) method, which has been empirically demonstrated to achieve much higher numerical accuracy than naive Monte Carlo methods. The EMC method suggests that our theoretical estimation of an RIC corresponds to an upper bound that is tighter than those of preceding studies. Physical considerations indicate that our assessment of the RIC could be improved by taking the replica symmetry breaking into account.

I. INTRODUCTION

The signal processing paradigm of compressed sensing (CS) enables substantially more effective sampling than that required by the conventional sampling theorem [1]. CS is applied to problems in various fields in which the acquisition of data is quite costly, such as astronomical and medical imaging [2], [3]. The CS performance is mathematically analyzed using the problem setting of a randomized linear observation [4], [5]. Here, A ∈ R^{M×N} is the given observation matrix, and CS endeavors to reconstruct the S-sparse signal x ∈ R^N, which has S (< N) nonzero components, from the observation y = Ax.

A widely used strategy for the reconstruction of this signal is ℓ1 minimization, which is a relaxation of ℓ0 minimization. A key quantity used to analyze the ℓ0 and ℓ1 minimization strategies is the restricted isometry constant (RIC) [6]. Literally evaluating an RIC requires the computation of the maximum and minimum eigenvalues of N!/(S!(N−S)!) submatrices that are generated by extracting S columns from A, which is computationally infeasible. In the case of Gaussian random matrices A, an upper bound for the RIC is estimated using a large deviation property, without direct computation of the eigenvalues [6], [7], [8].

This paper proposes a theoretical scheme for the direct estimation of RICs. To do this, we evaluate the entropy density of the submatrices that provide a given value of the maximum/minimum eigenvalue. An RIC of a matrix A is offered by the condition that the corresponding entropy vanishes. Furthermore, in order to demonstrate our method's utility, we apply our scheme to Gaussian random matrices and compare the obtained results with those of earlier studies.

Our theoretical evaluation is also numerically assessed using exchange Monte Carlo (EMC) sampling [9], which is expected to achieve much higher numerical accuracy than naive Monte Carlo schemes. The EMC method enables effective sampling by avoiding entrapment at local minima, which limits the ability of naive Monte Carlo sampling to capture the true behavior [10]. Numerical results suggest that our scheme currently provides the tightest RIC upper bound, which could be further tightened by taking the replica symmetry breaking (RSB) into account.

II. RESTRICTED ISOMETRY CONSTANT

In the following, we assume that A ∈ R^{M×N} is normalized so as to (typically) satisfy (A^T A)_{ii} = 1 for all i ∈ {1, …, N}.

Definition 1 (Restricted isometry constants). A matrix A ∈ R^{M×N} satisfies the restricted isometry property (RIP) with RICs 0 < δ_S^min, δ_S^max ≤ 1 if

  (1 − δ_S^min)‖x‖²_F ≤ ‖Ax‖²_F ≤ (1 + δ_S^max)‖x‖²_F   (1)

holds for any S-sparse vector x ∈ R^N, in which S is the number of non-zero components.

The original work presented by Candès et al. [4] addresses the symmetric RIC δ_S = max[δ_S^min, δ_S^max]. An RIC indicates how close the space spanned by S columns of A is to an orthonormal system. If an RIC is small, the linear transformation performed using A is nearly an orthogonal transformation.

The symmetric RIC provides sufficient conditions for the reconstruction of the S-sparse vector x in the underdetermined linear system y = Ax using ℓ0 and ℓ1 minimization [6].

Theorem 1. Let A ∈ R^{M×N} and x ∈ R^N with M < N, and consider the linear equation y = Ax. If δ_{2S} < 1, a unique S-sparse solution exists and is the sparsest solution to the ℓ0 problem

  min_x ‖x‖_0, subject to y = Ax.   (2)

Also, if δ_{2S} < √2 − 1, the S-sparse solution to the ℓ1 problem

  min_x ‖x‖_1, subject to y = Ax   (3)

is uniquely identified as the sparsest solution and equals the ℓ0 problem's solution.

It should be noted that δ_S^min and δ_S^max do not increase or decrease at the same rate, and asymmetric RICs improve the condition of ℓ1 reconstruction [11].

Theorem 2. Consider the same problem setting as in Theorem 1. If (4√2 − 3)δ_{2S}^min + δ_{2S}^max < 4(√2 − 1), then the unique S-sparse solution is the sparsest solution to the ℓ1 problem and equals the solution to the ℓ0 problem [11].
RIC evaluation is also a fundamental linear algebra problem [7], [8] because RICs clearly relate to the eigenvalues of Gram matrices. Let T ⊆ V = {1, …, N}, |T| = S, be the positions of the nonzero elements of the S-sparse vector x. The product Ax equals A_T x_T, where A_T is the submatrix that consists of the columns i ∈ T of A and where x_T = {x_i | i ∈ T}. For any realization of T, the following holds:

  λ_min(A_T^T A_T)‖x_T‖²_F ≤ ‖A_T x_T‖²_F ≤ λ_max(A_T^T A_T)‖x_T‖²_F.

Here, λ_min(B) and λ_max(B) denote the minimum and maximum eigenvalues of B, respectively, and the superscript T denotes the matrix transpose. Therefore, the following expression of the RIC is equivalent to eq. (1):

  δ_S^min = 1 − λ*_min(A;S),  δ_S^max = λ*_max(A;S) − 1,   (4)

in which

  λ*_min(A;S) = min_{T: T⊆V, |T|=S} λ_min(A_T^T A_T),   (5)
  λ*_max(A;S) = max_{T: T⊆V, |T|=S} λ_max(A_T^T A_T).   (6)

Literal evaluation of eq. (4) requires the calculation of the maximum and minimum eigenvalues of the N!/(S!(N−S)!) Gram matrices {A_T^T A_T}, which is computationally difficult when N and S are large. For typical Gaussian random matrices A, the RIC's upper bound is estimated using large deviation properties of the maximum and minimum eigenvalues of the Wishart matrix [6], [7], [8].

III. PROBLEM SETUP AND FORMALISM

We estimate RICs in a different manner, and the following theorem is fundamental to our approach.

Theorem 3. Let A ∈ R^{M×N}. The minimum and maximum eigenvalues of A^T A are given by

  λ_min(A^T A) = −lim_{β→+∞} (2/(Nβ)) log Z(A;β),   (7)
  λ_max(A^T A) = −lim_{β→−∞} (2/(Nβ)) log Z(A;β),   (8)

respectively, where Z(A;β) is defined using u ∈ R^N:

  Z(A;β) = ∫ du e^{−(β/2)‖Au‖²_F} δ(‖u‖²_F − N).   (9)

Proof: Applying the identity δ(‖u‖²_F − N) = (β/(4π)) ∫ dη exp{(βη/2)(‖u‖²_F − N)} gives us

  Z(A;β) = ((2π)^{N/2−1} / (2β^{N/2−1})) ∫ dη exp{ βN ( η/2 − (1/(2βN)) Σ_i ln(η + λ_i) ) },

in which {λ_i} are the eigenvalues of A^T A. As β → +∞, the integral can be evaluated using the saddle point method and is dominated by η = −λ_min(A^T A) + (Nβ)^{−1} + o(β^{−1}), where o(β^{−1}) represents contributions that are negligible compared with β^{−1}. This yields eq. (7), and eq. (8) is similarly obtained by applying the saddle point method for β → −∞. □

Theorem 3 holds for all submatrices A_T. For mathematical convenience, we introduce variables c ∈ {0,1}^N and define

  Z_c(c,A;β) = ∫ du P(u|c) exp{−(β/2)‖A(c∘u)‖²_F} δ(‖c∘u‖²_F − N),   (10)

where ∘ denotes the component-wise product, and P(u|c) ∝ exp(−Σ_{i=1}^N (1−c_i)u_i²/2) is introduced in order to avoid the divergence caused by integrating u_i when c_i = 0. Let us define c(T) ∈ {0,1}^N by (c(T))_i = 1 for i ∈ T and (c(T))_i = 0 otherwise. The two functions Z(A_T;β) and Z_c(c(T),A;β) have a one-to-one correspondence: Z(A_T;β) = Z_c(c(T),A;β). We write λ_min(c,A) and λ_max(c,A) for the quantities obtained by substituting Z_c(c,A;β) into eq. (7) and eq. (8), respectively. Because λ_max(A_T^T A_T) = λ_max(c(T),A) and λ_min(A_T^T A_T) = λ_min(c(T),A) naturally hold, eqs. (5)-(6) can be respectively rewritten as

  λ*_min(A;S) = min_{c∈c_S} λ_min(c,A),   (11)
  λ*_max(A;S) = max_{c∈c_S} λ_max(c,A),   (12)

where c_S denotes the set of configurations of c that satisfy Σ_i c_i = S.

We hereafter focus on the situation in which both M and S are proportional to N as M = Nα and S = Nρ, respectively, where α, ρ ∼ O(1). Let us define the energy densities of c as L_+(c|A) = λ_min(c,A)/2 and L_−(c|A) = λ_max(c,A)/2. Based on this, we introduce a free entropy density as φ(μ|A;ρ) = N^{−1} log[Σ_c e^{−NμL_{sgn(μ)}(c|A)} δ(Σ_{i=1}^N c_i − Nρ)], where sgn(μ) denotes the sign of μ. Eqs. (7)-(8) offer its alternative expression:

  φ(μ|A;ρ) = lim_{β→sgn(μ)·∞} N^{−1} log[ Σ_c Z_c^{μ/β}(c,A;β) δ(Σ_{i=1}^N c_i − Nρ) ].   (13)

In addition, we represent the number of c that correspond to L_±(c|A) = λ/2 and satisfy Σ_i c_i = Nρ as exp(Nω_±(λ|A;ρ)) using entropy densities ω_±(λ|A;ρ), which are naturally assumed to be convex functions of λ.
Replacing the summation over the microscopic states of c with an integral over the possible values of L_±(c|A) gives

  φ(μ|A;ρ) = N^{−1} log[ ∫ dλ exp{−Nμλ + Nω_{sgn(μ)}(λ|A;ρ)} ]
       → max_λ {−μλ + ω_{sgn(μ)}(λ|A;ρ)},   (14)

in which the saddle point method is employed. The maximizer λ, which corresponds to the typical energy value of c sampled following the weight e^{−NμL_{sgn(μ)}(c|A)} δ(Σ_{i=1}^N c_i − Nρ), must satisfy

  −μ + ∂ω_{sgn(μ)}(λ|A;ρ)/∂λ = 0.   (15)

Eq. (14) implies that φ(μ|A;ρ) is obtained by the Legendre transformation of ω_±(λ|A;ρ), and the inverse Legendre transformation converts φ(μ|A;ρ) back to ω_±(λ|A;ρ) as

  ω_{sgn(μ)}(λ|A;ρ) = φ(μ|A;ρ) − μ ∂φ(μ|A;ρ)/∂μ,   (16)

which follows from the convexity assumption on ω_±(λ|A;ρ). A similar formalism has been introduced for investigating the geometrical structure of weight space in the learning of multilayer neural networks [12].

The relationships among μ, λ, and ω_± are illustrated in Fig. 1. The entropy densities ω_+ and ω_− are convex increasing and decreasing functions of λ, respectively. According to eq. (15), the value of λ at μ represents the point where the gradient of ω_± equals μ. By definition, negative entropy values are not allowed, and ω_±(λ|A;ρ) < 0 implies that no c simultaneously satisfies both L_±(c|A) = λ/2 and Σ_i c_i = Nρ. Therefore, the λ*_± that produces ω_±(λ*_±|A;ρ) = 0 is the possible minimum or maximum eigenvalue. Hence, eqs. (11)-(12) give us

  λ*_min(A;ρ) = λ*_+,  λ*_max(A;ρ) = λ*_−,   (17)

which are the typical values for μ = μ_max and μ = μ_min, respectively (Fig. 1).

Fig. 1. Schematic picture of the entropy curve and its relationship to the parameter μ.

Fig. 2. Entropy curve for α = 0.5 and ρ = 0.1 with (a) μ > 0 and (b) μ < 0. Circles denote EMC method results. Vertical lines represent the (a) minimum and (b) maximum eigenvalues of the MP distribution.

IV. RS ANALYSIS FOR GAUSSIAN RANDOM MATRIX

This section applies the methodology introduced in the previous section to the case in which the components of A are independently generated from a Gaussian distribution with mean 0 and variance (Nα)^{−1}. In this case, φ(μ|A;ρ) and ω_±(λ|A;ρ) randomly fluctuate depending on A. However, for all ε > 0, the probability that the deviation from the typical values, φ(μ;ρ) ≡ [φ(μ|A;ρ)]_A and ω_±(λ;ρ) ≡ [ω_±(λ|A;ρ)]_A, is larger than ε tends to vanish as N → ∞. Here, [·]_A denotes the average over A. Therefore, typical properties can be characterized by evaluating the typical values, φ(μ;ρ) and ω_±(λ;ρ), using the replica method with the identity [13], [14]:

  [log f(A)]_A = lim_{n→0} (∂/∂n) log[f^n(A)]_A,   (18)

where f(A) is an arbitrary function. When both n and m = μ/β are positive integers, regarding f(A) in eq. (18) as Σ_c Z_c^m(c,A;β) δ(Σ_{i=1}^N c_i − Nρ) of eq. (13) leads us to expressing [f^n(A)]_A as a summation/integration with respect to n and nm replica variables {c^a} and {c^a ∘ u^{aσ}} (a ∈ {1,2,…,n}, σ ∈ {1,2,…,m}), which can be evaluated by the saddle point method for N → ∞.

Under the replica symmetric (RS) assumption, in which the dominant saddle point is assumed to be invariant against any permutation of the replica indices a and σ within each of their sets {1,2,…,n} and {1,2,…,m}, respectively, the resulting functional form of N^{−1} log[f^n(A)]_A becomes extendable to non-integer n and m.
Therefore, we insert this expression into eq. (13) employing the formula of eq. (18), which finally yields

  φ(μ;ρ) = −(α/2) log{α + c + μ(1−q)} + (α/2) log(α + c) − αμq / (2{α + c + μ(1−q)})
      + (Q̂ − q̂1)(1 + c)/2 + q̂0 q/2 + Kρ
      + ∫Dz log{ 1 + e^{−K} ∫Dy exp( (√(q̂1 − q̂0) y + √(q̂0) z)² / (2Q̂) ) },   (19)

where {q, c, Q̂, q̂0, q̂1, K} are determined by the extremization of eq. (19), and Dz = dz exp(−z²/2)/√(2π). The derivation of eq. (19) is shown in Appendix A. The entropy densities ω_±(λ;ρ) are derived by applying the inverse Legendre transformation to φ(μ;ρ).

Fig. 3. Comparison of the symmetric RIC for α = 0.5: RS bound, Bah and Tanner's bound, and the numerical lower bound. The numerical lower bound is estimated for N = 1000 and M = 500.

Fig. 4. ℓ0 and ℓ1 limits given by the RS RIC. Black lines represent Bah and Tanner's results, denoted by BT.

V. RESULTS

In Fig. 2, the entropy densities ω_± for α = 0.5 and ρ = 0.1 are shown for (a) μ > 0 and (b) μ < 0. Results of the exchange Monte Carlo (EMC) sampling are represented by circles; the EMC procedure is summarized in Appendix B.

The values of λ when μ → +0 and μ → −0, which are denoted using dashed lines, coincide with the respective minimum and maximum of the Marchenko-Pastur (MP) distribution's support for the M × S Gaussian random matrix [15]. As the limit |μ| → 0 corresponds to the unbiased generation of M × S Gaussian random matrices, this coincidence theoretically supports the adequacy of our analysis. The slight discrepancy between the theoretical and EMC results in the entropy's tails could be due to the insufficiency of the RS assumption. The convexity of our entropy suggests that the RS assumption either creates the entropy curve exactly or extends it outward [16]. This is consistent with the results of the EMC method.
The EMC results indicate that the exact entropy curve lies inward when compared to that produced by the RS assumption. Therefore, the estimated zero-points λ*_max and λ*_min provided by the RS assumption are meaningful upper and lower bounds, respectively, of the true values. We call them RS bounds.

Fig. 3 compares our RS upper bound, Bah and Tanner's upper bound [6], and the numerically obtained lower bound of the RIC [17]. In this example, the symmetric RIC is δ_S^max. Our analysis lowers the upper bound of the RIC, especially in the large ρ/α region. Over the entire parameter region, our estimates are consistent with the numerically obtained lower bound.

Fig. 4 shows the parameter region that mathematically guarantees ℓ0 and ℓ1 reconstruction according to Theorems 1 and 2. The region determined by the Bah and Tanner RIC is indicated using black lines. The RS bound of the RIC extends the region in which correct reconstruction is guaranteed, and further extension may be provided by taking the RSB into account.

VI. SUMMARY AND CONCLUSION

We proposed a theoretical scheme for the evaluation of restricted isometry constants. The problem was converted to the assessment of an entropy density, and the possible maximum and minimum eigenvalues, which produce the RIC, are the entropy's zero-points. Given a Gaussian random matrix, we computed the entropy density using the replica method under the replica symmetric ansatz and estimated the value of the RIC. Physically, this estimate has meaning as a bound and is tighter than existing bounds. Numerical experiments using EMC sampling support our analysis.

A more accurate evaluation of the RIC is possible if the RSB is taken into account. Our scheme is also applicable to matrices more general than Gaussian random matrices.
APPENDIX A
RS CALCULATION OF FREE ENTROPY DENSITY

The identities

  1 = ∫ dq^{(aσ)(bτ)} δ( q^{(aσ)(bτ)} − (1/N) Σ_{i=1}^N c_i^a c_i^b u_i^{aσ} u_i^{bτ} ),   (20)

for all combinations of replica indices (a,σ) and (b,τ) (a,b = 1,2,…,n; σ,τ = 1,2,…,m), are employed in the saddle point assessment of φ_β(n,m;ρ) ≡ N^{−1} log[(Σ_c Z_c^m(c,A;β) δ(Σ_{i=1}^N c_i − Nρ))^n]_A. We assume that the dominant saddle point is of the replica symmetric form

  q^{(aσ)(bτ)} = { 1 for a = b, σ = τ;  q_1 for a = b, σ ≠ τ;  q_0 for a ≠ b }.   (21)

This means that, when A is a Gaussian random matrix of mean 0 and variance (Nα)^{−1},

  [s_μ^{aσ} s_ν^{bτ}]_A = α^{−1} δ_{μν} ( δ_{ab}δ_{στ} + q_1 δ_{ab}(1 − δ_{στ}) + q_0 (1 − δ_{ab}) )

holds, where s_μ^{aσ} ≡ Σ_i A_{μi} c_i^a u_i^{aσ}, and higher-order correlations are negligible at the dominant saddle point. Hence s_μ^{aσ} can be expressed as s_μ^{aσ} = α^{−1/2}(√(1−q_1) x_μ^{aσ} + √(q_1−q_0) v_μ^a + √(q_0) z_μ) using i.i.d. Gaussian variables x_μ^{aσ}, v_μ^a, and z_μ of mean zero and unit variance. Evaluating φ_β(m;ρ) ≡ lim_{n→0} (∂/∂n) φ_β(n,m;ρ) under this assumption yields

  φ_β(m;ρ) = m(Q̃ + q̃1 q_1)/2 − m²(q̃1 q_1 − q̃0 q_0)/2 + ∫Dz log X_β(z)
      − α[ (m/2) log(1 + β(1−q_1)/α) + (1/2) log(1 + mβ(q_1−q_0)/(α + β(1−q_1)))
      + mβq_0 / (2{α + β(1−q_1) + mβ(q_1−q_0)}) ] + Kρ,   (22)

where

  X_β(z) = 1 + e^{−K} (Q̃ + q̃1)^{−m/2} ∫Dy exp( m(√(q̃1 − q̃0) y + √(q̃0) z)² / (2(Q̃ + q̃1)) ),

and Q̃, K, q̃1 and q̃0 are conjugate variables for the integral representations of the delta functions in eq. (10), eq. (13) and eq. (20), respectively. Eq. (22) yields the free entropy density as φ(μ;ρ) = lim_{β→∞} φ_β(μ/β;ρ), in which the variables scale so that Q̂ ≡ m(Q̃ + q̃1), q̂1 ≡ m²q̃1, q̂0 ≡ m²q̃0, and c ≡ β(1 − q_1) become O(1). This gives the expression of eq. (19).

The variables {c, q, Q̂, q̂1, q̂0, K} are determined by the extremization conditions of the free entropy density eq. (19):

  c = μρ/Q̂,   (23)
  q = ∫Dz [(X(z) − 1)/X(z)] (√(q̂0) z / (Q̂ − D̂))²,   (24)
  1 = ∫Dz [(X(z) − 1)/X(z)] ( D̂/(Q̂(Q̂ − D̂)) + q̂0 z²/(Q̂ − D̂)² ),   (25)
  ρ = ∫Dz (X(z) − 1)/X(z),   (26)
  D̂ = αμ²(1 − q) / ((α + c){α + c + μ(1 − q)}),   (27)
  q̂0 = αμ²q / {α + c + μ(1 − q)}²,   (28)

where q = q_0, D̂ = q̂1 − q̂0, and X(z) = lim_{β→∞} X_β(z).

APPENDIX B
MONTE CARLO SAMPLING FOR RIC ESTIMATION

We employ the exchange Monte Carlo (EMC) sampling [9] in order to numerically compute the free entropy density φ(μ|A;ρ) and obtain the entropy density while avoiding trapping by metastable states. In the EMC approach, we prepare k systems sharing the same configuration of A and assign a configuration c_i ∈ c_S and a parameter μ_i to each system i = 1, …, k. The signs of {μ_i} are set to be the same. Each step of the EMC process updates c_i within each system and attempts exchanges between the configurations c_i and c_{i+1}. The probability of a transition from c_i to c_i′ is given by w(c_i, c_i′) = min{exp(μ_i N Δ_i), 1}, where Δ_i = L_{sgn(μ_i)}(c_i|A) − L_{sgn(μ_i)}(c_i′|A). The probability of an exchange between systems c_i and c_{i+1} is given by w_ex(c_i, c_{i+1}) = min{exp(N(μ_i − μ_{i+1})Δ_{i,i+1}), 1}, where Δ_{i,i+1} = L_{sgn(μ_i)}(c_i|A) − L_{sgn(μ_{i+1})}(c_{i+1}|A). After sufficient updates, the k systems are expected to converge to the equilibrium distribution ∏_{i=1}^k exp{−Nμ_i L_{sgn(μ_i)}(c_i|A)}.

To obtain the density of states W_±(λ|A;ρ) ∝ Σ_{c∈c_S} δ(λ/2 − L_±(c|A)), we apply the multi-histogram method [18] to the histograms of L_± obtained by EMC sampling. Finally, the free entropy density is calculated as

  φ(μ|A;ρ) = ∫ dλ W_{sgn(μ)}(λ|A;ρ) exp(−Nμλ/2),   (29)

and the entropy density is derived by applying the inverse Legendre transformation to eq. (29).

ACKNOWLEDGMENT

The authors would like to thank Tomoyuki Obuchi for his helpful comments and discussions. This work was partially supported by the RIKEN SPDR fellowship and by KAKENHI No. 26880028 (AS) and No. 25120013 (YK).

REFERENCES

[1] H. Nyquist, Trans. AIEE 47, 617 (1928).
[2] M. Lustig, D. Donoho and J. Pauly, Magnet. Reson. Med. 58 (6), 1182–1195 (2007).
[3] J. Bobin et al., IEEE J. Sel. Top. Signal Process. 2 (5), 718–726 (2008).
[4] E. Candès et al., IEEE Trans. Inform. Theory 52 (2), 489–509 (2006).
[5] D. Donoho, IEEE Trans. Inform. Theory 52 (4), 1289–1306 (2006).
[6] E. Candès and T. Tao, IEEE Trans. Inform. Theory 51 (12), 4203–4215 (2005).
[7] J. D. Blanchard et al., SIAM Rev. 53 (1), 105–125 (2011).
[8] B. Bah and J. Tanner, SIAM J. Matrix Anal. & Appl. 31 (5), 2882–2898 (2010).
[9] K. Hukushima and K. Nemoto, J. Phys. Soc. Jpn. 65, 1604–1608 (1996).
[10] D. Donoho and Y. Tsaig, Signal Process. 86 (3), 549–581 (2006).
[11] S. Foucart and M.-J. Lai, Appl. Comput. Harmon. Anal. 26 (3), 395–407 (2009).
[12] R. Monasson and D. O'Kane, Europhys. Lett. 27 (2), 85–90 (1994).
[13] M. Mézard et al., Spin Glass Theory and Beyond (World Sci. Pub., 1987).
[14] H. Nishimori, Statistical Physics of Spin Glasses and Information Processing: An Introduction (Oxford Univ. Pr., 2001).
[15] V. A. Marchenko and L. A. Pastur, Mat. Sb. (N.S.) 72 (114), 507–536 (1967).
[16] T. Obuchi et al., J. Phys. A: Math. Theor. 43, 485004 (2010).
[17] C. Dossal et al., Linear Algebra Appl. 432, 1663–1679 (2010).
[18] A. M. Ferrenberg and R. H. Swendsen, Phys. Rev. Lett. 63, 1195–1198 (1989).