Performance Analysis of Algebraic Soft-Decision Decoding of Reed-Solomon Codes

Andrew Duggan* and Alexander Barg§

Abstract: We investigate the decoding region for Algebraic Soft-Decision Decoding (ASD) of Reed-Solomon codes in a discrete, memoryless, additive-noise channel. An expression is derived for the error correction radius within which the soft-decision decoder produces a list that contains the transmitted codeword. The error radius for ASD is shown to be larger than that of Guruswami-Sudan (GS) hard-decision decoding for a subset of low-rate codes. These results are also extended to multivariable interpolation in the sense of Parvaresh and Vardy. An upper bound is then presented for ASD's probability of error, where an error is defined as the event that the decoder selects an erroneous codeword from its list. This new definition gives a more accurate bound on the probability of error of ASD than the results available in the literature.

I. INTRODUCTION

Reed-Solomon (RS) codes are currently used in a wide variety of applications, and the classical algorithm of Berlekamp and Massey (BM algorithm) has been employed for decoding in most cases. For an RS code of length n and dimension k, this algorithm is guaranteed to recover the transmitted codeword within an error radius of ⌊(n − k + 1)/2⌋.

Guruswami and Sudan [7] presented an important new algebraic decoding method for RS codes that is able to correct errors beyond the BM decoding radius. This method involves constructing a bivariate polynomial with zeros of multiplicity based on the received symbols. The polynomial can then be factored to give a list of candidate codewords; thus, it is a list decoder. A Guruswami-Sudan (GS) decoder includes the transmitted codeword on its output list if the errors fall within a radius of n − √(nk).

In [7], the authors mention that their hard-decision algebraic decoding technique can be extended to soft-decision decoding by setting the values of the multiplicities based on channel posterior probabilities rather than on received symbols. However, [7] does not provide a way of assigning these multiplicities, which turns out to be a nontrivial component of the RS decoding procedure. Koetter and Vardy refined the Algebraic Soft-Decision Decoding (ASD) approach in [11] by providing an algorithm that converts a matrix of posterior probabilities Π into a multiplicity assignment matrix M.

Various papers, such as [13], [3], [8], have been published since [11] that propose different methods for converting Π to M. However, few papers have given insight into the decoding region of ASD. One exception is [8], which characterizes the decoding region for medium- to high-rate codes over binary erasure and binary symmetric channels. Another paper [10] derives an error correction radius for an arbitrary additive cost function associated with transitions in the channel.

This paper examines the performance of ASD when the noise is additive, i.e., there is a probability distribution p(a) on q-ary errors that does not depend on the transmitted sequence. Relying on this distribution, we derive in Sect. III simple estimates of the error radius within which the transmitted codeword is guaranteed to be on the list produced by ASD. With the simple expression for the error radius obtained, we are also able to characterize the region where the ASD algorithm provides an improvement over the GS decoding radius.

In Section IV, we study bounds on the probability of error for ASD decoding. Prior work [15] has concentrated on the list-decoding error event, namely the event that the transmitted codeword fails to be included in the list. We point out that for low code rates, the list-decoding error criterion does not provide insight into the performance of the decoder, because the transmitted codeword will always be included in the list. We therefore define decoding to be successful if the ASD algorithm selects the transmitted codeword from the decoder's list, and we derive an upper bound for the probability of error based on this new definition. Finally, in Section V we briefly discuss soft-decision decoding of multivariate RS codes introduced in a recent work of Parvaresh and Vardy [14] and extend to this case our estimate of the ASD error radius.

The error bounds obtained in Sections III-V are either the first of their kind available in the literature, or, as in the case of the ASD list-decoding error bound, improve the results known previously.

Presented in part at the 44th Annual Allerton Conference on Communication, Control and Computing, Monticello, IL, September 27-29, 2006. *E-mail address: [email protected]. §Dept. of ECE and Institute for Systems Research, University of Maryland, College Park, MD 20742, [email protected]. Supported in part by NSF grants CCF0515124, CCF0635271, and by NSA grant H98230-06-1-0044.
II. DECODING OF RS CODES

A. Notation

Let q be a prime power, let F_q = {α_1 = 0, α_2, ..., α_q} be the finite field of q elements, and let n = q − 1 be the code length. Denote by dist(·,·) the Hamming distance. With a vector v = (v_1, ..., v_n) ∈ F_q^n we associate an integer-valued function v(i) whose value is j if v_i = α_j. For a polynomial f ∈ F_q[x] define the evaluation mapping eval: f → F_q^n given by (eval f)_i = f(α_i), 2 ≤ i ≤ q. Thus, the evaluation mapping associates a q-ary n-vector with every polynomial f ∈ F_q[x].

Definition 1: A q-ary RS code C of length n = q − 1 and dimension k is the set of codewords of the form

    C = { c = eval(f) : f ∈ F_q[x], 0 ≤ deg f ≤ k − 1 }.

To describe the encoding of the code C, suppose that the message to be transmitted is u = (u_1, u_2, ..., u_k), where u_i ∈ F_q, 1 ≤ i ≤ k. The codeword that corresponds to it is given by c = eval(f), where the polynomial f has the form

    f(X) = u_1 + u_2 X + u_3 X^2 + ... + u_k X^{k-1}.
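To make the evaluation map concrete, here is a minimal encoder sketch. It assumes q prime, so that F_q is realized by integer arithmetic mod q (a prime power q would need a full finite-field implementation), and it fixes the enumeration α_1 = 0, α_{i+1} = i as one concrete choice; the function name is ours, not from the paper.

```python
# Minimal sketch of RS encoding by polynomial evaluation (Definition 1).
# Assumes q prime; the evaluation points are the n = q - 1 nonzero
# field elements alpha_2, ..., alpha_q under the enumeration alpha_{i+1} = i.

def rs_encode(u, q):
    """Encode u = (u_1, ..., u_k) over F_q into c = eval(f),
    where f(X) = u_1 + u_2 X + ... + u_k X^{k-1}."""
    assert all(0 <= ui < q for ui in u)

    def f(x):                        # Horner evaluation of f at x, mod q
        y = 0
        for coeff in reversed(u):
            y = (y * x + coeff) % q
        return y

    return [f(alpha) for alpha in range(1, q)]

# Example: a [6,2] RS code over F_7; distinct messages give codewords
# at Hamming distance >= n - k + 1 = 5.
print(rs_encode([3, 5], 7))          # f(X) = 3 + 5X -> [1, 6, 4, 2, 0, 5]
```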
We assume that the codeword c is transmitted over a discrete memoryless channel. In the hard-decision case, the output of the channel is the vector y = c + e. Let w_{i,j} = Pr(y = α_i | c = α_j) be the probability that the symbol α_j transmitted over the channel is received as α_i. We will assume that the noise is additive, i.e., there exists a probability distribution p on F_q such that Pr(α + e | α) = p(e) for all α, e ∈ F_q. Note that under these assumptions, the channel is symmetric as defined in Section 8.2 of [2].

In the setting of soft-decision decoding, the demodulator is assumed to provide the decoder with the posterior probabilities conditioned on the received (continuous) signal. However, the task of analyzing this setting in the context of algebraic list decoding has so far proved elusive. We will therefore assume that the receiver outputs, in each position of the codeword, a hard-decision symbol (for instance, the most likely transmitted symbol) together with the q values of posterior probabilities π_{i,j} = Pr(c = α_i | y = α_j). As is customary in the literature, we will assume that ASD takes the channel's output to be in the form of a q × n matrix Π = [π_{i,j}].

This paper concentrates on the ASD algorithm of [11] and compares its performance to the well-known hard-decision decoding algorithms of Berlekamp and Massey (see, e.g., [1]) and of Guruswami and Sudan [7], [12].

B. Hard-Decision Decoding Methods

Under BM decoding, if the number of errors t = dist(c, y) satisfies

    t ≤ ⌊(n − k + 1)/2⌋,                                            (1)

then the decoder will output c. If condition (1) does not hold, then decoding is guaranteed to fail. Therefore, (1) is a necessary condition for BM decoding success.

GS decoding produces a list that contains all the codewords within a certain distance t_m of the vector y, and potentially some codewords outside of this Hamming ball. List-decoding success is declared if the correct codeword is on the list. The distance t_m is determined by m, a parameter of the algorithm. As m increases, t_m increases to an asymptotic limit given in Lemma 1.

Lemma 1 (Guruswami and Sudan [7]): Let m → ∞. Let c be a codeword that satisfies

    dist(y, c) < n − √(nk).                                         (2)

Then c will be included in the list output by the GS decoder with input y.

The complexity of the algorithm often becomes a limiting factor before the maximum possible t_m is achieved. Note that (2) is only a sufficient condition for GS list-decoding success: the decoder is guaranteed to have the transmitted codeword on the list if the error pattern satisfies (2), assuming large m.

Let τ = t/n be the normalized error correction radius of RS decoding algorithms, and let R = k/n be the rate of the code. We have

    τ = (1 − R)/2        (BM decoding),
    τ = 1 − √R           (GS decoding, m large).                    (3)

The two error radii given in (3) are shown as the dashed curves in Figure 1(a). The GS decoding radius is always greater than its BM counterpart, although the difference becomes small for high rates. However, no error radius for Algebraic Soft-Decision Decoding was previously known.

C. ASD Algorithms

ASD is an extension of GS decoding obtained by manipulating the multiplicities. We only mention the salient features of this algorithm, referring to [11] for the details. Instead of operating on a vector y, ASD takes as input a multiplicity matrix M of dimensions q × n constructed on the basis of the posterior probability matrix Π. The soft-decision decoder constructs a bivariate polynomial Q(X, Y) that has zeros of multiplicity set by M. In the next step, the decoder recovers a list L of putative transmitted codewords from the Y-zeros of Q(X, Y). In contrast to GS decoding, which always constructs Q(X, Y) based on n distinct zeros, ASD can have up to qn distinct zeros.

Definition 2: Define the Score of a vector v = (v_1, ..., v_n) ∈ F_q^n with respect to a multiplicity matrix M to be

    S_M(v) = Σ_{i=1}^n m_{v(i),i},

where v(i) = ℓ if v_i = α_ℓ.

If needed, in the last stage the decoder chooses from the list L the codeword c with the largest score, or the codeword with the largest probability P^n(c|y) = Π_i π_{c_i, y_i}.

1) The Multiplicity Matrix: The matrix M is determined from the matrix of posterior probabilities Π. Koetter and Vardy [11], Parvaresh and Vardy [13], and El-Khamy and McEliece [3] have proposed various methods for determining M from Π. A simple method for converting Π to M that will be used in this paper is the Proportionality Multiplicity Assignment Strategy (PMAS) proposed by Gross et al. [5]. PMAS finds M by performing the following element-wise calculation on Π for some fixed number λ:

    m_{i,j} = ⌊λ π_{i,j}⌋,   i = 1, ..., q;  j = 1, ..., n.          (4)

Thus, the matrix M is determined uniquely by the received vector y and the properties of the communication channel. The parameter λ ∈ Z^+ is the complexity factor, and its adjustment directly controls the balance between the performance and the complexity of ASD. Another important measure of the complexity of the decoder is the cost of the multiplicity matrix, defined as follows.

Definition 3: Let the Cost of a multiplicity matrix M be

    C(M) = (1/2) Σ_{i,j} m_{i,j}(m_{i,j} + 1).

2) Threshold Condition: GS decoding includes on its output list all codewords in the Hamming space within a certain radius of the received vector y, but ASD has no known geometric interpretation. A sufficient condition for ASD's success is determined indirectly by the vector y and can be stated in terms of the score S_M(c), as given in the next lemma.

Lemma 2 ([11]): Suppose ASD is used to decode a received vector y with the RS code. If

    S_M(c) > √( 2(k − 1) C(M) ),

or equivalently

    Σ_{i=1}^n m_{c(i),i} > √( (k − 1) Σ_{i,j} m_{i,j}(m_{i,j} + 1) ),

then the transmitted codeword is on the decoder's list.

Lemma 2 can be used to evaluate ASD's performance in the case that c is transmitted, y is received, and M is the multiplicity matrix.
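The following sketch ties together PMAS (4), the score of Definition 2, the cost of Definition 3, and the threshold of Lemma 2. It is illustrative only: the helper names are ours, and a codeword is passed as its index sequence (v(1), ..., v(n)) from Section II-A.

```python
import math

# Illustrative sketch of PMAS (4) and the threshold test of Lemma 2.
# Pi is a q x n list of lists with Pi[i][j] = pi_{i,j}.

def pmas(Pi, lam):
    """Multiplicity matrix M = floor(lam * Pi), element-wise, as in (4)."""
    return [[math.floor(lam * p) for p in row] for row in Pi]

def score(M, idx):
    """S_M(v) = sum_i m_{v(i),i}; idx[j] is the row index of v in column j."""
    return sum(M[idx[j]][j] for j in range(len(idx)))

def cost(M):
    """C(M) = (1/2) sum_{i,j} m_{i,j}(m_{i,j} + 1)."""
    return sum(m * (m + 1) for row in M for m in row) / 2

def on_list(M, idx, k):
    """Sufficient condition of Lemma 2 for the codeword indexed by idx."""
    return score(M, idx) > math.sqrt(2 * (k - 1) * cost(M))
```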
3) Size of the List: The decoder's list L cannot exceed the Y-degree of Q(X, Y), since L is obtained by factoring Q(X, Y) for its Y-roots. The size of the list in terms of the Y-degree L of Q is estimated in the next lemma, which is due to McEliece [12, Corollary 5.14].

Lemma 3: For k ≥ 2, the number of codewords on the list of an algebraic soft-decision decoder does not exceed

    L = ⌊ √( 2C(M)/(k − 1) + ((k + 1)/(2k − 2))² ) − (k + 1)/(2k − 2) ⌋.

The condition |L| ≤ L is met with equality if and only if all the Y-roots have degree less than k.

III. ASD ERROR CORRECTION PERFORMANCE

In this section, we present one of our main results, an estimate of the error correction radius t of the algorithm. We would like to stress one essential difference between the result below and other similar results in the literature. In the case of BM and GS decoding, for instance, all of the codewords within the error radius are included in the list output by the decoder. In contrast, in the case of ASD we only guarantee that if the transmitted codeword is distance t away from the received one, then it will be included on the decoder's list. Other codewords, even within the sphere of radius t around y, may escape being output by the decoder. In other words, the decoding regions of ASD are far from being spherical, and in fact, no geometric characterization of them is available.

A. Setting

Following Koetter and Vardy [11] and Justesen [9], we assume that each symbol entering the channel is uniformly drawn from F_q. It follows that π_{i,j} = w_{i,j}. Since the channel is symmetric, we know that the channel transition probabilities w_{i,j} are drawn from the set {p_1, p_2, ..., p_q}. Next, we introduce the three channel statistics

    p_max = max_{1≤i≤q} p_i,    p_min = min_{1≤i≤q: p_i>0} p_i,    γ = Σ_{i=1}^q p_i².

When the channel is noiseless, set p_min = 0. We will assume throughout that the channel's capacity is greater than zero, giving us p_max > p_min. As will be seen later, p_max, p_min, and γ will be the only channel statistics necessary in our analysis of ASD's performance.

B. Error Radius

Theorem 1: Suppose that an RS code with rate R = k/n is used to communicate over a discrete, additive-noise channel. Suppose that a codeword c is transmitted and an algebraic soft-decision decoder with complexity factor λ is used to decode the received vector y. Let t = dist(c, y). If

    t/n ≤ ( p_max − √( R(γ + 1/λ) ) − 1/λ ) / (p_max − p_min),       (5)

then c will be contained in the output list of the decoder.

Proof: Let c be the transmitted codeword and let y be the received vector. Substituting (4) in Lemma 2, we get

    Σ_{i=1}^n ⌊λ w_{c(i),y(i)}⌋ / √( Σ_{i=1}^n Σ_{j=1}^q ( ⌊λ w_{i,j}⌋² + ⌊λ w_{i,j}⌋ ) ) ≥ √(k − 1).

Instead of this condition for successful decoding, let us use a more stringent one, obtained by removing the integer parts:

    Σ_{i=1}^n ( λ w_{c(i),y(i)} − 1 ) / √( Σ_{i=1}^n Σ_{j=1}^q ( (λ w_{i,j})² + λ w_{i,j} ) ) ≥ √(k − 1).     (6)

Rearranging and using the channel statistic γ yields

    (1/n) Σ_{i=1}^n w_{c(i),y(i)} ≥ √( R(γ + 1/λ) − γ/n − 1/(λn) ) + 1/λ.     (7)

Inequality (7) certainly holds true for the codeword c if

    (1/n) Σ_{i=1}^n w_{c(i),y(i)} ≥ √( R(γ + 1/λ) ) + 1/λ.          (8)

We have derived a condition based on a specific y, but we are interested in ASD's performance for any y when c is transmitted. Thus, y becomes a random variable. Let W_i be the random transition probability w_{c(i),y(i)} for random y. Note that the W_i's do not depend on c because of the additive-noise assumption. The p.m.f. of W_i is as follows:

    p_{W_i}(p_j) := Pr{W_i = p_j} = p_j,   j = 1, 2, ..., q.

We can now rewrite (8) as follows:

    (1/n) Σ_{i=1}^n W_i ≥ √( R(γ + 1/λ) ) + 1/λ.

Since the received vector y differs from c in t coordinates, we can bound the left-hand side of the last inequality from below as follows:

    Σ_{i=1}^n W_i ≥ t·p_min + (n − t)·p_max.

Indeed, if y_i ≠ c_i, then the probability w_{c(i),y(i)} ≥ p_min. On the other hand, if there is no error in coordinate i, then the probability w_{c(i),y(i)} = p_max; otherwise, communication over the channel would be impossible. Thus, c is on the soft-decision decoder's list if

    (1/n)( t·p_min + (n − t)·p_max ) ≥ √( R(γ + 1/λ) ) + 1/λ.

The theorem follows.
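As a numerical illustration of Theorem 1, the sketch below evaluates the right-hand side of (5) and compares it with the GS radius 1 − √R. The channel used is the typewriter channel of Example 1 below; all names are ours.

```python
import math

# Sketch: normalized ASD radius (5) vs. the GS radius 1 - sqrt(R).
# p is the additive error distribution {p_1, ..., p_q}; lam is the
# PMAS complexity factor.

def channel_stats(p):
    p_max = max(p)
    p_min = min(x for x in p if x > 0)
    gamma = sum(x * x for x in p)
    return p_max, p_min, gamma

def asd_radius(R, p, lam):
    """RHS of (5); values >= 1 mean c is listed for every error pattern."""
    p_max, p_min, gamma = channel_stats(p)
    return (p_max - math.sqrt(R * (gamma + 1 / lam)) - 1 / lam) / (p_max - p_min)

def gs_radius(R):
    return 1 - math.sqrt(R)

# Typewriter channel of Example 1: p = (0.8, 0.2, 0, ..., 0), q = 256.
p = [0.8, 0.2] + [0.0] * 254
for R in (0.05, 0.2, 0.5):
    print(R, round(asd_radius(R, p, 100), 3), round(gs_radius(R), 3))
```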
A first look at the error radius in Theorem 1 reveals that (5) becomes large as p_min approaches p_max. In other words, ASD performs well when the channel is far from q-ary symmetric. If the channel is noiseless and λ is sufficiently large, then the bound of Theorem 1 reduces to 1 − √R, which is the GS normalized error bound. Let us consider a few examples.

Example 1: Consider the "typewriter channel" where w_{i,i} = 0.8, w_{i,j} = 0.2 for some j ≠ i, and w_{i,j} = 0 for all the remaining pairs (i, j). Thus, p_max = 0.8, p_min = 0.2, and γ = 0.68. Let us set λ = 100. Figure 1(a) shows the normalized error correction radius compared to the GS error bound for this example. The BM error bound is also shown for reference. ASD is able to produce a list containing the codeword c for a greater error radius than GS decoding for many low to medium rates. The range of rates for which ASD decoding corrects more errors than GS decoding is characterized below in this section.

Example 2: Next, let us look at the error radius of Theorem 1 when there are two possible, equiprobable errors per symbol transmission, termed the "two-error channel." In this case, p_max = 0.8, p_min = 0.1, γ = 0.66, and we again set λ = 100. Figure 1(b) shows the ASD, GS, and BM curves for this example.

Example 3: Figure 1(c) shows the normalized error bound compared to the GS error bound for a q-ary symmetric channel with p_max = 0.805 and q = 16. Thus, p_min = 0.013 and γ = 0.6506, and we set λ = 100. ASD still provides an improvement, but only for extremely low-rate codes. The lack of improvement for the q-ary symmetric channel is expected, since all error patterns with a given number t of errors are equally probable.

As the channel approaches q-ary symmetric, ASD is seen to have a progressively smaller improvement over GS decoding for low-rate codes. The fact that the bound on the error radius for ASD falls below the GS radius for higher rates is due to the approximations used in the proof of Theorem 1. The true ASD error radius is likely to be slightly greater than, or within a neighborhood of, the GS decoding radius for all code rates.

In [10], Koetter considers a general transmission scenario in which an RS or related code is used for communication over a symmetric channel, and the decoder's performance is measured in terms of an arbitrary additive cost function ∆: F_q → R^+. [10] attempts to optimize the assignment of multiplicities for the purpose of finding the maximum attainable decoding radius. An error vector e ∈ F_q^n is assigned the cost

    ∆(e) = Σ_{i=1}^n ∆(e_i).

The result of [10], stated for an additive channel, is as follows (the original text contains some misprints).

Theorem 2: Suppose that a code vector c of an RS code of rate R transmitted over the channel is received as y = c + e. If the values R and ∆(e) simultaneously satisfy the inequalities

    R ≤ (1 − ε) Σ_{a∈F_q} [ρ − θ∆(a)]_+²     (ε > 0),                (9)
    ∆(e) ≤ Σ_{a∈F_q} ∆(a) [ρ − θ∆(a)]_+,                             (10)

where [x]_+ = max(0, x), θ > 0 is an arbitrary parameter, and

    Σ_{a∈F_q} [ρ − θ∆(a)]_+ = 1,

then the vector c will be placed on the ASD output list. Moreover, relations (9)-(10) characterize the optimum tradeoff between the code rate and the cost of error.

We note that even though the theorem's result is optimum for the "error radius" in terms of the cost function, it is inferior to the result of Theorem 1 in the following sense. For the previous theorem to give a performance curve over Hamming distance, the decoder is optimized over Hamming distance, a concept that is congruent with a hard-decision decoder, not a soft-decision decoder. Namely, taking the distance as the cost function, we have ∆(α_i) = 1 − δ_{α_i,0}, i = 1, ..., q (the cost is 1 for all the symbols of F_q except for one distinguished symbol whose value is of no importance). Then (9)-(10) reduce to

    t/n ≤ n/(n + 1) − √( nR/((n + 1)(1 − ε)) − n/(n + 1)² ),    (1 − ε)/(n + 1) ≤ R ≤ 1 − ε,

which is an error radius that slightly exceeds the GS bound (2) for most rates. As expected, it is inferior to (5) for a subset of low- to medium-rate codes.
C. Size of the List

Proposition 1: Assume that k ≥ 2. Given an additive-noise channel, the size of the list for an algebraic soft-decision decoder is bounded above by

    √( n(λ²γ + λ)/(k − 1) + ((k + 1)/(2k − 2))² ) − (k + 1)/(2k − 2).     (11)

Proof: Assuming PMAS (4), observe that

    2C(M) = n Σ_{i=1}^q ⌊λp_i⌋² + n Σ_{i=1}^q ⌊λp_i⌋ ≤ nλ²γ + nλ.

Substituting this upper bound in Lemma 3 gives the result.

The bound in Proposition 1 is based on the Y-degree of the polynomial Q(X, Y). Thus, it counts all polynomials that fit the set of multiplicity points, including higher-degree ones. As a result, the bound of Proposition 1 is not tight.

Example 4: Figure 2 shows a graph of the bound on the list size presented in Proposition 1 for Example 1 with n = 255.

Fig. 1. Decoding radius of ASD compared to GS and BM decoding for (a) the "typewriter channel" of Example 1, (b) the "two-error channel" of Example 2, and (c) the q-ary symmetric channel of Example 3.
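A direct transcription of the bound (11), as a sketch (function name ours):

```python
import math

# Sketch: list-size bound (11) as a function of k, for PMAS with
# complexity factor lam over a channel with statistic gamma.

def list_size_bound(n, k, lam, gamma):
    assert k >= 2
    a = n * (lam * lam * gamma + lam) / (k - 1)
    b = (k + 1) / (2 * k - 2)
    return math.floor(math.sqrt(a + b * b) - b)

# Example 4 setting: typewriter channel, n = 255, lam = 100, gamma = 0.68;
# the bound is decreasing in k, so k = 2 is the worst case.
print(list_size_bound(255, 2, 100, 0.68))
```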
Fig. 2. List size using ASD for the "typewriter channel".

Since the bound in (11) is a strictly decreasing function of the code's dimension k, an upper bound on the size of the list is obtained by taking k = 2:

    |L| ≤ ⌊ √( nλ²γ + nλ + 9/4 ) − 3/2 ⌋ < √( n(λ²γ + λ) ).

Thus, the number of codewords on the list is bounded above by a slowly growing function of n.

D. A Closer Look at the ASD Error Radius

We are interested in quantifying when the error radius of ASD exceeds the error radius of GS decoding. For simplicity, we only analyze the case of GS decoding with m → ∞ (note that this also implies that ASD is better than GS decoding for any finite m).

Corollary 1: With λ > 1/p_min, the algebraic soft-decision decoding radius exceeds the GS decoding radius if

    R < ( (p_min − 1/λ) / ( √(γ + 1/λ) − p_max + p_min ) )².        (12)

Proof: Solving the inequality

    ( p_max − √( R(γ + 1/λ) ) − 1/λ ) / (p_max − p_min) > 1 − √R

for R and assuming λ > 1/p_min yields the corollary.

Corollary 1 gives a sufficient condition for the ASD correction radius to exceed the GS decoding radius. This condition is nontrivial for a subset of low code rates and λ large enough (see Example 1). One also notices in Example 1 that there is another non-zero subset of code rates where the transmitted codeword is always on the list, i.e., t/n ≤ 1. Corollary 2 quantifies that region.

Corollary 2: Let λ > 1/p_min. If

    R ≤ (p_min − 1/λ)² / (γ + 1/λ),

then an algebraic soft-decision decoder will always produce a list containing the transmitted codeword c.

Proof: If the right-hand side of (5) is at least 1, then ASD will produce a list that contains c regardless of the error pattern. Thus, the claim reduces to solving for R the inequality

    ( p_max − √( R(γ + 1/λ) ) − 1/λ ) / (p_max − p_min) ≥ 1,

from which the corollary follows.

It is a surprising result that there exist non-zero rates where ASD always produces a list, polynomial in n, that contains the transmitted codeword. The intuition is that Pr(y|c) ≠ 0 for all channels, but for many codewords c′ ∈ C, Pr(y|c′) = 0 due to zeros in the transition probability matrix. The zeros in the transition probability matrix allow the soft-decision decoder to discount codewords. When the code rate is low enough, the decoder can reduce the size of the list to be within the bound of Proposition 1 without actually eliminating possible codewords. In this case, the probability of list-decoding error is zero.

For the q-ary symmetric channel, a case where intuition tells us that ASD should provide no improvement over GS decoding, Figure 1(c) still shows a region where ASD's error radius exceeds GS decoding's error radius. However, there is no code construction that simultaneously satisfies the condition λ > 1/p_min and (12) for the q-ary symmetric channel unless we let λ → ∞. Thus, there is no achievable rate region where the ASD radius is larger than the GS decoding radius, except when the size of the list is unbounded.
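The two rate thresholds of Corollaries 1 and 2 are easy to compute from the three channel statistics. The sketch below does so for the typewriter channel of Example 1 (names ours; the printed values are rough checks only).

```python
import math

# Sketch: rate thresholds of Corollaries 1 and 2 (valid for lam > 1/p_min).

def rate_asd_beats_gs(p_max, p_min, gamma, lam):
    """Rates below this satisfy (12): the ASD radius exceeds the GS radius."""
    return ((p_min - 1 / lam) / (math.sqrt(gamma + 1 / lam) - p_max + p_min)) ** 2

def rate_always_on_list(p_min, gamma, lam):
    """Corollary 2: below this rate the codeword is always on the list."""
    return (p_min - 1 / lam) ** 2 / (gamma + 1 / lam)

# Typewriter channel of Example 1 (p_max = 0.8, p_min = 0.2, gamma = 0.68):
print(rate_asd_beats_gs(0.8, 0.2, 0.68, 100))   # ~0.68
print(rate_always_on_list(0.2, 0.68, 100))      # ~0.05
```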
IV. ASD PROBABILITY OF ERROR BOUNDS

This section is focused on bounding the probability of error for ASD. In the list-decoding setting, the probability of error has generally been defined as the probability that the transmitted codeword is not on the decoder's list. In the final stage of decoding, it may be required to select a unique codeword candidate from the list obtained, using the maximum-likelihood criterion. In this section, we derive a bound for the probability of error where correct decoding corresponds to the case that the transmitted codeword is on the decoder's list and is selected as the best estimate.

The only previous work that deals with deriving bounds on the probability of error for ASD is [15]. In that paper, Ratnakar and Koetter consider a general channel and use list-decoding error as the error event. We refine the results of [15] in two ways: first, we prove a tighter bound on the list-decoding error probability for low rates, and secondly, we derive a bound on the probability of selection error, which is the dominating error event for those rates.

A. General Form

As in the previous section, the channel is assumed to be additive and memoryless. The transmitted codeword is c, the decoder's list is L, and the decoder's chosen codeword is ĉ. Define the following random events:

    A : c ∉ L,        B : ĉ ≠ c.

The list-decoding probability of error, which is the probability that the list produced by ASD fails to contain the transmitted codeword, is given by P{A}. The selection probability of error is given by P{B}. Observe that A ⊆ B, giving us P{A} ≤ P{B}. Each of these probabilities can be bounded above by using the Chernoff bound (see, e.g., [4]).

Lemma 4 (Chernoff bound): Let w be a random variable with moment generating function Φ_w(s) and let A be a real number. Then

    Pr{w ≥ A} ≤ e^{−sA} Φ_w(s),    s > 0,                           (13)
    Pr{w ≤ A} ≤ e^{sA} Φ_w(−s),    s > 0.                           (14)

In applications, one optimizes over the choice of s to obtain the tightest bound possible. The Chernoff bound will allow us to write

    P{A} ≤ e^{−nE_A},    P{B} ≤ e^{−nE_B}.

The functions E_A and E_B are the error exponents for P{A} and P{B}, respectively.

B. List-Decoding Probability of Error

The next theorem gives an upper bound on the probability that the transmitted codeword is not on the list output by the decoder. A similar estimate of the error exponent, in a more general context, is derived in [15]. However, [15] does not contain the analysis of the error radius that leads us to conclude that for low rates the transmitted codeword will always be on the list. In other words, the error event for these (and some other) rates will be dominated by an incorrect selection from the list obtained through ASD.

Theorem 3: The probability of event A can be bounded above as

    P{A} ≤ e^{−nE_A},

where

    E_A = − ln Σ_{i=1}^q p_i exp( −s( p_i − √( R(γ + 1/λ) ) − 1/λ ) ),

except when R < (p_min − 1/λ)²/(γ + 1/λ) and λ > 1/p_min, in which case E_A = ∞.

Proof: The rate region where E_A = ∞ follows directly from Corollary 2. In order to gain insight into the event A for the remainder of the rates, consider again the threshold condition for ASD list-decoding success given in Lemma 2. If the condition in Lemma 2 is met, it is clear that A is false. However, if the condition is not met, A could be either true or false. Thus, for a given M, we have

    P{A} ≤ Pr{ S_M(c) < √( 2(k − 1) C(M) ) }.

Assume now that the received vector y is a random vector with coordinates distributed according to the transition probabilities in the channel. As above, let W_i be the random transition probability w_{c(i),y(i)}. With these assumptions, the components of the matrix of multiplicities M, as well as the score of a codeword c and the cost of M, become random variables. Then

    Pr{ S_M(c) < √( 2(k − 1) C(M) ) } ≤ Pr{ Σ_{i=1}^n W_i < n √( R(γ + 1/λ) ) + n/λ }.

Finally, the Chernoff bound (14) can be applied to give the result

    P{A} ≤ e^{ sn( √(R(γ + 1/λ)) + 1/λ ) } ( Σ_{i=1}^q p_i e^{−s p_i} )^n,

where s > 0. The theorem follows.

In order to obtain the tightest bound, we need to maximize E_A through a proper choice of s. If we define g(s) by E_A = −ln(g(s)), then the goal is to minimize g(s). When the maximum value of E_A is negative, the bound is trivial and no conclusion can be drawn regarding the possibility of reliable communication. The following lemma shows that reliable communication is possible when the code rate is less than a rate maximum that can be well estimated by γ.

Lemma 5: E_A > 0 if

    R < (γ − 1/λ)² / (γ + 1/λ).

Proof: We have

    g(s) = Σ_{i=1}^q p_i exp( −s( p_i − √( R(γ + 1/λ) ) − 1/λ ) ).

First observe that g″(s) > 0, indicating that the function g(s) is convex. Next observe that g(0) = 1 and that g′(0) < 0 if and only if R < (γ − 1/λ)²/(γ + 1/λ). In the case that g′(0) < 0, the minimum value of g(s) is achieved for some s′ > 0, and since g(0) = 1, g(s′) < 1. Thus, E_A > 0. Otherwise, g(s) ≥ 1, which gives the trivial estimate E_A = 0.
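Since g(s) is convex, the optimization over s can be carried out numerically. The sketch below estimates E_A by a simple grid search; the grid search is a stand-in we chose for illustration, not a procedure from the paper.

```python
import math

# Sketch: error exponent E_A of Theorem 3, maximized over s > 0 by
# grid search (g(s) is convex, so any 1-D minimizer would do).
# A negative return value means the bound is trivial at this rate.

def exponent_EA(R, p, lam, s_grid=None):
    gamma = sum(x * x for x in p)
    thr = math.sqrt(R * (gamma + 1 / lam)) + 1 / lam

    def g(s):
        return sum(pi * math.exp(-s * (pi - thr)) for pi in p if pi > 0)

    s_grid = s_grid or [0.1 * j for j in range(1, 2001)]
    return max(-math.log(g(s)) for s in s_grid)

# Typewriter channel of Example 1; Lemma 5 predicts E_A > 0 for
# R < (gamma - 1/lam)^2 / (gamma + 1/lam) ~ 0.65.
p = [0.8, 0.2] + [0.0] * 254
print(exponent_EA(0.3, p, 100) > 0)   # True
```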
C. Selection Probability of Error

When the list-decoding probability of error is zero, as in the rate region defined in Corollary 2, it gives no insight into the performance of ASD. Therefore, it is of interest to quantify a comprehensive probability of error, given by P{B}. To make the selection error probability amenable to analysis, we consider the ensemble of random Generalized Reed-Solomon (GRS) codes.

Let C be an RS code and let w = (w_1, ..., w_n) be a vector of nonzero elements of F_q. A GRS_k(w) code is obtained from C by multiplying every code vector c ∈ C by the diagonal matrix diag(w_1, ..., w_n). Thus, the code C gives rise to an ensemble of (q − 1)^n GRS_k codes, with the uniform distribution on it induced by the choice of the modifying vector w. We will assume transmission with a random GRS code where w is chosen uniformly before each transmission.

Theorem 4: Suppose that a random GRS code G of rate R is used to transmit through an additive channel. The decoding error probability of ASD can be bounded above as

    P{B} ≤ e^{−nE_B},

where

    E_B = − ln [ q^{R−1} ( e^{s/λ} + Σ_{i=1}^q Σ_{j≠i} p_i exp( −s( p_i − p_j − 1/λ ) ) ) ].

Proof: Let c_1, ..., c_{q^k} be the codewords of the code G. Define the events C_i and D_i as follows:

    C_i = { (c_i ∈ L) & ( S_M(c_i) ≥ S_M(c) ) },
    D_i = { S_M(c_i) ≥ S_M(c) },

where L is the decoder's list of codewords, c is the transmitted codeword, and M is the multiplicity matrix. Observe that

    P{B} = Pr{ ∃ c′ ≠ c, c′ ∈ L : S_M(c′) ≥ S_M(c) }
         = P{ ∪_{i: c_i ≠ c} C_i } ≤ Σ_{i: c_i ≠ c} P{C_i} ≤ Σ_{i: c_i ≠ c} P{D_i}.

For all v_i ∈ F_q^n, define the event E_i as follows:

    E_i = { S_M(v_i) ≥ S_M(c) }.

We have

    Σ_{i: c_i ≠ c} P{D_i} = Σ_{i: v_i ≠ c} P{ E_i, v_i ∈ G },

where the first sum runs over the q^k codewords of G and the second over all q^n vectors of F_q^n. Further, every coordinate of c_i is distributed uniformly in F_q, and therefore the multiplicity m_{v(i),i} is a uniform random variable taking values in {⌊λp_1⌋, ..., ⌊λp_q⌋}. Moreover, Pr{v_i ∈ G} = q^{k−n}.

We then know that S_M(v) is a sum of i.i.d. random variables m_{v(i),i}. As a result, P{E_i} is the same for all v_i ≠ c, and the events E_i and {v_i ∈ G} are independent, i.e.,

    Σ_{i: v_i ≠ c} P{ E_i, v_i ∈ G } = Σ_{i: v_i ≠ c} q^{k−n} P{E_i} ≤ q^k P{ E_i | v_i ≠ c }.

Define the random variable W_i as in the proof of Theorem 1 and let V_i, i = 1, ..., n, be i.i.d. random variables with

    P(V_i = p_j) = 1/q,   1 ≤ j ≤ q.

Then

    P{B} ≤ q^k P{ E_i | v_i ≠ c }
         = q^k Pr{ Σ_{i=1}^n ⌊λV_i⌋ ≥ Σ_{i=1}^n ⌊λW_i⌋ }
         ≤ q^k Pr{ Σ_{i=1}^n (V_i − W_i) ≥ −n/λ }.

Now let us apply the Chernoff bound (13). For any s > 0,

    Pr{ Σ_{i=1}^n (V_i − W_i) ≥ −n/λ } ≤ e^{sn/λ} ( Σ_{i=1}^q (1/q) e^{s p_i} )^n ( Σ_{i=1}^q p_i e^{−s p_i} )^n.

Algebraic manipulations and the appropriate definition of E_B yield the theorem.
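A matching sketch for E_B of Theorem 4, again with a grid search over s that we chose for illustration:

```python
import math

# Sketch: error exponent E_B of Theorem 4. R is the code rate, p the
# error distribution, q the alphabet size, lam the complexity factor.

def exponent_EB(R, p, q, lam, s_grid=None):
    def val(s):
        tot = math.exp(s / lam)          # the j = i terms sum to e^{s/lam}
        for pi in p:
            if pi == 0:
                continue                 # zero-probability rows contribute 0
            for j, pj in enumerate(p):
                if pj != pi or j == p.index(pi):
                    pass                 # no special-casing needed beyond j != i
        # explicit double sum over j != i:
        for i, pi in enumerate(p):
            if pi == 0:
                continue
            for j, pj in enumerate(p):
                if j != i:
                    tot += pi * math.exp(-s * (pi - pj - 1 / lam))
        return q ** (R - 1) * tot

    s_grid = s_grid or [0.25 * k for k in range(1, 201)]
    return max(-math.log(val(s)) for s in s_grid)

# Example 5 setting (typewriter channel, q = 256): E_B > 0 indicates
# reliable selection from the list at rate R.
p = [0.8, 0.2] + [0.0] * 254
print(exponent_EB(0.3, p, 256, 100) > 0)
```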
It is of interest to compare the behavior of E_A with that of E_B across all values of R. Let us compare the two error exponents in an example.

Example 5: Consider once again Example 1 (p_max = 0.8, p_min = 0.2, λ = 100, q = 256, and γ = 0.68). Figure 3 shows the comparison of E_A to E_B. One can see that reliable communication, in the list-decoding sense, is assured for rates less than 0.67, while reliable communication, based on the probability of error of selection, is guaranteed for rates less than 0.41.

Fig. 3. Comparison of the two error exponents for ASD over low rates.

V. MULTIVARIATE INTERPOLATION

In this section, we estimate the decoding radius of ASD for a new class of codes introduced recently by Parvaresh and Vardy in [14]. These codes are constructed as evaluations of M ≥ 2 polynomials. A multivariate interpolation decoding algorithm for the codes in [14] is shown to exceed the GS decoding radius for low values of the code rate R. In this section we extend our analysis of ASD to multivariate interpolation.

A code C in the Parvaresh-Vardy (PV) family is defined as follows. Let {1, β_1, β_2, ..., β_{M−1}} be a basis of F_{q^M} over F_q, let {a_1, a_2, ..., a_{M−1}} be a set of positive integers greater than 1, and let e(X) be an irreducible polynomial over F_q. Given a message u, the encoder constructs f(X) as the polynomial derived from it and finds the set of polynomials {g_1(X), g_2(X), ..., g_{M−1}(X)} by computing

    g_i(X) = (f(X))^{a_i} mod e(X).                                  (15)

A codeword c = (c_1, c_2, ..., c_n) of a PV code that is associated with u is found through the evaluation

    c_i = f(x_i) + Σ_{j=1}^{M−1} β_j g_j(x_i),    for all i: 1 ≤ i ≤ n.

It follows that the rate of a PV code C is R = k/(Mn) and the minimum distance is d = n − k + 1. Since (15) is a non-linear operation, the code C is not necessarily linear.
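To make (15) concrete, here is a toy encoder sketch. It assumes q prime (so F_q is arithmetic mod q) and represents each codeword symbol of F_{q^M} by its coordinate vector (f(x_i), g_1(x_i), ..., g_{M−1}(x_i)) in the basis {1, β_1, ..., β_{M−1}}, which avoids implementing extension-field arithmetic. The parameters q = 7, a_1 = 2, e(X) = X² + 1 are illustrative choices of ours, not taken from [14].

```python
# Toy sketch of PV encoding, eq. (15). Polynomials are coefficient lists
# over F_q with index = degree; e must be monic.

def poly_mulmod(a, b, e, q):
    """(a * b) mod e over F_q."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % q
    for d in range(len(prod) - 1, len(e) - 2, -1):   # reduce X^d mod e
        c = prod[d]
        if c:
            for j, ej in enumerate(e[:-1]):
                prod[d - len(e) + 1 + j] = (prod[d - len(e) + 1 + j] - c * ej) % q
        prod[d] = 0
    return prod[:len(e) - 1]

def poly_powmod(f, a, e, q):
    """f^a mod e by square-and-multiply, as in (15)."""
    result, base = [1], [c % q for c in f]
    while a:
        if a & 1:
            result = poly_mulmod(result, base, e, q)
        base = poly_mulmod(base, base, e, q)
        a >>= 1
    return result

def evaluate(f, x, q):
    y = 0
    for coeff in reversed(f):
        y = (y * x + coeff) % q
    return y

def pv_encode(f, exps, e, q):
    """PV codeword for message polynomial f; exps = (a_1, ..., a_{M-1}).
    Each symbol is the coordinate tuple (f(x), g_1(x), ..., g_{M-1}(x))."""
    gs = [poly_powmod(f, a, e, q) for a in exps]
    return [tuple(evaluate(h, x, q) for h in [f] + gs) for x in range(1, q)]

# q = 7, M = 2, a_1 = 2, e(X) = X^2 + 1 (irreducible over F_7, since -1
# is a quadratic non-residue mod 7); message polynomial f(X) = 3 + 5X.
print(pv_encode([3, 5], [2], [1, 0, 1], 7))
```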
A. Soft-Decision Decoding of PV Codes

Although Parvaresh and Vardy only considered hard-decision decoding in [14], soft-decision decoding of Folded RS codes, a broader class of codes that contains PV codes, was considered by Guruswami and Rudra in [6]. Remark 4 of [6] includes a condition for list-decoding success. In the next theorem we present a more stringent condition for list-decoding success of a PV code transmission.

Theorem 5: Let C be an [n, k] PV code over F_{q^M} communicated over a discrete, memoryless channel with additive noise. Suppose that it is decoded using a multivariate version of the ASD algorithm. A codeword c = (c_1, ..., c_n) will be included in the list output by the algorithm if

    Σ_{i=1}^n m_{c(i),i} > ( (k − 1)^M Σ_{i,j} binom(m_{i,j} + M, m_{i,j} − 1) )^{1/(M+1)},

where m_{i,j} is an element of the q^M × n multiplicity matrix.

Proof: By extending equation (38) of [13], we can derive an upper bound for the weighted degree of the multivariate polynomial as

    wdeg Q(X, Y_1, ..., Y_M) < ( (k − 1)^M Σ_{i,j} Π_{l=0}^{M} (m_{i,j} + l) )^{1/(M+1)},     (16)

where wdeg X^i Y_1^{j_1} ··· Y_M^{j_M} = i + (k − 1) Σ_{l=1}^M j_l. If the score S_M(c) exceeds the RHS of (16), then c is on the algebraic soft-decision decoder's list, by an argument similar to the one employed to prove Lemma 2.

B. Multivariate Error Decoding Radius

Suppose the PV code is transmitted over a channel with transition probabilities {p_1, p_2, ..., p_{q^M}}. The statistics p_min, p_max, and γ are defined as before over this new set of transition probabilities. An error radius is given in Theorem 6 for soft-decision decoding of PV codes.

Theorem 6: Suppose a PV code with rate R = k/(Mn) is used to communicate over an additive-noise channel. If

    t/n ≤ ( p_max − ( (R^M M^M/(M + 1)!) Σ_{i=1}^{q^M} Π_{l=0}^{M} (p_i + l/λ) )^{1/(M+1)} − 1/λ ) / (p_max − p_min),     (17)

then an algebraic soft-decision decoder with complexity factor λ will produce a list that contains the transmitted codeword c.

The proof is very similar to the proof of Theorem 1 and is omitted. The multivariate ASD error radius is larger than the bivariate ASD error radius for low-rate codes. This claim is shown through an example.

Example 6: Let us return to the typewriter channel, with p_max = 0.8, p_min = 0.2, γ = 0.68, and λ = 100, and compare trivariate soft-decision decoding of PV codes to bivariate soft-decision decoding of RS codes (in other words, ASD). Figure 4 shows the error radii (5) and (17). The graph shows that trivariate decoding provides an improvement over the bivariate one for rates less than 0.3.

Fig. 4. Trivariate decoding of a PV code compared to ASD of an RS code.
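The bound (17) transcribes into a few lines; treat the sketch below as illustrative only (the function name is ours, and the parameter values are just a demonstration).

```python
import math

# Sketch: multivariate ASD radius (17) for a PV code with M >= 2,
# given the q^M channel transition probabilities p and the factor lam.

def pv_asd_radius(R, p, M, lam):
    p_max = max(p)
    p_min = min(x for x in p if x > 0)
    inner = sum(math.prod(pi + l / lam for l in range(M + 1)) for pi in p)
    root = (R ** M * M ** M / math.factorial(M + 1) * inner) ** (1 / (M + 1))
    return (p_max - root - 1 / lam) / (p_max - p_min)

# Trivariate case (M = 2) on the typewriter channel of Example 6.
p = [0.8, 0.2] + [0.0] * 254
for R in (0.05, 0.2):
    print(R, round(pv_asd_radius(R, p, 2, 100), 3))
```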
VI. CONCLUSION

The results presented in this paper have shown that soft-decision algebraic list decoding of RS and related codes is able to outperform its hard-decision counterparts for low-rate to medium-rate codes. An estimate of the error decoding radius derived in the paper enables ASD to be compared for the first time to other RS decoding methods. This result has also been extended to multivariable RS codes. A better estimate of the error probability for list decoding under ASD is derived, which shows that at lower rates, the list-decoding error is not an adequate performance criterion for this algorithm. A comprehensive probability of error bound is also derived that includes the previously overlooked probability of selection error.

An open question that remains unanswered is whether ASD's performance makes it a worthwhile decoder to use in Reed-Solomon coding applications. For low-rate coding applications with channels that are far from q-ary symmetric, ASD shows the potential to correct a greater number of errors than hard-decision decoders. However, an interesting (and important in applications) claim made in some experimental studies, that ASD decoding outperforms its hard-decision counterparts for high-rate codes, has so far not been confirmed by theoretical analysis.

REFERENCES

[1] R. E. Blahut, Theory and Practice of Error-Correcting Codes, Reading, MA: Addison-Wesley, 1983.
[2] T. Cover and J. Thomas, Elements of Information Theory, New York, NY: John Wiley and Sons, 2001.
[3] M. El-Khamy and R. McEliece, "Interpolation multiplicity assignment algorithms for algebraic soft-decision decoding of Reed-Solomon codes," in: A. Ashikhmin and A. Barg (Eds.), Algebraic Coding Theory and Information Theory, pp. 99-120, Providence, RI: AMS, 2005.
[4] R. Gallager, Information Theory and Reliable Communication, New York, NY: John Wiley and Sons, 1968.
[5] W. Gross, F. Kschischang, R. Koetter, and G. Gulak, "Applications of algebraic soft-decision decoding of Reed-Solomon codes," IEEE Trans. Commun., vol. 54, no. 7, pp. 1224-1234, 2006.
[6] V. Guruswami and A. Rudra, "Explicit capacity-achieving list-decodable codes," Electronic Colloquium on Computational Complexity, Report 05-133, 2005, online at http://eccc.hpi-web.de/eccc/.
[7] V. Guruswami and M. Sudan, "Improved decoding of Reed-Solomon and algebraic-geometric codes," IEEE Trans. Inform. Theory, vol. 45, no. 5, pp. 1757-1767, 1999.
[8] J. Jiang and K. Narayanan, "Performance analysis of algebraic soft decoding of Reed-Solomon codes over binary symmetric and erasure channels," Proc. 2005 IEEE Internat. Sympos. Inform. Theory, Adelaide, Australia, Sept. 4-9, 2005, pp. 1186-1190.
[9] J. Justesen, "Soft-decision decoding of RS codes," Proc. 2005 IEEE Internat. Sympos. Inform. Theory, Adelaide, Australia, Sept. 4-9, 2005, pp. 1183-1185.
[10] R. Koetter, "On optimal weight assignments for multivariate interpolation list-decoding," Proc. 2006 IEEE Information Theory Workshop, Punta del Este, Uruguay, March 13-17, 2006, pp. 37-41.
[11] R. Koetter and A. Vardy, "Algebraic soft-decision decoding of Reed-Solomon codes," IEEE Trans. Inform. Theory, vol. 49, no. 11, pp. 2809-2825, 2003.
[12] R. McEliece, "The Guruswami-Sudan decoding algorithm for Reed-Solomon codes," manuscript, 2003, available online at http://www.systems.caltech.edu/EE/Faculty/rjm/.
[13] F. Parvaresh and A. Vardy, "Multiplicity assignments for algebraic soft-decision decoding of Reed-Solomon codes," Proc. 2003 IEEE Internat. Sympos. Inform. Theory, Yokohama, Japan, June 29-July 3, 2003, p. 250.
[14] F. Parvaresh and A. Vardy, "Correcting errors beyond the Guruswami-Sudan radius in polynomial time," Proc. 2005 IEEE Annual Symposium on Foundations of Computer Science (FOCS), pp. 246-257, 2005.
[15] N. Ratnakar and R. Koetter, "Exponential error bounds for algebraic soft-decision decoding of Reed-Solomon codes," IEEE Trans. Inform. Theory, vol. 51, no. 11, pp. 3899-3917, 2005.
