Min-Max decoding for non binary LDPC codes
Valentin Savin, CEA-LETI, MINATEC, Grenoble, France, valentin.savin@cea.fr
Abstract—Iterative decoding of non-binary LDPC codes is currently performed using either the Sum-Product or the Min-Sum algorithms, or slightly different versions of them. In this paper, several low-complexity quasi-optimal iterative algorithms are proposed for decoding non-binary codes. The Min-Max algorithm is one of them, and it has the benefit of two possible LLR domain implementations: a standard implementation, whose complexity scales as the square of the Galois field's cardinality, and a reduced complexity implementation called selective implementation, which makes the Min-Max decoding very attractive for practical purposes.

Index Terms—LDPC codes, graph codes, iterative decoding.

I. INTRODUCTION

Although already proposed by Gallager in his PhD thesis [1], [2], the use of non binary LDPC codes is still very limited today, mainly because of the decoding complexity of these codes. The optimal iterative decoding is performed by the Sum-Product algorithm [3], at the price of an increased complexity, computation instability, and dependence on thermal noise estimation errors. The Min-Sum algorithm [3] performs a sub-optimal iterative decoding, less complex than the Sum-Product decoding and independent of thermal noise estimation errors. The sub-optimality of the Min-Sum decoding comes from the overestimation of extrinsic messages computed within the check-node processing.

The Sum-Product algorithm can be efficiently implemented in the probability domain using binary Fourier transforms [4], and its complexity is dominated by O(q log₂ q) sum and product operations for each check node processing, where q is the cardinality of the Galois field of the non-binary LDPC code. The Min-Sum decoding can be implemented either in the log-probability domain or in the log-likelihood ratio (LLR) domain, and its complexity is dominated by O(q²) sum operations for each check node processing. In the LLR domain, a reduced selective implementation of the Min-Sum decoding, called Extended Min-Sum, was proposed in [5], [6]. Here "selective" means that the check-node processing uses the incoming messages concerning only a part of the Galois field elements. Non binary LDPC codes were also investigated in [7], [8], [9].

In this paper we propose several new algorithms for decoding non-binary LDPC codes, one of which is called the Min-Max algorithm. They are all independent of thermal noise estimation errors and perform quasi-optimal decoding, meaning that they present a very small performance loss with respect to the optimal iterative decoding (Sum-Product). We also propose two implementations of the Min-Max algorithm, both in the LLR domain, so that the decoding is computationally stable: a "standard implementation", whose complexity scales as the square of the Galois field's cardinality, and a reduced complexity implementation called "selective implementation". That makes the Min-Max decoding very attractive for practical purposes.

The paper is organized as follows. In the next section we briefly review several realizations of the Min-Sum algorithm for non binary LDPC codes. It is intended to keep the paper self-contained, but also to justify some of our choices regarding the new decoding algorithms introduced in section III. The implementation of the Min-Max decoder is discussed in section IV. Section V presents simulation results and section VI concludes this paper.

The following notations will be used throughout the paper.

Notations related to the Galois field:
• GF(q) = {0, 1, ..., q−1}, the Galois field with q elements, where q is a power of a prime number. Its elements will be called symbols, in order to be distinguished from ordinary integers.
• a, s, x will be used to denote GF(q)-symbols.
• Boldface a, s, x will be used to denote vectors of GF(q)-symbols. For instance, a = (a_1, ..., a_I) ∈ GF(q)^I, etc.

Notations related to LDPC codes:
• H ∈ M_{M,N}(GF(q)), the q-ary check matrix of the code.
• C, the set of codewords of the LDPC code.
• C_n(a), the set of codewords with the n-th coordinate equal to a, for given 1 ≤ n ≤ N and a ∈ GF(q).
• x = (x_1, x_2, ..., x_N), a q-ary codeword transmitted over the channel.

The Tanner graph associated with an LDPC code consists of N variable nodes and M check nodes representing the N columns and the M lines of the matrix H. A variable node and a check node are connected by an edge if the corresponding element of the matrix H is not zero. Each edge of the graph is labeled by the corresponding non zero element of H.

Notations related to the Tanner graph:
• H, the Tanner graph of the code.
• n ∈ {1, 2, ..., N}, a variable node of H.
• m ∈ {1, 2, ..., M}, a check node of H.
• H(n), the set of neighbor check nodes of the variable node n.
• H(m), the set of neighbor variable nodes of the check node m.
• L(m), the set of local configurations verifying the check node m, i.e. the set of sequences of GF(q)-symbols a = (a_n)_{n∈H(m)} verifying the linear constraint:
  Σ_{n∈H(m)} h_{m,n} a_n = 0
• L(m | a_n = a), the set of local configurations verifying m such that a_n = a, for given n ∈ H(m) and a ∈ GF(q).

Part of this work was performed in the frame of the ICT NEWCOM++ project, which is a partly EU funded Network of Excellence of the 7th European Framework Program.
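To make the local-configuration sets concrete, here is a minimal Python sketch that enumerates L(m) and L(m | a_n = a) for a toy degree-3 check node over GF(4). The GF(4) multiplication table and the edge labels h are illustrative choices for this example, not values taken from the paper.

```python
import itertools

# GF(4) = {0,1,2,3} with 2 = alpha, 3 = alpha + 1 (alpha^2 = alpha + 1).
# Addition is XOR; multiplication is given by an explicit table.
GF4_MUL = [
    [0, 0, 0, 0],
    [0, 1, 2, 3],
    [0, 2, 3, 1],
    [0, 3, 1, 2],
]

def local_configs(h, q=4):
    """All sequences (a_1,...,a_d) with sum_i h_i * a_i = 0, i.e. L(m)."""
    d = len(h)
    configs = []
    for a in itertools.product(range(q), repeat=d):
        s = 0
        for hi, ai in zip(h, a):
            s ^= GF4_MUL[hi][ai]  # accumulate the linear constraint
        if s == 0:
            configs.append(a)
    return configs

h = [1, 2, 3]  # illustrative non-zero edge labels h_{m,n}
L_m = local_configs(h)
print(len(L_m), "local configurations in L(m)")  # q^(d-1) = 16

# L(m | a_n = a): configurations whose n-th coordinate equals a.
n, a = 0, 2
L_m_given = [cfg for cfg in L_m if cfg[n] == a]
print(len(L_m_given), "configurations in L(m | a_0 = 2)")  # q^(d-2) = 4
```

As expected, |L(m)| = q^{d−1} and |L(m | a_n = a)| = q^{d−2}: fixing all but one symbol determines the last one, since the edge labels are non-zero, hence invertible.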
An iterative decoding algorithm consists of an initialization step followed by an iterative exchange of messages between variable and check nodes connected in the Tanner graph.

Notations related to an iterative decoding algorithm:
• γ_n(a), the a priori information of the variable node n concerning the symbol a.
• γ̃_n(a), the a posteriori information of the variable node n concerning the symbol a.
• α_{m,n}(a), the message from the variable node n to the check node m concerning the symbol a.
• β_{m,n}(a), the message from the check node m to the variable node n concerning the symbol a.

II. REALIZATIONS OF THE MIN-SUM DECODING FOR NON BINARY LDPC CODES

A. Min-Sum decoding

The Min-Sum decoding is generally implemented in the log-probability domain and it performs the following operations:

Initialization
• A priori information: γ_n(a) = −ln(Pr(x_n = a | channel))
• Variable node messages: α_{m,n}(a) = γ_n(a)

Iterations
• Check node processing:
  β_{m,n}(a) = min_{(a_{n'})_{n'∈H(m)} ∈ L(m|a_n=a)} Σ_{n'∈H(m)\{n}} α_{m,n'}(a_{n'})
• Variable node processing:
  α_{m,n}(a) = γ_n(a) + Σ_{m'∈H(n)\{m}} β_{m',n}(a)
• A posteriori information:
  γ̃_n(a) = γ_n(a) + Σ_{m∈H(n)} β_{m,n}(a)

For practical purposes, messages α_{m,n}(a) and β_{m,n}(a) should be normalized in order to avoid computational instability (otherwise they could "escape" to infinity). The check node processing, which dominates the decoding complexity, can be implemented using a forward-backward computation method.

Assuming that the Tanner graph is cycle free, the a posteriori information converges after finitely many iterations to [3]:
  γ̃_n(a) = min_{a ∈ C_n(a)} ( Σ_{k=1}^{N} γ_k(a_k) )

Moreover, if s_n = argmin_{a∈GF(q)} γ̃_n(a) is the most likely symbol according to the a posteriori information computed above, then the sequence s = (s_1, s_2, ..., s_N) is a codeword (this is no longer true if the Tanner graph contains cycles) and it can be easily verified that s = argmax_{a∈C} Pr(x = a | channel). Thus, under the cycle free assumption the Min-Sum decoding always outputs a codeword, and the above equality looks rather like a maximum likelihood decoding, which is in contrast with the sub-optimality property. In [3], the author concludes that the Min-Sum decoding is optimal in terms of block error probability, but our explanation is quite different. What really happens is that the probabilities from the above equality do not take into account the dependence relations between the codeword's symbols (or, equivalently, between the graph's variable nodes). Taking into account only the channel observation and no prior information about dependence relations between the codeword's symbols, we have Pr(x = a | channel) = Π_{n=1}^{N} Pr(x_n = a_n | channel); therefore even a sequence x which is not a codeword has a non zero probability. In some sense, the Min-Sum decoding converges to a maximum likelihood decoding, distorted by dependence relations between the codeword's symbols.

B. Equivalent iterative decoders

The term of equivalent (iterative) decoders will be employed several times throughout this paper. We begin this section by providing its rigorous definition. Consider the a posteriori information available at a variable node n after the l-th decoding iteration: it defines an order between the symbols of the Galois field, starting with the most likely symbol and ending with the least likely one. Note that the most likely symbol may correspond to the minimum or to the maximum of the a posteriori information, depending on the decoding algorithm. We call this order the a posteriori order of variable node n at iteration l.

We say that two decoders are equivalent if, for each variable node n, they both induce the same a posteriori order at any iteration l. In particular, assuming that the hard decoding corresponds to the most likely symbol, both decoders output the same sequence of GF(q)-symbols.

Two iterative decoders, which are equivalent to the Min-Sum decoder, are presented below.

1) Min-Sum_0 decoding: The Min-Sum_0 decoding performs the following operations.

Initialization
• A priori information¹: γ_n(a) = ln( Pr(x_n = 0 | channel) / Pr(x_n = a | channel) )
• Variable node messages: α_{m,n}(a) = γ_n(a)

Iterations
• Check node processing: same as for Min-Sum (II-A)
• Variable node processing:
  α'_{m,n}(a) = γ_n(a) + Σ_{m'∈H(n)\{m}} β_{m',n}(a)
  α_{m,n}(a) = α'_{m,n}(a) − α'_{m,n}(0)
• A posteriori information: same as for Min-Sum (II-A)

We note that the exchanged messages represent log-likelihood ratios with respect to a fixed symbol (here, the symbol 0 ∈ GF(q) is used, but obviously other symbols may be considered). Its main advantage with respect to the classical Min-Sum decoding is that it is computationally stable, so that there is no need to normalize the exchanged messages. This algorithm is also known as the Extended Min-Sum algorithm [5], [6].

¹ We assume that all codewords are sent equally likely.
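As a concrete illustration of the Min-Sum check node processing of section II-A, the sketch below computes β_{m,n}(a) by brute-force enumeration of L(m | a_n = a). It uses the same toy GF(4) arithmetic as in the previous sketch (redefined here to keep the block self-contained); the incoming messages are arbitrary illustrative values.

```python
import itertools

# GF(4): addition is XOR, multiplication by table (2 = alpha, alpha^2 = alpha + 1).
GF4_MUL = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]
Q = 4

def check_node_min_sum(alpha, h, n):
    """Min-Sum check node processing at a check node m.

    alpha[i][a] : incoming message alpha_{m,n_i}(a) from the i-th neighbor,
    h[i]        : edge label h_{m,n_i},
    n           : index of the destination variable node.
    Returns beta[a] = min over (a_{n'}) in L(m | a_n = a) of the sum of the
    other incoming messages (brute force, O(q^(d-1)) terms per symbol).
    """
    d = len(h)
    others = [i for i in range(d) if i != n]
    beta = [float("inf")] * Q
    for a in range(Q):
        for conf in itertools.product(range(Q), repeat=len(others)):
            s = GF4_MUL[h[n]][a]  # accumulate sum_i h_i * a_i (XOR in char. 2)
            total = 0.0
            for i, ai in zip(others, conf):
                s ^= GF4_MUL[h[i]][ai]
                total += alpha[i][ai]
            if s == 0:  # the configuration belongs to L(m | a_n = a)
                beta[a] = min(beta[a], total)
    return beta

# Illustrative incoming messages for a degree-3 check node.
alpha = [[0.0, 1.2, 3.4, 2.1],
         [0.5, 0.0, 1.7, 2.8],
         [0.0, 2.2, 0.9, 1.3]]
print(check_node_min_sum(alpha, h=[1, 2, 3], n=0))
```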
2) Min-Sum_⋆ decoding: The Min-Sum_⋆ decoding performs the following operations.

Initialization
• A priori information: γ_n(a) = ln( Pr(x_n = s_n | channel) / Pr(x_n = a | channel) ), where s_n is the most likely symbol for x_n.
• Variable node messages: α_{m,n}(a) = γ_n(a)

Iterations
• Check node processing: same as for Min-Sum (II-A)
• Variable node processing:
  α'_{m,n}(a) = γ_n(a) + Σ_{m'∈H(n)\{m}} β_{m',n}(a)
  α'_{m,n} = min_{a∈GF(q)} α'_{m,n}(a)
  α_{m,n}(a) = α'_{m,n}(a) − α'_{m,n}
• A posteriori information: same as for Min-Sum (II-A)

As the Min-Sum_0 decoding, the Min-Sum_⋆ decoding also performs in the LLR domain and is computationally stable. The main difference is that the exchanged messages represent log-likelihood ratios with respect to the most likely symbol, which may vary from one variable node to another or, within a fixed variable node, from one iteration to the other.

Theorem 1: The Min-Sum, Min-Sum_0 and Min-Sum_⋆ decoders are equivalent.

It follows that the Min-Sum_⋆ decoding does not present any practical interest: the equivalent Min-Sum_0 decoding is already computationally stable and less complex (it does not require the minimum computation in the variable node processing step). However, the Min-Sum_⋆ motivates the decoding algorithms that will be introduced in the following section. The fundamental observation is that messages concerning most likely symbols are always equal to zero, and messages concerning the other symbols are positive. Therefore, exchanged messages can be seen as metrics indicating how far a given symbol is from the most likely one. This will be developed in the next section.

III. MIN-NORM DECODING FOR NON BINARY LDPC CODES

According to the discussion in the above section, we interpret variable node messages as metrics indicating the distance between a given symbol and the most likely one. Consider a check node m and a variable node n ∈ H(m). Using the received extrinsic information, i.e. the metrics received from the variable nodes n' ∈ H(m)\{n}, the check node m has to evaluate:
• the most likely symbol for variable node n,
• how far the other symbols are from the most likely one.

To simplify notation, we set H(m) = {n, n_1, ..., n_d} and let s_i be the most likely symbols corresponding to the variable nodes n_i, i = 1, ..., d. Since the linear constraint corresponding to the check node m has to be satisfied, the most likely symbol for the variable node n is the unique symbol s ∈ GF(q) such that:
  (s, s_1, ..., s_d) ∈ L(m)
On the other hand, each symbol a ∈ GF(q) corresponds to a set of d-tuples (a_1, ..., a_d) such that (a, a_1, ..., a_d) ∈ L(m):
  L_a(m) = {(a_1, ..., a_d) | (a, a_1, ..., a_d) ∈ L(m)}
Thus, identifying s ≡ (s_1, ..., s_d) and a ≡ L_a(m), the distance between the most likely symbol s and the symbol a can be evaluated as the distance from the sequence (s_1, ..., s_d) to the set L_a(m). As usual, we consider that the distance from a point to a set is equal to the minimum distance between the given point and the points of the given set. In addition, we have to specify a rule that computes the distance between two sequences (s_1, ..., s_d) and (a_1, ..., a_d), taking into account the received "marginal distances" between the s_i's and the a_i's. This can be done by using the distance associated with one of the p-norms (p ≥ 1), or the infinity (also called maximum) norm, on an appropriate multidimensional vector space. Precisely:

  β_{m,n}(a) = dist((s_1, ..., s_d), L_a(m))
             = min_{(a_1,...,a_d)∈L_a(m)} dist((s_1, ..., s_d), (a_1, ..., a_d))
             = min_{(a_1,...,a_d)∈L_a(m)} ‖dist(s_1, a_1), ..., dist(s_d, a_d)‖_p
             = min_{(a_1,...,a_d)∈L_a(m)} ‖α_1(a_1), ..., α_d(a_d)‖_p

where α_i(a_i) stands for the incoming message α_{m,n_i}(a_i). Note that for p = 1 we obtain the Min-Sum_⋆ decoding from the above section. Taking into account that:
  ‖·‖_1 ≥ ‖·‖_2 ≥ ··· ≥ ‖·‖_∞
it follows that using any of the ‖·‖_p or ‖·‖_∞ norms would reduce the value of check node messages, which is desirable, since these messages are actually overestimated by the Min-Sum algorithm. One could think that the infinity norm would excessively reduce check node message values, but this is not true. In fact, the distance between ‖·‖_1 and the infinity norm is less than twice the distance between ‖·‖_1 and the Euclidian (p = 2) norm. In practice, it turns out that the infinity norm induces a more accurate computation of check node messages than ‖·‖_1.

We focus now on the Euclidean and the infinity norms. The derived decoding algorithms are called Euclidean decoding, respectively Min-Max decoding. They perform the same initialization step, variable node processing, and a posteriori information update as the Min-Sum_⋆ decoding; therefore only the check node processing step will be described below.

A. Euclidean decoding
• Check node processing:
  β_{m,n}(a) = min_{(a_{n'})_{n'∈H(m)} ∈ L(m|a_n=a)} Σ_{n'∈H(m)\{n}} α²_{m,n'}(a_{n'})
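The check node rules for p = 1, p = 2 and p = ∞ differ only in how the marginal distances are aggregated, which the following brute-force sketch makes explicit (same toy GF(4) setting as before; for p = 2 it keeps the convention above of minimizing the sum of squares rather than its square root).

```python
import itertools

GF4_MUL = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]
Q = 4

def check_node_min_norm(alpha, h, n, p):
    """Norm-based check node rule: p=1 -> Min-Sum*, p=2 -> Euclidean,
    p=float('inf') -> Min-Max. Brute force over L(m | a_n = a)."""
    d = len(h)
    others = [i for i in range(d) if i != n]
    beta = [float("inf")] * Q
    for a in range(Q):
        for conf in itertools.product(range(Q), repeat=len(others)):
            s = GF4_MUL[h[n]][a]
            for i, ai in zip(others, conf):
                s ^= GF4_MUL[h[i]][ai]
            if s != 0:
                continue  # not a local configuration
            msgs = [alpha[i][ai] for i, ai in zip(others, conf)]
            if p == float("inf"):
                cost = max(msgs)                   # infinity (maximum) norm
            else:
                cost = sum(m ** p for m in msgs)   # p-th power of the p-norm
            beta[a] = min(beta[a], cost)
    return beta

alpha = [[0.0, 1.2, 3.4, 2.1],
         [0.5, 0.0, 1.7, 2.8],
         [0.0, 2.2, 0.9, 1.3]]
for p in (1, 2, float("inf")):
    print(p, check_node_min_norm(alpha, h=[1, 2, 3], n=0, p=p))
```

On any input, the infinity-norm messages are never larger than the p = 1 messages, in line with the norm inequality above.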
4
B. Min-Max decoding
• Check node processing:
  β_{m,n}(a) = min_{(a_{n'})_{n'∈H(m)} ∈ L(m|a_n=a)} ( max_{n'∈H(m)\{n}} α_{m,n'}(a_{n'}) )

Theorem 2: Over GF(2), any decoding algorithm using any one of the p-norms, with p ≥ 1, or the infinity-norm is equivalent to the Min-Sum decoding. In particular, the Min-Sum, Min-Max, and Euclidian decodings are equivalent.

IV. IMPLEMENTATION OF THE MIN-MAX DECODER

In this section we give details about the implementation of the check node processing within the Min-Max decoder. First, we give a standard implementation using a well-known forward-backward computation technique. Next, we show that the computations performed within the standard implementation do not need to use the information concerning all symbols of the Galois field and, consequently, we derive the so-called selective implementation of the Min-Max decoder.

A. Standard implementation of the Min-Max decoder

Let m be a check node and H(m) = {n_1, n_2, ..., n_d} be the set of variable nodes connected to m in the Tanner graph. We recursively define forward and backward metrics, (F_i)_{i=1,...,d−1} and respectively (B_i)_{i=2,...,d}, as follows:

Forward metrics
• F_1(a) = α_{m,n_1}(h^{−1}_{m,n_1} a)
• F_i(a) = min_{a',a''∈GF(q), a' + h_{m,n_i} a'' = a} ( max(F_{i−1}(a'), α_{m,n_i}(a'')) )

Backward metrics
• B_d(a) = α_{m,n_d}(h^{−1}_{m,n_d} a)
• B_i(a) = min_{a',a''∈GF(q), a' + h_{m,n_i} a'' = a} ( max(B_{i+1}(a'), α_{m,n_i}(a'')) )

Then check node messages can be computed as follows:
• β_{m,n_1}(a) = B_2(−h_{m,n_1} a) and β_{m,n_d}(a) = F_{d−1}(−h_{m,n_d} a)
• β_{m,n_i}(a) = min_{a',a''∈GF(q), a' + a'' = −h_{m,n_i} a} ( max(F_{i−1}(a'), B_{i+1}(a'')) )

B. Selective implementation of the Min-Max decoder

In this section we focus on the building blocks of the standard implementation, which are min-max computations of the following type:
  f(a) = min_{a',a''∈GF(q), h'a' + h''a'' = ha} ( max(f'(a'), f''(a'')) )
The computation of f requires O(q²) comparisons. Our goal is to reduce this complexity by reducing the number of symbols a' and a'' involved in the min-max computation. We start with the following proposition.

Proposition 3: Let ∆', ∆'' ⊂ GF(q) be two subsets of the Galois field, such that card(∆') + card(∆'') ≥ q + 1. Then for any a ∈ GF(q), there exist a' ∈ ∆' and a'' ∈ ∆'' such that:
  ha = h'a' + h''a''

Corollary 4: Let ∆', ∆'' ⊂ GF(q) be such that the set
  {f'(a') | a' ∈ ∆'} ∪ {f''(a'') | a'' ∈ ∆''}
contains the q + 1 lowest values of the set
  {f'(a') | a' ∈ GF(q)} ∪ {f''(a'') | a'' ∈ GF(q)}
Then for any a ∈ GF(q) the following equality holds:
  f(a) = min_{a'∈∆', a''∈∆'', h'a'+h''a''=ha} ( max(f'(a'), f''(a'')) )

Example 5: Assume that we have to compute the values of f and the base Galois field is GF(8). We proceed as follows:
• Determine the 9 smallest values among the 16 values of the set {f'(0), f'(1), ..., f'(7), f''(0), f''(1), ..., f''(7)}. Let us assume for instance that the 9 smallest values are {f'(1), f'(2), f'(3), f'(4), f'(6), f'(7), f''(0), f''(1), f''(5)}.
• Set ∆' = {1, 2, 3, 4, 6, 7} and ∆'' = {0, 1, 5}, then compute the values of f using only the symbols in ∆' and ∆''.
In this way the computation of the 8 values of f takes only 6 × 3 = 18 comparisons instead of 64.

In general, let q' = card(∆') and q'' = card(∆''), where the subsets ∆' and ∆'' are assumed to satisfy the conditions of the above corollary (then q' + q'' ≥ q + 1). If q' + q'' = q + 1, then the complexity of the "min-max" computation can be reduced by at least a factor of four, from q² to q'q'' ≤ (q/2)(q/2 + 1).

The main problem we encounter is that we should sort the 2q values of f' and f'' in order to figure out which symbols participate in the sets ∆' and ∆''. In order to avoid the sorting process, we generate the subsets:
  ∆'_k = {a' ∈ GF(q) | ⌊f'(a')⌋ = k}
  ∆''_k = {a'' ∈ GF(q) | ⌊f''(a'')⌋ = k}
where ⌊x⌋ denotes the integer part of x. We then define:
  ∆' = ∆'_0 ∪ ··· ∪ ∆'_t and ∆'' = ∆''_0 ∪ ··· ∪ ∆''_t
starting with t = 0 and increasing its value until we get card(∆') + card(∆'') ≥ q + 1.

The use of the sets ∆'_k and ∆''_k has a double interest (see the sketch after this list):
• it reduces the number of symbols required by (and then the complexity of) the min-max computation;
• most of the maxima computations become obsolete. Indeed, max(f'(a'), f''(a'')) is calculated only if the symbols a' and a'' belong to sets of the same index (i.e. a' ∈ ∆'_{k'} and a'' ∈ ∆''_{k''} with k' = k''). Otherwise, the maximum corresponds to the symbol belonging to the set of maximal index.
Finally, the proposed approach is carried out in practice as follows:
• The received a priori information is normalized such that its average value is equal to a predefined constant AI (for "Average a priori Information"). This constant is chosen such that the integer part provides a sharp criterion to distinguish between the symbols of the Galois field, meaning that the sets ∆'_k and ∆''_k only contain few elements.
• We use a predefined threshold COT (for "Cut Off Threshold") representing the maximum value of variable node messages. Thus, incoming variable node messages greater than or equal to the predefined threshold will not participate in the min-max computations for check node processing. It follows that the rank k of the subsets ∆'_k and ∆''_k ranges from 0 to COT − 1. In this case, the subsets ∆'_k and ∆''_k may contain altogether less than q + 1 symbols, in which case the min-max complexity may be significantly reduced. In practice, this generally occurs during the last iterations, when decoding is quite advanced and doubts persist only about a reduced number of symbols.

Constants AI and COT have to be determined by Monte Carlo simulation and they only depend on the cardinality of the Galois field.

Fig. 1 (BER vs Eb/N0, in dB). Decoding performance of GF(16)-LDPC codes, AWGN, 16-QAM, coding rate 1/2, maximum number of decoding iterations = 200; code length = 2016 bits (504 GF(16)-symbols).

Fig. 2 (average number of operations, in thousands, per decoded bit vs Eb/N0, in dB). Decoding complexity of GF(16)-LDPC codes, AWGN, 16-QAM, coding rate 1/2, maximum number of decoding iterations = 200.

V. SIMULATION RESULTS

We present Monte-Carlo simulation results for coding rate 1/2 over the AWGN channel. Fig. 1 presents the performance of GF(16)-LDPC codes with the 16-QAM modulation (Galois field symbols are mapped to constellation symbols). We note that the Euclidian and the Min-Max (standard and selective implementations) algorithms achieve nearly the same decoding performance, and the gap to the Sum-Product decoding performance is only of 0.2 dB. For the selective implementation we used the constants AI = 12 and COT = 31.

Fig. 2 presents the decoding complexity in number of operations per decoded bit (all decoding iterations included) of the Min-Sum, the standard and the selective Min-Max decoders over GF(16). The Min-Sum and the standard Min-Max decoders perform nearly the same number of operations per iteration, but the second decoder has better performance and needs a smaller number of iterations to converge. On the contrary, the standard and the selective Min-Max decoders have the same performance and they perform the same number of decoding iterations. Simply, the selective decoder takes a smaller number of operations to perform each decoding iteration, which explains its lower complexity (by a factor of 4 with respect to the standard implementation).

VI. CONCLUSIONS

We have shown that the extrinsic messages exchanged within the Min-Sum decoding of non binary LDPC codes may be seen as metrics indicating the distance between a given symbol and the most likely one. By using appropriate metrics, we have derived several low-complexity quasi-optimal iterative algorithms for decoding non-binary codes. The Euclidian and the Min-Max algorithms are two of them. Furthermore, we gave a canonical selective implementation of the Min-Max decoding, which reduces the number of operations taken to perform each decoding iteration, without any performance degradation. The quasi-optimal performance together with the low complexity make the Min-Max decoding very attractive for practical purposes.

REFERENCES
[1] R. G. Gallager, Low Density Parity Check Codes, Ph.D. thesis, MIT, Cambridge, Mass., September 1960.
[2] R. G. Gallager, Low Density Parity Check Codes, M.I.T. Press, 1963, Monograph.
[3] N. Wiberg, Codes and decoding on general graphs, Ph.D. thesis, Linköping University, Sweden, 1996.
[4] L. Barnault and D. Declercq, "Fast decoding algorithm for LDPC over GF(2^q)," in Information Theory Workshop, 2003. Proceedings. 2003 IEEE, 2003, pp. 70–73.
[5] D. Declercq and M. Fossorier, "Extended min-sum algorithm for decoding LDPC codes over GF(q)," in Information Theory, 2005. ISIT 2005. Proceedings. International Symposium on, 2005, pp. 464–468.
[6] D. Declercq and M. Fossorier, "Decoding algorithms for nonbinary LDPC codes over GF(q)," Communications, IEEE Transactions on, vol. 55, pp. 633–643, 2007.
[7] M. C. Davey and D. J. C. MacKay, "Low density parity check codes over GF(q)," Information Theory Workshop, 1998, pp. 70–71, 1998.
[8] X. Y. Hu and E. Eleftheriou, "Cycle Tanner-graph codes over GF(2^b)," Information Theory, 2003. Proceedings. IEEE International Symposium on, 2003.
[9] H. Wymeersch, H. Steendam, and M. Moeneclaey, "Log-domain decoding of LDPC codes over GF(q)," Communications, 2004 IEEE International Conference on, vol. 2, 2004.

APPENDIX I
ALTERNATIVE REALIZATIONS OF THE MIN-SUM ALGORITHM

In this section we introduce two alternative realizations, called Min-Sum_0 and Min-Sum_⋆, of the Min-Sum algorithm. We prove that they are equivalent to the MSA algorithm of Section II and that, over GF(2), the Min-Sum_⋆ is equivalent to the Min-Max algorithm. We assume that the Tanner graph H is cycle free throughout this section².

² Nevertheless, this assumption can be removed by replacing H with computation trees and "codewords" with "pseudo-codewords".
Notation.
• n ∈ N = {1, 2, ..., N}, a variable node of H.
• m ∈ M = {1, 2, ..., M}, a check node of H.
• For each check node m ∈ M and each sequence (a_n)_{n∈H(m)} of GF(q)-symbols indexed by the set of variable nodes connected to m, we note:
  m⟨(a_n)_{n∈H(m)}⟩ = Σ_{n∈H(m)} h_{m,n} a_n
The sequence (a_n)_{n∈H(m)} is said to satisfy the check node m if m⟨(a_n)_{n∈H(m)}⟩ = 0. Thus:
  m⟨(a_n)_{n∈H(m)}⟩ = 0 ⇔ (a_n)_{n∈H(m)} ∈ L(m)

The Min-Sum_0 decoding performs the following computations:

Initialization
• A priori information:
  γ_n(a) = ln( Pr(x_n = 0 | channel observation) / Pr(x_n = a | channel observation) )
• Variable to check messages initialization: α_{m,n}(a) = γ_n(a)

Iterations
• Check to variable messages:
  β_{m,n}(a) = min_{(a_{n'})_{n'∈H(m)} ∈ L(m|a_n=a)} Σ_{n'∈H(m)\{n}} α_{m,n'}(a_{n'})
• Variable to check messages:
  α'_{m,n}(a) = γ_n(a) + Σ_{m'∈H(n)\{m}} β_{m',n}(a)
  α_{m,n}(a) = α'_{m,n}(a) − α'_{m,n}(0)
• A posteriori information:
  γ̃_n(a) = γ_n(a) + Σ_{m∈H(n)} β_{m,n}(a)

Theorem 6: The Min-Sum_0 decoding converges after finitely many iterations to
  γ̃_n(a) = min_{a=(a_1,...,a_N)∈C : a_n=a} Σ_{k∈N} γ_k(a_k) − min_{a=(a_1,...,a_N)∈C : a|_{V_n(1)}=0} Σ_{k∈N} γ_k(a_k)
where V_n(1) := {n} ∪ ( ∪_{m∈H(n)} H(m) ) is the set of variable nodes separated from n by at most one check node (including n).

Proof: The proof is derived in the same manner as that of theorem 3.1 in [3]. The critical point is that we have to control the effect of withdrawing the value α'_{m,n}(0) from all the messages α_{m,n}(a). Fix a variable node n and let:
  f(a) = min_{a=(a_1,...,a_N)∈C : a_n=a} Σ_{k∈N} γ_k(a_k)
  f = min_{a=(a_1,...,a_N)∈C : a|_{V_n(1)}=0} Σ_{k∈N} γ_k(a_k)
  f̃(a) = f(a) − f

We have to prove that, after finitely many iterations, the equality γ̃_n(a) = f̃(a) holds. Since the Tanner graph H is cycle free, it can be seen as a tree graph rooted at n. Let H(n) = {m_1, m_2, ..., m_d} and, for j = 1, 2, ..., d, let H_j be the sub-graph of H emanating from the check node m_j, as represented below:

Fig. 3. Graphical representation of sub-graphs H_j.

Let also C_j be the linear code corresponding to the sub-graph H_j ∪ {n}. Since γ_k(0) = 0 for any k = 1, ..., N, we have f = Σ_{j=1}^{d} f_j, where f_j = min_{a∈C_j : a|_{H(m_j)}=0} Σ_{k∈H_j} γ_k(a_k). Therefore,

  f̃(a) = min_{a=(a_1,...,a_N)∈C : a_n=a} Σ_{k∈N} γ_k(a_k) − f
        = γ_n(a) + Σ_{j=1}^{d} ( min_{a∈C_j : a_n=a} Σ_{k∈H_j} γ_k(a_k) ) − Σ_{j=1}^{d} f_j

Denoting g_{m_j}(a) = min_{a∈C_j : a_n=a} Σ_{k∈H_j} γ_k(a_k) − f_j, we get

  f̃(a) = γ_n(a) + Σ_{j=1}^{d} g_{m_j}(a)

This formula has the same structure as the update rule of the a posteriori information γ̃_n(a). It follows that the equality γ̃_n(a) = f̃(a) holds provided that β_{m_j,n}(a) = g_{m_j}(a). Due to the symmetry of the situation it suffices to carry out the case j = 1. For this, let H(m_1) = {n, n_1, n_2, ..., n_d} and, for j = 1, 2, ..., d, let H_{1,j} be the sub-graph of H_1 emanating from the variable node n_j, as represented below:

Fig. 4. Graphical representation of sub-graphs H_{1,j}.
Let also C_{1,j} be the linear code corresponding to the sub-graph H_{1,j} and define t_j(a_j) = min_{a∈C_{1,j} : a_{n_j}=a_j} Σ_{k∈H_{1,j}} γ_k(a_k). Then:

  min_{a∈C_1 : a_n=a} Σ_{k∈H_1} γ_k(a_k) = min_{(a_1,...,a_d)∈GF(q)^d, m_1⟨a,a_1,...,a_d⟩=0} Σ_{j=1}^{d} t_j(a_j)

and

  f_1 = min_{a∈C_1 : a|_{H(m_1)}=0} Σ_{k∈H_1} γ_k(a_k) = Σ_{j=1}^{d} ( min_{a∈C_{1,j} : a_{n_j}=0} Σ_{k∈H_{1,j}} γ_k(a_k) ) = Σ_{j=1}^{d} t_j(0)

Therefore:

  g_{m_1}(a) = min_{(a_1,...,a_d)∈GF(q)^d, m_1⟨a,a_1,...,a_d⟩=0} Σ_{j=1}^{d} t_j(a_j) − Σ_{j=1}^{d} t_j(0)
             = min_{(a_1,...,a_d)∈GF(q)^d, m_1⟨a,a_1,...,a_d⟩=0} Σ_{j=1}^{d} ( t_j(a_j) − t_j(0) )

Defining t = min_{a∈C_{1,j} : a|_{V_{n_j}(1)}=0} Σ_{k∈H_{1,j}} γ_k(a_k),

  f'_{n_j}(a_j) = min_{a∈C_{1,j} : a_{n_j}=a_j} Σ_{k∈H_{1,j}} γ_k(a_k) − min_{a∈C_{1,j} : a|_{V_{n_j}(1)}=0} Σ_{k∈H_{1,j}} γ_k(a_k) = t_j(a_j) − t

and f_{n_j}(a_j) = f'_{n_j}(a_j) − f'_{n_j}(0) = t_j(a_j) − t_j(0), we get

  g_{m_1}(a) = min_{(a_1,...,a_d)∈GF(q)^d, m_1⟨a,a_1,...,a_d⟩=0} Σ_{j=1}^{d} f_{n_j}(a_j)

This formula has the same structure as the update rule of the check to variable messages β_{m_1,n}(a). It follows that the equality β_{m_1,n}(a) = g_{m_1}(a) holds provided that α_{m_1,n_j}(a) = f_{n_j}(a). This derives from the fact that f_{n_j}(a) verifies the same update rule as α_{m_1,n_j}(a) or, equivalently, f'_{n_j}(a) verifies the same update rule as α'_{m_1,n_j}(a). The proof proceeds in the same manner as for f̃(a), and then will be omitted. This process is repeated recursively until the leaf variable nodes are reached. Finally, we remark that if n_j were a leaf variable node, then f_{n_j}(a_j) = γ_{n_j}(a_j) = α_{m_1,n_j}(a_j), and so we are done. ∎

The Min-Sum_⋆ decoding performs the following computations:

Initialization
• A priori information:
  γ_n(a) = ln( Pr(x_n = s_n | channel observation) / Pr(x_n = a | channel observation) )
  where s_n is the most likely symbol for x_n.
• Variable to check messages initialization: α_{m,n}(a) = γ_n(a)

Iterations
• Check to variable messages:
  β_{m,n}(a) = min_{(a_{n'})_{n'∈H(m)} ∈ L(m|a_n=a)} Σ_{n'∈H(m)\{n}} α_{m,n'}(a_{n'})
• Variable to check messages:
  α'_{m,n}(a) = γ_n(a) + Σ_{m'∈H(n)\{m}} β_{m',n}(a)
  α'_{m,n} = min_{a∈GF(q)} α'_{m,n}(a)
  α_{m,n}(a) = α'_{m,n}(a) − α'_{m,n}
• A posteriori information:
  γ̃_n(a) = γ_n(a) + Σ_{m∈H(n)} β_{m,n}(a)

Fix a variable node n, note H_⋆ = H \ ({n} ∪ H(n)), and let C_⋆ be the linear code associated with H_⋆ (when a node of H is removed, we also remove all the edges that are incident to the node). With this notation we have the following theorem.

Theorem 7: The Min-Sum_⋆ decoding converges after finitely many iterations to
  γ̃_n(a) = min_{a=(a_k)_{k∈N}∈C : a_n=a} Σ_{k∈N} γ_k(a_k) − min_{a=(a_k)∈C_⋆} Σ_{k∈H_⋆} γ_k(a_k)

The proof can be derived in the same manner as that of the above theorem, and then will be omitted.

We also note that the convergence formula of the MSA decoder is (see theorem 3.1 in [3]):
  γ̃_n(a) = min_{a=(a_k)_{k∈N}∈C : a_n=a} Σ_{k∈N} γ_k(a_k)

For each of the Min-Sum_0 and Min-Sum_⋆ decoders the convergence formula is obtained by withdrawing from the right hand side term³ of the above formula a term that is independent of a. Note also that similar formulas hold after any fixed number of iterations; to see this, it suffices to cut off the tree graph H at the appropriate depth. Therefore, we get the following:

Corollary 8: The MSA, Min-Sum_0 and Min-Sum_⋆ decoders are equivalent.

We prove now that the MSA and the MMA (Min-Max) decoders are equivalent over GF(2) (in a similar manner it can be proved that, over GF(2), the MSA decoder is equivalent to any other Min-‖·‖_p decoder).

Proof: Since the MSA and the Min-Sum_⋆ decoders are equivalent, it suffices to prove that the Min-Sum_⋆ and MMA decoders are equivalent over GF(2). In this case, the variable to check

³ Note also that the a priori informations γ_n(a) are not computed in the same way for the three decoders, but they only differ by a term independent of a. Thus, for the MSA decoder γ_n(a) = −ln(Pr(x_n = a)), for the Min-Sum_0 decoder γ_n(a) = −ln(Pr(x_n = a)) + ln(Pr(x_n = 0)), and for the Min-Sum_⋆ decoder γ_n(a) = −ln(Pr(x_n = a)) + ln(Pr(x_n = s_n)), all probabilities being conditioned by the channel observation.
messages α_{m,n}(a) are non negative and they concern only two symbols a ∈ {0, 1}. Moreover, for one of these symbols the corresponding variable to check message is zero (precisely, for the symbol realizing the minimum of {α'_{m,n}(a), a = 0, 1}).

Fix a check node m, a variable node n and a symbol a ∈ GF(2), and let (ā_{n'})_{n'∈H(m)\{n}}, such that m⟨(ā_{n'})_{n'}, a⟩ = 0, be the sequence realizing the minimum:
  min_{(a_{n'})_{n'∈H(m)} ∈ L(m|a_n=a)} Σ_{n'∈H(m)\{n}} α_{m,n'}(a_{n'})
Then, for the Min-Sum_⋆ decoder:
  β_{m,n}(a) = Σ_{n'∈H(m)\{n}} α_{m,n'}(ā_{n'})

Suppose that there are two symbols, say ā_{n'_1} and ā_{n'_2}, of the sequence (ā_{n'})_{n'∈H(m)\{n}}, whose corresponding variable to check messages are not zero, i.e. α_{m,n'_i}(ā_{n'_i}) > 0, i = 1, 2. Then, consider a new sequence (ã_{n'})_{n'∈H(m)\{n}}, such that ã_{n'} = ā_{n'} if n' ∉ {n'_1, n'_2} and ã_{n'_i} = ā_{n'_i} + 1 mod 2 for i = 1, 2. This sequence still satisfies the check m, i.e. m⟨(ã_{n'})_{n'}, a⟩ = 0, and
  Σ_{n'∈H(m)\{n}} α_{m,n'}(ã_{n'}) < Σ_{n'∈H(m)\{n}} α_{m,n'}(ā_{n'})
as α_{m,n'_i}(ã_{n'_i}) = 0 < α_{m,n'_i}(ā_{n'_i}), which contradicts the minimality of the sequence (ā_{n'})_{n'∈H(m)\{n}}. It follows that there is at most one symbol ā_{n'} whose corresponding variable to check message α_{m,n'}(ā_{n'}) ≠ 0. Consequently, the sequence (ā_{n'})_{n'∈H(m)\{n}} also realizes the minimum
  min_{(a_{n'})_{n'∈H(m)} ∈ L(m|a_n=a)} ( max_{n'∈H(m)\{n}} α_{m,n'}(a_{n'}) )
so the update rules for check to variable messages compute the same messages for both decoders. ∎
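A quick numerical check of this equivalence (a sketch with randomly generated messages, not values from the paper): over GF(2), with one zero message per neighbor, as produced by the Min-Sum_⋆ normalization, the sum-based and max-based check node rules return identical messages.

```python
import itertools
import random

def beta(alpha, a, aggregate):
    """Check node rule over GF(2): minimize `aggregate` of the other incoming
    messages over all configurations with even overall parity (a included)."""
    d = len(alpha)
    best = float("inf")
    for conf in itertools.product((0, 1), repeat=d):
        if (a + sum(conf)) % 2 == 0:  # (a, conf) is a local configuration
            best = min(best, aggregate(alpha[i][conf[i]] for i in range(d)))
    return best

random.seed(7)
for _ in range(1000):
    d = random.randint(2, 6)
    # Min-Sum* messages over GF(2): non negative, one of the two values is 0.
    alpha = []
    for _ in range(d):
        v = random.uniform(0.1, 5.0)
        alpha.append([0.0, v] if random.random() < 0.5 else [v, 0.0])
    for a in (0, 1):
        assert beta(alpha, a, sum) == beta(alpha, a, max)
print("sum-based (Min-Sum*) and max-based (Min-Max) rules agree over GF(2)")
```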
APPENDIX II
PROOF OF PROPOSITION 3

Proof: Let a ∈ GF(q). Consider the functions ϕ, ψ : GF(q) → GF(q), defined by ϕ(x) = ha + h'x and ψ(x) = h''x. Since ϕ and ψ are injective, we have:
  card(ϕ(∆')) + card(ψ(∆'')) = card(∆') + card(∆'') ≥ q + 1
It follows that ϕ(∆') ∩ ψ(∆'') ≠ ∅, so there are a' ∈ ∆' and a'' ∈ ∆'' such that:
  ϕ(a') = ψ(a'') ⇔ ha + h'a' = h''a'' ⇔ a = h^{−1}(h'a' + h''a'') ∎
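The pigeonhole argument above is easy to test exhaustively. The sketch below is an illustrative check over GF(8) with a carryless multiplication routine (the primitive polynomial x³ + x + 1 is an assumption of this example): it draws random subsets ∆', ∆'' with card(∆') + card(∆'') ≥ q + 1 and verifies that every ha is reachable as h'a' + h''a''.

```python
import random

Q = 8

def gf8_mul(a, b):
    """Multiplication in GF(8) modulo the primitive polynomial x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= 0b1011  # reduce by x^3 + x + 1
        b >>= 1
    return r

random.seed(0)
for _ in range(200):
    h, hp, hpp = (random.randint(1, Q - 1) for _ in range(3))  # non-zero labels
    sp = random.randint(1, Q)
    spp = random.randint(Q + 1 - sp, Q)        # ensure |D'| + |D''| >= q + 1
    Dp = random.sample(range(Q), sp)
    Dpp = random.sample(range(Q), spp)
    for a in range(Q):
        target = gf8_mul(h, a)
        # addition in characteristic 2 is XOR, so h'a' + h''a'' = h'a' ^ h''a''
        assert any(gf8_mul(hp, ap) ^ gf8_mul(hpp, app) == target
                   for ap in Dp for app in Dpp)
print("Proposition 3 verified on random subsets over GF(8)")
```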