
Non-Random Coding Error Exponent for Lattices

Yuval Domb, Student Member, IEEE, Meir Feder, Fellow, IEEE

Abstract — An upper bound on the error probability of specific lattices, based on their distance spectrum, is constructed. The derivation is accomplished using a simple alternative to the Minkowski-Hlawka mean-value theorem of the geometry of numbers. In many ways, the new bound greatly resembles the Shulman-Feder bound for linear codes. Based on the new bound, an error exponent is derived for specific lattice sequences (of increasing dimension) over the AWGN channel. Measuring a sequence's gap to capacity using the new exponent is demonstrated.

I. INTRODUCTION

For continuous channels, Infinite Constellation (IC) codes are the natural coded-modulation scheme. The encoding operation is a simple one-to-one mapping from the information messages to the IC codewords. The decoding operation is equivalent to finding the "closest"^1 IC codeword to the corresponding channel output. It has been shown that codes based on linear ICs (a.k.a. lattices) can achieve optimal error performance [1], [2]. A widely accepted framework for lattice codes' error analysis is commonly referred to as Poltyrev's setting [3]. In Poltyrev's setting the code's shaping region, defined as the finite subset of the otherwise infinite set of lattice points, is ignored, and the lattice structure is analyzed for its coding (soft packing) properties only. Consequently, the usual rate variable R is infinite and is replaced by the Normalized Log Density (NLD), δ. The lattice analogue of Gallager's random-coding error exponent [4], over a random linear code ensemble, is Poltyrev's error exponent over a set of lattices of constant density.

Both Gallager's and Poltyrev's error exponents are asymptotic upper bounds on the exponential behavior of the average error probability over their respective ensembles. Since it is an average property, examining
a specific code from the ensemble using these exponents is not possible. Various upper error bounding techniques and bounds for specific codes and code families have been constructed for linear codes over discrete channels [5], while only a few have been devised for specific lattices over particular continuous channels [6].

The Shulman-Feder Bound (SFB) [7] is a simple yet useful upper bound on the error probability of a specific linear code over a discrete-memoryless channel. In its exponential form it states that the average error probability of a q-ary code \mathcal{C} is upper bounded by

    P_e(\mathcal{C}) \le e^{-n E_r\left(R + \frac{\ln\alpha}{n}\right)}   (1)

    \alpha = \max_\tau \frac{N_{\mathcal{C}}(\tau)\, e^{nR}}{N_r(\tau)\,\left(e^{nR} - 1\right)}   (2)

where n, R, and E_r(R) are the dimension, rate, and random-coding exponent respectively, and N_\mathcal{C}(τ) and N_r(τ) are the number of codewords of type τ for \mathcal{C} and for an average random code, respectively (i.e. the distance spectrum). The SFB and its extensions have led to significant results in coding theory. Amongst those is the error analysis for Maximum Likelihood (ML) decoding of LDPC codes [8].

The main motivation of this paper is to find the SFB analogue for lattices. As such, it should be an expression that upper bounds the error probability of a specific lattice, depends on the lattice's distance spectrum, and resembles the lattice random-coding bound.

The main result of this paper is a simple upper bounding technique for the error probability of a specific lattice code or code family, as a function of its distance spectrum. The bound is constructed by replacing the well-known Minkowski-Hlawka theorem [9] with a non-random alternative. An interesting outcome of the main result is an error exponent for specific lattice sequences.

A subset of this work was presented at the IEEE International Symposium on Information Theory (ISIT) 2012.
^1 Closest in the sense of the appropriate distance measure for that channel.
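The role of the factor α in (2) can be made concrete with a small numeric sketch. The spectra below are hypothetical toy numbers (not taken from any real code): N_C counts codewords of each type τ in the specific code, N_r the ensemble-average counts.

```python
import math

# Hypothetical toy spectra for illustration only.
n, R = 8, 0.5
N_C = {1: 10, 2: 30}    # type -> count in the specific code C
N_r = {1: 8.0, 2: 40.0} # type -> average count in the random ensemble

enR = math.exp(n * R)
# alpha of (2): worst-case ratio of the code's spectrum to the ensemble's
alpha = max(N_C[t] * enR / (N_r[t] * (enR - 1.0)) for t in N_C)

# The bound (1) then penalizes the rate inside the exponent by ln(alpha)/n
rate_penalty = math.log(alpha) / n
```

A code whose spectrum is everywhere below the ensemble average yields α ≤ 1 and a negative penalty, i.e. a bound better than random coding.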
A secondary result of this paper is a tight distance-spectrum-based upper bound on the error probability of a specific lattice of finite dimension.

The paper is organized as follows: Sections II and III present the derivation of the Minkowski-Hlawka non-random alternatives, section IV outlines a well-known general ML decoding upper bound, section V applies the new techniques to the general bound of section IV, and section VI presents a new error exponent for specific lattice sequences over the AWGN channel.

II. DETERMINISTIC MINKOWSKI-HLAWKA-SIEGEL

Recall that a lattice Λ is a discrete n-dimensional subgroup of the Euclidean space R^n that is an Abelian group under addition. A generating matrix G of Λ is an n×n matrix with real-valued coefficients, constructed by concatenation of a properly chosen set of n linearly independent vectors of Λ. The generating matrix G defines the lattice Λ by Λ = {λ : λ = Gu, u ∈ Z^n}. A fundamental parallelepiped of Λ, associated with G, is the set of all points p = \sum_{i=1}^n u_i g_i where 0 ≤ u_i < 1 and {g_i}_{i=1}^n are the basis vectors of G. The lattice determinant, defined as det Λ ≜ |det G|, is also the volume of the fundamental parallelepiped. Denote by β and δ the density and NLD of Λ respectively; thus β = e^{nδ} = (det Λ)^{-1}.

The lattice-dual of the random linear code ensemble, in finite-alphabet codes, is a set of lattices originally defined by Siegel [10], [11], for use in proving what he called the Mean-Value Theorem (MVT). This theorem, often referred to as the Minkowski-Hlawka-Siegel (MHS) theorem, is a central constituent of upper error bounds for lattices. The theorem states that for any dimension n ≥ 2 and any bounded Riemann-integrable function g(λ) there exists a lattice Λ of density β for which

    \sum_{\lambda \in \Lambda \setminus \{0\}} g(\lambda) \le \int_{\mathbb{R}^n} g\!\left(\frac{x}{e^{\delta}}\right) dx = \beta \int_{\mathbb{R}^n} g(x)\, dx.   (3)

Siegel proved the theorem by averaging over a fundamental set^2 of all n-dimensional lattices of unit density.
The disadvantages of Siegel's theorem are similar to those of the random-coding theorem. Since the theorem is an average property of the ensemble, it can be argued that there exists at least one specific lattice, from the ensemble, that obeys it; but finding that lattice cannot be aided by the theorem. Neither can the theorem aid in the analysis of any specific lattice. Alternatives to (3), constructed for specific lattices and based on their distance spectrum, are introduced later in this section.

We begin with a few definitions, before stating our central lemma. The lattice Λ_0 always refers to a specific known n-dimensional lattice of density β, rather than Λ which refers to some unknown, yet existing, n-dimensional lattice. The lattice \tilde{Λ}_0 is the normalized version of Λ_0 (i.e. det(\tilde{Λ}_0) = 1). Define the distance series of Λ_0 as the ordered series of its unique norms {λ_j}_{j=0}^∞, such that λ_1 is its minimal norm and λ_0 ≜ 0. {\tilde{λ}_j}_{j=0}^∞ is defined for \tilde{Λ}_0 respectively. The normalized continuous distance spectrum of Λ_0 is defined as

    N(x) = \sum_{j=1}^{\infty} N_j\, \delta(x - \tilde{\lambda}_j)   (4)

where {N_j}_{j=1}^∞ is the ordinary distance spectrum of Λ_0, and δ(·) is the Dirac delta function. Let Γ denote the group^3 of all orthogonal n×n matrices with determinant +1 and let µ(γ) denote its normalized measure so that \int_\Gamma d\mu(\gamma) = 1. The notation γΛ_0 is used to describe the lattice generated by γG, where G is a generating matrix of the lattice Λ_0.

^2 Let Υ denote the multiplicative group of all non-singular n×n matrices with determinant 1 and let Φ denote the subgroup of integral matrices in Υ. Siegel's fundamental set is defined as the set of lattices whose generating matrices form a fundamental domain of Υ with regard to right multiplication by Φ (see section 19.3 of [9]).
^3 This group, consisting only of rotation matrices, is usually called the special orthogonal group.
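For a concrete instance of the distance series and spectrum, both can be computed by brute-force enumeration for the integer lattice Z², which is already of unit density so the λ_j and \tilde{λ}_j coincide. A minimal sketch:

```python
from collections import Counter
import math

# Brute-force distance series {lambda_j} and spectrum {N_j} for Z^2.
K = 6  # enumerate integer points with coordinates in [-K, K]
sq_norms = Counter(
    i * i + j * j
    for i in range(-K, K + 1)
    for j in range(-K, K + 1)
    if (i, j) != (0, 0)
)
# A squared norm s <= K^2 cannot be represented by points outside the
# box, so those counts are complete.
complete = sorted(s for s in sq_norms if s <= K * K)
lambdas = [math.sqrt(s) for s in complete]  # distance series lambda_1, lambda_2, ...
Ns = [sq_norms[s] for s in complete]        # spectrum N_1, N_2, ...
```

For Z² this yields λ_1 = 1, λ_2 = √2, λ_3 = 2, λ_4 = √5 with N = 4, 4, 4, 8, consistent with the four nearest neighbors of the square grid.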
Our central lemma essentially expresses Siegel's mean-value theorem for a degenerate ensemble consisting of a specific known lattice Λ_0 and all its possible rotations around the origin.

Lemma 1. Let Λ_0 be a specific n-dimensional lattice with NLD δ, and g(λ) be a Riemann-integrable function; then there exists an orthogonal rotation γ such that

    \sum_{\lambda \in \gamma\Lambda_0 \setminus \{0\}} g(\lambda) \le \int_{\mathbb{R}^n} \bar{N}(\|x\|)\, g\!\left(\frac{x}{e^{\delta}}\right) dx   (5)

with

    \bar{N}(x) \triangleq \begin{cases} \dfrac{N(x)}{n V_n x^{n-1}} & : x > 0 \\ 0 & : x \le 0 \end{cases}   (6)

where V_n is the volume of an n-dimensional unit sphere, and ‖·‖ denotes the Euclidean norm.

Proof: Let Θ denote the subspace of all points θ in the n-dimensional space with ‖θ‖ = 1, so that Θ is the surface of the unit sphere. Let µ(θ) denote the ordinary solid-angle measure on this surface, normalized so that \int_\Theta d\mu(\theta) = 1. We continue with the following set of equalities:

    \int_\Gamma \sum_{\lambda \in \gamma\Lambda_0 \setminus \{0\}} g(\lambda)\, d\mu(\gamma)
      = \sum_{\lambda \in \Lambda_0 \setminus \{0\}} \int_\Gamma g(\gamma\lambda)\, d\mu(\gamma)
      = \sum_{\tilde{\lambda} \in \tilde{\Lambda}_0 \setminus \{0\}} \int_\Gamma g\!\left(\frac{\gamma\tilde{\lambda}}{e^{\delta}}\right) d\mu(\gamma)
      = \sum_{\tilde{\lambda} \in \tilde{\Lambda}_0 \setminus \{0\}} \int_\Theta g\!\left(\frac{\|\tilde{\lambda}\|\,\theta}{e^{\delta}}\right) d\mu(\theta)
      = \int_{0^+}^{\infty} N(R) \int_\Theta g\!\left(\frac{R\theta}{e^{\delta}}\right) d\mu(\theta)\, dR
      = \int_{0^+}^{\infty} \frac{N(R)}{n V_n R^{n-1}} \int_\Theta g\!\left(\frac{R\theta}{e^{\delta}}\right) d\mu(\theta)\, d(V_n R^n)
      = \int_{\mathbb{R}^n \setminus \{0\}} \frac{N(\|x\|)}{n V_n \|x\|^{n-1}}\, g\!\left(\frac{x}{e^{\delta}}\right) dx
      = \int_{\mathbb{R}^n} \bar{N}(\|x\|)\, g\!\left(\frac{x}{e^{\delta}}\right) dx,   (7)

where the third equality follows from the definition of Γ and Θ and the measures µ(γ) and µ(θ), the fourth equality is due to the circular symmetry of the integrand, and the sixth equality is a transformation from generalized spherical polar coordinates to the Cartesian system (see Lemma 2 of [11]). Finally, there exists at least one rotation γ ∈ Γ for which the sum over γΛ_0 is upper bounded by the average. ∎

The corollary presented below is a restricted version of Lemma 1 constrained to the case where the function g(λ) is circularly-symmetric (i.e. g(λ) = g(‖λ‖)). To simplify the presentation, it is implicitly assumed that g(λ) is circularly-symmetric for the remainder of this paper.
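The averaging step behind Lemma 1 can be sanity-checked numerically for Z² (where δ = 0) with a deliberately non-circularly-symmetric g: averaging the lattice sum over rotations kills the angular part, leaving exactly the spectrum-weighted radial sum. This is a sketch, not part of the proof; the Gaussian decay makes the truncation to a finite box immaterial since both sides use the same point set.

```python
import math

K = 6
pts = [(i, j) for i in range(-K, K + 1) for j in range(-K, K + 1)
       if (i, j) != (0, 0)]

def g(x, y):
    # Gaussian radial decay times a non-circularly-symmetric factor cos(phi)
    r2 = x * x + y * y
    return math.exp(-r2) * (1.0 + x / math.sqrt(r2))

M = 360  # evenly spaced rotations: the cosine term averages to exactly zero
acc = 0.0
for k in range(M):
    t = 2.0 * math.pi * k / M
    c, s = math.cos(t), math.sin(t)
    # sum of g over the rotated lattice gamma * Z^2 (truncated)
    acc += sum(g(c * x - s * y, s * x + c * y) for x, y in pts)
rotation_average = acc / M

# The radial side: sum_j N_j * exp(-lambda_j^2), written directly over points
radial_sum = sum(math.exp(-(i * i + j * j)) for i, j in pts)
```

The two quantities agree to floating-point accuracy, and in particular at least one rotation attains a lattice sum no larger than the average, which is the existence claim of the lemma.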
It should be noted that all results presented hereafter apply also to a non-symmetric g(λ) with an appropriately selected rotation γ of Λ_0.

Corollary 1. Let Λ_0 be a specific n-dimensional lattice with NLD δ, and g(λ) be a circularly-symmetric Riemann-integrable function; then

    \sum_{\lambda \in \Lambda_0 \setminus \{0\}} g(\lambda) = \int_{\mathbb{R}^n} \bar{N}(\|x\|)\, g\!\left(\frac{x}{e^{\delta}}\right) dx   (8)

with \bar{N}(x) as defined in Lemma 1.

Proof: When g(λ) is circularly-symmetric,

    \int_\Gamma \sum_{\lambda \in \gamma\Lambda_0 \setminus \{0\}} g(\lambda)\, d\mu(\gamma)
      = \sum_{\lambda \in \Lambda_0 \setminus \{0\}} \int_\Gamma g(\gamma\lambda)\, d\mu(\gamma)
      = \sum_{\lambda \in \Lambda_0 \setminus \{0\}} g(\lambda) \int_\Gamma d\mu(\gamma)
      = \sum_{\lambda \in \Lambda_0 \setminus \{0\}} g(\lambda).   (9)

∎

The right-hand side of (8) can be trivially upper bounded by replacing \bar{N}(x) with a suitably chosen function α(x), so that

    \int_{\mathbb{R}^n} \bar{N}(\|x\|)\, g\!\left(\frac{x}{e^{\delta}}\right) dx \le \int_{\mathbb{R}^n} \alpha(\|x\|)\, g\!\left(\frac{x}{e^{\delta}}\right) dx.   (10)

Provided the substitution, it is possible to define the following upper bounds:

Theorem 1 (Deterministic Minkowski-Hlawka-Siegel (DMHS)). Let Λ_0 be a specific n-dimensional lattice of density β, g(λ) be a bounded Riemann-integrable circularly-symmetric function, and α(x) be defined such that (10) is satisfied; then

    \sum_{\lambda \in \Lambda_0 \setminus \{0\}} g(\lambda) \le \beta \left[\max_{x \le e^{\delta}\lambda_{\max}} \alpha(x)\right] \int_{\mathbb{R}^n} g(x)\, dx   (11)

where λ_max is the maximal ‖x‖ for which g(x) ≠ 0.

Proof: Substitute a specific α(x) for \bar{N}(x) in (8), and upper bound by taking the maximum value of α(x) over the integrated region outside the integral. ∎

Theorem 2 (extended DMHS (eDMHS)). Let Λ_0 be a specific n-dimensional lattice with density β, g(λ) be a bounded Riemann-integrable circularly-symmetric function, α(x) be defined such that (10) is satisfied, and M be a positive integer; then

    \sum_{\lambda \in \Lambda_0 \setminus \{0\}} g(\lambda) \le \beta \min_{\{R_j\}_{j=1}^M} \sum_{j=1}^{M} \left(\max_{x \in \bar{\mathcal{R}}_j} \alpha(x)\right) \int_{\mathcal{R}_j} g(x)\, dx   (12)

with

    \bar{\mathcal{R}}_j = \{x : x \ge 0,\; e^{\delta}R_{j-1} < x \le e^{\delta}R_j\}   (13)

    \mathcal{R}_j = \{x : x \in \mathbb{R}^n,\; R_{j-1} < \|x\| \le R_j\}   (14)

where {R_j}_{j=1}^M is an ordered set of real numbers with R_0 ≜ 0 and R_M = λ_max, where λ_max is the maximal ‖x‖ for which g(x) ≠ 0.

Proof: Substitute a specific α(x) for \bar{N}(x) in (8) and break up the integral over a non-overlapping set of spherical shells whose union equals R^n.
Upper bound each shell integral by taking the maximum value of α(x) over it, outside the integral. Finally, the set of shells, or rather the shell-defining radii, is optimized such that the bound is tightest. ∎

The eDMHS may be viewed as a generalization of DMHS, since for M = 1 it defaults to it. In addition, when M → ∞ the eDMHS tends to the original integral. Clearly, both bounds are sensitive to the choice of α(x), though one should note that for the same choice of α(x) the second bound is always tighter. The bounds shown above are general up to the choice of α(x), and clarify the motivation for the substitution in (10). Careful construction of α(x), along with the selected bound, can provide a tradeoff between tightness and complexity. The next section presents a few simple methods for construction of the function α(x), and their consequences.

III. CONSTRUCTION OF α(x)

Maximization of the right-hand side of (8), by taking the maximum value of \bar{N}(x) outside the integral, is not well-defined, since N(x) is an impulse train. The motivation of this section is to find a replacement for \bar{N}(x) that enables using this maximization technique whilst retaining a meaningful bound.

Let us assume that g(λ) is monotonically non-increasing^4 in ‖λ‖, and define N′(x) to be a smoothed version of the normalized continuous distance spectrum, selected such that it satisfies

    \int_{0^+}^{r} N(R)\, dR \le \int_{0^+}^{r} N'(R)\, dR \qquad \forall r \in (0, e^{\delta}\lambda_{\max}].   (15)

Given the above, α(x) can be defined by expanding (10) as follows:

    \int_{\mathbb{R}^n} \bar{N}(\|x\|)\, g\!\left(\frac{x}{e^{\delta}}\right) dx
      = \int_{0^+}^{e^{\delta}\lambda_{\max}} N(R)\, g\!\left(\frac{R}{e^{\delta}}\right) dR
      \le \int_{0^+}^{e^{\delta}\lambda_{\max}} N'(R)\, g\!\left(\frac{R}{e^{\delta}}\right) dR
      = \int_{\mathbb{R}^n} \alpha(\|x\|)\, g\!\left(\frac{x}{e^{\delta}}\right) dx   (16)

with

    \alpha(x) \triangleq \begin{cases} \dfrac{N'(x)}{n V_n x^{n-1}} & : x > 0 \\ 0 & : x \le 0 \end{cases}   (17)

where the equalities follow the methods used in (7) together with g(λ)'s circular symmetry, and the inequality follows from g(λ) being monotonically non-increasing together with N′(x) obeying (15). Define {α_i(x)}_{i∈I} to be the set of all functions α(x) such that (15) is satisfied.
Specifically, let us define the following two functions:

• The first function α^rng(x) is defined to be piecewise constant over shells defined by consecutive radii from the normalized distance series {\tilde{λ}_j}_{j=0}^∞ (i.e. a shell is defined as \bar{S}_j = \{x : \tilde{λ}_{j-1} < x \le \tilde{λ}_j\}). The constant value for shell j is selected such that the spectral mass N_j is spread evenly over the shell. Formally, this can be expressed as

    \left[\alpha^{\mathrm{rng}}(x)\right]_{x \in \bar{S}_j} = \frac{N_j}{V_n\left(\tilde{\lambda}_j^n - \tilde{\lambda}_{j-1}^n\right)}.   (18)

• The second function α^opt(x) is selected to be the α_j(x) such that the bound (11) is tightest; thus by definition

    j = \arg\min_{i \in I}\; \max_{x \le e^{\delta}\lambda_{\max}} \alpha_i(x).   (19)

As it turns out, α^opt(x) is also piecewise constant over shells defined by consecutive radii from the normalized distance series, and can be obtained as the solution to a linear program presented in appendix A.

One subtlety, that can go by unnoticed about α^opt(x), is its dependence on λ_max. Careful examination reveals that α^opt(x) is constructed from those spectral elements whose corresponding distances are less than or equal to λ_max. This is of relevance when optimization of λ_max is dependent on α^opt(x), as is the case in some of the bounds discussed hereafter. This technical subtlety can be overcome by construction of a suboptimal version of α^opt(x) consisting of more spectral elements than necessary. In many cases both the suboptimal and optimal versions coincide. In the remainder of this paper, this technicality is ignored and left for the reader's consideration.

Figure 1 illustrates N(x), α^rng(x), and α^opt(x) for the rectangular lattice Z².

Fig. 1. N(x), α^rng(x), and α^opt(x) for the rectangular lattice Z².

^4 This restriction holds for many continuous noise channels, such as AWGN. For other channels, it is possible to define similar heuristics.
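The construction (18) is easy to reproduce for Z² (n = 2, V_2 = π, δ = 0), using the first few shells of its distance series; this sketch also checks that the smoothed spectrum conserves the cumulative mass of (15) at shell boundaries, where by construction it holds with equality.

```python
import math

# First shells of the Z^2 distance series and spectrum (lambda_0 = 0).
lams = [0.0, 1.0, math.sqrt(2.0), 2.0, math.sqrt(5.0)]
Ns = [4, 4, 4, 8]
V2 = math.pi

# alpha_rng of (18): spread N_j evenly over shell (lambda_{j-1}, lambda_j]
alpha_rng = [Ns[j] / (V2 * (lams[j + 1] ** 2 - lams[j] ** 2))
             for j in range(len(Ns))]

# Cumulative mass check at shell boundaries: integrating the smoothed
# density over (0, lambda_j] recovers sum_{i<=j} N_i exactly.
cum = [sum(Ns[:j + 1]) for j in range(len(Ns))]
smoothed = [sum(a * V2 * (lams[i + 1] ** 2 - lams[i] ** 2)
                for i, a in enumerate(alpha_rng[:j + 1]))
            for j in range(len(Ns))]
```

The first two shells both evaluate to 4/π ≈ 1.27 and the third to 2/π ≈ 0.64, matching the plateau heights visible in Fig. 1.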
IV. A GENERAL ML DECODING UPPER BOUND

Many tight ML upper bounds originate from a general bounding technique developed by Gallager [12]. Gallager's technique has been utilized extensively in the literature [5], [13], [14]. Similar forms of the general bound, displayed hereafter, have been previously presented in the literature [6], [15]. Since it plays a central role in our analysis, we present it as a theorem.

Before proceeding with the theorem, let us define an Additive Circularly-Symmetric Noise (ACSN) channel as an additive continuous noise channel whose noise is isotropically distributed and is a non-increasing function of its norm.

Theorem 3 (General ML Upper Bound). Let Λ be an n-dimensional lattice, and f_{‖z‖}(ρ) the pdf of an ACSN channel's noise vector's norm; then the error probability of an ML decoder is upper bounded by

    P_e(\Lambda) \le \min_r \left( \sum_{\lambda \in \Lambda \setminus \{0\}} \int_0^r f_{\|z\|}(\rho)\, P_2(\lambda,\rho)\, d\rho + \int_r^{\infty} f_{\|z\|}(\rho)\, d\rho \right)   (20)

where the pairwise error probability conditioned on ‖z‖ = ρ is defined as

    P_2(\lambda,\rho) = \Pr\left(\lambda \in \mathrm{Ball}(z, \|z\|) \,\middle|\, \|z\| = \rho\right)   (21)

where Ball(z, ‖z‖) is an n-dimensional ball of radius ‖z‖ centered around z.

Proof: See appendix B. ∎

We call the first term of (20) the Union Bound Term (UBT) and the second term the Sphere Bound Term (SBT), for obvious reasons. In general, (21) is difficult to quantify. One method to overcome this, which is of limited use for the analysis of specific lattices, is averaging over the MHS ensemble. New methods for upper bounding (20) for specific lattices are presented in the next section.

V. APPLICATIONS OF THE GENERAL ML BOUND

In this section, the UBT of the ML decoding upper bound (20) is further bounded using different bounding methods. The resulting applications vary in purpose and simplicity, and exhibit different performance. We present the applications and discuss their differences.

A. MHS

Application of the MHS theorem from (3) leads to the random-coding error bound on lattices.
Since it is based on the MHS ensemble average, the random-coding bound proves the existence of a lattice bounded by it, but does not aid in finding such a lattice; neither does it provide tools for examining specific lattices.

Theorem 4 (MHS Bound, Theorem 5 of [15]). Let f_{‖z‖}(ρ) be the pdf of an ACSN channel's noise vector's norm; then there exists an n-dimensional lattice Λ of density β for which the error probability of an ML decoder is upper bounded by

    P_e(\Lambda) \le \beta V_n \int_0^{r^*} f_{\|z\|}(\rho)\, \rho^n\, d\rho + \int_{r^*}^{\infty} f_{\|z\|}(\rho)\, d\rho   (22)

with

    r^* = (\beta V_n)^{-1/n}.   (23)

Proof: Set g(λ) as

    g(\lambda) = \int_0^r f_{\|z\|}(\rho)\, P_2(\lambda,\rho)\, d\rho,   (24)

noting that it is a bounded function of λ, and continue to bound the UBT from (20) using (3). The remainder of the proof is presented in appendix C. ∎

B. DMHS

Application of the DMHS theorem (11) using α(x) = α^opt(x) provides a tool for examining specific lattices. The resulting bound is essentially identical to the MHS bound, excluding a scalar multiplier of the UBT. It is noteworthy that this is the best α(x)-based bound of this form, since α^opt(x) is optimized with regard to DMHS.

Theorem 5 (DMHS Bound). Let a specific n-dimensional lattice Λ_0 of density β be transmitted over an ACSN channel with f_{‖z‖}(ρ) the pdf of its noise vector's norm; then the error probability of an ML decoder is upper bounded by

    P_e(\Lambda_0) \le \min_r \left( \alpha \beta V_n \int_0^r f_{\|z\|}(\rho)\, \rho^n\, d\rho + \int_r^{\infty} f_{\|z\|}(\rho)\, d\rho \right)   (25)

with

    \alpha = \max_{x \le e^{\delta} \cdot 2r} \alpha^{\mathrm{opt}}(x)   (26)

where α^opt(x) is as defined by (19).

Proof: Set g(λ) as in (24), noting that it is bounded by λ_max = 2r. The remainder is identical to the proof of theorem 4, replacing β with αβ. ∎

Optimization of r can be performed in the following manner. Since α^opt(x) is a monotonically non-increasing function of x, optimization of r is possible using an iterative numerical algorithm. In the first iteration, set r = (βV_n)^{-1/n} and calculate α according to (26). In each additional iteration, set r = (αβV_n)^{-1/n} and recalculate α.
The algorithm is terminated on the first iteration in which α is unchanged.

C. eDMHS

Rather than maximizing the UBT using a single scalar factor (as was done in the DMHS), the eDMHS splits the UBT integral into several regions with boundaries defined by {λ_j}_{j=0}^∞. Maximizing each resulting region by its own scalar results in a much tighter, yet more complex, bound. This is typically preferred for error bounding in finite-dimension lattices. For asymptotic analysis, the eDMHS would typically require an increasing (with the dimension) number of spectral elements, while the DMHS would still require only one, making it the favorable choice.

We choose α(x) = α^opt(x), rather than optimizing α(x) for this case. The reason is that although this bound is tighter for the finite-dimension case, it is considerably more complex than a competing bound (for the finite-dimension case) presented in the next subsection. The main motivation for showing this bound is as a generalization of DMHS.

Theorem 6 (eDMHS Bound). Let a specific n-dimensional lattice Λ_0 of density β be transmitted over an ACSN channel with f_{‖z‖}(ρ) the pdf of its noise vector's norm; then the error probability of an ML decoder is upper bounded by

    P_e(\Lambda_0) \le \min_r \left( \beta \sum_{j=1}^{M} \alpha_j \int_{\lambda_j/2}^{r} f_{\|z\|}(\rho)\, h_j(\rho)\, d\rho + \int_r^{\infty} f_{\|z\|}(\rho)\, d\rho \right)   (27)

with

    \alpha_j = \max_{x \in \bar{S}_j} \alpha^{\mathrm{opt}}(x) = \alpha^{\mathrm{opt}}(e^{\delta}\lambda_j)   (28)

    h_j(\rho) = \int_{S_j} \sigma\{x \in \mathrm{Ball}(z, \|z\|) \mid \|z\| = \rho\}\, dx   (29)

    \bar{S}_j = \{x : x \ge 0,\; e^{\delta}\lambda_{j-1} < x \le e^{\delta}\lambda_j\}   (30)

    S_j = \{x : x \in \mathbb{R}^n,\; \lambda_{j-1} < \|x\| \le \lambda_j\}   (31)

where α^opt(x) is as defined in (19), σ{x ∈ Ball(z, ‖z‖)} is the characteristic function of Ball(z, ‖z‖), {λ_j}_{j=0}^∞ is the previously defined distance series of Λ_0, and M is the maximal index j such that λ_j ≤ 2r.

Proof: We set g(λ) as in (24), noting that it is bounded by λ_max = 2r. We continue by remembering that α^opt(x) is piecewise constant on the shells \bar{S}_j, and therefore (12) collapses to
    \sum_{\lambda \in \Lambda_0 \setminus \{0\}} g(\lambda) \le \beta \sum_{j=1}^{M} \alpha_j \int_{S_j} g(x)\, dx.   (32)

For the remainder, we continue in a similar manner to the proof of theorem 4 by upper bounding the UBT from (20) using (32). See appendix D. ∎

A geometrical interpretation of h_j(ρ) is presented in figure 2.

Fig. 2. A two-dimensional example of h_j(ρ). h_j(ρ) is the volume of the overlapping region of the two bodies (the spherical shell λ_{j-1} < ‖x‖ ≤ λ_j and the ball of radius ρ).

D. Sphere Upper Bound (SUB)

Another bound involving several elements of the spectrum can be constructed by directly bounding the conditional pairwise error probability P_2(x,ρ). This bound is considerably more complex than the DMHS, but is potentially much tighter for the finite-dimension case. When the channel is restricted to AWGN, the resulting bound is similar in performance to the SUB of [6], hence the name. The bounding technique for P_2(x,ρ), presented in the following lemma, is based on [16].

Lemma 2 (Appendix D of [16]). Let x be a vector point in n-space with norm ‖x‖ ≤ 2ρ, z an isotropically distributed n-dimensional random vector, and ρ a real number; then

    \Pr\left(x \in \mathrm{Ball}(z, \|z\|) \,\middle|\, \|z\| = \rho\right) \le \left(1 - \left(\frac{\|x\|}{2\rho}\right)^2\right)^{\frac{n-1}{2}}.   (33)

Proof: See appendix E. ∎

The above lemma leads directly to the following theorem.

Theorem 7 (SUB). Let a specific n-dimensional lattice Λ_0 of density β be transmitted over an ACSN channel with f_{‖z‖}(ρ) the pdf of its noise vector's norm; then the error probability of an ML decoder is upper bounded by

    P_e(\Lambda_0) \le \min_r \left( \sum_{j=1}^{M} N_j \int_{\lambda_j/2}^{r} f_{\|z\|}(\rho) \left(1 - \left(\frac{\lambda_j}{2\rho}\right)^2\right)^{\frac{n-1}{2}} d\rho + \int_r^{\infty} f_{\|z\|}(\rho)\, d\rho \right)   (34)

where {λ_j}_{j=1}^∞ and {N_j}_{j=1}^∞ are the previously defined distance series and spectrum of Λ_0 respectively, and M is the maximal index j such that λ_j ≤ 2r.

Proof: Bound the UBT of (20) directly, using (33). See appendix F. ∎

Optimization of r can be performed in the following manner. We begin by analyzing the function f(\rho) = \left(1 - \left(\frac{\lambda_j}{2\rho}\right)^2\right)^{\frac{n-1}{2}}. It is simple to verify that this function is positive and strictly increasing in the domain {ρ : ρ ≥ λ_j/2}.
Since the UBT of (34) is a positive sum of such functions, it is positive and monotonically nondecreasing in r. Since additionally the SBT of (34) is always positive, it suffices to search for the optimal r between λ_1/2 and an r_max, where r_max is defined such that the UBT is greater than or equal to 1.

We continue by calculating the M_max that corresponds to r_max. By definition, each selection of M corresponds to a domain {r : λ_M ≤ 2r < λ_{M+1}}. Instead of searching for the optimal r over the whole domain {r : λ_1/2 ≤ r < r_max}, we search over all sub-domains corresponding to 1 ≤ M ≤ M_max. When M is constant, independent of r, the optimal r is found by equating the differential of (34) to 0 in the domain {r : λ_M ≤ 2r < λ_{M+1}}. Equating the differential to 0 results in the following condition on r:

    \sum_{j=1}^{M} N_j \left(1 - \left(\frac{\lambda_j}{2r}\right)^2\right)^{\frac{n-1}{2}} - 1 = 0.   (35)

The function on the left of the above condition is monotonically nondecreasing (as shown previously), so an optimal r exists in {r : λ_M ≤ 2r < λ_{M+1}} iff its values on the domain edges are of opposite signs. If no optimum exists in the domain, it could exist on one of the domain edges.

The optimization algorithm proceeds as follows: set M = 1 and examine the differential at the domain edges λ_M/2 and λ_{M+1}/2. If the edge values are of opposite signs, find the exact r that zeros the differential in the domain and store it as r_zc^M; otherwise set r_zc^M = ∅. Advance M by 1 and repeat the process. The process is terminated at M = M_max. When done, evaluate (34) at r = r_zc^1, ..., r_zc^{M_max}, λ_1/2, ..., λ_{M_max}/2 and select the r that minimizes the bound.

We conclude this section with an example, presented in figure 3, which illustrates the effectiveness of the new bounds for finite dimension. The error probability of the Leech^5 lattice Λ_24 is upper bounded by DMHS (25) and SUB (34). The ordinary Union Bound (UB), the MHS bound for dimension 24 (22), and the Sphere Lower Bound (SLB) of [17] are added for reference.
The spectral data is taken from [18]. The bounds are calculated for an AWGN channel with noise variance σ².

Fig. 3. A comparison of DMHS and SUB for the Leech lattice. The UB, MHS and SLB are added for reference. The graph shows the error probability as a function of the Volume-to-Noise Ratio (VNR), VNR = e^{-2δ}/σ², for rates δ* < δ < δ_cr.

VI. ERROR EXPONENTS FOR THE AWGN CHANNEL

The previous section presented three new spectrum-based upper bounds for specific lattices. As stated there, only the DMHS bound seems suitable for asymptotic analysis. This section uses the DMHS bound to construct an error exponent for specific lattices over the AWGN channel. Although this case is of prominent importance, it is important to keep in mind that it is only one extension of the DMHS bound. In general, DMHS bound extensions are applicable wherever MHS bound extensions are.

^5 The Leech lattice is the densest lattice packing in 24 dimensions.
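Before turning to the asymptotics, it is instructive to see how a bound of the form (34) is evaluated numerically. The sketch below does this in dimension n = 2 for the Z² spectrum over AWGN, where ‖z‖ is Rayleigh distributed with pdf f(ρ) = (ρ/σ²) e^{-ρ²/(2σ²)}; this is a toy stand-in, not the paper's 24-dimensional Leech-lattice computation, and the minimization over r is done by a crude grid search rather than the zero-crossing algorithm above.

```python
import math

# First shells of the Z^2 distance series and spectrum (toy example).
lams = [1.0, math.sqrt(2.0), 2.0, math.sqrt(5.0)]
Ns = [4, 4, 4, 8]
n = 2

def f_norm(rho, sigma):
    # pdf of the noise norm for 2-D AWGN (Rayleigh)
    return (rho / sigma ** 2) * math.exp(-rho ** 2 / (2 * sigma ** 2))

def sub_bound(sigma, r, steps=400):
    ubt = 0.0
    for lam, N in zip(lams, Ns):
        if lam > 2 * r:
            continue  # M is the maximal j with lambda_j <= 2r
        a, h = lam / 2, (r - lam / 2) / steps
        s = 0.0
        for k in range(steps + 1):  # trapezoidal rule on [lam/2, r]
            rho = a + k * h
            w = 0.5 if k in (0, steps) else 1.0
            q = max(0.0, 1.0 - (lam / (2.0 * rho)) ** 2)
            s += w * f_norm(rho, sigma) * q ** ((n - 1) / 2)
        ubt += N * s * h
    sbt = math.exp(-r ** 2 / (2 * sigma ** 2))  # closed-form Rayleigh tail
    return ubt + sbt

def best_bound(sigma):
    # crude grid search for r in [lambda_1/2, 1.5]
    return min(sub_bound(sigma, 0.5 + 0.01 * k) for k in range(101))
```

As expected, the optimized bound decreases sharply as the noise variance shrinks relative to the minimum distance, which is the behavior the error exponent of this section quantifies for growing dimension.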
