
arXiv:1001.1625v1 [cs.IT] 11 Jan 2010

Augmented Lattice Reduction for MIMO Decoding

L. Luzzi, G. Rekaya-Ben Othman, J.-C. Belfiore

Jean-Claude Belfiore, Ghaya Rekaya-Ben Othman and Laura Luzzi are with Télécom ParisTech, 46 Rue Barrault, 75013 Paris, France. E-mail: {belfiore,rekaya,luzzi}@telecom-paristech.fr. Tel: +33 (0)145817705, +33 (0)145817633, +33 (0)145817636. Fax: +33 (0)145804036.

© This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.

Abstract — Lattice reduction algorithms, such as the LLL algorithm, have been proposed as preprocessing tools in order to enhance the performance of suboptimal receivers in MIMO communications. In this paper we introduce a new kind of lattice reduction-aided decoding technique, called augmented lattice reduction, which recovers the transmitted vector directly from the change of basis matrix, and therefore does not entail the computation of the pseudo-inverse of the channel matrix or its QR decomposition. We prove that augmented lattice reduction attains the maximum receive diversity order of the channel; simulation results show that it significantly outperforms LLL-SIC detection without entailing any additional complexity. A theoretical bound on the complexity is also derived.

Index Terms: lattice reduction-aided decoding, LLL algorithm, right preprocessing.

I. INTRODUCTION

Multiple-input multiple-output (MIMO) systems can provide high data rates and reliability over fading channels. In order to achieve optimal performance, maximum likelihood (ML) decoders such as the Sphere Decoder may be employed; however, their complexity grows prohibitively with the number of antennas and the constellation size, posing a challenge for practical implementation. On the other hand, suboptimal receivers such as zero forcing (ZF) or successive interference cancellation (SIC) do not preserve the diversity order of the system [11]. Right preprocessing using lattice reduction has been proposed in order to enhance their performance [19, 4, 18]. In particular, the classical LLL algorithm for lattice reduction, whose average complexity is polynomial in the number of antennas¹, has been proven to achieve the optimal receive diversity order in the spatial multiplexing case [17]. Very recently, it has also been shown that, combined with regularization techniques such as MMSE-GDFE left preprocessing, lattice reduction-aided decoding is optimal in terms of the diversity-multiplexing tradeoff [8]. However, the gap between the error probability of ML detection and that of LLL-ZF (respectively, LLL-SIC) detection grows considerably with the number of antennas [13].

In this paper we present a new kind of LLL-aided decoding, called augmented lattice reduction, which does not require ZF or SIC receivers and therefore does not entail the computation of the pseudo-inverse of the channel matrix or its QR decomposition. In the coherent case, MIMO decoding amounts to solving an instance of the closest vector problem (CVP) in a finite subset of the lattice generated by the channel matrix². Following an idea of Kannan [10], our strategy is to reduce the CVP to the shortest vector problem (SVP) by embedding the n-dimensional lattice generated by the channel matrix into an (n+1)-dimensional lattice.
We show that, for a suitable choice of the embedding, the transmitted message can be recovered directly from the coordinates of the shortest vector of the augmented lattice. In general, the LLL algorithm is not guaranteed to solve the SVP; however, it certainly finds the shortest vector of the lattice in the particular case where the minimum distance is exponentially smaller than the other successive minima. Equivalently, we can say that "the LLL algorithm is an SVP oracle when the lattice gap is exponential in the lattice dimension". An appropriate choice of the embedding ensures that this condition is satisfied. Thanks to this property, we can prove that our method also achieves the receive diversity of the channel. Numerical simulations show that augmented lattice reduction significantly outperforms LLL-SIC detection without entailing any additional complexity. A theoretical (albeit pessimistic) bound on the complexity is also derived.

¹ Note that the worst-case number of iterations of the LLL algorithm applied in the MIMO context is unbounded, as has been proved in [9]. However, the tail probability of the number of iterations decays exponentially, so that in many cases high-complexity events can be regarded as negligible with respect to the target error rate (see [8], Theorem 3).

² Actually, LLL-ZF and LLL-SIC suboptimal decoding correspond to two classical techniques for finding approximate solutions of the CVP, due to Babai: the rounding algorithm and the nearest plane algorithm, respectively [1].

This paper is organized as follows: in Section II we introduce the system model and basic notions concerning lattice reduction, and summarize the existing lattice reduction-aided decoding schemes. In Section III we describe augmented lattice reduction decoding, and in Section IV we analyze its performance and complexity, both theoretically and through numerical simulations.

II. PRELIMINARIES

A. System model and notation

We consider a MIMO system with M transmit and N receive antennas, M ≤ N, using spatial multiplexing. The complex received signal is given by

$$y_c = H_c x_c + w_c, \qquad (1)$$

where $x_c \in \mathbb{C}^M$, $y_c, w_c \in \mathbb{C}^N$, and $H_c \in M_{N \times M}(\mathbb{C})$. The transmitted vector $x_c$ belongs to a finite constellation $S \subset \mathbb{Z}[i]^M$; the entries of the channel matrix $H_c$ are supposed to be i.i.d. complex Gaussian random variables with zero mean and variance per real dimension equal to $\frac{1}{2}$, and $w_c$ is the Gaussian noise with i.i.d. entries of zero mean and variance $N_0$. We consider the coherent case, where $H_c$ is known at the receiver.

Separating the real and imaginary parts, the model can be rewritten as

$$y = Hx + w, \qquad (2)$$

in terms of the real-valued vectors

$$y = \begin{pmatrix} \Re(y_c) \\ \Im(y_c) \end{pmatrix} \in \mathbb{R}^n, \qquad x = \begin{pmatrix} \Re(x_c) \\ \Im(x_c) \end{pmatrix} \in \mathbb{Z}^m,$$

and of the equivalent real channel matrix

$$H = \begin{pmatrix} \Re(H_c) & -\Im(H_c) \\ \Im(H_c) & \Re(H_c) \end{pmatrix} \in M_{n \times m}(\mathbb{R}).$$

Here n = 2N and m = 2M. The maximum likelihood decoded vector is given by

$$\hat{x}_{ML} = \arg\min_{\hat{x}_c \in S} \|H_c \hat{x}_c - y_c\| = \arg\min_{\hat{x}_c \in S} \|H \hat{x} - y\|,$$

where $\|\cdot\|$ denotes the Euclidean norm.
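To fix ideas, here is a minimal Python/numpy sketch of the complex-to-real conversion (2) and of brute-force ML detection. It is an illustration rather than code from the paper: the function names are ours, the argument `symbols` stands for an assumed per-dimension symbol set (e.g. {-1, 1} for QPSK after suitable scaling), and the exhaustive search is exponential in m, hence only usable for very small systems.

    import numpy as np
    from itertools import product

    def real_model(Hc, yc):
        # Real-valued equivalent (2) of the complex model (1):
        # H = [[Re Hc, -Im Hc], [Im Hc, Re Hc]], y = [Re yc; Im yc].
        H = np.block([[Hc.real, -Hc.imag],
                      [Hc.imag,  Hc.real]])
        y = np.concatenate([yc.real, yc.imag])
        return H, y

    def ml_decode(H, y, symbols):
        # Brute-force ML detection: minimize ||H x - y|| over every x whose
        # entries lie in `symbols` (exponential cost; reference only).
        m = H.shape[1]
        candidates = (np.array(x) for x in product(symbols, repeat=m))
        return min(candidates, key=lambda x: np.linalg.norm(H @ x - y))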
B. Lattice reduction

An m-dimensional real lattice in $\mathbb{R}^n$ is the set of points

$$\mathcal{L}(H) = \{Hx \mid x \in \mathbb{Z}^m\},$$

where $H \in M_{n \times m}(\mathbb{R})$. We denote by $d_H$ the minimum distance of the lattice, that is, the smallest norm of a nonzero vector in $\mathcal{L}(H)$. More generally, for all $1 \le i \le m$ one can define the i-th successive minimum of the lattice as follows:

$$\lambda_i(H) = \inf\{ r > 0 \mid \exists\, v_1, \ldots, v_i \text{ linearly independent in } \mathcal{L}(H) \text{ s.t. } \|v_j\| \le r\ \forall\, 1 \le j \le i \}.$$

We recall that two matrices $H, H'$ generate the same lattice if and only if $H' = HU$ with $U$ unimodular.

Lattice reduction algorithms allow one to find a new basis $H'$ for a given lattice $\mathcal{L}(H)$ such that the basis vectors are shorter and nearly orthogonal. Orthogonality can be measured by the absolute values of the coefficients $\mu_{i,j}$ in the Gram-Schmidt orthogonalization (GSO) of the basis; see Algorithm 1.

Algorithm 1: GSO (Gram-Schmidt orthogonalization)
  h*_1 ← h_1
  for i = 2, ..., m do
    for j = 1, ..., i−1 do
      μ_{i,j} ← ⟨h_i, h*_j⟩ / ‖h*_j‖²
    end
    h*_i ← h_i − Σ_{j=1}^{i−1} μ_{i,j} h*_j
  end

We recall the following useful property of the GSO: the length of the smallest of the Gram-Schmidt vectors $h_i^*$ is always less than or equal to the minimum distance $d_H$ of the lattice [15]. In other words,

$$d_H \ge a(H) \triangleq \min_{1 \le i \le m} \|h_i^*\|. \qquad (3)$$

A basis $H$ is said to be LLL-reduced [14] if its Gram-Schmidt coefficients $\mu_{i,j}$ and Gram-Schmidt vectors $h_i^*$ satisfy the following properties:

1) Size reduction: $|\mu_{k,l}| < \frac{1}{2}$, for $1 \le l < k \le m$;
2) Lovász condition: $\|h_k^* + \mu_{k,k-1} h_{k-1}^*\|^2 \ge \delta\, \|h_{k-1}^*\|^2$, for $1 < k \le m$,

where $\delta \in \left(\frac{1}{4}, 1\right)$ (a customary choice is $\delta = \frac{3}{4}$).

The LLL algorithm is summarized in Algorithm 2; the size-reduction step RED(k, l) is given in Algorithm 3. Given a full-rank matrix $H \in M_{n \times m}(\mathbb{R})$, it computes an LLL-reduced version $H_{red} = HU$, with $U \in M_{m \times m}(\mathbb{Z})$ unimodular, and outputs the columns $\{h_i\}$ and $\{u_i\}$ of $H_{red}$ and $U$ respectively.

Algorithm 2: The LLL algorithm
  U ← I_m
  compute the GSO of H
  k ← 2
  while k ≤ m do
    RED(k, k−1)
    if ‖h*_k + μ_{k,k−1} h*_{k−1}‖² < δ ‖h*_{k−1}‖² then
      swap h_k and h_{k−1}
      swap u_k and u_{k−1}
      update the GSO
      k ← max(k−1, 2)
    else
      for l = k−2, ..., 1 do
        RED(k, l)
      end
      k ← k+1
    end
  end

Algorithm 3: Size reduction RED(k, l)
  if |μ_{k,l}| > 1/2 then
    h_k ← h_k − ⌊μ_{k,l}⌉ h_l
    u_k ← u_k − ⌊μ_{k,l}⌉ u_l
    for j = 1, ..., l−1 do
      μ_{k,j} ← μ_{k,j} − ⌊μ_{k,l}⌉ μ_{l,j}
    end
    μ_{k,l} ← μ_{k,l} − ⌊μ_{k,l}⌉
  end

We list here some properties of LLL-reduced bases that we will need in the sequel. First of all, the LLL algorithm finds at least one basis vector whose length is not too far from the minimum distance $d_H$ of the lattice. The following inequality holds for any m-dimensional LLL-reduced basis $H$ [3]:

$$\|h_1\| \le \alpha^{\frac{m-1}{2}} d_H, \qquad (4)$$

where $\alpha = \frac{1}{\delta - 1/4}$ ($\alpha = 2$ if $\delta = \frac{3}{4}$).

Moreover, the first basis vector cannot be too large compared to the Gram-Schmidt vectors $h_i^*$:

$$\|h_1\| \le \alpha^{\frac{i-1}{2}} \|h_i^*\|, \qquad \forall\, 1 \le i \le m.$$

In particular, if $j = \arg\min_{1 \le i \le m} \|h_i^*\|$, then

$$d_H \le \|h_1\| \le \alpha^{\frac{j-1}{2}} \|h_j^*\| = \alpha^{\frac{j-1}{2}} a(H) \le \alpha^{\frac{m-1}{2}} a(H). \qquad (5)$$

C. Lattice reduction-aided decoding

In this section we briefly review existing detection schemes which use the LLL algorithm to preprocess the channel matrix, in order to improve the performance of suboptimal decoders such as ZF or SIC [19, 18, 4]. Let $H_{red} = HU$ be the output of the LLL algorithm on $H$. We can rewrite the received vector as

$$y = H_{red} U^{-1} x + w.$$

• The LLL-ZF decoder outputs

$$\hat{x}_{LLL-ZF} = Q_S\!\left( U \left\lfloor H_{red}^{\dagger} y \right\rceil \right),$$

where $H_{red}^{\dagger} = (H_{red}^T H_{red})^{-1} H_{red}^T$ is the Moore-Penrose pseudoinverse of $H_{red}$, $\lfloor \cdot \rceil$ denotes componentwise rounding to the nearest integer, and $Q_S$ is a quantization function that forces the solution to belong to the constellation $S$.

• The LLL-SIC decoder performs the QR decomposition $H_{red} = QR$, computes $\tilde{y} = Q^T y$, finds by recursion the vector $\tilde{x}$ defined by

$$\tilde{x}_m = \left\lfloor \frac{\tilde{y}_m}{r_{mm}} \right\rceil, \qquad \tilde{x}_i = \left\lfloor \frac{\tilde{y}_i - \sum_{j=i+1}^{m} r_{ij} \tilde{x}_j}{r_{ii}} \right\rceil, \quad i = m-1, \ldots, 1,$$

and finally outputs $\hat{x}_{LLL-SIC} = Q_S(U \tilde{x})$.
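The following sketch implements Algorithms 1-3 and the two detectors above in Python/numpy. It is a simplified illustration under our own naming (gso, lll_reduce, lll_zf, lll_sic): column k is fully size-reduced before the Lovász test, the GSO is recomputed from scratch after each swap instead of being updated incrementally, and the final quantization $Q_S$ to the constellation is omitted.

    import numpy as np

    def gso(B):
        # Algorithm 1: Gram-Schmidt orthogonalization of the columns of B.
        m = B.shape[1]
        Bs, mu = B.astype(float), np.zeros((m, m))
        for i in range(m):
            for j in range(i):
                mu[i, j] = (B[:, i] @ Bs[:, j]) / (Bs[:, j] @ Bs[:, j])
                Bs[:, i] -= mu[i, j] * Bs[:, j]
        return Bs, mu

    def lll_reduce(H, delta=0.75):
        # Algorithms 2-3: returns (H_red, U) with H_red = H @ U, U unimodular.
        H = H.astype(float)
        m = H.shape[1]
        U = np.eye(m, dtype=int)
        Bs, mu = gso(H)
        k = 1
        while k < m:
            for l in range(k - 1, -1, -1):            # RED(k, l), Algorithm 3;
                q = round(mu[k, l])                   # leaves the h*_i unchanged
                if q != 0:
                    H[:, k] -= q * H[:, l]
                    U[:, k] -= q * U[:, l]
                    mu[k, :l] -= q * mu[l, :l]
                    mu[k, l] -= q
            nk1 = Bs[:, k - 1] @ Bs[:, k - 1]
            if Bs[:, k] @ Bs[:, k] + mu[k, k - 1] ** 2 * nk1 < delta * nk1:
                H[:, [k - 1, k]] = H[:, [k, k - 1]]   # Lovasz condition fails:
                U[:, [k - 1, k]] = U[:, [k, k - 1]]   # swap columns k-1 and k
                Bs, mu = gso(H)                       # simple full recomputation
                k = max(k - 1, 1)
            else:
                k += 1
        return H, U

    def lll_zf(H, y):
        # LLL-ZF: Babai rounding on the reduced basis (Q_S omitted).
        Hred, U = lll_reduce(H)
        return U @ np.round(np.linalg.pinv(Hred) @ y).astype(int)

    def lll_sic(H, y):
        # LLL-SIC: Babai nearest-plane recursion via QR (Q_S omitted).
        Hred, U = lll_reduce(H)
        Q, R = np.linalg.qr(Hred)
        yt = Q.T @ y
        m = R.shape[1]
        xt = np.zeros(m, dtype=int)
        for i in range(m - 1, -1, -1):
            xt[i] = round((yt[i] - R[i, i + 1:] @ xt[i + 1:]) / R[i, i])
        return U @ xt

Recomputing the GSO after each swap keeps the sketch short at the price of extra O(m³) work per swap; a production implementation would update the Gram-Schmidt data incrementally, as in Algorithm 2.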
III. AUGMENTED LATTICE REDUCTION

We propose here a new decoding technique based on the LLL algorithm which, unlike the LLL-ZF and LLL-SIC decoders, does not require the inversion of the channel matrix at the last stage.

Let $y$ be the (real) received vector in the model (2). Consider the $(n+1) \times (m+1)$ augmented matrix

$$\tilde{H} = \begin{pmatrix} H & -y \\ 0_{1 \times m} & t \end{pmatrix}, \qquad (6)$$

where $t > 0$ is a parameter to be determined. The points of the augmented lattice $\mathcal{L}(\tilde{H})$ are of the form

$$\begin{pmatrix} Hx' - qy \\ qt \end{pmatrix}, \qquad x' \in \mathbb{Z}^m,\ q \in \mathbb{Z}.$$

In particular, the vector

$$v = \begin{pmatrix} Hx - y \\ t \end{pmatrix} = \begin{pmatrix} -w \\ t \end{pmatrix}$$

belongs to the augmented lattice. We will show that, for a suitable choice of the parameter $t$, and supposing that the noise is small enough, $\pm v$ is the shortest vector in the lattice and the LLL algorithm finds this vector. That is, $\pm v$ is the first column of $\tilde{H}_{red} = \tilde{H}\tilde{U}$, the output of the LLL algorithm on $\tilde{H}$. Clearly, since $\tilde{H}$ is full-rank with probability 1, in this case the first column of the change of basis matrix $\tilde{U}$ is $\pm \binom{x}{1}$. Thus we can "read" the transmitted message directly from the change of basis matrix $\tilde{U}$.

To summarize, in order to decode we can perform the LLL algorithm on $\tilde{H}$, and given the output $\tilde{H}_{red} = \tilde{H}\tilde{U}$, we can choose

$$\hat{x} = Q_S\!\left( \left\lfloor \frac{1}{\tilde{u}_{m+1,1}} (\tilde{u}_{1,1}, \ldots, \tilde{u}_{m,1})^T \right\rceil \right), \qquad (7)$$

where $\tilde{U} = (\tilde{u}_{i,j})$.

The previous decoder can be improved by including all the columns of $\tilde{H}_{red}$ in the search for the vector $v$. Specifically, let

$$u_k = \frac{1}{\tilde{u}_{m+1,k}} (\tilde{u}_{1,k}, \ldots, \tilde{u}_{m,k})^T, \qquad k = 1, \ldots, m.$$

If there exists some $k \in \{1, \ldots, m\}$ such that $|\tilde{u}_{m+1,k}| = 1$, we define

$$k_{\min} = \arg\min_{k \,:\, |\tilde{u}_{m+1,k}| = 1} \|H u_k - y\|,$$

otherwise $k_{\min} = 1$. Then the augmented lattice reduction (ALR) decoder outputs

$$\hat{x}_{ALR} = Q_S(\lfloor u_{k_{\min}} \rceil). \qquad (8)$$

IV. PERFORMANCE

A. Diversity

In this section we investigate the performance of augmented lattice reduction. We begin by proving that our method, like LLL-ZF and LLL-SIC, attains the maximum receive diversity gain of N for an appropriate choice of the parameter $t$ in (6). The diversity gain $d$ of a decoding scheme is defined as follows:

$$d = -\lim_{\rho \to \infty} \frac{\log(P_e)}{\log(\rho)},$$

where $P_e$ denotes the error probability as a function of the signal-to-noise ratio $\rho$.

Proposition 1. If the augmented lattice reduction is performed using $t = \varepsilon\, a(H_{red})$, where $a(H_{red})$ is the length of the smallest vector in the Gram-Schmidt orthogonalization of $H_{red}$, and $\varepsilon \le \frac{1}{2\sqrt{2}\,\alpha^{m - \frac{1}{2}}}$, then it achieves the maximum receive diversity N.

Remark. It is essential to use $a(H_{red})$ in place of $a(H)$. In fact, for general bases $H$ that are not LLL-reduced, there is no lower bound of the type (5) limiting how small the smallest Gram-Schmidt vector can be. For $a(H_{red})$, putting together the bounds (3) and (5), we obtain

$$\frac{d_H}{\alpha^{\frac{m-1}{2}}} \le a(H_{red}) \le d_H. \qquad (9)$$

Note that the LLL reduction of $H$ does not entail any additional complexity, since it is the same as the LLL reduction of the first m columns of $\tilde{H}$. In fact, the parameter $t$ can be chosen during the LLL reduction of $\tilde{H}$, after carrying out the LLL algorithm on the first m columns.
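A corresponding sketch of the decoders (7)-(8), reusing gso and lll_reduce from the previous listing. The wiring is our own reading of this section, not a reference implementation: α = 2 (i.e. δ = 3/4) is assumed in the default ε of Proposition 1, $Q_S$ is again omitted, and, as in the text, only the first m columns of $\tilde{U}$ are searched in (8).

    import numpy as np

    def alr_decode(H, y, eps=None):
        # Augmented lattice reduction: build the matrix (6), LLL-reduce it,
        # and read the message off the change-of-basis matrix U~.
        n, m = H.shape
        if eps is None:
            eps = 1 / (2 * np.sqrt(2) * 2.0 ** (m - 0.5))  # Prop. 1, alpha = 2
        # t = eps * a(H_red), with a(H_red) the smallest Gram-Schmidt norm of
        # the reduced basis. (H is reduced separately here for clarity; in
        # practice t is chosen while reducing the first m columns of H~.)
        Hred, _ = lll_reduce(H)
        Bs, _ = gso(Hred)
        t = eps * np.linalg.norm(Bs, axis=0).min()
        Haug = np.zeros((n + 1, m + 1))
        Haug[:n, :m], Haug[:n, m], Haug[n, m] = H, -y, t
        _, Uaug = lll_reduce(Haug)
        # Decoder (8): among the first m columns whose last coordinate is +-1,
        # keep the one minimizing ||H u_k - y||.
        best, best_dist = None, np.inf
        for k in range(m):
            if abs(Uaug[m, k]) == 1:
                xk = Uaug[:m, k] // Uaug[m, k]        # exact for integer columns
                d = np.linalg.norm(H @ xk - y)
                if d < best_dist:
                    best, best_dist = xk, d
        if best is None:                              # fall back to decoder (7)
            best = np.round(Uaug[:m, 0] / Uaug[m, 0]).astype(int)
        return best                                   # assumes u~_{m+1,1} != 0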
In order to prove the previous Proposition, we will show that in the $(m+1)$-dimensional lattice $\mathcal{L}(\tilde{H})$ there is an exponential gap between the first two successive minima. Then, using the estimate (4) on the norm of the first vector of an LLL-reduced basis, one can conclude that in this particular case the LLL algorithm finds the shortest vector of the lattice $\mathcal{L}(\tilde{H})$ with high probability. This, in turn, allows us to recover the closest lattice vector $Hx$ to $y$ in $\mathcal{L}(H)$, supposing that the noise $w$ is small enough.

The following definition makes the notion of "gap" more precise:

Definition. Let $v$ be a shortest nonzero vector in the lattice $\mathcal{L}(H)$, and let $\gamma > 1$. $v$ is called $\gamma$-unique if

$$\forall u \in \mathcal{L}(H), \quad \|u\| \le \gamma \|v\| \;\Rightarrow\; u, v \text{ are linearly dependent.}$$

We now prove the existence of such a gap under suitable conditions:

Lemma 1. Let $\tilde{H}$ be the matrix defined in (6), and let $t = \varepsilon\, a(H_{red})$, with $\varepsilon \le \frac{1}{2\sqrt{2}\,\alpha^{m - \frac{1}{2}}}$. Suppose that

$$\|w\| = \|y - Hx\| \le \varepsilon d_H.$$

Then $v = \binom{Hx - y}{t}$ is an $\alpha^{\frac{m}{2}}$-unique shortest vector of $\mathcal{L}(\tilde{H})$.

Remark. Observe that the hypothesis on $w$ implies in particular that $\|w\| < \frac{d_H}{2}$, so that $Hx$ is indeed the closest lattice point to $y$.

Proof: We need to show that any vector $u \in \mathcal{L}(\tilde{H})$ that is not a multiple of $v$ must have length greater than $\alpha^{\frac{m}{2}} \|v\|$.

By contradiction, suppose that there exists $u = \binom{Hx' - qy}{qt} \in \mathcal{L}(\tilde{H})$, linearly independent from $v$, such that $\|u\| \le \alpha^{\frac{m}{2}} \|v\|$. Since $\|u\| \ge |q|\, t$,

$$|q| \le \frac{\|u\|}{t} \le \frac{\alpha^{\frac{m}{2}} \|v\|}{t}.$$

On the other hand, $\|u\| \le \alpha^{\frac{m}{2}} \|v\|$ implies that also $\|Hx' - qy\| \le \alpha^{\frac{m}{2}} \|v\|$. Consider

$$\|Hx' - qHx\| \le \|Hx' - qy\| + \|qy - qHx\| \le \alpha^{\frac{m}{2}} \|v\| + |q|\,\|y - Hx\| \le \alpha^{\frac{m}{2}} \|v\| + \frac{\alpha^{\frac{m}{2}} \|v\|}{t}\,\|w\| = \alpha^{\frac{m}{2}} \sqrt{\|w\|^2 + t^2}\,\left(1 + \frac{\|w\|}{t}\right). \qquad (10)$$

The bound (9) on $a(H_{red})$ implies

$$\frac{\varepsilon}{\alpha^{\frac{m-1}{2}}}\, d_H \le t \le \varepsilon d_H.$$

Using this inequality and the hypotheses on $\|w\|$ and $\varepsilon$, we can bound the expression (10) by

$$\alpha^{\frac{m}{2}}\, \sqrt{2}\, \varepsilon d_H \left(1 + \alpha^{\frac{m-1}{2}}\right) < 2\sqrt{2}\, \varepsilon d_H\, \alpha^{\frac{m}{2}} \alpha^{\frac{m-1}{2}} \le d_H.$$

Thus $\|Hx' - qHx\| < d_H$. But this is a contradiction, because $Hx' - qHx \in \mathcal{L}(H)$ and is nonzero, since $v$ and $u$ are linearly independent. Therefore $v$ is $\alpha^{\frac{m}{2}}$-unique. (Since the last coordinate of $v$ in the basis $\tilde{H}$ is 1, $v$ cannot be a nontrivial multiple of another lattice vector.) □

Remark. The lower bound on $t$ is essential to ensure that $|q|$ is bounded. If $|q|$ were unbounded, clearly $\|Hx' - qy\|$ might be arbitrarily small and there might exist $u \in \mathcal{L}(\tilde{H})$ of smaller norm than $v$.

Lemma 2. Under the hypotheses of Lemma 1, the augmented lattice reduction decoders (7) and (8) correctly decode the transmitted signal $x$.

Proof: Let $\tilde{H}_{red} = \tilde{H}\tilde{U}$ denote the output of the LLL reduction of $\tilde{H}$, and let $\hat{h}_1 = \tilde{H} \binom{x'}{q}$ be its first column. The property (4) of LLL reduction in dimension $m+1$ entails that $\|\hat{h}_1\| \le \alpha^{\frac{m}{2}} d_{\tilde{H}}$. But since $v = \binom{Hx - y}{t}$ has been shown to be $\alpha^{\frac{m}{2}}$-unique in the previous Lemma, it follows that $\hat{h}_1$ and $v$ are linearly dependent, that is, $\hat{h}_1 = \pm v$.
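As a quick numerical sanity check of Lemmas 1 and 2 (a test harness of our own, with toy parameters, reusing the sketches above): when the noise is far below the threshold $\varepsilon d_H$, the first column of $\tilde{U}$ should read off $\pm\binom{x}{1}$, so the ALR decoder should return x in essentially every trial.

    import numpy as np

    # Our own toy harness: near-noiseless 2x2 complex MIMO (m = n = 4).
    rng = np.random.default_rng(0)
    N = M = 2
    errors, trials = 0, 200
    for _ in range(trials):
        Hc = (rng.standard_normal((N, M))
              + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
        H = np.block([[Hc.real, -Hc.imag], [Hc.imag, Hc.real]])
        x = rng.integers(-2, 3, size=2 * M)              # toy integer data
        y = H @ x + 1e-4 * rng.standard_normal(2 * N)    # "small enough" noise
        errors += not np.array_equal(alr_decode(H, y), x)
    print(f"ALR errors over {trials} near-noiseless trials: {errors}")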
