A Rate-Splitting Approach to Fading Channels with Imperfect Channel-State Information

Adriano Pastore, Student Member, IEEE, Tobias Koch, Member, IEEE, and Javier Rodríguez Fonollosa, Senior Member, IEEE

Abstract

As shown by Médard ("The effect upon channel capacity in wireless communications of perfect and imperfect knowledge of the channel," IEEE Trans. Inf. Theory, May 2000), the capacity of fading channels with imperfect channel-state information (CSI) can be lower-bounded by assuming a Gaussian channel input $X$ with power $P$ and by upper-bounding the conditional entropy $h(X|Y,\hat{H})$, conditioned on the channel output $Y$ and the CSI $\hat{H}$, by the entropy of a Gaussian random variable with variance equal to the linear minimum mean-square error in estimating $X$ from $(Y,\hat{H})$. We demonstrate that, using a rate-splitting approach, this lower bound can be sharpened: by expressing the Gaussian input $X$ as the sum of two independent Gaussian variables $X_1$ and $X_2$, by applying Médard's lower bound first to bound the mutual information between $X_1$ and $Y$ while treating $X_2$ as noise, and by applying the lower bound then to bound the mutual information between $X_2$ and $Y$ while assuming $X_1$ to be known, we obtain a lower bound on the capacity that is strictly larger than Médard's lower bound. We then generalize this approach to an arbitrary number $L$ of layers, where $X$ is expressed as the sum of $L$ independent Gaussian random variables of respective variances $P_\ell$, $\ell = 1,\dotsc,L$, summing up to $P$. Among all such rate-splitting bounds, we determine the supremum over power allocations $P_\ell$ and total number of layers $L$. This supremum is achieved for $L \to \infty$ and gives rise to an analytically expressible lower bound on the Gaussian-input mutual information. For Gaussian fading, this novel bound is shown to be asymptotically tight at high signal-to-noise ratio (SNR), provided that the variance of the channel estimation error $H - \hat{H}$ tends to zero as the SNR tends to infinity.

A. Pastore and J. R. Fonollosa have been supported by the Ministerio de Economía y Competitividad of Spain (TEC2010-19171 and CONSOLIDER-INGENIO 2010 CSD2008-00010 COMONSENS) and Generalitat de Catalunya (2009SGR-1236). T. Koch has been supported by the Ministerio de Economía y Competitividad of Spain (TEC2009-14504-C02-01 DEIPRO and CONSOLIDER-INGENIO 2010 CSD2008-00010 COMONSENS). The material in this paper was presented in part at the IEEE 27th Convention of Electrical and Electronics Engineers in Israel, Eilat, Israel, November 14–17, 2012.

A. Pastore and J. R. Fonollosa are with the Department of Signal Theory and Communications, Universitat Politècnica de Catalunya, Jordi Girona 1-3, Building D5, 08034 Barcelona, Spain (email: {adriano.pastore, javier.fonollosa}@upc.edu). T. Koch is with the Signal Theory and Communications Department, Universidad Carlos III de Madrid, 28911 Leganés, Spain (email: [email protected]).

I. INTRODUCTION AND CHANNEL MODEL

We consider a single-antenna memoryless fading channel with imperfect channel-state information (CSI), whose time-$k$ channel output $Y[k]$ corresponding to a time-$k$ channel input $X[k] = x \in \mathbb{C}$ (where $\mathbb{C}$ denotes the set of complex numbers) is given by

$$Y[k] = \bigl(\hat{H}[k] + \tilde{H}[k]\bigr)\,x + Z[k], \qquad k \in \mathbb{Z} \tag{1}$$

(with $\mathbb{Z}$ denoting the set of integers). Here, the noise $\{Z[k]\}_{k\in\mathbb{Z}}$ is a sequence of independent and identically distributed (i.i.d.), zero-mean, circularly-symmetric, complex Gaussian random variables with variance $\mathsf{E}[|Z[k]|^2] = N_0$. The fading pair $\{(\hat{H}[k], \tilde{H}[k])\}_{k\in\mathbb{Z}}$ is an arbitrary sequence of i.i.d.
complex-valued random variables whose means and variances satisfy the following conditions:

- $\hat{H}[k]$ has mean $\mu$ and variance $\hat{V}$;
- conditioned on $\hat{H}[k] = \hat{h}$, the random variable $\tilde{H}[k]$ has zero mean and variance $\tilde{V}(\hat{h})$, i.e.,

$$\mathsf{E}\bigl[\tilde{H}[k] \,\big|\, \hat{H}[k] = \hat{h}\bigr] = 0 \tag{2a}$$
$$\mathsf{E}\bigl[|\tilde{H}[k]|^2 \,\big|\, \hat{H}[k] = \hat{h}\bigr] \triangleq \tilde{V}(\hat{h}). \tag{2b}$$

We assume that the joint sequence $\{(\hat{H}[k], \tilde{H}[k])\}_{k\in\mathbb{Z}}$, the noise sequence $\{Z[k]\}_{k\in\mathbb{Z}}$, and the input sequence $\{X[k]\}_{k\in\mathbb{Z}}$ are all three mutually independent. We further assume that the receiver is cognizant of the realization of $\{\hat{H}[k]\}_{k\in\mathbb{Z}}$, but the transmitter is only cognizant of its distribution. We finally assume that both the transmitter and receiver are cognizant of the distributions of $\{\tilde{H}[k]\}_{k\in\mathbb{Z}}$ and $\{Z[k]\}_{k\in\mathbb{Z}}$, but not of their realizations.

$\hat{H}[k]$ can be viewed as an estimate of the channel fading coefficient

$$H[k] \triangleq \hat{H}[k] + \tilde{H}[k]. \tag{3}$$

Accordingly, $\tilde{H}[k]$ can be viewed as the channel estimation error. From this perspective, condition (2a) is, for example, satisfied when $\hat{H}[k]$ is the minimum mean-square error (MMSE) estimate of $H[k]$ from some receiver side information. When $\tilde{H}[k] = 0$ almost surely, we shall say that the receiver has perfect CSI.

The capacity of the above channel (1) under the average-power constraint $P$ on the channel inputs is given by [1]

$$C(P) = \sup I(X; Y | \hat{H}) \tag{4}$$

where the supremum is over all distributions of $X$ satisfying $\mathsf{E}[|X|^2] \le P$. Here and throughout the paper we omit the time indices $k$ wherever they are immaterial.

Since (4) is difficult to evaluate, even if $\hat{H}$ and $\tilde{H}$ are Gaussian, it is common to assess $C(P)$ using upper and lower bounds. A widely-used lower bound on $C(P)$ is due to Médard [2]:

$$C(P) \ge \mathsf{E}\!\left[\log\!\left(1 + \frac{|\hat{H}|^2 P}{\tilde{V}(\hat{H}) P + N_0}\right)\right] \triangleq R_{\mathrm{M}}(P). \tag{5}$$

This lower bound follows from (4) by choosing $X$ to be zero-mean, variance-$P$ Gaussian and by upper-bounding the differential entropy of $X$ conditioned on $Y$ and $\hat{H}$ as

$$h(X | Y, \hat{H}) = h(X - \alpha Y | Y, \hat{H}) \le h(X - \alpha Y | \hat{H}) \le \mathsf{E}\Bigl[\log\Bigl(\pi e\, \mathsf{E}\bigl[|X - \alpha Y|^2 \,\big|\, \hat{H}\bigr]\Bigr)\Bigr] \tag{6}$$

for any $\alpha \in \mathbb{C}$. Here the first inequality follows because conditioning cannot increase entropy, and the subsequent inequality follows because the Gaussian distribution maximizes differential entropy for a given second moment [3, Th. 9.6.5]. By expressing the mutual information $I(X; Y | \hat{H})$ as

$$I(X; Y | \hat{H}) = h(X) - h(X | Y, \hat{H}) \tag{7}$$

and by choosing $\alpha$ so that $\alpha Y$ is the linear MMSE estimate of $X$, the lower bound (5) follows.

When the receiver has perfect CSI, so that $\mathsf{E}[\tilde{V}(\hat{H})] = 0$, the lower bound $R_{\mathrm{M}}(P)$ is equal to the channel capacity

$$C_{\mathrm{coh}}(P) = \mathsf{E}\!\left[\log\!\left(1 + \frac{|H|^2 P}{N_0}\right)\right]. \tag{8}$$

Consequently, for perfect CSI the lower bound (5) is tight.

In contrast, when the receiver has imperfect CSI and $\tilde{V}(\hat{H})$ and $\hat{H}$ do not depend on $P$, the lower bound (5) is loose. In fact, in this case the lower bound $R_{\mathrm{M}}(P)$ is bounded in $P$, whereas the capacity $C(P)$ is known to be unbounded. For instance, if $\tilde{H}$ is of finite differential entropy, then the capacity has a double-logarithmic growth in $P$ [4].¹ This boundedness of $R_{\mathrm{M}}(P)$ is not due to the inequalities in (6) being loose, but is a consequence of choosing a Gaussian channel input. Indeed, if $\tilde{H}$ is of finite differential entropy, then a Gaussian input $X_{\mathrm{G}}$ achieves [5, Proposition 6.3.1], [4, Lemma 4.5]

$$\varlimsup_{P \to \infty} I(X_{\mathrm{G}}; Y | \hat{H}) \le \gamma + \log\bigl(\pi e\, \mathsf{E}\bigl[|\hat{H} + \tilde{H}|^2\bigr]\bigr) - h(\tilde{H}) \tag{9}$$

where $\gamma \approx 0.577$ denotes Euler's constant and where $\varlimsup$ denotes the limit superior.

¹ This result can be generalized to show that if $\mathsf{E}[\log|\hat{H} + \tilde{H}|^2] > -\infty$ holds, then the capacity grows at least double-logarithmically with $P$.
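As a quick numerical illustration of this saturation, the following sketch (ours, not the paper's; it assumes Rayleigh fading with the parameters $\mu = 0$, $\hat{V} = 1/2$, $\tilde{V}(\hat{h}) = 1/2$ used in the examples below, and reports rates in nats) evaluates (5) by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

def medard_bound(P, N0=1.0, V_hat=0.5, V_tilde=0.5, n=200_000):
    """Monte Carlo estimate of Medard's lower bound (5), in nats.

    Assumes a zero-mean circularly-symmetric Gaussian channel estimate
    (so |Hhat|^2 is exponential with mean V_hat) and a constant
    conditional error variance V_tilde(h) = V_tilde.
    """
    g = rng.exponential(V_hat, size=n)  # draws of |Hhat|^2
    return np.mean(np.log1p(g * P / (V_tilde * P + N0)))

for P in [1, 10, 100, 1e4, 1e6]:
    print(f"P = {P:>9g}:  R_M(P) ≈ {medard_bound(P):.4f} nats")
```

As $P$ grows, the printed values approach $\mathsf{E}[\log(1 + |\hat{H}|^2/\tilde{V}(\hat{H}))] < \infty$, confirming that $R_{\mathrm{M}}(P)$ is bounded in $P$ whenever the CSI quality does not improve with the SNR.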
Nevertheless, even if we restrict ourselves to Gaussian inputs, the lower bound

$$I(X_{\mathrm{G}}; Y | \hat{H}) \ge R_{\mathrm{M}}(P) \tag{10}$$

is not tight. As we shall see, by using a rate-splitting (or successive-decoding) approach, this lower bound (10) can be sharpened: we show in Section II that, by expressing the Gaussian input $X_{\mathrm{G}}$ as the sum of two independent Gaussian random variables $X_1$ and $X_2$, and by first applying the bounding technique sketched in (6)–(7) to $I(X_1; Y | \hat{H})$ (thus treating $\hat{H} X_2 + \tilde{H} X$ as noise) and then using the same bounding technique to lower-bound $I(X_2; Y | \hat{H}, X_1)$, we obtain a lower bound on the Gaussian-input mutual information (and thus also on the capacity) that is strictly larger than the conventional bound $R_{\mathrm{M}}(P)$.

In Section III, we expand this approach by expressing $X$ as the sum of $L \ge 2$ independent Gaussian random variables $X_\ell$, $\ell = 1,\dotsc,L$, and by applying the bounding technique from (6)–(7) first to $I(X_1; Y | \hat{H})$, then to $I(X_2; Y | \hat{H}, X_1)$, and so on. We show that the so-obtained lower bound is strictly increasing in $L$ (provided that we optimize the sum of bounds over the powers $P_\ell = \mathsf{E}[|X_\ell|^2]$, $\ell = 1,\dotsc,L$), and we determine its limit as $L$ tends to infinity. The so-obtained lower bound permits an analytic expression. In the remainder of this paper, we shall refer to $\ell$ as a layer and to $L$ as the number of layers.

In Section IV, we show that when, conditioned on $\hat{H}$, the estimation error $\tilde{H}$ is Gaussian, and when its average variance (averaged over $\hat{H}$) tends to zero as the SNR tends to infinity, the new lower bound tends to the Gaussian-input mutual information $I(X_{\mathrm{G}}; Y | \hat{H})$ as the SNR tends to infinity. For non-Gaussian fading, we show that, at high SNR, the difference between $I(X_{\mathrm{G}}; Y | \hat{H})$ and our lower bound is upper-bounded by the difference of the logarithms of the variance of $\tilde{H}$ and of its entropy power.

The rest of the paper is organized as follows. In Section V we discuss the connection of our results with similar results obtained in the mismatched-decoding literature. In Sections VI and VII we provide the proofs of the main results. And in Section VIII we conclude the paper with a summary and discussion.

II. RATE-SPLITTING WITH TWO LAYERS

For future reference, we state Médard's lower bound (5) in a slightly more general form in the following proposition.

Proposition 1 (Médard [2]). Let $S$ be a zero-mean, circularly-symmetric, complex Gaussian random variable of variance $P$. Let $A$ and $B$ be complex-valued random variables of finite second moments, and let $C$ be an arbitrary random variable. Assume that $S$ is independent of $(A, C)$ and that, conditioned on $(A, C)$, the variables $S$ and $B$ are uncorrelated. Then

$$I(S; AS + B \,|\, A, C) \ge \mathsf{E}\!\left[\log\!\left(1 + \frac{|A|^2 P}{V_B(A, C)}\right)\right] \tag{11}$$

where $V_B(a, c)$ denotes the conditional variance of $B$ conditioned on $(A, C) = (a, c)$.

Proof: See Appendix A.

Using Proposition 1, we show that, for imperfect CSI and $\mathsf{E}[|\hat{H}|^2] > 0$, rate splitting with two layers strictly improves the lower bound (10). Indeed, let $X_1$ and $X_2$ be independent, zero-mean, circularly-symmetric, complex Gaussian random variables with respective variances $P_1$ and $P_2$ (satisfying $P_1 + P_2 = P$) such that $X = X_1 + X_2$. By the chain rule for mutual information, we obtain

$$I(X; Y | \hat{H}) = I(X_1, X_2; Y | \hat{H}) = I(X_1; Y | \hat{H}) + I(X_2; Y | \hat{H}, X_1). \tag{12}$$

By replacing the variables $A$, $B$, $C$ and $S$ in Proposition 1 with

$$A \leftarrow \hat{H}, \quad B \leftarrow \hat{H} X_2 + \tilde{H} X + Z, \quad C \leftarrow 0, \quad S \leftarrow X_1$$

it follows that the first term on the right-hand side (RHS) of (12) is lower-bounded as

$$I(X_1; Y | \hat{H}) \ge \mathsf{E}\!\left[\log\!\left(1 + \frac{|\hat{H}|^2 P_1}{\bigl(|\hat{H}|^2 + \tilde{V}(\hat{H})\bigr) P_2 + \tilde{V}(\hat{H}) P_1 + N_0}\right)\right] \triangleq R_1(P_1, P_2). \tag{13}$$

Similarly, by replacing $A$, $B$, $C$, $S$ in Proposition 1 with

$$A \leftarrow \hat{H}, \quad B \leftarrow \hat{H} X_1 + \tilde{H} X + Z, \quad C \leftarrow X_1, \quad S \leftarrow X_2$$

we obtain for the second term on the RHS of (12)

$$I(X_2; Y | \hat{H}, X_1) \ge \mathsf{E}\!\left[\log\!\left(1 + \frac{|\hat{H}|^2 P_2}{\tilde{V}(\hat{H})\bigl(|X_1|^2 + P_2\bigr) + N_0}\right)\right] \triangleq R_2(P_1, P_2). \tag{14}$$

Noting that, for every $\alpha > 0$, the function $x \mapsto \log(1 + \alpha/x)$ is strictly convex in $x \ge 0$, it follows from Jensen's inequality that the RHS of (14) is lower-bounded as

$$\mathsf{E}\!\left[\log\!\left(1 + \frac{|\hat{H}|^2 P_2}{\tilde{V}(\hat{H})(|X_1|^2 + P_2) + N_0}\right)\right] \ge \mathsf{E}\!\left[\log\!\left(1 + \frac{|\hat{H}|^2 P_2}{\tilde{V}(\hat{H})(P_1 + P_2) + N_0}\right)\right] \tag{15}$$

with the inequality being strict except in the trivial cases where $P_1 = 0$, $P_2 = 0$, or if, with probability one, at least one of $|\hat{H}|$ and $\tilde{V}(\hat{H})$ is zero.² Thus, combining (12)–(15), we obtain

$$R_1(P_1, P_2) + R_2(P_1, P_2) \ge \mathsf{E}\!\left[\log\!\left(1 + \frac{|\hat{H}|^2 P}{\tilde{V}(\hat{H}) P + N_0}\right)\right] \tag{16}$$

demonstrating that, when the receiver has imperfect CSI, rate splitting with two layers strictly improves the capacity and mutual-information lower bound (5) (except in trivial cases).

² We may write this as $\Pr\{\hat{H} \cdot \tilde{V}(\hat{H}) = 0\} = 1$. For example, this occurs when the receiver has perfect CSI, in which case $\tilde{V}(\hat{H}) = 0$ almost surely.
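To make the two-layer gain concrete, the following sketch (again ours, under the same assumed Rayleigh parameters as Figure 1 below: $\mu = 0$, $\hat{V} = \tilde{V} = 1/2$, $P = 10$, $N_0 = 1$, rates in nats) evaluates $R_1(P_1, P - P_1) + R_2(P_1, P - P_1)$ by Monte Carlo (the expectation in (14) runs over both $\hat{H}$ and $|X_1|^2$) and scans the power split $P_1/P$:

```python
import numpy as np

rng = np.random.default_rng(1)
N0, P, V_tilde = 1.0, 10.0, 0.5
n = 500_000
g = rng.exponential(0.5, size=n)         # |Hhat|^2 (Rayleigh, V_hat = 1/2)

def two_layer_bound(P1):
    """Monte Carlo estimate of R1(P1, P-P1) + R2(P1, P-P1), eqs. (13)-(14)."""
    P2 = P - P1
    r1 = np.log1p(g * P1 / ((g + V_tilde) * P2 + V_tilde * P1 + N0))
    x1_sq = rng.exponential(P1, size=n)  # |X1|^2 for Gaussian X1 of power P1
    r2 = np.log1p(g * P2 / (V_tilde * (x1_sq + P2) + N0))
    return np.mean(r1 + r2)

R_M = np.mean(np.log1p(g * P / (V_tilde * P + N0)))
splits = np.linspace(0.05, 0.95, 19)
rates = [two_layer_bound(s * P) for s in splits]
print(f"R_M ≈ {R_M:.4f} nats; best two-layer rate ≈ {max(rates):.4f} nats "
      f"at P1/P ≈ {splits[int(np.argmax(rates))]:.2f}")
```

With these parameters the optimum lands near $P_1/P \approx 0.8$, consistent with the allocation $P_1 \approx 0.78 P$ reported below (the figure is plotted in bits, while the sketch prints nats).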
Figure 1 compares the two-layer bound $R_1(P_1, P_2) + R_2(P_1, P_2)$ with $R_{\mathrm{M}}(P)$ (dashed line) as a function of $P_1/P$, for $\hat{H}$ and $\tilde{H}$ being circularly-symmetric Gaussian with parameters $\mu = 0$, $\hat{V} = \frac{1}{2}$, $\tilde{V}(\hat{h}) = \frac{1}{2}$ for $\hat{h} \in \mathbb{C}$, $P = 10$, and $N_0 = 1$. The figure confirms our above observation that, when the receiver has imperfect CSI and $P_1 > 0$ and $P_2 > 0$, rate splitting with two layers outperforms $R_{\mathrm{M}}(P)$ (5). In this example, the optimal power allocation is approximately at $P_1 \approx 0.78 P$ and $P_2 \approx 0.22 P$.

[Fig. 1. Comparison of the 2-layer lower bound $R_1(P_1, P - P_1) + R_2(P_1, P - P_1)$ (continuous line) with Médard's lower bound $R_{\mathrm{M}}(P)$ (dashed line) as a function of the power fraction $P_1/P$ assigned to the first layer. Vertical axis: mutual information [bits/channel use], ranging from 0.75 to 0.83.]

One might wonder whether extending our approach to more than two layers can further improve the lower bound. As we shall see in the following section, it does. In fact, for every power $P$ we show that, once the power is optimally allocated across layers, the rate-splitting lower bound is strictly increasing in the number of layers.

III. RATE-SPLITTING WITH L LAYERS

Let $X_1, \dotsc, X_L$ be independent, zero-mean, circularly-symmetric, complex Gaussian random variables with respective variances $P_1, \dotsc, P_L$ satisfying

$$\sum_{\ell=1}^{L} P_\ell = P \tag{17}$$

such that

$$X = \sum_{\ell=1}^{L} X_\ell.$$

Let the cumulative power $Q_k$ be given by

$$Q_k = \sum_{\ell=1}^{k} P_\ell. \tag{18}$$

We denote the collection of cumulative powers as

$$Q = (Q_1, \dotsc, Q_L) \tag{19}$$

and refer to it as an $L$-layering.
It follows from the chain rule for mutual information that

$$I\bigl(X^L; Y | \hat{H}\bigr) = \sum_{\ell=1}^{L} I\bigl(X_\ell; Y | X^{\ell-1}, \hat{H}\bigr) \tag{20}$$

where we use the shorthand $A^N$ to denote the sequence $A_1, \dotsc, A_N$, and $A^0$ denotes the empty sequence. Applying Proposition 1 by replacing $A$, $B$, $C$, $S$ with the respective

$$A \leftarrow \hat{H}, \quad B \leftarrow \hat{H} \sum_{\ell' \ne \ell} X_{\ell'} + \tilde{H} X + Z, \quad C \leftarrow X^{\ell-1}, \quad S \leftarrow X_\ell,$$

we can lower-bound the $\ell$-th summand on the RHS of (20) as

$$I\bigl(X_\ell; Y | X^{\ell-1}, \hat{H}\bigr) \ge \mathsf{E}\Bigl[\log\bigl(1 + \Gamma_{\ell,Q}(X^{\ell-1}, \hat{H})\bigr)\Bigr] \triangleq R_\ell[Q] \tag{21}$$

where

$$\Gamma_{\ell,Q}(X^{\ell-1}, \hat{H}) \triangleq \frac{|\hat{H}|^2 P_\ell}{\tilde{V}(\hat{H})\bigl|\sum_{i<\ell} X_i\bigr|^2 + \tilde{V}(\hat{H}) P_\ell + \bigl(|\hat{H}|^2 + \tilde{V}(\hat{H})\bigr) \sum_{i>\ell} P_i + N_0} \tag{22}$$

and where the RHS of (21) should be viewed as the definition of $R_\ell[Q]$. Defining

$$R[Q] \triangleq R_1[Q] + \dotsb + R_L[Q] \tag{23}$$

we obtain from (20) and (21) the lower bound

$$I(X; Y | \hat{H}) = I\bigl(X^L; Y | \hat{H}\bigr) \ge R[Q]. \tag{24}$$

Note that $Q_{\ell-1} = Q_\ell$ implies $P_\ell = 0$, which in turn implies $R_\ell[Q] = 0$. Without loss of optimality, we can therefore restrict ourselves to $L$-layerings satisfying

$$0 < Q_1 < \dotsb < Q_L = P. \tag{25}$$

We shall denote the set of such $L$-layerings by $\mathcal{Q}(P, L)$. Let $R^\star(P, L)$ denote the lower bound $R[Q]$ optimized over all $Q \in \mathcal{Q}(P, L)$, i.e.,

$$R^\star(P, L) \triangleq \sup_{Q \in \mathcal{Q}(P, L)} R[Q]. \tag{26}$$

In the following, we show that $R^\star(P, L)$ is monotonically increasing in $L$. To this end, we need the following lemma.

Lemma 2. Let $L' > L$, and let the $L$-layering $Q \in \mathcal{Q}(P, L)$ and the $L'$-layering $Q' \in \mathcal{Q}(P, L')$ satisfy

$$\{Q_1, \dotsc, Q_L\} \subset \{Q'_1, \dotsc, Q'_{L'}\}. \tag{27}$$

Then

$$R[Q] \le R[Q'] \tag{28}$$

with equality if, and only if, $\Pr\{\hat{H} \cdot \tilde{V}(\hat{H}) = 0\} = 1$.

Proof: See Appendix B.

Theorem 3. Assume that $\Pr\{\hat{H} \cdot \tilde{V}(\hat{H}) = 0\} < 1$. Then, $R^\star(P, L)$ is monotonically nondecreasing in $L$.

Proof: For every $L$-layering $Q \in \mathcal{Q}(P, L)$, we can construct an $(L+1)$-layering $Q' \in \mathcal{Q}(P, L+1)$ satisfying $Q \subset Q'$ by adding $(Q_1 + Q_2)/2$ to $Q$. Together with Lemma 2, this implies that for every $Q \in \mathcal{Q}(P, L)$ there exists a $Q' \in \mathcal{Q}(P, L+1)$ such that $R[Q] < R[Q']$, from which the theorem follows upon maximizing both sides of the inequality over all layerings $Q \in \mathcal{Q}(P, L)$ and $Q' \in \mathcal{Q}(P, L+1)$, respectively.

It follows from Theorem 3 that the best lower bound, optimized over all layerings of fixed sum-power $P$,

$$R^\star(P) \triangleq \sup_{L \in \mathbb{N}} \sup_{Q \in \mathcal{Q}(P, L)} R[Q] = \sup_{L \in \mathbb{N}} R^\star(P, L) \tag{29}$$

(where $\mathbb{N}$ denotes the positive integers) is approached by letting the number of layers $L$ tend to infinity. An explicit expression for $R^\star(P)$ is provided by the following theorem.

Theorem 4. For a given input power $P$, the supremum of all rate-splitting lower bounds $R[Q]$ over $Q \in \mathcal{Q}(P, L)$ and $L \in \mathbb{N}$ is given by

$$R^\star(P) = \lim_{L \to \infty} R^\star(P, L) = \mathsf{E}\!\left[\frac{|\hat{H}|^2}{|\hat{H}|^2 + \tilde{V}(\hat{H}) + \frac{N_0}{P}} \,\Theta\!\left(\frac{\tilde{V}(\hat{H})(W - 1) - |\hat{H}|^2}{|\hat{H}|^2 + \tilde{V}(\hat{H}) + \frac{N_0}{P}}\right)\right] \tag{30}$$

where

$$\Theta(x) \triangleq \begin{cases} \frac{1}{x}\log(1 + x), & \text{if } -1 < x < 0 \text{ or } x > 0 \\ 1, & \text{if } x = 0 \end{cases} \tag{31}$$

and where $W$ is independent of $\hat{H}$ and exponentially distributed with mean 1.

Proof: The proof of Theorem 4 is given in Section VI.
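As a numerical check of Theorems 3 and 4 (our own sketch, under the same assumed Rayleigh parameters as before and in nats), the following evaluates $R[U(P, L)]$ for the uniform layering $U(P, L)$ of Remark 1 below, together with the closed form (30):

```python
import numpy as np

rng = np.random.default_rng(2)
N0, P, V_tilde = 1.0, 10.0, 0.5
n = 400_000
g = rng.exponential(0.5, size=n)   # |Hhat|^2 (Rayleigh, V_hat = 1/2)
w = rng.exponential(1.0, size=n)   # W ~ Exp(1), as in Theorem 4

def R_uniform(L):
    """Monte Carlo estimate of R[U(P, L)] via (21)-(23), uniform layering."""
    Pl = P / L
    total = 0.0
    for ell in range(1, L + 1):
        # |sum_{i<ell} X_i|^2 is exponential with mean Q_{ell-1} = (ell-1)*Pl.
        s = rng.exponential((ell - 1) * Pl, size=n) if ell > 1 else 0.0
        denom = V_tilde * s + V_tilde * Pl + (g + V_tilde) * (L - ell) * Pl + N0
        total += np.mean(np.log1p(g * Pl / denom))
    return total

# Closed form (30), evaluated by Monte Carlo over (Hhat, W); x != 0 a.s.
d = g + V_tilde + N0 / P
x = (V_tilde * (w - 1) - g) / d
R_star = np.mean(g / d * np.log1p(x) / x)

for L in [1, 2, 4, 8, 16, 64]:
    print(f"L = {L:>2d}:  R[U(P,L)] ≈ {R_uniform(L):.4f} nats")
print(f"Theorem 4: R*(P) ≈ {R_star:.4f} nats")
```

Note that $R[U(P, 1)]$ reproduces $R_{\mathrm{M}}(P)$, and that the printed rates increase with $L$ toward the Theorem 4 value, as Lemma 2 predicts for these nested layerings.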
Remark 1. The proof of Theorem 4 hinges on the observation that the supremum $R^\star(P)$ is approached by a uniform layering

$$U(P, L) \triangleq \left(\frac{P}{L}, 2\frac{P}{L}, \dotsc, (L-1)\frac{P}{L}, P\right) \tag{32}$$

when the number of layers $L$ is taken to infinity. While this layering was chosen for mathematical convenience, any other layering would also do, provided that some regularity conditions are met. For example, one can show that for any Lipschitz-continuous monotonic bijection $F\colon [0, P] \to [0, P]$, we have

$$\lim_{L \to \infty} R\bigl[F\bigl(U(P, L)\bigr)\bigr] = \lim_{L \to \infty} R\bigl[U(P, L)\bigr] = R^\star(P) \tag{33}$$

where $F\bigl(U(P, L)\bigr) = \bigl\{F(P/L), F(2P/L), \dotsc, F(P)\bigr\}$.

To assess the tightness of the derived lower bounds, we consider two upper bounds on the mutual information for Gaussian inputs. The first upper bound is the capacity when the receiver has perfect CSI [cf. (8)] and follows by noting that improving the CSI at the receiver does not reduce mutual information:

$$I(X_{\mathrm{G}}; Y | \hat{H}) \le \mathsf{E}\!\left[\log\!\left(1 + \frac{|H|^2 P}{N_0}\right)\right] \triangleq C_{\mathrm{coh}}(P). \tag{34}$$

The second upper bound is given by

$$I(X_{\mathrm{G}}; Y | \hat{H}) \le R_{\mathrm{M}}(P) + \mathsf{E}\!\left[\log\!\left(\frac{\tilde{V}(\hat{H}) P + N_0}{\tilde{\Phi}(\hat{H}) P W + N_0}\right)\right] \triangleq I_{\mathrm{upper}}(P) \tag{35}$$

where $W$ is an exponentially distributed random variable of mean 1, and where $\tilde{\Phi}(\hat{h})$ denotes the conditional entropy power of $\tilde{H}$, conditioned on $\hat{H} = \hat{h}$:³

$$\tilde{\Phi}(\hat{h}) = \begin{cases} \frac{1}{\pi e}\, e^{h(\tilde{H} | \hat{H} = \hat{h})}, & \text{if } h(\tilde{H} | \hat{H} = \hat{h}) > -\infty \\ 0, & \text{otherwise.} \end{cases} \tag{36}$$

This upper bound follows from expanding the mutual information as $h(Y | \hat{H}) - h(Y | X_{\mathrm{G}}, \hat{H})$, upper-bounding $h(Y | \hat{H})$ by the entropy of a Gaussian variable of the same variance, and lower-bounding $h(Y | X_{\mathrm{G}}, \hat{H})$ using the entropy-power inequality [6, Theorem 6]. The upper bound (35) was previously used, e.g., in [7, Equation (42)] and [8, Lemma 2] for Gaussian fading, in which case the entropy-power inequality is tight and the entropy power equals the conditional variance, i.e., $\tilde{\Phi}(\hat{h}) = \tilde{V}(\hat{h})$ for $\hat{h} \in \mathbb{C}$.

³ We define $h(\tilde{H} | \hat{H} = \hat{h}) = -\infty$ if the conditional distribution of $\tilde{H}$, conditioned on $\hat{H} = \hat{h}$, is not absolutely continuous with respect to the Lebesgue measure.

[Fig. 2. Comparison between several bounds on the capacity and the Gaussian-input mutual information, plotted against $P/N_0$ from $-10$ dB to $30$ dB; vertical axis: mutual information [bits/channel use].]

In Figure 2, several bounds on the mutual information $I(X_{\mathrm{G}}; Y | \hat{H})$ for Gaussian inputs are plotted against the SNR on a range from $-10$ dB to $30$ dB. From top to bottom, we have the coherent capacity (34); the upper bound (35); the supremum $R^\star(P)$ of all rate-splitting bounds (Theorem 4); the two-layer rate-splitting bound with optimized power allocation $R^\star(P, 2)$; and Médard's lower bound $R_{\mathrm{M}}(P)$. The grey-shaded area indicates the region in which the curve of the exact Gaussian-input mutual information $I(X_{\mathrm{G}}; Y | \hat{H})$ is located. For this simulation, we have chosen $\hat{H}$ and $\tilde{H}$ to be independent and complex circularly-symmetric Gaussian with parameters $\mu = 0$, $\hat{V} = \frac{1}{2}$, and $\tilde{V}(\hat{h}) = \frac{1}{2}$, $\hat{h} \in \mathbb{C}$. Observe that the proposed rate-splitting approach achieves the most significant rate gains at high SNR. In this simulation, the increase $R^\star(P) - R_{\mathrm{M}}(P)$ is approximately 0.28 bits per channel use for large $P$.
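For Gaussian fading, where $\tilde{\Phi}(\hat{h}) = \tilde{V}(\hat{h})$, both upper bounds are straightforward to evaluate as well. A short sketch of ours (same assumed parameters as Figure 2, rates in nats):

```python
import numpy as np

rng = np.random.default_rng(3)
N0, V_tilde, n = 1.0, 0.5, 500_000
g = rng.exponential(0.5, size=n)    # |Hhat|^2 (V_hat = 1/2)
h2 = rng.exponential(1.0, size=n)   # |H|^2 (unit-variance Gaussian fading)
w = rng.exponential(1.0, size=n)    # W ~ Exp(1)

for snr_db in [0, 10, 20, 30]:
    P = N0 * 10 ** (snr_db / 10)
    R_M = np.mean(np.log1p(g * P / (V_tilde * P + N0)))
    # For Gaussian fading Phi_tilde = V_tilde, so (35) reduces to the line below.
    I_up = R_M + np.mean(np.log((V_tilde * P + N0) / (V_tilde * P * w + N0)))
    C_coh = np.mean(np.log1p(h2 * P / N0))
    print(f"{snr_db:>2d} dB:  R_M ≈ {R_M:.3f},  I_upper ≈ {I_up:.3f},  "
          f"C_coh ≈ {C_coh:.3f} nats")
```

At high SNR the correction term in (35) approaches Euler's constant $\gamma \approx 0.577$ nats (about 0.83 bits), so the true mutual information lies at most that far above $R_{\mathrm{M}}$, consistent with the grey-shaded region of Figure 2.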
IV. ASYMPTOTICALLY OPTIMAL CSI

The numerical example considered in the previous section (see Figure 2) assumes that $\tilde{V}(\hat{H})$ and $\hat{H}$ do not depend on the SNR $P/N_0$. However, in practical communication systems, the channel estimation error, as measured by the mean error variance $\mathsf{E}[\tilde{V}(\hat{H})]$, typically decreases as the SNR increases. In this section, we investigate the high-SNR behavior of the rates achievable with and without rate splitting when $\mathsf{E}[\tilde{V}(\hat{H})]$ vanishes as the SNR tends to infinity.

A. Asymptotic Tightness

Let us consider a family of joint distributions of $(\hat{H}, \tilde{H})$ parametrized by $\rho = P/N_0$. To make this dependence on $\rho$ explicit, we shall write the two channel components as $\hat{H}_\rho$ and $\tilde{H}_\rho$, and the respective variances as $\hat{V}_\rho$ and $\tilde{V}_\rho(\hat{H}_\rho)$. Similarly, we shall write the entropy power, defined in (36), as $\tilde{\Phi}_\rho(\hat{H}_\rho)$. We further adapt the notation to express Médard's lower bound, the rate-splitting lower bounds (26) and (29), and the upper bounds (34) and (35) as functions of $\rho$, namely, $R_{\mathrm{M}}(\rho)$, $R^\star(\rho, L)$, $R^\star(\rho)$, $C_{\mathrm{coh}}(\rho)$, and $I_{\mathrm{upper}}(\rho)$.

We assume that $H = \hat{H}_\rho + \tilde{H}_\rho$ does not depend on $\rho$ and is normalized:

$$\mathsf{E}\bigl[|\hat{H}_\rho|^2\bigr] + \mathsf{E}\bigl[\tilde{V}_\rho(\hat{H}_\rho)\bigr] = 1. \tag{37}$$

We further assume that the variance of the estimation error $\tilde{H}_\rho$ is not larger than the variance of $H$, i.e., $\tilde{V}_\rho(\hat{h}_\rho) \le 1$ for every $\hat{h}_\rho \in \mathbb{C}$.

Theorem 5. Let $\hat{H}_\rho$, $\tilde{V}_\rho(\hat{H}_\rho)$ and $\tilde{\Phi}_\rho(\hat{H}_\rho)$ satisfy

$$\lim_{\rho \to \infty} \mathsf{E}\bigl[\tilde{V}_\rho(\hat{H}_\rho)\bigr] = 0 \tag{38a}$$
$$\varlimsup_{\rho \to \infty} \mathsf{E}\bigl[|\hat{H}_\rho|^4\bigr] < \infty \tag{38b}$$
$$\varlimsup_{\rho \to \infty} \sup_{\xi \in \mathbb{C}} \left\{\frac{\tilde{V}_\rho(\xi)}{\tilde{\Phi}_\rho(\xi)}\right\} \le M \tag{38c}$$

for some finite constant $M$, where we define $0/0 \triangleq 1$ and $a/0 \triangleq \infty$ for every $a > 0$. Then, we have

$$\varlimsup_{\rho \to \infty} \Bigl\{I(X_{\mathrm{G}}; Y | \hat{H}_\rho) - R^\star(\rho)\Bigr\} \le \log(M). \tag{39}$$

Proof: See Section VII.

If $\tilde{H}_\rho$ is Gaussian, then we have $\tilde{V}_\rho(\hat{h}_\rho) = \tilde{\Phi}_\rho(\hat{h}_\rho)$ for $\hat{h}_\rho \in \mathbb{C}$, and the choice $M = 1$ satisfies (38c). Thus, for Gaussian fading the lower bound $R^\star(\rho)$ is asymptotically tight.
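Although the exact Gaussian-input mutual information is hard to compute, this asymptotic tightness can be glimpsed numerically by sandwiching: since $R^\star(\rho) \le I(X_{\mathrm{G}}; Y | \hat{H}_\rho) \le I_{\mathrm{upper}}(\rho)$, it suffices to watch the gap $I_{\mathrm{upper}}(\rho) - R^\star(\rho)$ vanish. The sketch below (ours; it assumes Gaussian fading with the purely illustrative, hypothetical error-variance profile $\tilde{V}_\rho = 1/(1+\rho)$, which satisfies (37) and (38)) does exactly that:

```python
import numpy as np

rng = np.random.default_rng(4)
N0, n = 1.0, 400_000
w = rng.exponential(1.0, size=n)            # W ~ Exp(1)

def gap(rho):
    """I_upper(rho) - R*(rho) for Gaussian fading, V_rho = 1/(1+rho)."""
    P = N0 * rho
    Vt = 1.0 / (1.0 + rho)                  # E[V_rho] -> 0: condition (38a)
    g = rng.exponential(1.0 - Vt, size=n)   # |Hhat_rho|^2, normalization (37)
    d = g + Vt + N0 / P
    x = (Vt * (w - 1) - g) / d              # satisfies -1 < x, x != 0 a.s.
    R_star = np.mean(g / d * np.log1p(x) / x)       # Theorem 4, eq. (30)
    R_M = np.mean(np.log1p(g * P / (Vt * P + N0)))
    I_up = R_M + np.mean(np.log((Vt * P + N0) / (Vt * P * w + N0)))
    return I_up - R_star

for snr_db in [10, 20, 30, 40]:
    print(f"{snr_db} dB:  I_upper - R* ≈ {gap(10 ** (snr_db / 10)):.4f} nats")
```

The printed gap shrinks toward $\log(M) = 0$ as the SNR grows, in line with (39) for Gaussian fading.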
