On the Degrees-of-Freedom of the K-User Gaussian Interference Channel

Raul Etkin and Erik Ordentlich
HP Laboratories, Palo Alto, CA
January 13, 2009
arXiv:0901.1695v1 [cs.IT]

Abstract

The degrees-of-freedom of a K-user Gaussian interference channel (GIFC) has been defined to be the multiple of (1/2) log_2 P at which the maximum sum of achievable rates grows with increasing P. In this paper, we establish that the degrees-of-freedom of three or more user, real, scalar GIFCs, viewed as a function of the channel coefficients, is discontinuous at points where all of the coefficients are non-zero rational numbers. More specifically, for all K > 2, we find a class of K-user GIFCs that is dense in the GIFC parameter space for which K/2 degrees-of-freedom are exactly achievable, and we show that the degrees-of-freedom for any GIFC with non-zero rational coefficients is strictly smaller than K/2. These results are proved using new connections with number theory and additive combinatorics.

1 Introduction

The time-invariant, real, scalar K-user Gaussian interference channel (GIFC), as introduced in [1], involves K transmitter-receiver pairs in which each transmitter attempts to communicate a uniformly distributed, finite-valued message to its corresponding receiver by sending a signal comprised of n real numbers. Each receiver observes a component-wise linear combination of possibly all of the transmitted signals plus additive memoryless Gaussian noise, and seeks to decode, with probability close to one, the message of its corresponding transmitter, in spite of the interfering signals and noise. The time averages of the squares of the transmitted signal values are required to not exceed certain power constraints. A K-tuple of rates (R_1, ..., R_K) is said to be achievable for a GIFC if the transmitters can increase the sizes of their message sets as 2^{n R_i} with the signal length n, and signal in such a way that the power constraints are met and the receivers are able to correctly decode their corresponding messages with probability converging to 1 as n grows to infinity. The set of all achievable K-tuples of rates is known as the capacity region of the GIFC. Determining it, as a function of the channel coefficients (specifying the linear combinations mentioned above), power constraints, and noise variances, has been an open problem in information theory for over 30 years. A complete solution for even the two-user case, which has received the most attention to date, is still out of reach. The best known coding scheme for two users is that presented in [2]. In some ranges of channel coefficients, such as for strong interference, the capacity region is completely known for two users [1]. Still for other ranges, the maximum achievable sum of rates is known [3, 4, 5, 6]. For the general two-user case, the strongest known result is that of [7], which determines the capacity region to within a 1/2 bit margin (1 bit for the complex case) using a carefully chosen version of the scheme of [2] and a new genie-aided outer bound.

The case of K > 2 users has, until very recently, received less attention. Much of the recent effort on K > 2, beginning with [8] and continuing in, e.g., [9, 10], has focused on characterizing the growth of the capacity region in the limit of increasing signal-to-noise ratio (SNR), corresponding, for example, to fixing the noise variances and channel coefficients and letting the power constraints tend to infinity. Specific attention has been directed at the growth of the maximum sum of achievable rates.
If there were no interference, the maximum achievable rate corresponding to each transmitter-receiver pair would grow like (1/2) log_2 P in the limit of increasing power, which follows from the well known formula (1/2) log_2(1 + P/N) for the capacity of a single-user additive Gaussian noise channel with power constraint P and noise variance N. Thus, the maximum sum of achievable rates would grow as (K/2) log_2 P if there were no interference. This motivates the expectation that, in the general case, the maximum sum of achievable rates would grow as (d/2) log_2 P for some constant d ≤ K, depending on the channel coefficients, where d has been dubbed the degrees-of-freedom of the underlying GIFC. Although determining d for a given GIFC is, in principle, simpler than determining the capacity region, it has turned out to be a difficult problem in its own right for K > 2. (Footnote 1: The degrees-of-freedom is known to be 1 for all two-user GIFCs, unless there is no interference [8].)

A positive development in the study of the degrees-of-freedom of GIFCs with more than two users has been the discovery of a new coding technique known as interference alignment, which involves carefully choosing the transmitted signals so that the interfering signals "align" benignly at each receiver [9]. Interference alignment has been shown, under some conditions which we summarize below, to achieve nearly d = K/2 degrees-of-freedom, which is half of the degrees-of-freedom in the case of no interference at all. Interference alignment is not possible to implement for two users, and its discovery thus had to wait until the focus shifted to more users. Another new phenomenon in network information theory that has recently emerged as the number of users studied was increased is the technique of indirect decoding, which is crucial for achieving the capacity region of certain three-user broadcast channels [12]. Again, this technique is not relevant in the two-user case, and could not have been discovered in the study thereof.

In this paper, we find a new information theoretic phenomenon concerning interference channels that is not manifest in the two-user case. In particular, we find that the degrees-of-freedom (and therefore the capacity region at high signal-to-noise ratio) of real, scalar GIFCs with K > 2 users is very sensitive to whether the channel coefficients determining the linear combinations of signals at each receiver are rational or irrational numbers. Next, we formally explain our results and their significance in the context of the growing literature on K > 2 user GIFCs.

We shall use a matrix H to denote the direct and cross gains of a time-invariant, real, scalar K-user GIFC [1], with the (i,j)-th entry h_{i,j} specifying the channel gain from transmitter i to receiver j. Thus, the signal observed by receiver j ∈ {1, ..., K} at time index t = 1, 2, ... is given by z_{j,t} + Σ_{i=1}^{K} x_{i,t} h_{i,j}, where x_{i,t} is the real-valued signal of transmitter i ∈ {1, ..., K} at time t and z_{j,t} is additive Gaussian noise with variance σ_j^2, independent across time and users. Fixing a block length n, the transmitted signals {x_{i,t}} are required to satisfy the average power constraints Σ_{t=1}^{n} x_{i,t}^2 ≤ n P_i for some collection of powers P_1, ..., P_K. For H ∈ R^{K×K} and σ, P ∈ R_+^K, we let C(H, σ, P) denote the capacity region of a GIFC with gain matrix H, receiver noise variances given by the corresponding components of σ, and average (per codeword) power constraints given by the components of P.
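Before turning to the formal definition of the degrees-of-freedom, a small numerical sketch may help fix the interference-free benchmark described above. It is not part of the paper: the choice K = 3, the unit noise variance, and the power values are illustrative assumptions. It evaluates the single-user formula (1/2) log_2(1 + P/N) and shows that the interference-free sum rate, normalized by (1/2) log_2 P, approaches K.

```python
import numpy as np

def awgn_capacity(P, N=1.0):
    """Single-user AWGN capacity (1/2) * log2(1 + P/N), in bits per channel use."""
    return 0.5 * np.log2(1.0 + P / N)

K = 3  # illustrative number of non-interfering transmitter-receiver pairs
for P in [1e2, 1e4, 1e6, 1e8]:
    sum_rate = K * awgn_capacity(P)             # without interference, rates simply add
    normalized = sum_rate / (0.5 * np.log2(P))  # multiple of (1/2) log2 P
    print(f"P = {P:9.0e}   sum rate = {sum_rate:8.2f} bits   ratio = {normalized:.3f}")
```

The ratio tends to K = 3, i.e. K degrees-of-freedom; the question studied in this paper is how much of this survives when all cross gains are non-zero.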
Following [8], we define the degrees-of-freedom of H as

\[ \mathrm{DoF}(H) = \limsup_{P \to \infty} \frac{\max_{R \in \mathcal{C}(H,\mathbf{1},P\mathbf{1})} \mathbf{1}^t R}{(1/2)\log_2 P}, \qquad (1) \]

where 1 denotes the vector of all ones. The degrees-of-freedom of a GIFC characterizes the behavior of the maximum achievable sum rate as the SNR tends to infinity, with the gain matrix fixed. A fully connected GIFC is one for which h_{i,j} ≠ 0 for all i and j. It was shown in [8] that for fully connected H, DoF(H) ≤ K/2. If H is not fully connected, the degrees-of-freedom can be as high as K, such as when H is the identity matrix, where all cross gains are zero. Little was known about the tightness of the K/2 bound for K > 2 until it was shown in [9] that, for vector GIFCs and an appropriate generalization of DoF(·) to include a normalization by the input/output vector dimension, the degrees-of-freedom of "almost all" fully connected vector GIFCs approaches K/2 when the vector dimension tends to infinity. (Footnote 2: The K/2 bound of [8] extends to the fully connected vector case as well.) In addition, an example of a fully connected two-dimensional vector GIFC achieving exactly K/2 degrees-of-freedom was also given in [9]. The key tool introduced in [9] to establish these results is the technique of interference alignment, which involves the transmitters signaling over linear subspaces that, after component-wise scaling by the cross gains, align into interfering subspaces which are linearly independent of the directly received subspaces, allowing for many interference-free dimensions over which to communicate. For real, scalar GIFCs, it was shown in [10], using a different type of interference alignment, that the degrees-of-freedom of certain fully connected GIFCs also approaches K/2 when the cross gains tend to zero. Yet a different type of interference alignment is used in [11] to find new achievable rates for a non-fully connected GIFC in which interference occurs only at one receiver. To our knowledge, the problem of determining or computing the degrees-of-freedom of general GIFCs is still open.

As in [10], in this paper we consider only fully connected, scalar, real GIFCs and establish the following results on the degrees-of-freedom.

Theorem 1  If all diagonal components of a fully connected H are irrational algebraic numbers and all off-diagonal components are rational numbers, then DoF(H) = K/2.

Theorem 2  For K > 2, if all elements of a fully connected H are rational numbers, then DoF(H) < K/2.

The following corollary is then immediate from Theorems 1 and 2, and the well known fact that irrational algebraic numbers are dense in the real numbers.

Corollary 1  For K > 2, the function DoF(H) is discontinuous at all fully connected H with rational components.

Theorem 1 demonstrates the existence of fully connected, real K-user GIFCs with exactly K/2 degrees-of-freedom. In contrast to the result of [10], Theorem 1 is non-asymptotic (in H). The underlying achievability scheme is based on an interference alignment phenomenon that differs from the ones used in [9] and [10], and relies on number theoretic lower bounds on the approximability of irrational algebraic numbers by rationals. Theorem 2 reveals a surprising limitation on DoF(H) when the components are non-zero rational numbers (up to arbitrary pre- and post-multiplication by diagonal matrices; see Lemma 1 in Section 3). In this case, DoF(H) is strictly bounded away from K/2.
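As a concrete (and purely illustrative) instance of the dichotomy expressed by Theorems 1 and 2 and Corollary 1, the sketch below builds a fully connected 3-user gain matrix with irrational algebraic direct gains and rational cross gains, together with a nearby fully connected rational matrix; the specific numerical entries are assumptions made for illustration only.

```python
from fractions import Fraction
import math

K = 3

# Hypotheses of Theorem 1: irrational algebraic diagonal, rational off-diagonal entries.
H_alg = [[math.sqrt(2), 1.0,          2.0],
         [1.0,          math.sqrt(3), 1.0],
         [0.5,          1.0,          math.sqrt(5)]]

# Rounding the diagonal to nearby rationals yields a fully connected rational matrix,
# to which Theorem 2 applies, even though the two matrices are close entrywise.
H_rat = [[Fraction(14142, 10000), Fraction(1),            Fraction(2)],
         [Fraction(1),            Fraction(17321, 10000), Fraction(1)],
         [Fraction(1, 2),         Fraction(1),            Fraction(22361, 10000)]]

fully_connected = all(H_alg[i][j] != 0 for i in range(K) for j in range(K))
max_gap = max(abs(H_alg[i][j] - float(H_rat[i][j])) for i in range(K) for j in range(K))
print("fully connected:", fully_connected)   # True, so the K/2 upper bound of [8] applies
print("entrywise distance between the two matrices:", max_gap)
print("Theorem 1:  DoF(H_alg) = K/2 =", K / 2)
print("Theorem 2:  DoF(H_rat) < K/2, illustrating the discontinuity of Corollary 1")
```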
Previously known techniques for finding outer bounds to the capacity regions of GIFCs, such as cooperative encoding and decoding [13, 14], genie-aided decoding [7, 15, 16], and multiple access bounds [1, 17], are not sensitive to the rationality of the channel parameters and hence do not suffice to establish Theorem 2. Instead, our proof of this theorem is based on a new connection between GIFCs with rational H and results from additive combinatorics [20], a branch of combinatorics that is concerned with the cardinalities of sum sets, or sets obtained by adding (assuming an underlying group structure) any element of a set A to any element of a set B.

The remainder of the paper is organized as follows. The next section clarifies some notation and gives the formal definition of the capacity region of a GIFC that will apply in this paper. In Section 3, we present the proof of Theorem 1. This is followed by the proof of Theorem 2 in Section 4, which further consists of subsections collecting various intermediate results. Each of these sections is prefaced with a high level outline of the respective proofs. In Section 5, we determine lower and upper bounds on DoF(H) for a simple three-user rational H by improving on the scheme of [10] and evaluating an upper bound implicit in the proof of Theorem 2. We conclude in Section 6 with some final observations and directions for future work.

2 Notation and definitions

We adopt the usual notation for the information theoretic quantities of discrete and differential entropy (resp. H(X) and h(X)) and mutual information (I(X;Y)), which shall all be measured in bits (i.e., involve logarithms to the base two) [18]. We shall use the standard notation ⌈x⌉ and ⌊x⌋ to respectively denote the smallest integer not smaller than x and the greatest integer not larger than x. The cardinality of a set A shall be denoted as |A|.

Next, we review the definition of the capacity region of a K-user GIFC with power constraints P = (P_1, ..., P_K) and noise variances σ = (σ_1^2, ..., σ_K^2) that will apply in this paper. Fixing a block length n and a rate-tuple R_1, ..., R_K, the random message W_i of the i-th transmitter is assumed to be uniformly distributed in the set W_i ≜ {1, ..., 2^{⌈n R_i⌉}}. The messages are further assumed to be independent from one user to the next. A coding scheme consists of K encoding functions f_1, ..., f_K, where f_i maps W_i into the n-dimensional ball of radius √(n P_i) of real vectors, the components of which specify the signal value x_{i,t} that the i-th transmitter will send at each time index. (Footnote 3: For simplicity, in this paper we formally adopt a per-codeword average power constraint, as opposed to the more conventional expected average power constraint. We note that the degrees-of-freedom of a GIFC can be shown to be the same under both types of power constraints.) The set {f_i(1), f_i(2), ..., f_i(2^{⌈n R_i⌉})} constitutes the codebook of transmitter i. There is also a corresponding set of K decoding functions g_1, ..., g_K, where g_i maps n-dimensional real vectors into the message set W_i. The function g_i is applied by receiver i to the n received signal values y_i^n = y_{i,1}, ..., y_{i,n}, which, as specified in the introduction, are formed as a component-wise linear combination, according to the gain matrix H, of the transmitted signals and Gaussian noise.

Definition 1  The capacity region C(H, σ, P) of the GIFC is defined as the set of rate-tuples R_1, ..., R_K for which there exists a sequence of block length n message sets and power-constrained coding schemes satisfying lim_{n→∞} max_{1≤i≤K} Pr(W_i ≠ g_i(y_i^n)) = 0, where the probability of error is taken with respect to the distribution induced by the random messages, the coding scheme, and the channel, as specified above.
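The following sketch is only meant to make the objects in Definition 1 concrete; it is not a scheme from the paper. It assumes an illustrative single, interference-free link, a Gaussian random codebook scaled to meet the per-codeword power constraint, and a nearest-codeword decoder, so that the roles of the message W_i, the encoder f_i, and the decoder g_i are visible in code.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_codebook(n, R, P, rng):
    """Codebook {f(1), ..., f(2^ceil(nR))}: random n-vectors scaled into the
    ball of radius sqrt(n*P), i.e. satisfying the per-codeword power constraint."""
    M = 2 ** int(np.ceil(n * R))
    C = rng.standard_normal((M, n))
    norms = np.maximum(np.linalg.norm(C, axis=1, keepdims=True), 1e-12)
    return C * (np.sqrt(n * P) / norms)

def decode(y, codebook):
    """g: map the received n-vector to the index of the nearest codeword."""
    return int(np.argmin(np.linalg.norm(codebook - y, axis=1)))

# Tiny sanity check on one link with unit-variance noise (illustrative parameters).
n, R, P = 8, 0.5, 10.0
f = make_codebook(n, R, P, rng)
W = rng.integers(len(f))               # message, uniform on {0, ..., 2^ceil(nR) - 1}
y = f[W] + rng.standard_normal(n)      # received block
print("average codeword power:", float(np.mean(f[W] ** 2)), "<= P =", P)
print("decoded correctly:", decode(y, f) == W)
```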
The degrees-of-freedom DoF(H) of a GIFC, as defined in (1) above, will be assumed to be based on this formal definition of C(H, σ, P).

3 Real, scalar GIFCs with exactly K/2 degrees-of-freedom

In this section, we prove Theorem 1, demonstrating the existence of fully connected, real, scalar K-user GIFCs with exactly K/2 degrees-of-freedom. An outline of the proof is as follows. First, we prove a simple lemma (which will also be useful in the next section) showing that DoF(H) = DoF(D_t H D_r) for any diagonal matrices D_t and D_r with positive diagonal components. This, in turn, implies that we can transform any H satisfying the assumptions of Theorem 1 to one with irrational, algebraic numbers along the diagonal and integer values in off-diagonal components, while preserving the degrees-of-freedom. We then focus on coding schemes for the new H in which each transmitter is restricted to signaling over the scalar lattice {z P^{1/4+ε} : z ∈ Z} intersected with the interval [-P^{1/2}, P^{1/2}]. The idea is that the integer valued cross gains guarantee that the interfering signal values at each receiver will also be confined to this scalar lattice (though they may fall outside of the P^{1/2} interval), while the irrational direct gains place the directly transmitted signal values on a scaled lattice that "stands out" from the interfering lattice. Specifically, this scaled lattice has the property that offsetting the interfering lattice (equal to the original lattice) by each point in the scaled lattice results in disjoint sets. A non-empty intersection would imply that the direct gain could be written as the ratio of two integers, which would contradict its irrationality. An even stronger property holds for algebraic irrational direct gains: the distance between any pair of points obtained by adding a point from the scaled lattice to a point from the interfering lattice actually grows with P. This is shown to follow from a major result in number theory stating that for any irrational algebraic number α and any γ > 0, a rational approximation p/q will have an error of at least δ/q^{2+γ} for some δ depending only on α and γ. (Footnote 4: For irrational algebraic numbers of degree two (solutions to quadratic equations with integer coefficients), such as √2, the approximation bound holds with γ = 0 and is known as Liouville's Theorem, established in 1844. The validity of the bound for general algebraic numbers was a longstanding open problem in number theory and was finally established in 1955 by K. F. Roth, for which he was awarded the Fields Medal.) The next step in the proof is to deal with the noise by coupling this inter-point distance growth with Fano's inequality to show that the mutual information induced between each transmitter-receiver pair by independent, uniform distributions on the original power-constrained lattices, taking interference into account, grows like (1/4 - ε) log_2 P, for arbitrarily small ε. This, in turn, implies the existence of a sequence of block codes (with symbols from the original lattice) with sum rate approaching (K/4) log_2 P, which are correctly decodable, with high probability, by treating interference as noise.

As mentioned, we begin with an invariance property of DoF(H).

Lemma 1 (Invariance property)  For any matrix H and diagonal matrices D_t and D_r with positive diagonal components, DoF(D_t H D_r) = DoF(H). (Footnote 5: The matrices D_t and D_r need only have non-zero diagonal components for the result to hold. We assume positivity for simplicity, as this is all we shall require in this paper.)

Proof: Let D_t = diag(d_{t1}, ..., d_{tK}) and D_r = diag(d_{r1}, ..., d_{rK}). In the matrix multiplication D_t H D_r, d_{ti} scales the channel gains from transmitter i to the different receivers, while d_{rj} scales the channel gains from all transmitters to receiver j.
By scaling the input signals and noise variances instead of the channel gains, we can write

\[ \mathcal{C}(D_t H D_r, \mathbf{1}, P\mathbf{1}) = \mathcal{C}(H, \mathbf{1}^t D_r^{-2}, P \mathbf{1}^t D_t^{2}). \qquad (2) \]

Let ď_t = min_{1≤i≤K} d_{ti}, d̂_t = max_{1≤i≤K} d_{ti}, ď_r = min_{1≤i≤K} d_{ri}, and d̂_r = max_{1≤i≤K} d_{ri}. Since increasing the power constraints and reducing the noise variances cannot reduce the capacity region of the GIFC, we have

\[ \mathcal{C}\bigl(H, (1/\check d_r^{\,2})\mathbf{1}, P \check d_t^{\,2} \mathbf{1}\bigr) \subseteq \mathcal{C}\bigl(H, \mathbf{1}^t D_r^{-2}, P \mathbf{1}^t D_t^{2}\bigr) \subseteq \mathcal{C}\bigl(H, (1/\hat d_r^{\,2})\mathbf{1}, P \hat d_t^{\,2} \mathbf{1}\bigr). \qquad (3) \]

Furthermore, once all the noise variances are equal, they can be normalized to 1 by scaling the power constraints, leading to

\[ \mathcal{C}\bigl(H, (1/\check d_r^{\,2})\mathbf{1}, P \check d_t^{\,2} \mathbf{1}\bigr) = \mathcal{C}\bigl(H, \mathbf{1}, P \check d_r^{\,2} \check d_t^{\,2} \mathbf{1}\bigr), \qquad \mathcal{C}\bigl(H, (1/\hat d_r^{\,2})\mathbf{1}, P \hat d_t^{\,2} \mathbf{1}\bigr) = \mathcal{C}\bigl(H, \mathbf{1}, P \hat d_r^{\,2} \hat d_t^{\,2} \mathbf{1}\bigr). \qquad (4) \]

Using (2), (3) and (4) we can write

\[ \frac{\max_{R \in \mathcal{C}(H,\mathbf{1},P\check d_r^{\,2} \check d_t^{\,2}\mathbf{1})} \mathbf{1}^t R}{\tfrac{1}{2}\log_2 P} \le \frac{\max_{R \in \mathcal{C}(D_t H D_r,\mathbf{1},P\mathbf{1})} \mathbf{1}^t R}{\tfrac{1}{2}\log_2 P} \le \frac{\max_{R \in \mathcal{C}(H,\mathbf{1},P\hat d_r^{\,2} \hat d_t^{\,2}\mathbf{1})} \mathbf{1}^t R}{\tfrac{1}{2}\log_2 P}, \]

which can be rewritten as

\[ \frac{\tfrac{1}{2}\log_2(P\check d_r^{\,2} \check d_t^{\,2})}{\tfrac{1}{2}\log_2 P} \cdot \frac{\max_{R \in \mathcal{C}(H,\mathbf{1},P\check d_r^{\,2} \check d_t^{\,2}\mathbf{1})} \mathbf{1}^t R}{\tfrac{1}{2}\log_2(P\check d_r^{\,2} \check d_t^{\,2})} \le \frac{\max_{R \in \mathcal{C}(D_t H D_r,\mathbf{1},P\mathbf{1})} \mathbf{1}^t R}{\tfrac{1}{2}\log_2 P} \le \frac{\tfrac{1}{2}\log_2(P\hat d_r^{\,2} \hat d_t^{\,2})}{\tfrac{1}{2}\log_2 P} \cdot \frac{\max_{R \in \mathcal{C}(H,\mathbf{1},P\hat d_r^{\,2} \hat d_t^{\,2}\mathbf{1})} \mathbf{1}^t R}{\tfrac{1}{2}\log_2(P\hat d_r^{\,2} \hat d_t^{\,2})}. \]

Taking the limsup of all three terms as P → ∞ implies DoF(H) ≤ DoF(D_t H D_r) ≤ DoF(H). □

Proof (of Theorem 1): By Lemma 1 we can scale H (by post-multiplying by an integer valued D_r) so that all off-diagonal elements are integers and all diagonal elements remain irrational algebraic. In addition, from (1), we only need to consider channels where all the inputs have the same power constraint P and all the noise processes have variance 1. For any ε > 0, we will present a communication scheme that achieves 1^t R = (K/4 - Kε) log_2 P - o(log_2 P), implying that DoF(H) ≥ K/2. Consider the scalar lattice

\[ \Lambda_{P,\epsilon} = \{\, x : x = P^{1/4+\epsilon} z,\ z \in \mathbb{Z} \,\} \]

and let C_{P,ε} = Λ_{P,ε} ∩ [-√P, √P]. Note that

\[ |\mathcal{C}_{P,\epsilon}| = 2\left\lfloor \frac{\sqrt P}{P^{1/4+\epsilon}} \right\rfloor + 1 \le 2 P^{1/4-\epsilon} + 1. \qquad (5) \]

The users communicate using codebooks of block length n, obtained by uniform i.i.d. sampling of C_{P,ε}. Note that due to the truncation of the lattice to the interval [-√P, √P], the symbol power x_{i,t}^2 never exceeds P at any time index, and hence the average codeword power does not exceed P. Each receiver decodes the signal of its transmitter, treating the interfering signals as i.i.d. noise. With this scheme, as n → ∞, we can achieve

\[ R_i = I(X_i; Y_i) = H(X_i) - H(X_i \mid Y_i), \qquad i = 1, \ldots, K, \]

where X_i ~ Uniform(C_{P,ε}), Y_i = Σ_{j=1}^{K} h_{ji} X_j + Z_i, and Z_i ~ N(0,1), i = 1, ..., K. First we note that H(X_i) = log_2 |C_{P,ε}| ≈ (1/4 - ε) log_2 P + log_2 2. We will show that

\[ \limsup_{P \to \infty} H(X_i \mid Y_i) \le 1, \qquad i = 1, \ldots, K, \]

and as a result, R_i = (1/4 - ε) log_2 P - o(log_2 P) can be achieved. It would then follow that DoF(H) ≥ K/2, and, from the upper bound of [8], that DoF(H) = K/2. We will use the following lemma to upper bound H(X_i | Y_i) for i = 1, ..., K.

Lemma 2  Let Σ_{P,ε} = {αx + s : x ∈ C_{P,ε}, s ∈ Λ_{P,ε}}, with α being any real, irrational, and algebraic number. For any y ∈ Σ_{P,ε} there exists a unique pair (x, s) ∈ C_{P,ε} × Λ_{P,ε} such that y = αx + s.
In addition, if y_1, y_2 ∈ Σ_{P,ε} and y_1 ≠ y_2, then |y_1 - y_2| > P^ε for any given ε > 0 and large enough P.

Proof: Let y = αx + s with x ∈ C_{P,ε} and s ∈ Λ_{P,ε}. To get a contradiction, assume that there exists (x̃, s̃) ∈ C_{P,ε} × Λ_{P,ε} with (x̃, s̃) ≠ (x, s) such that αx̃ + s̃ = y. Without loss of generality we can assume x̃ ≥ x. Since α ≠ 0 we have s̃ ≠ s and x̃ > x. In addition, since by assumption αx + s = αx̃ + s̃, we have

\[ \alpha = \frac{s - \tilde s}{\tilde x - x} = \frac{(z_s - z_{\tilde s}) P^{1/4+\epsilon}}{(z_{\tilde x} - z_x) P^{1/4+\epsilon}} = \frac{z_s - z_{\tilde s}}{z_{\tilde x} - z_x} \in \mathbb{Q}, \]

where z_x, z_{x̃}, z_s, z_{s̃} ∈ Z, which contradicts the assumption of irrational α.

To prove the second part of the lemma, let ŷ = αx̂ + ŝ, where x̂ ∈ C_{P,ε}, ŝ ∈ Λ_{P,ε}, and ŷ ∈ Σ_{P,ε}, with ŷ ≠ y. If ŝ = s then

\[ |\hat y - y| = \alpha |\hat x - x| = \alpha |z_{\hat x} - z_x| P^{1/4+\epsilon} > P^{\epsilon}, \]

where z_x, z_{x̂} ∈ Z, as long as P is sufficiently large. Similarly, if x̂ = x and P is large enough, we have

\[ |\hat y - y| = |\hat s - s| = |z_{\hat s} - z_s| P^{1/4+\epsilon} > P^{\epsilon}, \]

where z_s, z_{ŝ} ∈ Z. So it remains to consider the case x̂ ≠ x and ŝ ≠ s. Without loss of generality we can assume x̂ > x. To get a contradiction, we assume that |ŷ - y| ≤ P^ε, and write:

\[ |\hat y - y| \le P^{\epsilon} \ \Rightarrow\ |\alpha \hat x + \hat s - \alpha x - s| \le P^{\epsilon} \ \Rightarrow\ |\alpha z_{\hat x} + z_{\hat s} - \alpha z_x - z_s| \le \frac{P^{\epsilon}}{P^{1/4+\epsilon}} \ \Rightarrow\ \left| \alpha - \frac{z_s - z_{\hat s}}{z_{\hat x} - z_x} \right| \le \frac{P^{-1/4}}{z_{\hat x} - z_x}, \qquad (6) \]

where z_x, z_{x̂}, z_s, z_{ŝ} ∈ Z.

On the other hand, there are bounds on how well an irrational algebraic number can be approximated by a rational number. The most refined of those bounds, due to Roth (1955), states that for any irrational algebraic α and any γ > 0, there exists δ > 0 such that

\[ \left| \alpha - \frac{p}{q} \right| > \frac{\delta}{q^{2+\gamma}} \qquad (7) \]

for all p, q ∈ Z, q > 0 [19].

Combining (6) and (7) we have

\[ \frac{\delta}{(z_{\hat x} - z_x)^{2+\gamma}} < \left| \alpha - \frac{z_s - z_{\hat s}}{z_{\hat x} - z_x} \right| \le \frac{P^{-1/4}}{z_{\hat x} - z_x}, \]

so that

\[ 0 < \delta < P^{-1/4} (z_{\hat x} - z_x)^{1+\gamma} \stackrel{(a)}{\le} P^{-1/4} \left( 2 P^{1/4-\epsilon} + 1 \right)^{1+\gamma} = 2^{1+\gamma} P^{\gamma/4 - \epsilon(1+\gamma) + o(1)}, \qquad (8) \]

where we used (5) in step (a). But the right hand side of (8) goes to 0 as P → ∞ whenever ε ≥ 1/4 or γ < ε/(1/4 - ε). Since we can choose any γ > 0, we can obtain a contradiction in (8) for any ε > 0 and large enough P. □

We will use Lemma 2 to build an estimator that can identify X_i in Y_i with high probability. Let S_i ≜ Σ_{j≠i} h_{ji} X_j, and note that since h_{ji} ∈ Z for j ≠ i, we have that S_i ∈ Λ_{P,ε}. In addition, let Σ_{P,ε,i} = {h_{ii} x + y : x ∈ C_{P,ε}, y ∈ Λ_{P,ε}}, and let v_i : C_{P,ε} × Λ_{P,ε} → Σ_{P,ε,i} be defined as v_i(x, s) = h_{ii} x + s. Using Lemma 2 and the fact that h_{ii} is real, algebraic and irrational, we have that v_i is invertible, i.e. there exists v_i^{-1} : Σ_{P,ε,i} → C_{P,ε} × Λ_{P,ε} such that v_i^{-1}(v_i(x, s)) = (x, s) for any (x, s) ∈ C_{P,ε} × Λ_{P,ε}. Let u : R^2 → R be defined as u(x, s) = x, and let X̂_i = u(v_i^{-1}(argmin_{x ∈ Σ_{P,ε,i}} |x - Y_i|)). We can have X̂_i ≠ X_i only if Y_i is at least as close to some other point in Σ_{P,ε,i} as it is to h_{ii} X_i + S_i. From Lemma 2, this can only occur if |Z_i| ≥ P^ε/2 for large enough P. It follows that

\[ \Pr(\hat X_i \ne X_i) \le \Pr\!\left( |Z_i| \ge \frac{P^{\epsilon}}{2} \right) = 2\, Q_{\mathcal{N}(0,1)}\!\left( \frac{P^{\epsilon}}{2} \right) \le 2 \exp\!\left( -\frac{P^{2\epsilon}}{8} \right), \]

where Q_{N(0,1)}(x) is the probability that a Gaussian random variable with zero mean and variance one exceeds x.
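As an aside, the decoding step just described can be visualized numerically. The following sketch is not from the paper: it assumes a single receiver with direct gain h_ii = √2, two interferers with integer cross gains 1 and 2, and the illustrative parameters P = 10^8 and ε = 0.1. It builds the truncated lattice C_{P,ε}, enumerates the points of Σ_{P,ε,i} reachable by the interference, compares their minimum spacing with P^ε (Lemma 2), and estimates Pr(X̂_i ≠ X_i) for the nearest-point estimator by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)

h_direct = np.sqrt(2.0)   # irrational algebraic direct gain (illustrative)
a, b = 1, 2               # integer cross gains from the two interferers (illustrative)
P, eps = 1e8, 0.1
step = P ** (0.25 + eps)  # spacing of the lattice Lambda_{P,eps}

zmax = int(np.floor(np.sqrt(P) / step))
C = step * np.arange(-zmax, zmax + 1)   # C_{P,eps} = Lambda_{P,eps} truncated to [-sqrt(P), sqrt(P)]

# Interference S_i = a*X_2 + b*X_3 with X_2, X_3 in C_{P,eps} stays on the lattice.
s_ints = sorted({a * z2 + b * z3
                 for z2 in range(-zmax, zmax + 1)
                 for z3 in range(-zmax, zmax + 1)})
points = [(x, step * zs) for x in C for zs in s_ints]        # the (x, s) pairs
Sigma = np.array([h_direct * x + s for (x, s) in points])    # reachable part of Sigma_{P,eps,i}

gaps = np.diff(np.sort(Sigma))
print("min spacing of Sigma:", gaps.min(), " vs  P^eps =", P ** eps)

# Nearest-point estimate of X_i from Y_i = h_ii*X_i + S_i + Z_i with Z_i ~ N(0, 1).
errors, trials = 0, 2000
for _ in range(trials):
    x = rng.choice(C)
    s = step * rng.choice(s_ints)
    y = h_direct * x + s + rng.standard_normal()
    x_hat = points[int(np.argmin(np.abs(Sigma - y)))][0]
    errors += (x_hat != x)
print("empirical Pr(Xhat != X):", errors / trials)
```

With these modest parameters the spacing should already exceed P^ε and decoding errors should be rare, consistent with the bound Pr(X̂_i ≠ X_i) ≤ 2 exp(-P^{2ε}/8); with a rational direct gain the uniqueness asserted in Lemma 2 would fail and points of Σ_{P,ε,i} would collide.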
Using the data processing and Fano's inequalities we obtain

\[ H(X_i \mid Y_i) \le H(X_i \mid \hat X_i) \le 1 + \Pr(\hat X_i \ne X_i) \log_2 |\mathcal{C}_{P,\epsilon}| \le 1 + 2\exp\!\left(-\frac{P^{2\epsilon}}{8}\right)\!\left[ \left(\frac{1}{4}-\epsilon\right)\log_2 P + \log_2 2 + o(1) \right], \qquad (9) \]

which goes to 1 as P → ∞. □

4 Degrees-of-freedom for rational H

In this section, we give the proof of Theorem 2, establishing that the degrees-of-freedom of any fully connected, real, scalar GIFC with rational coefficients is bounded strictly below K/2, for K > 2. As in the previous section, we begin with a sketch of the proof.

Most of the work in the proof is to establish the theorem for K = 3 users. The theorem for K > 3 will then follow from an extension of the averaging argument of [8], used therein to obtain the K/2 degrees-of-freedom upper bound from a bound of 1 on the degrees-of-freedom for K = 2. In this case, the K = 3 bound is averaged over all three-tuples of users (transmitters and corresponding receivers), as opposed to pairs of users in [8]. Given a K = 3 user GIFC with fully connected, rational H, using Lemma 1 (invariance property) and eliminating cross links, we can upper bound DoF(H) by DoF(H̃) where

\[ \tilde H = [\tilde h_{ij}] = \begin{pmatrix} 1 & 0 & 0 \\ 1 & p & 0 \\ 1 & q & 1 \end{pmatrix}, \]

where p and q are integers (see Figure 2 in Subsection 4.3). The main step in the proof of the overall theorem is establishing that DoF(H̃) < 3/2, which is formally carried out in Lemma 11 below, and proceeds as follows. First, it is shown (Lemma 4) that a deterministic channel, obtained by eliminating all noise sources and restricting the power-constrained codewords to have integer valued symbols, results in at most a power-constraint-independent loss in the achievable sum rate. Therefore, the degrees-of-freedom (according to the obvious generalization) of this deterministic interference channel (IFC) is no smaller than DoF(H̃). Next, it is shown, using a Fano's inequality based argument, that if the degrees-of-freedom of the deterministic IFC is at least 3/2, there would exist finite sets of n-dimensional integer valued vectors X_2 and X_3 such that the corresponding independent random variables x_2^n and x_3^n, uniformly distributed on these sets, induce discrete entropies satisfying H(x_2^n) ≈ n(1/4) log_2 P, H(x_3^n) ≈ n(1/4) log_2 P, H(x_2^n + x_3^n) ≈ n((1/4) + ε) log_2 P, and H(p·x_2^n + q·x_3^n) ≈ n((1/2) - ε) log_2 P, for the integers p and q defining the channel and ε arbitrarily small. These entropy relations suggest that the cardinality of the support of p·x_2^n + q·x_3^n is much larger than that of x_2^n + x_3^n. However, tools from additive combinatorics (through Lemma 7) can be used to show that this is impossible for integer valued p and q, leading to a contradiction, and thereby implying that the deterministic channel must have degrees-of-freedom strictly smaller than 3/2. Unfortunately, the link between the entropy and the cardinality of the support of a sum of independent, uniformly distributed random variables is sufficiently weak that a somewhat more involved argument (incorporating Lemma 10 and Theorem 3) is ultimately required to reach the above conclusions. The overall intuition behind the proof, however, is as outlined.

The rest of the section is organized as follows. Subsections 4.1 and 4.2 respectively collect supporting results of an information theoretic nature and results from additive combinatorics. The proof of the main lemma on the K = 3 user IFC is presented in Subsection 4.3. Finally, the extension to K > 3 is presented in Subsection 4.4. Throughout, the capacity region of a K-user GIFC will be taken as in Definition 1.
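To give a feel for the sumset quantities that drive this argument (the precise statements of Lemma 7 and Theorem 3 are not reproduced here), the following illustrative sketch compares |A + A| and |p·A + q·A| for a highly structured set and for a random set of the same size. The sets, and the choice p = 2, q = 3, are assumptions for illustration: the point is only that a set for which A + A is small also keeps p·A + q·A comparatively small, which is the kind of tension the proof exploits.

```python
import random

def dilated_sumset_size(A, B, p=1, q=1):
    """Cardinality of the dilated sumset p*A + q*B = {p*a + q*b : a in A, b in B}."""
    return len({p * a + q * b for a in A for b in B})

N, p, q = 200, 2, 3
structured = set(range(N))                                     # arithmetic progression
unstructured = set(random.Random(0).sample(range(50 * N), N))  # random set of the same size

for name, A in [("arithmetic progression", structured), ("random set", unstructured)]:
    print(f"{name:22s} |A| = {len(A):4d}   |A+A| = {dilated_sumset_size(A, A):6d}   "
          f"|{p}A+{q}A| = {dilated_sumset_size(A, A, p, q):6d}")
```

For the arithmetic progression both cardinalities stay linear in |A|, while for the random set both are close to quadratic; the entropy relations above would require the two quantities to behave very differently for the same pair of sets, which the additive combinatorics results rule out.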
