The Unbounded Benefit of Encoder Cooperation for the $k$-user MAC

Parham Noorzad, Student Member, IEEE, Michelle Effros, Fellow, IEEE, and Michael Langberg, Senior Member, IEEE

Abstract

Cooperation strategies allow communication devices to work together to improve network capacity. Consider a network consisting of a $k$-user multiple access channel (MAC) and a node that is connected to all $k$ encoders via rate-limited bidirectional links, referred to as the "cooperation facilitator" (CF). Define the cooperation benefit as the sum-capacity gain resulting from the communication between the encoders and the CF, and the cooperation rate as the total rate the CF shares with the encoders. This work demonstrates the existence of a class of $k$-user MACs where the ratio of the cooperation benefit to the cooperation rate tends to infinity as the cooperation rate tends to zero. Examples of channels in this class include the binary erasure MAC for $k=2$ and the $k$-user Gaussian MAC for any $k\ge 2$.

Index Terms

Conferencing encoders, cooperation facilitator, cost constraints, edge removal problem, multiple access channel, multivariate covering lemma, network information theory.

I. INTRODUCTION

In large networks, resources may not always be distributed evenly across the network. There may be times when parts of a network are underutilized while others are overconstrained, leading to suboptimal performance. In such situations, end users are not able to use their devices to their full capabilities.

One approach to addressing this problem allows some nodes in the network to "cooperate," that is, work together, either directly or indirectly, to achieve common goals. The model we next introduce is based on this idea.

In the classical $k$-user multiple access channel (MAC) [3], there are $k$ encoders and a single decoder. Each encoder has a private message, which it transmits over $n$ channel uses to the decoder.
The decoder, once it receives the $n$ output symbols, finds the messages of all $k$ encoders with small average probability of error. In this model, the encoders cannot cooperate, since each encoder only has access to its own message.

This paper was presented in part at the 2015 IEEE International Symposium on Information Theory (ISIT) in Hong Kong [1] and the 2016 IEEE ISIT in Barcelona, Spain [2]. This material is based upon work supported by the National Science Foundation under Grant Numbers 1527524, 1526771, and 1321129. P. Noorzad and M. Effros are with the California Institute of Technology, Pasadena, CA 91125 USA (emails: [email protected], [email protected]). M. Langberg is with the State University of New York at Buffalo, Buffalo, NY 14260 USA (email: [email protected]).

Figure 1. The network consisting of a $k$-user MAC and a CF. For $j\in[k]$, encoder $j$ has access to message $w_j\in[2^{nR_j}]$.

We now consider an alternative scenario where our $k$-user MAC is part of a larger network. In this network, there is a node that is connected to all $k$ encoders and acts as a "cooperation facilitator" (CF). Specifically, for every $j\in[k]$,¹ there is a link of capacity $C_{\mathrm{in}}^j\ge 0$ going from encoder $j$ to the CF and a link of capacity $C_{\mathrm{out}}^j\ge 0$ going back. The CF helps the encoders exchange information before they transmit their codewords over the MAC. Figure 1 depicts a network consisting of a $k$-user MAC and a $(C_{\mathrm{in}},C_{\mathrm{out}})$-CF, where $C_{\mathrm{in}}=(C_{\mathrm{in}}^j)_{j\in[k]}$ and $C_{\mathrm{out}}=(C_{\mathrm{out}}^j)_{j\in[k]}$ denote the capacities of the CF input and output links. In this figure, $X_{[k]}^n=(X_1^n,\dots,X_k^n)$ is the vector of the channel inputs of the $k$ encoders, and $\hat{W}_{[k]}=(\hat{W}_1,\dots,\hat{W}_k)$ is the vector of message reproductions at the decoder.

The communication between the CF and the encoders occurs over a number of rounds. In the first round of cooperation, each encoder sends a rate-limited function of its message to the CF, and the CF sends a rate-limited function of what it receives back to each encoder.
Communication between the encoders and the CF may continue for a finite number of rounds, with each node potentially using information received in prior rounds to determine its next transmission. Once the communication between the CF and the encoders is done, each encoder uses its message and what it has learned through the CF to choose a codeword, which it transmits across the channel.

Our main result (Theorem 3) determines a set of MACs where the benefit of encoder cooperation through a CF grows very quickly with $C_{\mathrm{out}}$. Specifically, we find a class of MACs $\mathcal{C}^*$, where every MAC in $\mathcal{C}^*$ has the property that for any fixed $C_{\mathrm{in}}\in\mathbb{R}_{>0}^k$, the sum-capacity of that MAC with a $(C_{\mathrm{in}},C_{\mathrm{out}})$-CF has an infinite derivative in the direction of every $v\in\mathbb{R}_{>0}^k$ at $C_{\mathrm{out}}=0$. In other words, as a function of $C_{\mathrm{out}}$, the sum-capacity grows faster than any function with bounded derivative at $C_{\mathrm{out}}=0$. This means that for any MAC in $\mathcal{C}^*$, sharing a small number of bits with each encoder leads to a large gain in sum-capacity.

An important implication of this result is the existence of a memoryless network that does not satisfy the "edge removal property" [4], [5]. A network satisfies the edge removal property if removing an edge of capacity $\delta>0$ changes the capacity region by at most $\delta$ in each dimension. Thus removing an edge of capacity $\delta$ from a network which has $k$ sources and satisfies the edge removal property decreases sum-capacity by at most $k\delta$, a linear function of $\delta$. Now consider a network consisting of a MAC in $\mathcal{C}^*$ and a $(C_{\mathrm{in}},C_{\mathrm{out}})$-CF, where $C_{\mathrm{in}}\in\mathbb{R}_{>0}^k$. Our main result (Theorem 3) implies that for small $C_{\mathrm{out}}$, removing all the output edges reduces sum-capacity by an amount much larger than $k\sum_{j\in[k]}C_{\mathrm{out}}^j$. Thus there exist memoryless networks that do not satisfy the edge removal property. The first example of such a network appeared in [6].

¹The notation $[x]$ describes the set $\{1,\dots,\lfloor x\rfloor\}$ for any real number $x\ge 1$.

We introduce the coding scheme that leads to Theorem 3 in Section IV.
This scheme combines forwarding, coordination, and classical MAC coding. In forwarding, each encoder sends part of its message to all other encoders by passing that information through the CF.² When $k=2$, forwarding is equivalent to a single round of conferencing as described in [8]. The coordination strategy is a modified version of Marton's coding scheme for the broadcast channel [9], [10]. To implement this strategy, the CF shares information with the encoders that enables them to transmit codewords that are jointly typical with respect to a dependent distribution; this is proven using a multivariate version of the covering lemma [11, p. 218]. The multivariate covering lemma is stated for strongly typical sets in [11]. In Appendix A, using the proof of the 2-user case from [11] and techniques from [12], we prove this lemma for weakly typical sets [13, p. 251]. Using weakly typical sets in our achievability proof allows our results to extend to continuous (e.g., Gaussian) channels without the need for quantization. Finally, the classical MAC strategy is Ulrey's [3] extension of Ahlswede's [14], [15] and Liao's [16] coding strategy to the $k$-user MAC.

Using techniques from Willems [8], we derive an outer bound (Proposition 5) for the capacity region of the MAC with a $(C_{\mathrm{in}},C_{\mathrm{out}})$-CF. This outer bound does not capture the dependence of the capacity region on $C_{\mathrm{out}}$ and is thus loose for some values of $C_{\mathrm{out}}$. However, if the entries of $C_{\mathrm{out}}$ are sufficiently larger than the entries of $C_{\mathrm{in}}$, then our inner and outer bounds agree and we obtain the capacity region (Corollary 6).

In Section V, we apply our results to the 2-user Gaussian MAC with a CF that has access to the messages of both encoders and has links of output capacity $C_{\mathrm{out}}$. We show that for small $C_{\mathrm{out}}$, the achievable sum-rate approximately equals a constant times $\sqrt{C_{\mathrm{out}}}$. A similar approximation holds for a weighted version of the sum-rate as well, as we see in Proposition 7.
This result implies that at least for the 2-user Gaussian MAC, the benefit of cooperation is not limited to sum-capacity and applies to other capacity region metrics as well.

In Section VI, we consider the extension of Willems' conferencing model [8] from 2 to $k$ users. A special case of this model with $k=3$ is studied in [17] for the Gaussian MAC. While the authors of [17] use two conferencing rounds in their achievability result, it is not clear from [17] whether there is a benefit in using two rounds instead of one, and if so, how large that benefit is. Here we explicitly show that a single conferencing round is not optimal for $k\ge 3$, even though it is known to be optimal when $k=2$ [8]. Finally, we apply our outer bound for the $k$-user MAC with a CF to obtain an outer bound for the $k$-user MAC with conferencing. The resulting outer bound is tight when $k=2$.

In the next section, we formally define the capacity region of the network consisting of a $k$-user MAC and a CF.

II. MODEL

Consider a network with $k$ encoders, a CF, a $k$-user MAC, and a decoder (Figure 1). For each $j\in[k]$, encoder $j$ communicates with the CF using noiseless links of capacities $C_{\mathrm{in}}^j\ge 0$ and $C_{\mathrm{out}}^j\ge 0$ going to and from the CF, respectively. The $k$ encoders communicate with the decoder through a MAC $(\mathcal{X}_{[k]},p(y|x_{[k]}),\mathcal{Y})$, where
$$\mathcal{X}_{[k]}=\prod_{j=1}^k\mathcal{X}_j,$$
and an element of $\mathcal{X}_{[k]}$ is denoted by $x_{[k]}$. We say a MAC is discrete if $\mathcal{X}_{[k]}$ and $\mathcal{Y}$ are either finite or countably infinite, and $p(y|x_{[k]})$ is a probability mass function on $\mathcal{Y}$ for every $x_{[k]}\in\mathcal{X}_{[k]}$. We say a MAC is continuous if $\mathcal{X}_{[k]}=\mathbb{R}^k$, $\mathcal{Y}=\mathbb{R}$, and $p(y|x_{[k]})$ is a probability density function on $\mathcal{Y}$ for all $x_{[k]}$. In addition, we assume that our channel is memoryless and without feedback [13, p. 193], so that for every positive integer $n$, the $n$th extension channel of our MAC is given by $p(y^n|x_{[k]}^n)$, where
$$\forall\,(x_{[k]}^n,y^n)\in\mathcal{X}_{[k]}^n\times\mathcal{Y}^n:\quad p(y^n|x_{[k]}^n)=\prod_{t=1}^n p(y_t|x_{[k]t}).$$
An example of a continuous MAC is the $k$-user Gaussian MAC with noise variance $N>0$, where
$$p(y|x_{[k]})=\frac{1}{\sqrt{2\pi N}}\exp\Big[-\frac{1}{2N}\Big(y-\sum_{j\in[k]}x_j\Big)^2\Big].\qquad(1)$$
Henceforth, all MACs are memoryless and without feedback, and either discrete or continuous.

²While it is possible to consider encoders that send different parts of their messages to different encoders using Han's result for the MAC with correlated sources [7], we avoid these cases for simplicity.

We next describe a $\big((2^{nR_1},\dots,2^{nR_k}),n,L\big)$-code for the MAC $(\mathcal{X}_{[k]},p(y|x_{[k]}),\mathcal{Y})$ with a $(C_{\mathrm{in}},C_{\mathrm{out}})$-CF, cost functions $(b_j)_{j\in[k]}$, and cost constraint vector $B=(B_j)_{j\in[k]}\in\mathbb{R}_{\ge 0}^k$. For each $j\in[k]$, cost function $b_j$ is a fixed mapping from $\mathcal{X}_j$ to $\mathbb{R}_{\ge 0}$. Each encoder $j\in[k]$ wishes to transmit a message $w_j\in[2^{nR_j}]$ to the decoder. This is accomplished by first exchanging information with the CF and then transmitting across the MAC.

Communication with the CF occurs in $L$ rounds. For each $j\in[k]$ and $\ell\in[L]$, sets $\mathcal{U}_{j\ell}$ and $\mathcal{V}_{j\ell}$, respectively, describe the alphabets of symbols that encoder $j$ can send to and receive from the CF in round $\ell$. These alphabets satisfy the link capacity constraints
$$\sum_{\ell=1}^L\log|\mathcal{U}_{j\ell}|\le nC_{\mathrm{in}}^j,\qquad\sum_{\ell=1}^L\log|\mathcal{V}_{j\ell}|\le nC_{\mathrm{out}}^j.\qquad(2)$$
The operations of encoder $j$ and the CF, respectively, in round $\ell$ are given by
$$\varphi_{j\ell}:[2^{nR_j}]\times\mathcal{V}_j^{\ell-1}\to\mathcal{U}_{j\ell},\qquad\psi_{j\ell}:\prod_{i=1}^k\mathcal{U}_i^{\ell}\to\mathcal{V}_{j\ell},$$
where $\mathcal{U}_j^{\ell}=\prod_{\ell'=1}^{\ell}\mathcal{U}_{j\ell'}$ and $\mathcal{V}_j^{\ell}=\prod_{\ell'=1}^{\ell}\mathcal{V}_{j\ell'}$. After its exchange with the CF, encoder $j$ applies a function
$$f_j:[2^{nR_j}]\times\mathcal{V}_j^L\to\mathcal{X}_j^n$$
to choose a codeword, which it transmits across the channel. In addition, every $x_j^n$ in the range of $f_j$ satisfies
$$\sum_{t=1}^n b_j(x_{jt})\le nB_j.$$
The decoder receives channel output $Y^n$ and applies
$$g:\mathcal{Y}^n\to\prod_{j=1}^k[2^{nR_j}]$$
to obtain the estimate $\hat{W}_{[k]}$ of the message vector $w_{[k]}$. The encoders, CF, and decoder together define a $\big((2^{nR_1},\dots,2^{nR_k}),n,L\big)$-code.

The average error probability of the code is $P_e^{(n)}=\Pr\{g(Y^n)\ne W_{[k]}\}$, where $W_{[k]}$ is the transmitted message vector and is uniformly distributed on $\prod_{j=1}^k[2^{nR_j}]$. A rate vector $R_{[k]}=(R_1,\dots,R_k)$ is achievable if there exists a sequence of $\big((2^{nR_1},\dots,2^{nR_k}),n,L\big)$-codes with $P_e^{(n)}\to 0$ as $n\to\infty$. The capacity region $\mathcal{C}(C_{\mathrm{in}},C_{\mathrm{out}})$ is defined as the closure of the set of all achievable rate vectors.

III. RESULTS

In this section, we describe the key results. In Subsection III-A, we present our inner bound. In Subsection III-B, we state our main result, which proves the existence of a class of MACs with large cooperation gain. Finally, in Subsection III-C, we discuss our outer bound.

A. Inner Bound

Using the coding scheme we introduce in Section IV, we obtain an inner bound for the capacity region of the $k$-user MAC with a $(C_{\mathrm{in}},C_{\mathrm{out}})$-CF. The following definitions are useful for describing that bound. Choose vectors $C_0=(C_{j0})_{j=1}^k$ and $C_d=(C_{jd})_{j=1}^k$ in $\mathbb{R}_{\ge 0}^k$ such that for all $j\in[k]$,
$$C_{j0}\le C_{\mathrm{in}}^j\qquad(3)$$
$$C_{jd}+\sum_{i\ne j}C_{i0}\le C_{\mathrm{out}}^j.\qquad(4)$$
Here $C_{j0}$ is the number of bits per channel use encoder $j$ sends directly to the other encoders via the CF, and $C_{jd}$ is the number of bits per channel use the CF transmits to encoder $j$ to implement the coordination strategy. Subscript "d" in $C_{jd}$ alludes to the dependence created through coordination. Let $S_d=\{j\in[k]:C_{jd}\ne 0\}$ be the set of encoders that participate in this dependence. Fix alphabets $\mathcal{U}_0,\mathcal{U}_1,\dots,\mathcal{U}_k$.
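The channel model above is straightforward to simulate. The following Python sketch (with hypothetical values of $k$, $n$, $N$, and $P$ chosen purely for illustration, not taken from the paper) draws Gaussian codewords scaled to meet the cost constraint $\sum_t b_j(x_{jt})\le nB_j$ with $b_j(x)=x^2$, and passes them through one block of the Gaussian MAC of (1):

```python
import numpy as np

def gaussian_mac(x, noise_var, rng):
    """One block of the k-user Gaussian MAC per (1): Y_t = sum_j X_jt + Z_t,
    Z_t ~ N(0, N) i.i.d.  x has shape (k, n): row j is encoder j's codeword."""
    n = x.shape[1]
    z = rng.normal(0.0, np.sqrt(noise_var), size=n)
    return x.sum(axis=0) + z

# Hypothetical parameters for illustration only.
k, n, N = 3, 1000, 1.0
P = np.array([2.0, 1.0, 0.5])          # power constraints B_j = P_j, b_j(x) = x^2
rng = np.random.default_rng(0)

# Independent Gaussian codewords, scaled so each meets the per-block cost
# constraint sum_t x_jt^2 <= n * P_j with equality.
x = rng.normal(size=(k, n))
x *= np.sqrt(n * P[:, None]) / np.linalg.norm(x, axis=1, keepdims=True)

y = gaussian_mac(x, N, rng)
assert y.shape == (n,)
assert np.all((x ** 2).sum(axis=1) <= n * P + 1e-9)
```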
For every nonempty $S\subseteq[k]$, let $\mathcal{U}_S$ be the set of all $u_S=(u_j)_{j\in S}$ where $u_j\in\mathcal{U}_j$ for all $j\in S$. Define the set $\mathcal{X}_S$ similarly. Let $P(\mathcal{U}_0,\mathcal{U}_{[k]},\mathcal{X}_{[k]},S_d)$ be the set of all distributions on $\mathcal{U}_0\times\mathcal{U}_{[k]}\times\mathcal{X}_{[k]}$ that are of the form
$$p(u_0)\cdot\prod_{i\in S_d^c}p(u_i|u_0)\cdot p(u_{S_d}|u_0,u_{S_d^c})\cdot\prod_{j\in[k]}p(x_j|u_0,u_j),\qquad(5)$$
satisfy the dependence constraints³
$$\zeta_S:=\sum_{j\in S}C_{jd}-\sum_{j\in S}H(U_j|U_0)+H(U_S|U_0,U_{S_d^c})>0\qquad\forall\,\emptyset\subsetneq S\subseteq S_d,$$
and cost constraints
$$\mathbb{E}\big[b_j(X_j)\big]\le B_j\qquad\forall\,j\in[k].\qquad(6)$$
Here $U_0$ encodes the "common message," which, for every $j\in[k]$, contains $nC_{j0}$ bits from the message of encoder $j$ and is shared with all other encoders through the CF; each random variable $U_j$ captures the information encoder $j$ receives from the CF to create dependence with the codewords of other encoders. The random variable $X_j$ represents the symbol encoder $j$ transmits over the channel.

³The constraint on $\zeta_S$ is imposed by the multivariate covering lemma (Appendix A), which we use in the proof of our inner bound.

For any $C_0,C_d\in\mathbb{R}_{\ge 0}^k$ satisfying (3) and (4) and any $p\in P(\mathcal{U}_0,\mathcal{U}_{[k]},\mathcal{X}_{[k]},S_d)$, let $\mathcal{R}(C_0,C_d,p)$ be the set of all $(R_1,\dots,R_k)$ for which
$$\sum_{j\in[k]}R_j<I(X_{[k]};Y)-\zeta_{S_d},\qquad(7)$$
and for every $S,T\subseteq[k]$,
$$\sum_{j\in A}(R_j-C_{j0})^+ +\sum_{j\in B\cap T}(R_j-C_{\mathrm{in}}^j)^+ < I\big(U_A,X_{A\cup(B\cap T)};Y\,\big|\,U_0,U_B,X_{B\setminus T}\big)-\zeta_{(A\cup B)\cap S_d}\qquad(8)$$
holds for some sets $A$ and $B$ for which $S\cap S_d^c\subseteq A\subseteq S$ and $S^c\cap S_d^c\subseteq B\subseteq S^c$.

We next state our inner bound for the $k$-user MAC with encoder cooperation via a CF. The coding strategy that achieves this inner bound uses only a single round of cooperation ($L=1$). The proof is given in Subsection VII-A.

Theorem 1 (Inner Bound). For any MAC $(\mathcal{X}_{[k]},p(y|x_{[k]}),\mathcal{Y})$ with a $(C_{\mathrm{in}},C_{\mathrm{out}})$-CF,
$$\mathcal{C}(C_{\mathrm{in}},C_{\mathrm{out}})\supseteq\overline{\bigcup\mathcal{R}(C_0,C_d,p)},$$
where $\bar{A}$ denotes the closure of set $A$ and the union is over all $C_0$ and $C_d$ satisfying (3) and (4), and $p\in P(\mathcal{U}_0,\mathcal{U}_{[k]},\mathcal{X}_{[k]},S_d)$.
The achievable region given in Theorem 1 is convex, and thus we do not require the convex hull operation. The proof is similar to [1], [18] and is omitted.

The next corollary treats the case where the CF transmits the bits it receives from each encoder to all other encoders without change. In this case, our coding strategy simply combines forwarding with classical MAC encoding. We obtain this result from Theorem 1 by setting $C_{jd}=0$ and $|\mathcal{U}_j|=1$ for all $j\in[k]$ and choosing $A=S$ and $B=S^c$ for every $S,T\subseteq[k]$. In Corollary 2, $P_{\mathrm{ind}}(\mathcal{U}_0,\mathcal{X}_{[k]})$ is the set of all distributions $p(u_0)\prod_{j\in[k]}p(x_j|u_0)$ that satisfy the cost constraints (6).

Corollary 2 (Forwarding Inner Bound). The capacity region of any MAC with a $(C_{\mathrm{in}},C_{\mathrm{out}})$-CF contains the set of all rate vectors that for some constants $(C_{j0})_{j\in[k]}$ (satisfying (3) and (4) with $C_{jd}=0$ for all $j$) and some distribution $p\in P_{\mathrm{ind}}(\mathcal{U}_0,\mathcal{X}_{[k]})$, satisfy
$$\sum_{j\in S}R_j<I(X_S;Y|U_0,X_{S^c})+\sum_{j\in S}C_{j0}\qquad\forall\,\emptyset\ne S\subseteq[k]$$
$$\sum_{j\in[k]}R_j<I(X_{[k]};Y).$$

B. Sum-Capacity Gain

We wish to understand when cooperation leads to a benefit that exceeds the resources employed to enable it. Therefore, we compare the gain in sum-capacity obtained through cooperation to the number of bits shared with the encoders to enable that gain. For any $k$-user MAC with a $(C_{\mathrm{in}},C_{\mathrm{out}})$-CF, define the sum-capacity as
$$C_{\mathrm{sum}}(C_{\mathrm{in}},C_{\mathrm{out}})=\max_{\mathcal{C}(C_{\mathrm{in}},C_{\mathrm{out}})}\sum_{j=1}^k R_j.$$
For a fixed $C_{\mathrm{in}}\in\mathbb{R}_{\ge 0}^k$, define the "sum-capacity gain" $G:\mathbb{R}_{\ge 0}^k\to\mathbb{R}_{\ge 0}$ as
$$G(C_{\mathrm{out}})=C_{\mathrm{sum}}(C_{\mathrm{in}},C_{\mathrm{out}})-C_{\mathrm{sum}}(C_{\mathrm{in}},0),$$
where $C_{\mathrm{out}}=(C_{\mathrm{out}}^j)_{j=1}^k$ and $0=(0,\dots,0)$. Note that regardless of $C_{\mathrm{in}}$, it follows from (2) that no cooperation is possible when $C_{\mathrm{out}}=0$. Thus
$$C_{\mathrm{sum}}(C_{\mathrm{in}},0)=C_{\mathrm{sum}}(0,0)=\max_{p\in P_{\mathrm{ind}}(\mathcal{X}_{[k]})}I(X_{[k]};Y),$$
where $P_{\mathrm{ind}}(\mathcal{X}_{[k]})$ is the set of all independent distributions
$$p(x_{[k]})=\prod_{j\in[k]}p(x_j)$$
on $\mathcal{X}_{[k]}$ that satisfy the cost constraints (6). Similarly, $P(\mathcal{X}_{[k]})$ is the set of all distributions on $\mathcal{X}_{[k]}$ that satisfy (6).
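For the Gaussian case, the no-cooperation baseline $C_{\mathrm{sum}}(C_{\mathrm{in}},0)$ has a well-known closed form: with $b_j(x)=x^2$ and powers $P_j$, independent Gaussian inputs give the classical MAC sum-capacity $\frac{1}{2}\log_2(1+\sum_j P_j/N)$ bits per channel use. A minimal numeric sketch (illustrative parameter values only):

```python
import math

def gaussian_mac_sum_capacity_no_coop(P, N):
    """C_sum(C_in, 0) for the k-user Gaussian MAC with powers P_j and noise
    variance N: the classical sum-capacity (1/2) log2(1 + sum(P)/N) in bits
    per channel use, i.e., the baseline against which the gain G(C_out) is
    measured."""
    return 0.5 * math.log2(1.0 + sum(P) / N)

# Example: two users with P1 = P2 = 1 and unit noise variance.
c = gaussian_mac_sum_capacity_no_coop([1.0, 1.0], 1.0)
assert abs(c - 0.5 * math.log2(3.0)) < 1e-12   # (1/2) log2 3 ≈ 0.792 bits
```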
For sets $\mathcal{X}_1,\dots,\mathcal{X}_k,\mathcal{Y}$, cost functions $(b_j)_{j\in[k]}$, and cost constraints $(B_j)_{j\in[k]}$, we next define a special class of MACs $\mathcal{C}^*(\mathcal{X}_{[k]},\mathcal{Y})$. We say a MAC $(\mathcal{X}_{[k]},p(y|x_{[k]}),\mathcal{Y})$ is in $\mathcal{C}^*(\mathcal{X}_{[k]},\mathcal{Y})$ if there exists $p_{\mathrm{ind}}\in P_{\mathrm{ind}}(\mathcal{X}_{[k]})$ that satisfies
$$I_{\mathrm{ind}}(X_{[k]};Y)=\max_{p\in P_{\mathrm{ind}}(\mathcal{X}_{[k]})}I(X_{[k]};Y),$$
and $p_{\mathrm{dep}}\in P(\mathcal{X}_{[k]})$ whose support is contained in the support of $p_{\mathrm{ind}}$ and satisfies
$$I_{\mathrm{dep}}(X_{[k]};Y)+D\big(p_{\mathrm{dep}}(y)\,\|\,p_{\mathrm{ind}}(y)\big)>I_{\mathrm{ind}}(X_{[k]};Y).\qquad(9)$$
In the above equation, $p_{\mathrm{dep}}(y)$ and $p_{\mathrm{ind}}(y)$ are the output distributions corresponding to the input distributions $p_{\mathrm{dep}}(x_{[k]})$ and $p_{\mathrm{ind}}(x_{[k]})$, respectively. We remark that (9) is equivalent to
$$\mathbb{E}_{\mathrm{dep}}\Big[D\big(p(y|X_{[k]})\,\|\,p_{\mathrm{ind}}(y)\big)\Big]>\mathbb{E}_{\mathrm{ind}}\Big[D\big(p(y|X_{[k]})\,\|\,p_{\mathrm{ind}}(y)\big)\Big],$$
where the expectations are with respect to $p_{\mathrm{dep}}(x_{[k]})$ and $p_{\mathrm{ind}}(x_{[k]})$, respectively.

Using these definitions, we state our main result, which captures a family of MACs for which the slope of the gain function is infinite in every direction at $C_{\mathrm{out}}=0$. In this statement, for any unit vector $v\in\mathbb{R}_{\ge 0}^k$, $D_v G$ is the directional derivative of $G$ in the direction of $v$. The proof appears in Subsection VII-B.

Theorem 3 (Sum-Capacity). Let $(\mathcal{X}_{[k]},p(y|x_{[k]}),\mathcal{Y})$ be a MAC in $\mathcal{C}^*(\mathcal{X}_{[k]},\mathcal{Y})$ and $C_{\mathrm{in}}\in\mathbb{R}_{>0}^k$. Then for any unit vector $v\in\mathbb{R}_{>0}^k$,
$$(D_v G)(0)=\infty.$$

Note that for continuous MACs, when for $j\in[k]$ and $x\in\mathbb{R}$, $b_j(x)=x^2$, cost constraints are referred to as power constraints. In addition, for every $j\in[k]$, the variable $P_j$ is commonly used instead of $B_j$. Our next proposition provides necessary and sufficient conditions under which the $k$-user Gaussian MAC with power constraints is in $\mathcal{C}^*(\mathbb{R}^k,\mathbb{R})$. The proof is provided in Subsection VII-C.

Proposition 4. The $k$-user Gaussian MAC with power constraint vector $P=(P_j)_{j\in[k]}\in\mathbb{R}_{\ge 0}^k$ is in $\mathcal{C}^*(\mathbb{R}^k,\mathbb{R})$ if and only if at least two entries of $P$ are positive.

C. Outer Bound

We next describe our outer bound.
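Membership in $\mathcal{C}^*$ can be checked numerically for small discrete channels. The sketch below verifies condition (9) for the 2-user binary erasure MAC $Y=X_1+X_2$, one of the channels the abstract places in this class. It takes $p_{\mathrm{ind}}$ to be the uniform i.i.d. input, which maximizes $I(X_1,X_2;Y)$ over independent inputs for this channel, and $p_{\mathrm{dep}}$ to be the fully correlated input $X_1=X_2$:

```python
import math

def H(p):
    """Shannon entropy in bits of a distribution given as a list of probabilities."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Binary erasure MAC: Y = X1 + X2 is a deterministic function of the inputs,
# so I(X1, X2; Y) = H(Y).
# p_ind: X1, X2 i.i.d. uniform  ->  p_ind(y) = (1/4, 1/2, 1/4), I_ind = 1.5 bits.
p_ind_y = [0.25, 0.5, 0.25]
I_ind = H(p_ind_y)

# p_dep: X1 = X2 uniform  ->  p_dep(y) = (1/2, 0, 1/2); its support is contained
# in the support of p_ind, as the definition of C* requires.
p_dep_y = [0.5, 0.0, 0.5]
I_dep = H(p_dep_y)
D = sum(q * math.log2(q / r) for q, r in zip(p_dep_y, p_ind_y) if q > 0)

# Condition (9): I_dep + D(p_dep(y) || p_ind(y)) > I_ind, here 1.0 + 1.0 > 1.5.
assert abs(I_ind - 1.5) < 1e-12
assert I_dep + D > I_ind
```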
While we only make use of a single round of cooperation in our inner bound (Theorem 1), the outer bound applies to all coding schemes regardless of the number of rounds.

Proposition 5 (Outer Bound). For the MAC $(\mathcal{X}_{[k]},p(y|x_{[k]}),\mathcal{Y})$, $\mathcal{C}(C_{\mathrm{in}},C_{\mathrm{out}})$ is a subset of the set of all rate vectors that for some distribution $p\in P_{\mathrm{ind}}(\mathcal{U}_0,\mathcal{X}_{[k]})$ satisfy
$$\sum_{j\in S}R_j\le I\big(X_S;Y|U_0,X_{S^c}\big)+\sum_{j\in S}C_{\mathrm{in}}^j\qquad\forall\,\emptyset\ne S\subseteq[k]\qquad(10)$$
$$\sum_{j\in[k]}R_j\le I(X_{[k]};Y).\qquad(11)$$
The proof of this proposition is given in Subsection VII-D. Our proof uses ideas similar to the proof of the converse for the 2-user MAC with conferencing [8].

If the capacities of the CF output links are sufficiently large, our inner and outer bounds coincide and we obtain the capacity region. This follows by setting $C_{j0}=C_{\mathrm{in}}^j$ for all $j\in[k]$ in our forwarding inner bound (Corollary 2) and comparing it with the outer bound given in Proposition 5.

Corollary 6. For the MAC $(\mathcal{X}_{[k]},p(y|x_{[k]}),\mathcal{Y})$ with a $(C_{\mathrm{in}},C_{\mathrm{out}})$-CF, if
$$\forall\,j\in[k]:\ C_{\mathrm{out}}^j\ge\sum_{i:\,i\ne j}C_{\mathrm{in}}^i,$$
then our inner and outer bounds agree.

IV. THE CODING SCHEME

Choose nonnegative constants $(C_{j0})_{j=1}^k$ and $(C_{jd})_{j=1}^k$ such that (3) and (4) hold for all $j\in[k]$. Fix a distribution $p\in P(\mathcal{U}_0,\mathcal{U}_{[k]},\mathcal{X}_{[k]},S_d)$ and constants $\epsilon,\delta>0$. Let
$$R_{j0}=\min\{R_j,C_{j0}\}$$
$$R_{jd}=\min\{R_j,C_{\mathrm{in}}^j\}-R_{j0}$$
$$R_{jj}=R_j-R_{j0}-R_{jd}=(R_j-C_{\mathrm{in}}^j)^+,$$
where $x^+=\max\{x,0\}$ for any real number $x$. For every $j\in[k]$, split the message of encoder $j$ as $w_j=(w_{j0},w_{jd},w_{jj})$, where $w_{j0}\in[2^{nR_{j0}}]$, $w_{jd}\in[2^{nR_{jd}}]$, and $w_{jj}\in[2^{nR_{jj}}]$. For all $j\in[k]$, encoder $j$ sends $(w_{j0},w_{jd})$ noiselessly to the CF. This is possible, since $R_{j0}+R_{jd}$ is less than or equal to $C_{\mathrm{in}}^j$. The CF sends $w_{j0}$ to all other encoders via its output links and uses $w_{jd}$ to implement the coordination strategy described below. Due to the CF rate constraints, encoder $j$ cannot share the remaining part of its message, $w_{jj}$, with the CF. Instead, it transmits $w_{jj}$ over the channel using the classical MAC strategy.
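The rate split above is simple arithmetic; a small sketch with hypothetical rate values makes the bookkeeping explicit:

```python
def split_rate(R_j, C_j0, C_in_j):
    """Message split of Section IV (rates in bits per channel use, assuming
    C_j0 <= C_in^j as (3) requires): R_j0 bits are forwarded via the CF to all
    other encoders, R_jd bits drive the coordination strategy, and R_jj bits
    are sent by classical MAC coding."""
    R_j0 = min(R_j, C_j0)
    R_jd = min(R_j, C_in_j) - R_j0
    R_jj = max(R_j - C_in_j, 0.0)          # (R_j - C_in^j)^+
    assert abs(R_j0 + R_jd + R_jj - R_j) < 1e-12   # the three parts exhaust R_j
    return R_j0, R_jd, R_jj

# Hypothetical numbers: R_j = 1.0, C_j0 = 0.2, C_in^j = 0.5.
assert split_rate(1.0, 0.2, 0.5) == (0.2, 0.3, 0.5)
```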
Let $\mathcal{W}_0=\prod_{j=1}^k[2^{nR_{j0}}]$. For every $w_0\in\mathcal{W}_0$, let $U_0^n(w_0)$ be drawn independently according to
$$\Pr\{U_0^n(w_0)=u_0^n\}=\prod_{t=1}^n p(u_{0t}).$$
Given $U_0^n(w_0)=u_0^n$, for every $j\in[k]$, $w_{jd}\in[2^{nR_{jd}}]$, and $z_j\in[2^{nC_{jd}}]$, let $U_j^n(w_{jd},z_j|u_0^n)$ be drawn independently according to
$$\Pr\big\{U_j^n(w_{jd},z_j|u_0^n)=u_j^n\,\big|\,U_0^n(w_0)=u_0^n\big\}=\prod_{t=1}^n p(u_{jt}|u_{0t}).\qquad(12)$$
For every $(w_1,\dots,w_k)$, define $E(u_0^n,\mu_1,\dots,\mu_k)$ as the event where $U_0^n(w_0)=u_0^n$ and for every $j\in[k]$,
$$U_j^n(w_{jd},\cdot\,|u_0^n)=\mu_j(\cdot),\qquad(13)$$
where $\mu_j$ is a mapping from $[2^{nC_{jd}}]$ to $\mathcal{U}_j^n$. Let $A(u_0^n,\mu_{[k]})$ be the set of all $z_{[k]}=(z_1,\dots,z_k)$ such that
$$\big(u_0^n,\mu_{[k]}(z_{[k]})\big)\in A_\delta^{(n)}(U_0,U_{[k]}),\qquad(14)$$
where $\mu_{[k]}(z_{[k]})=(\mu_1(z_1),\dots,\mu_k(z_k))$ and $A_\delta^{(n)}(U_0,U_{[k]})$ is the weakly typical set with respect to the distribution $p(u_0,u_{[k]})$. If $A(u_0^n,\mu_{[k]})$ is empty, set $Z_j=1$ for all $j\in[k]$. Otherwise, let the $k$-tuple $Z_{[k]}=(Z_1,\dots,Z_k)$ be the smallest element of $A(u_0^n,\mu_{[k]})$ with respect to the lexicographical order. Finally, given $U_0^n(w_0)=u_0^n$ and $U_j^n(w_{jd},Z_j|u_0^n)=u_j^n$, for each $w_{jj}\in[2^{nR_{jj}}]$, let $X_j^n(w_{jj}|u_0^n,u_j^n)$ be a random vector drawn independently according to
$$\Pr\big\{X_j^n(w_{jj}|u_0^n,u_j^n)=x_j^n\,\big|\,U_0^n(w_0)=u_0^n,\,U_j^n(w_{jd},Z_j)=u_j^n\big\}=\prod_{t=1}^n p(x_{jt}|u_{0t},u_{jt}).$$
We next describe the encoding and decoding processes.

Encoding. For every $j\in[k]$, encoder $j$ sends the pair $(w_{j0},w_{jd})$ to the CF. The CF sends $((w_{i0})_{i\ne j},Z_j)$ back to encoder $j$. Encoder $j$, having access to $w_0=(w_{j0})_j$ and $Z_j$, transmits $X_j^n(w_{jj}|U_0^n(w_0),U_j^n(w_{jd},Z_j))$ over the channel.

Decoding. The decoder, upon receiving $Y^n$, maps $Y^n$ to the unique $k$-tuple $\hat{W}_{[k]}$ such that
$$\Big(U_0^n(\hat{W}_0),\big(U_j^n(\hat{W}_{jd},\hat{Z}_j|U_0^n)\big)_j,\big(X_j^n(\hat{W}_{jj}|U_0^n,U_j^n)\big)_j,Y^n\Big)\in A_\epsilon^{(n)}(U_0,U_{[k]},X_{[k]},Y).\qquad(15)$$
If such a $k$-tuple does not exist, the decoder sets its output to the $k$-tuple $(1,1,\dots,1)$.
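The CF's selection of $Z_{[k]}$ in (14) amounts to scanning candidate indices in lexicographic order and keeping the first one whose associated sequences are weakly typical, with a fallback to the all-ones index. The following heavily simplified single-sequence sketch (hypothetical pmf and parameters; the actual scheme tests joint typicality of $(u_0^n,\mu_{[k]}(z_{[k]}))$) illustrates the typicality test and the fallback:

```python
import math, random

def is_weakly_typical(seq, pmf, delta):
    """Weak typicality test [13]: |-(1/n) log2 p(x^n) - H(X)| <= delta."""
    n = len(seq)
    logp = sum(math.log2(pmf[s]) for s in seq)
    H = -sum(p * math.log2(p) for p in pmf.values())
    return abs(-logp / n - H) <= delta

# Hypothetical source and candidates, for illustration only.
pmf = {0: 0.8, 1: 0.2}
rng = random.Random(0)
candidates = {z: [int(rng.random() < 0.2) for _ in range(200)]
              for z in range(1, 9)}

# Scan z in increasing (lexicographic) order; keep the first typical candidate,
# falling back to z = 1 if none is typical, as the scheme prescribes.
Z = next((z for z in sorted(candidates)
          if is_weakly_typical(candidates[z], pmf, 0.1)), 1)
assert 1 <= Z <= 8
```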
The analysis of the expected error probability for the proposed random code appears in Subsection VII-A.

V. CASE STUDY: 2-USER GAUSSIAN MAC

In this section, we study the network consisting of the 2-user Gaussian MAC with power constraints and a CF whose input link capacities are sufficiently large so that the CF has full access to the messages, and whose output link capacities both equal $C_{\mathrm{out}}$. We show that in this scenario, the benefit of cooperation extends beyond sum-capacity; that is, capacity metrics other than sum-capacity also exhibit an infinite slope at $C_{\mathrm{out}}=0$. In addition, we show that the behavior of these metrics (including sum-capacity) is bounded from below by a constant multiplied by $\sqrt{C_{\mathrm{out}}}$.

From Theorem 1, it follows that the capacity region of our network contains the set of all rate pairs $(R_1,R_2)$ that satisfy
$$R_1\le\max\{I(X_1;Y|U_0)-C_{1d},\,I(X_1;Y|X_2,U_0)-\zeta\}+C_{10}$$
$$R_2\le\max\{I(X_2;Y|U_0)-C_{2d},\,I(X_2;Y|X_1,U_0)-\zeta\}+C_{20}$$
$$R_1+R_2\le I(X_1,X_2;Y|U_0)-\zeta+C_{10}+C_{20}$$
$$R_1+R_2\le I(X_1,X_2;Y)-\zeta$$
for some nonnegative constants $C_{1d},C_{2d}\le C_{\mathrm{out}}$,
$$C_{10}=C_{\mathrm{out}}-C_{2d}$$
$$C_{20}=C_{\mathrm{out}}-C_{1d},$$
and some distribution $p(u_0)p(x_1,x_2|u_0)$ that satisfies $\mathbb{E}[X_i^2]\le P_i$ for $i\in\{1,2\}$ and
$$\zeta:=C_{1d}+C_{2d}-I(X_1;X_2|U_0)\ge 0.$$
By (1), the 2-user Gaussian MAC can be represented as
$$Y=X_1+X_2+Z,$$
where $Z$ is independent of $(X_1,X_2)$ and is distributed as $Z\sim\mathcal{N}(0,N)$ for some noise variance $N>0$. Let $U_0\sim\mathcal{N}(0,1)$, and let $(X_1',X_2')$ be a pair of random variables independent of $U_0$ and jointly distributed as $\mathcal{N}(\mu,\Sigma)$, where
$$\mu=\begin{pmatrix}0\\0\end{pmatrix},\qquad\Sigma=\begin{pmatrix}1&\rho_0\\\rho_0&1\end{pmatrix}$$
for some $\rho_0\in[0,1]$. Finally, for $i\in\{1,2\}$, set
$$X_i=\sqrt{P_i}\Big(\rho_i X_i'+\sqrt{1-\rho_i^2}\,U_0\Big)$$
for some $\rho_i\in[0,1]$. Calculating the region described above for the Gaussian MAC using the joint distribution of $(U_0,X_1,X_2)$, and setting $\gamma_i=P_i/N$ for $i\in\{1,2\}$ and $\bar{\gamma}=\sqrt{\gamma_1\gamma_2}$, gives the set of all rate pairs $(R_1,R_2)$ satisfying
$$R_1\le\max\bigg\{\frac{1}{2}\log\frac{1+\rho_1^2\gamma_1+\rho_2^2\gamma_2+2\rho_0\rho_1\rho_2\bar{\gamma}}{1+(1-\rho_0^2)\rho_2^2\gamma_2}-C_{1d},\ \frac{1}{2}\log\big(1+(1-\rho_0^2)\rho_1^2\gamma_1\big)-\zeta\bigg\}+C_{10}$$
$$R_2\le\max\bigg\{\frac{1}{2}\log\frac{1+\rho_1^2\gamma_1+\rho_2^2\gamma_2+2\rho_0\rho_1\rho_2\bar{\gamma}}{1+(1-\rho_0^2)\rho_1^2\gamma_1}-C_{2d},\ \frac{1}{2}\log\big(1+(1-\rho_0^2)\rho_2^2\gamma_2\big)-\zeta\bigg\}+C_{20}$$
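The two bounds above can be evaluated numerically. The sketch below plugs hypothetical parameter values into them; it assumes the jointly Gaussian choice of $(U_0,X_1,X_2)$ above, for which $I(X_1;X_2|U_0)=-\frac{1}{2}\log_2(1-\rho_0^2)$ when $\rho_1,\rho_2>0$ (our computation; the text defines $\zeta$ only in terms of this mutual information):

```python
import math

def rate_bounds(g1, g2, rho0, rho1, rho2, C1d, C2d, Cout):
    """Evaluate the achievable (R1, R2) bounds of Section V for the 2-user
    Gaussian MAC, in bits per channel use.  All parameter values here are
    hypothetical; the paper optimizes over the rho's and the CF rate split."""
    gbar = math.sqrt(g1 * g2)
    C10, C20 = Cout - C2d, Cout - C1d
    # zeta = C1d + C2d - I(X1; X2 | U0); for this jointly Gaussian choice,
    # I(X1; X2 | U0) = -(1/2) log2(1 - rho0^2)  (assumes rho1, rho2 > 0).
    zeta = C1d + C2d + 0.5 * math.log2(1 - rho0 ** 2)
    assert zeta >= 0
    top = 1 + rho1 ** 2 * g1 + rho2 ** 2 * g2 + 2 * rho0 * rho1 * rho2 * gbar
    R1 = max(0.5 * math.log2(top / (1 + (1 - rho0 ** 2) * rho2 ** 2 * g2)) - C1d,
             0.5 * math.log2(1 + (1 - rho0 ** 2) * rho1 ** 2 * g1) - zeta) + C10
    R2 = max(0.5 * math.log2(top / (1 + (1 - rho0 ** 2) * rho1 ** 2 * g1)) - C2d,
             0.5 * math.log2(1 + (1 - rho0 ** 2) * rho2 ** 2 * g2) - zeta) + C20
    return R1, R2

# Symmetric illustrative point: gamma_1 = gamma_2 = 10, small CF rates.
R1, R2 = rate_bounds(g1=10.0, g2=10.0, rho0=0.1, rho1=1.0, rho2=1.0,
                     C1d=0.05, C2d=0.05, Cout=0.1)
assert R1 > 0 and R2 > 0
```

By symmetry of the chosen parameters, the two bounds coincide at this point.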
