Capacity Bounds for a Class of Diamond Networks

Shirin Saeedi Bidokhti and Gerhard Kramer
Lehrstuhl für Nachrichtentechnik, Technische Universität München, Germany
[email protected], [email protected]

arXiv:1401.6135v1 [cs.IT] 23 Jan 2014

Abstract—A class of diamond networks is studied where the broadcast component is modelled by two independent bit-pipes. New upper and lower bounds are derived on the capacity which improve previous bounds. The upper bound is in the form of a max-min problem, where the maximization is over a coding distribution and the minimization is over an auxiliary channel. The proof technique generalizes bounding techniques of Ozarow for the Gaussian multiple description problem (1981), and Kang and Liu for the Gaussian diamond network (2011). The bounds are evaluated for a Gaussian multiple access channel (MAC) and the binary adder MAC, and the capacity is found for interesting ranges of the bit-pipe capacities.

[Fig. 1: Problem setup. A source encoder maps the message W into sequences V_1^n and V_2^n, carried by noiseless bit-pipes to relays 1 and 2; the relays produce X_1^n and X_2^n, which pass through the MAC p(y|x_1,x_2) to give Y^n at the decoder (sink), which outputs the estimate Ŵ of W.]

I. INTRODUCTION

The diamond network was introduced in [1] as a simple multi-hop network with broadcast and multiple access channel (MAC) components. The two-relay diamond network models the scenario where a source communicates with a sink through two relay nodes that do not have information of their own to communicate. The underlying challenge may be described as follows. In order to fully utilize the MAC to the receiver, we would ideally like full cooperation between the relay nodes. On the other hand, in order to communicate the maximum amount of information and better use the diversity that is offered by the relays, we would like to send independent information to the relay nodes over the broadcast channel. The problem of finding the capacity of this network is unresolved. Lower and upper bounds on the capacity are given in [1]. We remark that the problem is solved over linear deterministic relay networks, and the capacity of Gaussian relay networks has been approximated within a constant number of bits [2].

In this paper, we study a class of diamond networks where the broadcast channel is modelled by two independent bit-pipes. This problem was initially studied in [3], where lower and upper bounds were derived on the capacity. The best known upper bound for the problem is the cut-set bound, which does not fully capture the tradeoff between cooperation and diversity. Recently, [4] studied the network with a Gaussian MAC and proved a new upper bound which constrains the mutual information between the MAC inputs to improve the cut-set bound. The bounding technique in [4] is motivated by [5], which treats the Gaussian multiple description problem. Unfortunately, neither result seems to apply to discrete memoryless channels.

This paper is organized as follows. We state the problem setup in Section II. In Section III, we prove a new upper bound on the achievable rates by generalizing the bounding technique of [4]. Our upper bound applies to the general class of discrete memoryless MACs, and strictly improves the cut-set bound. In Section IV, we improve the achievable rates of [3] by communicating a common piece of information from the source to both relays using superposition coding and Marton's coding. Finally, we study our bounds for networks with a Gaussian MAC (Section V) and a binary adder MAC (Section VI). For both examples, we find conditions on the bit-pipe capacities such that the upper and lower bounds meet.

II. PRELIMINARIES

A. Notation

We use standard notation for random variables, e.g., X; probabilities, e.g., p_X(x) or p(x); entropies, e.g., H(X) and H(X|Y); and mutual information, e.g., I(X;Y). We denote the sequence X_1, ..., X_n by X^n. Sets are denoted by script letters and matrices are denoted by bold capital letters.

B. Model

Consider the diamond network in Fig. 1. A source communicates a message of rate R to a sink. The source is connected to two relays via noiseless bit-pipes of capacities C_1 and C_2, and the relays communicate with the receiver over a MAC. The source encodes a message W with nR bits into the sequence V_1^n, which is available at encoder 1, and the sequence V_2^n, which is available at encoder 2. V_1^n and V_2^n are such that H(V_1^n) ≤ nC_1 and H(V_2^n) ≤ nC_2. Each relay i, i = 1, 2, maps its received sequence V_i^n into a sequence X_i^n which is sent over the MAC.
The MAC is characterized by its input alphabets 𝒳_1 and 𝒳_2, output alphabet 𝒴, and transition probabilities p(y|x_1,x_2) for each x_1 ∈ 𝒳_1, x_2 ∈ 𝒳_2. From the received sequence, the sink decodes an estimate Ŵ of W. We are interested in characterizing the highest rate R that permits arbitrarily small positive error probability Pr(Ŵ ≠ W).

III. AN UPPER BOUND

The idea behind our upper bound is motivated by [4], [5]. The proposed bound is applicable not only to Gaussian channels, but also to general discrete memoryless channels, and it strictly improves the cut-set bound, as we show via two examples.

In the network of Fig. 1, the cut-set bound gives

R ≤ C_1 + C_2                      (1)
R ≤ C_1 + I(X_2;Y|X_1)             (2)
R ≤ C_2 + I(X_1;Y|X_2)             (3)
R ≤ I(X_1X_2;Y).                   (4)

However, the bound in (1) disregards the potential correlation between the inputs of the MAC. A tighter bound on R is given in the following multi-letter form:

nR ≤ H(V_1^n, V_2^n)
   = H(V_1^n) + H(V_2^n) − I(V_1^n;V_2^n)
   ≤ nC_1 + nC_2 − I(X_1^n;X_2^n).

It is observed in [4] that I(X_1^n;X_2^n) can be written in the following form for any random sequence U^n:

I(X_1^n;X_2^n) = I(X_1^nX_2^n;U^n) − I(X_1^n;U^n|X_2^n) − I(X_2^n;U^n|X_1^n) + I(X_1^n;X_2^n|U^n).

Therefore, we have

nR ≤ nC_1 + nC_2 − I(X_1^nX_2^n;U^n) + I(X_1^n;U^n|X_2^n) + I(X_2^n;U^n|X_1^n).   (5)

It may not be clear how to single-letterize the above bound. We proceed as follows. First, note that

nR ≤ I(X_1^nX_2^n;Y^n) ≤ I(X_1^nX_2^n;Y^nU^n).   (6)

Combining inequalities (5) and (6), we have

2nR ≤ nC_1 + nC_2 + I(X_1^nX_2^n;Y^n|U^n) + I(X_1^n;U^n|X_2^n) + I(X_2^n;U^n|X_1^n).   (7)

Define U_i from X_{1i}, X_{2i}, Y_i through the channels p_{U_i|X_{1i}X_{2i}Y_i}(u_i|x_{1i},x_{2i},y_i), i = 1, 2, ..., n. With this choice of U_i we have the following chain of inequalities:

2nR ≤ nC_1 + nC_2 + I(X_1^nX_2^n;Y^n|U^n) + I(X_1^n;U^n|X_2^n) + I(X_2^n;U^n|X_1^n)
    ≤ nC_1 + nC_2 + Σ_{i=1}^n I(X_1^nX_2^n;Y_i|U^nY^{i−1}) + Σ_{i=1}^n I(X_1^n;U_i|X_2^nU^{i−1}) + Σ_{i=1}^n I(X_2^n;U_i|X_1^nU^{i−1})
(a) ≤ nC_1 + nC_2 + Σ_{i=1}^n I(X_{1i}X_{2i};Y_i|U_i) + Σ_{i=1}^n I(X_{1i};U_i|X_{2i}) + Σ_{i=1}^n I(X_{2i};U_i|X_{1i})
    = nC_1 + nC_2 + nI(X_{1,Q},X_{2,Q};Y_Q|U_Q) + nI(X_{1,Q};U_Q|X_{2,Q}) + nI(X_{2,Q};U_Q|X_{1,Q}),

where Q is a time-sharing random variable uniform on {1, ..., n}. Step (a) follows because we have the following two Markov chains:

(X_1^nX_2^nU^nY^{i−1}) − (X_{1i}X_{2i}U_i) − Y_i   (8)
(X_1^nX_2^nU^{i−1}) − (X_{1i}X_{2i}) − U_i.        (9)

We thus have the following upper bound.
Theorem 1. The rate R is achievable only if there exists a joint distribution p(x_1,x_2) for which inequalities (10)-(14) hold for every auxiliary channel p(u|x_1,x_2,y):

R ≤ C_1 + C_2                      (10)
R ≤ C_2 + I(X_1;Y|X_2)             (11)
R ≤ C_1 + I(X_2;Y|X_1)             (12)
R ≤ I(X_1X_2;Y)                    (13)
2R ≤ C_1 + C_2 + I(X_1X_2;Y|U) + I(X_1;U|X_2) + I(X_2;U|X_1).   (14)

Remark 1. The bound of Theorem 1 is in the form of a max-min problem, where the maximization is over p(x_1,x_2) and the minimization is over p(u|x_1,x_2,y).

Remark 2. For a fixed auxiliary channel p(u|x_1,x_2,y) and a fixed MAC p(y|x_1,x_2), the upper bound in Theorem 1 is concave in p(x_1,x_2). See [6].

Remark 3. The right hand side (RHS) of (14) may be written as

C_1 + C_2 + I(X_1X_2;YU) − I(X_1;X_2) + I(X_1;X_2|U).   (15)

We study the upper bound of Theorem 1 with a Gaussian MAC and a binary adder MAC in Sections V and VI, respectively.
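To make the quantities in Theorem 1 concrete, the single-letter terms in (10)-(13) can be computed directly from a joint pmf. The sketch below is illustrative only (assumptions: binary adder MAC Y = X_1 + X_2, independent uniform inputs, C_1 = C_2 = 1, logs base 2); the remaining constraint (14) additionally requires the minimization over the auxiliary channel p(u|x_1,x_2,y), which this paper carries out for the adder MAC in Section VI.

```python
import itertools
import math

def H(pmf):
    """Entropy in bits of a pmf given as {outcome: probability}."""
    return -sum(q * math.log2(q) for q in pmf.values() if q > 0)

def marginal(joint, idx):
    """Marginal pmf over the coordinate positions in idx."""
    out = {}
    for outcome, q in joint.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + q
    return out

def MI(joint, a, b):
    """I(A;B) in bits, with a and b tuples of coordinate positions."""
    return H(marginal(joint, a)) + H(marginal(joint, b)) - H(marginal(joint, a + b))

def cMI(joint, a, b, c):
    """Conditional mutual information I(A;B|C) in bits."""
    return (H(marginal(joint, a + c)) + H(marginal(joint, b + c))
            - H(marginal(joint, c)) - H(marginal(joint, a + b + c)))

# Joint pmf p(x1, x2, y) for the binary adder MAC Y = X1 + X2
# with independent uniform inputs (an assumed test distribution).
joint = {(x1, x2, x1 + x2): 0.25
         for x1, x2 in itertools.product((0, 1), repeat=2)}

C1 = C2 = 1.0
cutset = min(C1 + C2,                            # (10)
             C2 + cMI(joint, (0,), (2,), (1,)),  # (11): C2 + I(X1;Y|X2)
             C1 + cMI(joint, (1,), (2,), (0,)),  # (12): C1 + I(X2;Y|X1)
             MI(joint, (0, 1), (2,)))            # (13): I(X1X2;Y)
print(cutset)  # 1.5 = H(Y): the sum-rate constraint (13) is the bottleneck here
```

For this choice of inputs, (11) and (12) evaluate to C_i + 1 because Y determines X_1 given X_2 (and vice versa), so the binding constraint is I(X_1X_2;Y) = H(Y) = 1.5 bits.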
IV. A LOWER BOUND

The achievable scheme that we present here is based on [3], but we further allow a common message at both relaying nodes. This is done by rate splitting, superposition coding and Marton's coding, and is summarized in the following theorem.¹

¹The method to derive this theorem was discussed in general terms by G. Kramer and S. Shamai after G. Kramer presented the paper [3] at ITW in 2007.

Theorem 2. The rate R is achievable if it satisfies the following conditions for some pmf p(u,x_1,x_2,y), where p(u,x_1,x_2,y) = p(u,x_1,x_2) p(y|x_1,x_2), and U ∈ 𝒰 with |𝒰| ≤ min{|𝒳_1||𝒳_2|+2, |𝒴|+4}:

R ≤ min{ C_1 + C_2 − I(X_1;X_2|U),
         C_2 + I(X_1;Y|X_2U),
         C_1 + I(X_2;Y|X_1U),
         I(X_1X_2;Y),
         (1/2)(C_1 + C_2 + I(X_1X_2;Y|U) − I(X_1;X_2|U)) }.   (16)

Sketch of proof: The inner bound in Theorem 2 is found using the following achievable scheme.

a) Codebook construction: Fix the joint pmf P_{UX_1X_2}(u,x_1,x_2). Let R = R_12 + R′ and let

R′ = R_1 + R_2 − I(X_1;X_2|U) − δ_ε.   (17)

Generate 2^{nR_12} sequences u^n(m_12) independently, each in an i.i.d. manner according to Π_l p_U(u_l). For each sequence u^n(m_12), generate 2^{nR_1} sequences x_1^n(m_12,m_1) and 2^{nR_2} sequences x_2^n(m_12,m_2) conditionally independently, each in an i.i.d. manner according to Π_l p_{X_1|U}(x_{1l}|u_l(m_12)) and Π_l p_{X_2|U}(x_{2l}|u_l(m_12)), respectively. For each m_12, consider the pairs (m_1,m_2) for which (u^n(m_12), x_1^n(m_12,m_1), x_2^n(m_12,m_2)) is jointly typical. Index such pairs by m′ ∈ {1, 2, ..., 2^{nR′}}.

b) Encoding: To communicate message W = (m_12, m′), communicate (m_12, m_1) with relay 1 and (m_12, m_2) with relay 2 (we assume (x_1^n(m_12,m_1), x_2^n(m_12,m_2)) is the m′-th jointly typical pair for m_12). Relays 1 and 2 then send x_1^n(m_12,m_1) and x_2^n(m_12,m_2) over the MAC, respectively.
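The codebook construction and pair indexing above can be sketched programmatically. The toy sketch below uses assumed small parameters (blocklength n = 16, a symmetric conditional pmf p(x_i|u), and a naive empirical-frequency typicality test with slack eps); it only illustrates the structure and is far from the asymptotic regime the proof requires.

```python
import random
from collections import Counter

random.seed(0)

# Toy parameters (illustrative assumptions, far smaller than the proof needs).
n = 16
R12, R1, R2 = 0.125, 0.25, 0.25   # bits/symbol: 4 clouds, 16 satellites each
eps = 0.15                        # typicality slack

pU = {0: 0.5, 1: 0.5}
pX_given_U = {0: {0: 0.75, 1: 0.25}, 1: {0: 0.25, 1: 0.75}}  # assumed p(x_i|u)

def draw(pmf):
    r, acc = random.random(), 0.0
    for v, q in pmf.items():
        acc += q
        if r <= acc:
            return v
    return v  # guard against floating-point rounding

def iid(pmf, length):
    return tuple(draw(pmf) for _ in range(length))

def cond_iid(u_seq, cond):
    return tuple(draw(cond[u]) for u in u_seq)

def jointly_typical(u, x1, x2):
    """Naive empirical-frequency test against p(u) p(x1|u) p(x2|u)."""
    counts = Counter(zip(u, x1, x2))
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                target = pU[a] * pX_given_U[a][b] * pX_given_U[a][c]
                if abs(counts.get((a, b, c), 0) / n - target) > eps:
                    return False
    return True

M12 = 2 ** round(n * R12)   # cloud centers u^n(m12)
M1 = 2 ** round(n * R1)     # satellites x1^n(m12, m1)
M2 = 2 ** round(n * R2)     # satellites x2^n(m12, m2)

U = [iid(pU, n) for _ in range(M12)]
X1 = [[cond_iid(u, pX_given_U) for _ in range(M1)] for u in U]
X2 = [[cond_iid(u, pX_given_U) for _ in range(M2)] for u in U]

# Index the jointly typical (m1, m2) pairs for each cloud; the index m'
# (together with m12) is the message carried by the scheme.
typical_pairs = [[(m1, m2) for m1 in range(M1) for m2 in range(M2)
                  if jointly_typical(U[m12], X1[m12][m1], X2[m12][m2])]
                 for m12 in range(M12)]
```

At realistic blocklengths the number of typical pairs concentrates around 2^{nR′} with R′ as in (17); at n = 16 the count is noisy, which is why the sketch only demonstrates the bookkeeping.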
c) Decoding: Upon receiving y^n, the receiver looks for (m̂_12, m̂′) for which the following tuple is jointly typical:

(u^n(m̂_12), x_1^n(m̂_12,m̂_1), x_2^n(m̂_12,m̂_2), y^n) ∈ 𝒯_ε^n.

d) Error Analysis: The error analysis is straightforward and we omit it for the sake of brevity. The above scheme performs reliably if

R_12 + R_1 < C_1                                   (18)
R_12 + R_2 < C_2                                   (19)
R < I(UX_1X_2;Y)                                   (20)
R_1 + R_2 − I(X_1;X_2|U) < I(X_1X_2;Y|U)           (21)
R_2 < I(X_2;Y|X_1,U) + I(X_1;X_2|U)                (22)
R_1 < I(X_1;Y|X_2,U) + I(X_1;X_2|U).               (23)

Conditions (17)-(23), together with R_1 ≥ 0, R_2 ≥ 0, R_12 ≥ 0, characterize an achievable rate. Eliminating R_12, R_1, R_2, we arrive at Theorem 2. Cardinality bounds are derived using the standard method via Carathéodory's theorem [7].

Proposition 1. The lower bound of Theorem 2 is concave in C_1, C_2.

Proof: We prove the statement for the case of C_1 = C_2 = C. The same argument holds for the more general case. We express the lower bound of Theorem 2 in terms of the maximization problem max_{p(u,x_1,x_2)} f_ℓ^p(C, p(u,x_1,x_2)) and we denote it by f_ℓ(C). In this formulation, f_ℓ^p(C, p(u,x_1,x_2)) is just the minimum term on the right hand side of (16).

The proof is by contradiction. Suppose that the lower bound is not concave in C; i.e., there exist values C^(1), C^(2), and α, 0 ≤ α ≤ 1, such that C⋆ = αC^(1) + (1−α)C^(2) and f_ℓ(C⋆) < α f_ℓ(C^(1)) + (1−α) f_ℓ(C^(2)). Let p^(1)(u,x_1,x_2) (resp. p^(2)(u,x_1,x_2)) be the probability distribution that maximizes f_ℓ^p(C^(1), p(u,x_1,x_2)) (resp. f_ℓ^p(C^(2), p(u,x_1,x_2))). Let p_Q(1) = α and p_Q(2) = 1−α, and define p_{UX_1X_2|Q}(u,x_1,x_2|1) = p^(1)(u,x_1,x_2) and p_{UX_1X_2|Q}(u,x_1,x_2|2) = p^(2)(u,x_1,x_2). Then we have the following chain of inequalities:

f_ℓ(C⋆) < α f_ℓ(C^(1)) + (1−α) f_ℓ(C^(2))
        ≤ min{ 2C⋆ − I(X_1;X_2|UQ),
               C⋆ + I(X_1;Y|X_2UQ),
               C⋆ + I(X_2;Y|X_1UQ),
               I(X_1X_2;Y|Q),
               (1/2)(2C⋆ + I(X_1X_2;Y|UQ) − I(X_1;X_2|UQ)) }
        ≤ min{ 2C⋆ − I(X_1;X_2|UQ),
               C⋆ + I(X_1;Y|X_2UQ),
               C⋆ + I(X_2;Y|X_1UQ),
               I(X_1X_2;Y),
               (1/2)(2C⋆ + I(X_1X_2;Y|UQ) − I(X_1;X_2|UQ)) }
   (a) ≤ f_ℓ(C⋆),

where the second inequality uses I(X_1X_2;Y|Q) ≤ I(X_1X_2;Y), which holds because Q − (X_1,X_2) − Y forms a Markov chain. Step (a) follows by renaming (U,Q) as U and comparing the left hand side with the lower bound characterization of Theorem 2. So we have a contradiction, and the proposition is proved.
V. THE GAUSSIAN MAC

The output of the Gaussian MAC is given by

Y = X_1 + X_2 + Z

where Z ∼ 𝒩(0,1), and the transmitters have average power constraints P_1, P_2; i.e., (1/n) Σ_{i=1}^n E(X_{1,i}²) ≤ P_1 and (1/n) Σ_{i=1}^n E(X_{2,i}²) ≤ P_2.

We specialize Theorem 1 to obtain an upper bound on the achievable rate R. To simplify the bound, pick U to be a noisy version of Y in the form of U = Y + Z_N, where Z_N is a Gaussian noise with zero mean and variance N (to be optimized later). Inequalities (10)-(14) are upper bounded as follows using maximum entropy lemmas, where ρ denotes the correlation coefficient of (X_1, X_2):

R ≤ C_1 + C_2                                           (24)
R ≤ C_2 + (1/2) log(1 + P_1(1−ρ²))                      (25)
R ≤ C_1 + (1/2) log(1 + P_2(1−ρ²))                      (26)
R ≤ (1/2) log(1 + P_1 + P_2 + 2ρ√(P_1P_2))              (27)
2R ≤(a) C_1 + C_2 + (1/2) log(1 + P_1 + P_2 + 2ρ√(P_1P_2))
     + (1/2) log( (1+N+P_1(1−ρ²))(1+N+P_2(1−ρ²)) / ((1+N)(1+N+P_1+P_2+2ρ√(P_1P_2))) ).   (28)

To obtain inequality (a) above, write the RHS of (14) as

C_1 + C_2 + h(Y|U) − h(YU|X_1X_2) + h(U|X_1) + h(U|X_2) − h(U|X_1X_2).

The negative terms are easy to calculate because of the Gaussian nature of the channel and the choice of U. The positive terms are bounded from above using the conditional version of the maximum entropy lemma [8]. It remains to solve a max-min problem (max over ρ and min over N). So the rate R is achievable only if there exists some ρ > 0 for which inequalities (24)-(28) hold for every N > 0.
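This max-min evaluation can be sketched numerically. The following coarse grid search is an illustration under stated assumptions (logs base 2, ρ on a grid in [0,1), N on a grid in (0,2]); it is not the closed-form choice of N used in the paper.

```python
import math

def gaussian_ub(C1, C2, P1, P2, rho_steps=200, n_steps=200):
    """Grid-search sketch of the upper bound: max over rho of the min over N
    of constraints (24)-(28). Logs are base 2 (rates in bits)."""
    best = 0.0
    for i in range(rho_steps):
        rho = i / rho_steps                      # rho in [0, 1)
        s = 2 * rho * math.sqrt(P1 * P2)
        b24 = C1 + C2
        b25 = C2 + 0.5 * math.log2(1 + P1 * (1 - rho ** 2))
        b26 = C1 + 0.5 * math.log2(1 + P2 * (1 - rho ** 2))
        b27 = 0.5 * math.log2(1 + P1 + P2 + s)
        b28 = float("inf")                       # (28) must hold for every N > 0
        for j in range(1, n_steps + 1):
            N = 0.01 * j                         # N in (0, 2]
            num = (1 + N + P1 * (1 - rho ** 2)) * (1 + N + P2 * (1 - rho ** 2))
            den = (1 + N) * (1 + N + P1 + P2 + s)
            b28 = min(b28, 0.5 * (C1 + C2 + b27 + 0.5 * math.log2(num / den)))
        best = max(best, min(b24, b25, b26, b27, b28))
    return best
```

For C_1 = C_2 = 2 and P_1 = P_2 = 1 the grid value approaches (1/2) log(5) ≈ 1.161, and for C_1 = C_2 = 0.3 it equals 2C = 0.6, consistent with the capacity-matching regimes stated for the symmetric network later in this section.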
We choose N to be (see [4, eqn. (21)])

N = ( √(P_1P_2)(1/ρ − ρ) − 1 )⁺.   (29)

Let us first motivate this choice. It is easy to see that inequality (28) is simply

I(X_1X_2;Y) − I(X_1;X_2) + I(X_1;X_2|U)   (30)

evaluated for the joint Gaussian distribution p(x_1,x_2) with covariance matrix

[ P_1          ρ√(P_1P_2) ]
[ ρ√(P_1P_2)   P_2        ].   (31)

The choice (29) makes U satisfy the Markov chain X_1 − U − X_2 for the regime where

√(P_1P_2)(1/ρ − ρ) − 1 ≥ 0   (32)

and thus minimizes the RHS of (28). Otherwise, U = Y is chosen, which results in a redundant bound. The upper bound is summarized in Corollary 1.

Corollary 1. Rate R is achievable only if there exists ρ ≥ 0 such that, for ρ ≤ √(1 + 1/(4P_1P_2)) − √(1/(4P_1P_2)),

R ≤ C_1 + C_2
R ≤ C_2 + (1/2) log(1 + P_1(1−ρ²))
R ≤ C_1 + (1/2) log(1 + P_2(1−ρ²))
R ≤ (1/2) log(1 + P_1 + P_2 + 2ρ√(P_1P_2))
2R ≤ C_1 + C_2 + (1/2) log(1 + P_1 + P_2 + 2ρ√(P_1P_2)) − (1/2) log(1/(1−ρ²)),

and, for √(1 + 1/(4P_1P_2)) − √(1/(4P_1P_2)) ≤ ρ ≤ 1,

R ≤ C_1 + C_2
R ≤ C_2 + (1/2) log(1 + P_1(1−ρ²))
R ≤ C_1 + (1/2) log(1 + P_2(1−ρ²))
R ≤ (1/2) log(1 + P_1 + P_2 + 2ρ√(P_1P_2)).

For a lower bound, we use Theorem 2 and choose (U,X_1,X_2) to be jointly Gaussian with mean 0 and covariance matrix K_{UX_1X_2}. Fig. 2 shows the upper and lower bounds as a function of C for a symmetric network with C_1 = C_2 = C and P_1 = P_2 = 1. The dash-dotted curve in Fig. 2 shows the rates achievable using the scheme of Section IV with a joint Gaussian distribution. It is interesting to see that the obtained lower bound is not concave in C. This does not contradict Proposition 1 because Gaussian distributions are sub-optimal. The improved dashed curve shows rates that are achievable using a mixture of two Gaussian distributions.

[Fig. 2: Upper and lower bounds on R as functions of C for the Gaussian MAC with P_1 = P_2 = 1. Curves: cut-set bound, new upper bound, lower bound with a Gaussian distribution, and lower bound with a mixture of two Gaussian distributions.]

For symmetric diamond networks where C_1 = C_2 = C and P_1 = P_2 = P, we specify two regimes of C for which the lower and upper bounds meet and thus characterize the capacity. This is summarized in Theorem 3. See [6] for the proof.

Theorem 3. For a symmetric Gaussian diamond network, the upper bound in Theorem 1 meets the lower bound if C ≤ (1/4) log(1+2P), C ≥ (1/2) log(1+4P), or

(1/4) log( (1+2P(1+ρ^(1))) / (1−ρ^(1)²) ) ≤ C ≤ (1/4) log( (1+2P(1+ρ^(2))) / (1−ρ^(2)²) )   (33)

where

ρ^(1) = ( −(1+2P) + √(12P² + (1+2P)²) ) / (6P)   (34)
ρ^(2) = √(1 + 1/(4P²)) − 1/(2P).                 (35)

For example, the upper and lower bounds match in Fig. 2 (where P = 1) for C ≤ 0.3962, 0.4807 ≤ C ≤ 0.6942, and C ≥ 1.1610.

Remark 4. There is a close connection between the bound of Corollary 1 and that of [4]. The upper bound in [4] is tighter than Corollary 1 in certain regimes of operation. For example, in Fig. 2, the upper bound of [4] matches the lower bound for all C ≤ 0.6942. The reason seems to be that the methods used in [4] provide a better single-letterization of the upper bound for the Gaussian MAC.
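The constants quoted above for P = 1 follow directly from (33)-(35); a quick numeric check (assumption: logs base 2, matching rates in bits):

```python
import math

def theorem3_thresholds(P):
    """The four capacity-matching thresholds of Theorem 3 (logs base 2)."""
    rho1 = (-(1 + 2 * P) + math.sqrt(12 * P ** 2 + (1 + 2 * P) ** 2)) / (6 * P)
    rho2 = math.sqrt(1 + 1 / (4 * P ** 2)) - 1 / (2 * P)
    low = 0.25 * math.log2(1 + 2 * P)    # bounds meet for C below this
    c1 = 0.25 * math.log2((1 + 2 * P * (1 + rho1)) / (1 - rho1 ** 2))
    c2 = 0.25 * math.log2((1 + 2 * P * (1 + rho2)) / (1 - rho2 ** 2))
    high = 0.5 * math.log2(1 + 4 * P)    # bounds meet for C above this
    return low, c1, c2, high

print(theorem3_thresholds(1))  # approximately (0.3962, 0.4807, 0.6943, 1.1610)
```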
VI. THE BINARY ADDER CHANNEL

Consider the binary adder channel defined by 𝒳_1 = {0,1}, 𝒳_2 = {0,1}, 𝒴 = {0,1,2}, and Y = X_1 + X_2. The best known upper bound for this channel is the cut-set bound. We specialize Theorem 1 to derive a new upper bound on the achievable rate. For simplicity, we assume C_1 = C_2 = C.

Define p(u|x_1,x_2,y) to be a symmetric channel as shown in Fig. 3, with parameter α to be optimized.

[Fig. 3: The auxiliary channel p(u|x_1,x_2,y): a symmetric mapping from Y ∈ {0,1,2} to a binary U with parameter α.]

From Theorem 1, the rate R is achievable only if for some pmf p(x_1,x_2) inequalities (10)-(14) hold for the aforementioned choice of U and for every α; i.e., we have to solve a max-min problem (max over p(x_1,x_2), min over α).

In the rest of this section we manipulate the above bound to formulate a simplified upper bound. First, loosen the upper bound by looking at the min-max problem (rather than the max-min problem). Fix α. For a fixed channel p(u|x_1,x_2,y), the upper bound is concave in p(x_1,x_2) (see Remark 2). The concavity, together with the symmetry of the problem and the auxiliary channel, imply the following lemma.

Lemma 1. For a fixed α, the optimizing pmf is that of a doubly symmetric binary source.

Using Lemma 1, p(x_1,x_2) may be assumed to be a doubly symmetric binary source with cross-over probability p. The bounds in (10)-(14) reduce to

R ≤ 2C
R ≤ C + h_2(p)
R ≤ h_2(p) + 1 − p                                 (36)
2R ≤(a) 2C + 2h_2(p) − p + I(X_1;X_2|U).

The inequality in (a) follows from (15) and the fact that (X_1X_2) − Y − U forms a Markov chain.

To obtain the best bound, we choose U such that X_1 − U − X_2 forms a Markov chain; i.e., the optimal α satisfies

α(1−α) = ( p⋆ / (2(1−p⋆)) )²   (37)

where p⋆ is the p which maximizes

min{ 2C, C+h_2(p), h_2(p)+1−p, 2C+2h_2(p)−p }.   (38)

Since we have p⋆ ≤ 1/2, the choice (37) is valid and we therefore obtain Corollary 2.

Corollary 2. Rate R is achievable only if there exists some p, 0 ≤ p ≤ 1/2, such that

R ≤ 2C
R ≤ C + h_2(p)
R ≤ h_2(p) + 1 − p                                 (39)
2R ≤ 2C + 2h_2(p) − p.

Fig. 4 plots this bound (Corollary 2) and compares it with the cut-set bound and the lower bound of Theorem 2 for different values of C. It turns out that the lower and upper bounds meet for C ≤ .75 and C ≥ .7929. See [6] for the proof.

[Fig. 4: Lower and upper bounds on R as functions of C for the symmetric binary adder MAC. Curves: cut-set bound, new upper bound, lower bound.]
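Corollary 2 is a one-dimensional optimization over p and is easy to evaluate on a grid. The sketch below (assumptions: simple grid search, logs base 2) also checks two sanity values: at C = 0.5 the bound equals 2C = 1, and for large C it saturates at max_p (h_2(p) + 1 − p) = log 3, attained at p = 1/3.

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def corollary2_ub(C, steps=5000):
    """Upper bound of Corollary 2: max over p in [0, 1/2] of the min of the
    four constraints in (39) (grid-search sketch)."""
    best = 0.0
    for i in range(steps + 1):
        p = 0.5 * i / steps
        b = min(2 * C,
                C + h2(p),
                h2(p) + 1 - p,
                C + h2(p) - p / 2)   # from 2R <= 2C + 2 h2(p) - p
        best = max(best, b)
    return best

print(corollary2_ub(0.5))  # 1.0: here the bit-pipe sum-rate 2C is the bottleneck
```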
VII. ACKNOWLEDGMENT

The work of S. Saeedi Bidokhti was supported by the Swiss National Science Foundation fellowship no. 146617. The work of G. Kramer was supported by an Alexander von Humboldt Professorship endowed by the German Ministry of Education and Research.

REFERENCES

[1] B. E. Schein, Distributed coordination in network information theory. PhD Dissertation, MIT, 2001.
[2] A. Avestimehr, S. Diggavi, and D. Tse, "Wireless network information flow: A deterministic approach," IEEE Trans. Inf. Theory, vol. 57, no. 4, pp. 1872-1905, April 2011.
[3] D. Traskov and G. Kramer, "Reliable communication in networks with multi-access interference," in Proc. Inf. Theory Workshop, 2007.
[4] W. Kang and N. Liu, "The Gaussian multiple access diamond channel," in Proc. Int. Symp. Inf. Theory, 2011.
[5] L. Ozarow, "On a source-coding problem with two channels and three receivers," Bell System Technical Journal, vol. 59, no. 10, pp. 1909-1921, 1980.
[6] S. Saeedi Bidokhti and G. Kramer, "Capacity bounds for a class of diamond networks," Jan. 2014. [Online]. Available: https://mediatum.ub.tum.de/doc/1189765/1189765.pdf
[7] A. El Gamal and Y. H. Kim, Network Information Theory. Cambridge University Press, 2011.
[8] J. Thomas, "Feedback can at most double Gaussian multiple access channel capacity (corresp.)," IEEE Trans. Inf. Theory, vol. 33, no. 5, pp. 711-716, 1987.
