On Secure Network Coding for Two Unicast Sessions

Gaurav Kumar Agarwal, Martina Cardone, Christina Fragouli
Department of Electrical Engineering, University of California Los Angeles, Los Angeles, CA 90095, USA
Email: {gauravagarwal, martina.cardone, christina.fragouli}@ucla.edu

Abstract—This paper characterizes the secret message capacity of three networks where two unicast sessions share some of the communication resources. Each network consists of erasure channels with state feedback. A passive eavesdropper is assumed to wiretap any one of the links. The capacity-achieving schemes, as well as the outer bounds, are formulated as Linear Programs. The proposed strategies are numerically evaluated and shown to achieve higher rates (or sum-rate) with respect to alternative strategies, where the network resources are time-shared among the two sessions. These results represent a step towards the secure capacity characterization for general networks. They also show that, even in configurations for which network coding does not offer benefits in absence of security, it can become beneficial under security constraints.

[Fig. 1: The Y-network in (a), the RY-network in (b) and the X-network in (c).]

I. INTRODUCTION

Secure network coding has well established the benefits of network coding for secure multicast transmission. We are here interested in a different type of traffic where we have two independent unicast sessions, and we seek to answer the following question: what are the benefits that 'network coding' type operations offer?

We consider the three networks in Fig. 1, namely the Y-network, the Reverse Y (RY)-network and the X-network. In the Y-network two sources (able to generate randomness at infinite rate) wish to communicate two independent messages to a common destination, via an intermediate node (unable to generate randomness). In the RY-network one source (able to generate randomness at finite rate) aims to communicate two independent messages to two different receivers, through an intermediate node (unable to generate randomness). Finally, in the X-network two sources (able to generate randomness at infinite rate) seek to communicate two independent messages to two different receivers, via two intermediate nodes (unable to generate randomness). In our network model, the transmissions take place over orthogonal erasure channels; although this is a simplistic assumption, it captures some intrinsic properties of the wireless medium (such as its lossy nature). A passive eavesdropper wiretaps any one of the communication links, but the information about which one is not available¹. Public feedback, which in [1] was shown to increase the secrecy capacity, is used, i.e., each of the legitimate nodes involved in the communication sends an acknowledgment after each transmission; this is received by all nodes in the network as well as by the eavesdropper.

We characterize the secret message capacity for the three networks in Fig. 1. The capacity-achieving schemes involve two phases: the key-sharing phase and the message-transmission phase. In particular, first a secret key is created between two consecutive legitimate nodes (link-by-link key generation); then, these keys are used to encrypt and transmit the message as in the one-time pad [2]. For each of the analyzed networks, the capacity is given as the solution of a Linear Program (LP). We also show, through numerical simulations, the benefits of our schemes compared to two alternative strategies where (a) the two sessions are time-shared and (b) the shared link is time-shared among the two sessions. We prove that 'network coding' type operations are beneficial for the three networks in Fig. 1. This is because random packets transmitted by different sources can be mixed to create the key to be used on the shared link. Similarly, the same set of random packets can be used to generate secret keys for different destinations. This result is surprising since, in absence of security considerations, network coding is not beneficial for the networks in Fig. 1.

The work of the authors was partially funded by NSF under award 1321120. G. K. Agarwal is also supported by the Guru Krupa Fellowship.
¹This assumption is equivalent to having one eavesdropper on every link, but these eavesdroppers do not cooperate among themselves.
Related Work. The characterization of the secret capacity for wireless networks is a long-standing open problem. Relevant work includes [3], where the author derived the secret message capacity of the wiretap channel without feedback, and [4], where it was shown that secure network coding is optimal for wireless networks with error-free and unit-capacity channels. The work presented in this paper follows a line of research which was pioneered by the authors in [5], where the secret capacity of the point-to-point channel was characterized and expressed as the solution of an LP. In particular, the capacity-achieving scheme proposed by the authors consists of two phases, namely the key-sharing and the message-transmission phases. The same authors extended this approach to characterize the secret capacity of more complex networks, for example: (i) the parallel channel network, where a source seeks to securely communicate a message to a destination through a number of independent parallel channels [6], and (ii) the V-network, where two sources, which share a common randomness, aim to convey the same message to a common destination [6]. By using a similar approach, in [7] the authors derived the secret message capacity of the line network when the eavesdropper wiretaps one channel as well as all the channels. Recently, in [8] the authors considered a general network and designed two polynomial-time secure transmission schemes. Although the two schemes were not proven to be capacity-achieving, the work in [8] represents an attempt to characterize the secrecy capacity of an arbitrary network.

Contrary to all these works, where a single unicast session was considered, here we study the case where two unicast sessions take place simultaneously and share part of the resources.

Paper Organization. Section II describes the three networks of interest, namely the Y-, the RY- and the X-network. Section III presents our main results, i.e., it characterizes the secret message capacity and it presents comparisons with alternative strategies. Finally, Section IV concludes the paper.

II. SYSTEM MODEL AND MAIN RESULTS

Notation. With [n1 : n2] we denote the set of integers from n1 to n2 ≥ n1. For an index set A we let Y_A = {Y_j : j ∈ A}; A\B is the set of elements that belong to A but not to B. Y^i is a vector of length i with components (Y_1, ..., Y_i).

We consider the three networks in Fig. 1, namely (i) the Y-network in Fig. 1(a), where two sources S1 and S2 aim to communicate two independent messages W1 and W2 to a common destination D; (ii) the RY-network in Fig. 1(b), where one source S has a message W1 for destination D1 and a message W2 for destination D2; both in the Y- and in the RY-network the communication occurs through an intermediate/relay node M; finally, we study (iii) the X-network in Fig. 1(c), where two sources S1 and S2 seek to communicate two independent messages W1 and W2 to two different destinations D1 and D2 via two intermediate nodes M1 and M2.

Each communication link is an independent erasure channel with one legitimate receiver and one possible passive eavesdropper. In each network, the eavesdropper wiretaps one channel, but which one exactly is not known. The erasure probabilities are denoted as δ_j and δ_{jE}, with j ∈ [1:3] for the Y- and the RY-network and j ∈ [1:5] for the X-network². The j-th channel input at time instant i, with i ∈ [1:n] (where n is the total number of transmissions), is denoted as X_{ji} ∈ F_q^L and referred to as a packet. Without loss of generality, in what follows we let L log(q) = 1, i.e., we express the rate in terms of packets. Similarly, Y_{ji} and Z_{ji} denote the outputs at the legitimate receiver and at the passive eavesdropper, respectively, on channel j at time i.

For the three scenarios in Fig. 1 we assume public state feedback, i.e., each legitimate node sends an acknowledgment of whether the packet transmission was successful, which is received by all other nodes as well as by the eavesdropper. We denote with F_{ji} the feedback of the transmission on channel j at time i. For the X-network in Fig. 1(c) we have³

Pr{Y_{[1:5]i}, Z_{[1:5]i} | X_{[1:5]i}} = ∏_{j=1}^{5} Pr{Y_{ji} | X_{ji}} Pr{Z_{ji} | X_{ji}},
Pr{Y_{ji} | X_{ji}} = 1−δ_j if Y_{ji} = X_{ji}, and δ_j if Y_{ji} = ⊥,
Pr{Z_{ji} | X_{ji}} = 1−δ_{jE} if Z_{ji} = X_{ji}, and δ_{jE} if Z_{ji} = ⊥,

where with ⊥ we denote the erasure symbol.

²Index j enumerates the channels, e.g., for the RY-network in Fig. 1(b) the channels from S to M, from M to D1 and from M to D2 are referred to as channels 3, 1 and 2, respectively.
³Although definitions are given only for the X-network, they straightforwardly extend to the Y- and the RY-network.

We assume that the intermediate nodes are unable to generate private randomness⁴. For the Y- and the X-network the sources S1 and S2 can generate private randomness Θ1 and Θ2 at infinite rate. Differently, for the RY-network the source S can generate private randomness at a finite rate D0. The messages W1 and W2 consist of N1 and N2 packets, respectively, and have to be reliably and securely decoded at the legitimate receivers.

⁴The results here presented readily extend to the case when intermediate nodes can generate private randomness at finite rate. If intermediate nodes can generate randomness at infinite rate, a naive link-by-link time-sharing strategy would be capacity-achieving.
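To make the channel model concrete, the following minimal sketch simulates one erasure link with public state feedback, under the assumptions that packets are binary vectors of length L and that erasures at the legitimate receiver and at the eavesdropper are independent; the function names and parameter values are illustrative and not taken from the paper.

```python
import numpy as np

def transmit(packet, delta, delta_E, rng):
    """Send one packet over an erasure link with public state feedback.

    Returns (y, z, ack): y is the legitimate receiver's observation,
    z is the eavesdropper's observation (None models the erasure symbol),
    and ack is the public ACK/NACK heard by every node and the eavesdropper.
    """
    y = packet if rng.random() > delta else None      # erased w.p. delta
    z = packet if rng.random() > delta_E else None    # erased w.p. delta_E
    ack = y is not None
    return y, z, ack

# Estimate the fraction of transmissions that reach the legitimate receiver
# but not the eavesdropper; these are the packets usable for key generation.
rng = np.random.default_rng(0)
delta, delta_E, L, n = 0.2, 0.05, 64, 10_000
useful = 0
for _ in range(n):
    x = rng.integers(0, 2, size=L, dtype=np.uint8)    # a random packet
    _, z, ack = transmit(x, delta, delta_E, rng)
    if ack and z is None:
        useful += 1
print(useful / n, "vs. (1 - delta) * delta_E =", (1 - delta) * delta_E)
```

The empirical fraction concentrates around (1−δ)δ_E, which matches the (1−δ_j)δ_{jE} factors that drive the key-generation terms in the LPs of Section III.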
Definition 1. For the X-network in Fig. 1(c), a secure coding scheme with parameters (N1, N2, n, ε) consists of 5 encoding functions f_{ji}, j ∈ [1:5], for each i ∈ [1:n], such that

X_{ji} = f_{ji}(W_j, Θ_j, F_A^{i−1}) if j ∈ [1:2],
X_{ji} = f_{ji}(Y_1^{i−1}, Y_2^{i−1}, F_A^{i−1}) if j = 3,
X_{ji} = f_{ji}(Y_3^{i−1}, F_A^{i−1}) if j ∈ [4:5],

where A = [1:5], and of 2 decoding functions φ_j such that D_j, j ∈ [1:2], can decode the message W_j with high probability, i.e., Pr{φ_j(Y_{j+3}^n) ≠ W_j} < ε. Moreover, the messages W1 and W2 have to remain secret from the eavesdropper, i.e., I(W1, W2; Z_k^n, F_A^n) < ε, ∀k ∈ [1:5]. A non-negative rate pair (R1, R2) is securely achievable if, for any ε > 0, there exists a secure coding scheme with parameters (N1, N2, n, ε) such that R_j < N_j/n − ε, ∀j ∈ [1:2].

The main contribution of this paper is the characterization of the secret message capacity region (the set of largest securely achievable rate pairs (R1, R2)) for the three networks in Fig. 1, as described in the next three theorems.
Theorem 1. The secret message capacity region of the Y-network in Fig. 1(a), with unlimited private randomness at the sources S1 and S2 and no private randomness at the relay node M, is the feasible region of the following LP:

max g(R_1, R_2)
s.t. k_j ≥ R_j (1−δ_{jE}) / (1−δ_j δ_{jE}), j ∈ [1:2],
     k_3 ≥ (R_1+R_2)(1−δ_{3E}) / (1−δ_3 δ_{3E}),
     R_j/(1−δ_j) + k_j/((1−δ_j)δ_{jE}) ≤ 1, j ∈ [1:2],
     (R_1+R_2)/(1−δ_3) + k_3/((1−δ_3)δ_{3E}) ≤ 1,
     k_3 ≤ (k_1/δ_{1E} + k_2/δ_{2E}) (1−δ_3)δ_{3E} / (1−δ_3δ_{3E}),
     R_i, k_j ≥ 0, i ∈ [1:2], j ∈ [1:3],

where g(R_1, R_2) can be any linear function of (R_1, R_2).

Theorem 2. The secret message capacity region of the RY-network in Fig. 1(b), with limited private randomness of rate D0 at the source S and no private randomness at the relay node M, is the feasible region of the following LP:

max g(R_1, R_2)
s.t. k_3 + e (1−δ_3)δ_{3E} / (1−δ_3δ_{3E}) ≥ (R_1+R_2)(1−δ_{3E}) / (1−δ_3δ_{3E}),
     k_j ≥ R_j (1−δ_{jE}) / (1−δ_jδ_{jE}), j ∈ [1:2],
     (R_1+R_2)/(1−δ_3) + k_3/((1−δ_3)δ_{3E}) + e/(1−δ_3) ≤ 1,
     R_j/(1−δ_j) + k_j/((1−δ_j)δ_{jE}) ≤ 1, j ∈ [1:2],
     k_3 ≤ (D_0 − e)(1−δ_3)δ_{3E} / (1−δ_3δ_{3E}),
     k_j ≤ (e + k_3/δ_{3E})(1−δ_j)δ_{jE} / (1−δ_jδ_{jE}), j ∈ [1:2],
     R_i, e, k_j ≥ 0, i ∈ [1:2], j ∈ [1:3],

where g(R_1, R_2) can be any linear function of (R_1, R_2).

Theorem 3. The secret message capacity region of the X-network in Fig. 1(c), with unlimited private randomness at the sources S1 and S2 and no private randomness at the relay nodes M1 and M2, is the feasible region of the following LP:

max g(R_1, R_2)
s.t. k_j ≥ R_j (1−δ_{jE}) / (1−δ_jδ_{jE}), j ∈ [1:2],
     k_3 + e (1−δ_3)δ_{3E} / (1−δ_3δ_{3E}) ≥ (R_1+R_2)(1−δ_{3E}) / (1−δ_3δ_{3E}),
     k_j ≥ R_{j−3} (1−δ_{jE}) / (1−δ_jδ_{jE}), j ∈ [4:5],
     R_j/(1−δ_j) + k_j/((1−δ_j)δ_{jE}) ≤ 1, j ∈ [1:2],
     R_{j−3}/(1−δ_j) + k_j/((1−δ_j)δ_{jE}) ≤ 1, j ∈ [4:5],
     (R_1+R_2)/(1−δ_3) + k_3/((1−δ_3)δ_{3E}) + e/(1−δ_3) ≤ 1,
     k_3 ≤ (k_1/δ_{1E} + k_2/δ_{2E} − e)(1−δ_3)δ_{3E} / (1−δ_3δ_{3E}),
     k_j ≤ (e + k_3/δ_{3E})(1−δ_j)δ_{jE} / (1−δ_jδ_{jE}), j ∈ [4:5],
     R_i, e, k_j ≥ 0, i ∈ [1:2], j ∈ [1:5],

where g(R_1, R_2) can be any linear function of (R_1, R_2).
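Since each capacity region is specified as an LP, it can be evaluated directly with an off-the-shelf solver. The sketch below maximizes the sum-rate g(R1, R2) = R1 + R2 over the constraints of Theorem 1 using scipy.optimize.linprog; the variable ordering, helper names, and the choice of erasure probabilities (those of Fig. 2(a)) are mine, not the paper's.

```python
from scipy.optimize import linprog

# Channel parameters of Fig. 2(a): (delta_j, delta_jE) per channel.
d  = {1: 0.2, 2: 0.3, 3: 0.25}
dE = {1: 0.05, 2: 0.05, 3: 0.05}

def secure_sum_rate_Y(d, dE):
    """Max R1+R2 over the LP of Theorem 1. Variables: x = [R1, R2, k1, k2, k3]."""
    sec = lambda j: (1 - dE[j]) / (1 - d[j] * dE[j])   # key consumed per message packet
    gen = lambda j: (1 - d[j]) * dE[j]                 # key packets generated per slot
    mix = (1 - d[3]) * dE[3] / (1 - d[3] * dE[3])
    A = [
        [sec(1), 0, -1, 0, 0],                         # k1 >= R1 * sec(1)
        [0, sec(2), 0, -1, 0],                         # k2 >= R2 * sec(2)
        [sec(3), sec(3), 0, 0, -1],                    # k3 >= (R1+R2) * sec(3)
        [1 / (1 - d[1]), 0, 1 / gen(1), 0, 0],         # time constraint, channel 1
        [0, 1 / (1 - d[2]), 0, 1 / gen(2), 0],         # time constraint, channel 2
        [1 / (1 - d[3]), 1 / (1 - d[3]), 0, 0, 1 / gen(3)],  # time constraint, channel 3
        [0, 0, -mix / dE[1], -mix / dE[2], 1],         # relay randomness constraint
    ]
    b = [0, 0, 0, 1, 1, 1, 0]
    res = linprog(c=[-1, -1, 0, 0, 0], A_ub=A, b_ub=b, bounds=[(0, None)] * 5)
    return res.x[:2], -res.fun

rates, sum_rate = secure_sum_rate_Y(d, dE)
print("optimal (R1, R2):", rates, "sum-rate:", sum_rate)
```

Any other linear objective g(R1, R2) is obtained by changing the cost vector, which is how the full regions in Fig. 2 can be traced.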
III. SECURE CAPACITY CHARACTERIZATION

We here describe the secure coding schemes⁵ and the outer bounds, and formulate them as LPs. We then show through numerical evaluations the benefits (in terms of achievable secure rate) of our scheme with respect to two naive strategies: (i) path sharing, i.e., the whole communication resources are time-shared among the two sessions, and (ii) link sharing, i.e., the shared link is time-shared among the two sessions.

⁵Schemes consider the expected number of transmissions needed. Similar to [5], it can be shown that the number of transmissions needed concentrates exponentially fast around the average, enabling the expected rate values to be achieved.

A. Achievability

Our secure coding schemes for the networks in Fig. 1 consist of two phases, namely the key-sharing and the message-transmission phases. In what follows we describe these two phases and explain how they relate to the LPs in Theorems 1-3.

1) The Y-network: On channel j ∈ [1:2], source S_j sends k_j/((1−δ_j)δ_{jE}) independent random packets generated from her private randomness (assumed to be infinite). Out of these, a total of k_j packets are received by the relay node M, but not by the possible eavesdropper. We do not know exactly which packets, out of the k_j/δ_{jE} ones received by M, are also received by the possible eavesdropper. However, out of the packets received by M, we can always create k_j independent packets, which are also independent of the packets received by the possible eavesdropper. We do this by multiplying the k_j/δ_{jE} packets of M by an MDS code matrix of dimension [k_j/δ_{jE}] × [k_j]. Thus, without loss of generality, we can assume that we always know which packets are received by the legitimate node and not by the possible eavesdropper, if we know their amount [7]. All these packets are used to generate a secret key on channel j ∈ [1:2] between nodes S_j and M (key-sharing phase). These packets are then expanded by means of an MDS code matrix of size [k_j] × [R_j] and used as in the one-time pad [2] to encrypt R_j message packets, which are sent using the ARQ protocol (message-transmission phase).
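The message-transmission phase on a single link amounts to XORing each message packet with a key packet and retransmitting it until it is acknowledged, so the eavesdropper only ever observes encrypted packets. The sketch below illustrates this under the simplifying assumption of one fresh key packet per message packet (the paper instead expands k_j key packets into R_j by an MDS code); all names are illustrative.

```python
import numpy as np

def one_time_pad_arq(messages, keys, delta, rng):
    """Encrypt each message packet with its own key packet (XOR over GF(2))
    and send it with ARQ until the legitimate receiver gets it."""
    assert len(keys) >= len(messages), "need one key packet per message packet"
    received, slots = [], 0
    for w, k in zip(messages, keys):
        c = np.bitwise_xor(w, k)                        # one-time pad encryption
        while True:                                     # ARQ: repeat until ACK
            slots += 1
            if rng.random() > delta:                    # packet not erased
                received.append(np.bitwise_xor(c, k))   # receiver decrypts with the shared key
                break
    return received, slots

rng = np.random.default_rng(1)
L = 64
msgs = [rng.integers(0, 2, L, dtype=np.uint8) for _ in range(100)]
keys = [rng.integers(0, 2, L, dtype=np.uint8) for _ in range(100)]
out, slots = one_time_pad_arq(msgs, keys, delta=0.2, rng=rng)
assert all(np.array_equal(a, b) for a, b in zip(out, msgs))
print("average transmissions per message packet:", slots / len(msgs))
```

Sending R_j encrypted packets with ARQ occupies about R_j/(1−δ_j) time slots, which is the first term of the time constraints in Theorems 1-3.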
At the intermediate node M (assumed to be unable to generate any randomness) there are k_1/δ_{1E} + k_2/δ_{2E} available random packets (received from S1 and S2 on channels 1 and 2, respectively). By means of an MDS code these random packets are first expanded by a factor 1/(1−δ_3δ_{3E}) and then only k_3/((1−δ_3)δ_{3E}) of them are sent to node D. With this, the number of random packets received by D, but not by the possible eavesdropper, is k_3. These random packets are used to generate a secret key on channel 3 between nodes M and D (key-sharing phase). Similar to channels 1 and 2, also for channel 3 we expand these k_3 packets by means of an MDS code matrix of dimension [k_3] × [R_1+R_2] and then we use them to encrypt R_1+R_2 message packets as in the one-time pad [2]. These message packets are finally transmitted by using the ARQ protocol (message-transmission phase).

The scheme described above is equivalent to the LP in Theorem 1, where the variables R_i and k_j, with i ∈ [1:2] and j ∈ [1:3], represent the message rate for the pair S_i-D and the key created on channel j, respectively. In particular: (i) the first and second inequalities are security constraints, i.e., they ensure that the key that is generated is greater than the key which is consumed⁶; (ii) the third and fourth inequalities are time constraints, i.e., the length of the key-generation phase plus the length of the message-sending phase cannot exceed the total available time; (iii) the fifth inequality follows since node M has zero randomness, and so the key that it can create is constrained by the randomness received from S1 and S2.

⁶Since the encrypted packets are sent by using the ARQ protocol, the key consumed on channel j ∈ [1:3] (i.e., the number of packets received by the possible eavesdropper on that channel) is R_j (1−δ_{jE})/(1−δ_jδ_{jE}), with R_3 = R_1+R_2.

2) The RY-network: In [7], the authors showed that, in a line network where a node has limited randomness and the next node can generate randomness based on the one received from the previous node(s), a combination of ARQ and MDS coding is needed for optimally generating the key. Following this, on channel 3 of the RY-network, the source S transmits e independent random packets using the ARQ protocol. These packets are all received by the relay node M, while the possible eavesdropper receives a fraction (1−δ_{3E})/(1−δ_3δ_{3E}) of them. By means of an MDS code, the remaining (D_0 − e) random packets at the source S are expanded by a factor 1/(1−δ_3δ_{3E}) and then only k_3/((1−δ_3)δ_{3E}) of them are sent to node M. Thus, the total number of packets received by the intermediate node M, but not by the possible eavesdropper, is k_3 + e(1−δ_3)δ_{3E}/(1−δ_3δ_{3E}). Similar to the Y-network, from the e + k_3/δ_{3E} independent random packets received by M, we generate a secret key of k_3 + e(1−δ_3)δ_{3E}/(1−δ_3δ_{3E}) packets on channel 3 between nodes S and M (key-sharing phase). These packets are then expanded by means of an MDS code matrix of size [k_3 + e(1−δ_3)δ_{3E}/(1−δ_3δ_{3E})] × [R_1+R_2] and used as in the one-time pad [2] to encrypt R_1+R_2 message packets, which are sent using the ARQ protocol (message-transmission phase).

At the relay node M (assumed unable to generate any randomness) there are k_3/δ_{3E} + e available random packets (received from S on channel 3). By means of an MDS code, for each j ∈ [1:2], these random packets are first expanded by a factor 1/(1−δ_jδ_{jE}) and then only k_j/((1−δ_j)δ_{jE}) of them are sent to node D_j on channel j. With this, the number of random packets received by D_j, but not by the possible eavesdropper, is k_j. These packets are used to generate a secret key on channel j between nodes M and D_j (key-sharing phase). These k_j packets are then expanded by means of an MDS code matrix of dimension [k_j] × [R_j] and used to encrypt R_j message packets as in the one-time pad [2]. These message packets are finally transmitted using ARQ (message-transmission phase).

Similar to the Y-network, also for the RY-network the secure transmission strategy above is equivalent to the LP in Theorem 2. The variables R_i, k_j and e, with i ∈ [1:2] and j ∈ [1:3], represent the message rate for the pair S-D_i, the key created on channel j, and the extra randomness (in addition to the one sent for generating the key k_3) sent from the source S, respectively. In particular: (i) the first and second inequalities are security constraints; (ii) the third and the fourth inequalities are time constraints; (iii) the fifth (respectively, sixth) inequality is due to the fact that the key that node S (respectively, M) can create is constrained by its limited randomness (respectively, the randomness that it gets from S).
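In all three schemes the relay pools the random packets it has received and expands them with an MDS code before forwarding a subset on the next link; the packets that the eavesdropper misses on that link then form the key. The sketch below illustrates the pooling and expansion step for the Y-network relay. As a stand-in for the MDS code matrices over F_q prescribed by the paper, it uses random binary combining matrices over GF(2); dimensions and names are illustrative only.

```python
import numpy as np

def expand_gf2(packets, n_out, rng):
    """Map a list of length-L packets to n_out random linear combinations
    over GF(2). Stand-in for the MDS expansion used in the paper."""
    stack = np.array(packets, dtype=np.uint8)                      # shape (n_in, L)
    combine = rng.integers(0, 2, size=(n_out, len(packets)), dtype=np.uint8)
    return (combine @ stack) % 2                                   # shape (n_out, L)

rng = np.random.default_rng(2)
L = 64
# Random packets that reached relay M from S1 and S2 (about k1/d1E and k2/d2E of them).
from_S1 = [rng.integers(0, 2, L, dtype=np.uint8) for _ in range(40)]
from_S2 = [rng.integers(0, 2, L, dtype=np.uint8) for _ in range(30)]

# Mix the two pools and expand; a subset is then sent on the shared link, and the
# packets the eavesdropper misses there become the key shared between M and D.
pool = from_S1 + from_S2
channel3_payload = expand_gf2(pool, n_out=50, rng=rng)
```

Because packets from both sources contribute to the same pool, fewer random transmissions per source are needed to sustain the key on the shared link, which is the gain captured by the fifth constraint of Theorem 1.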
3) The X-network: The secure transmission strategy here proposed for the X-network in Fig. 1(c) consists of a mix of the two schemes designed for the Y- and the RY-network. In particular, on channels 1 and 2 we use exactly the same operations used on channels 1 and 2 of the Y-network, while on channels 3-5 the same strategy proposed for the RY-network applies, with the small difference that the available finite randomness at node S of the RY-network (node M1 in the X-network) is now replaced by D_0 = k_1/δ_{1E} + k_2/δ_{2E}.

B. Converse

We here highlight the main steps to derive an outer bound on the secure capacity for the networks in Fig. 1 and to formulate it as an LP; the complete proof can be found in the Appendix.

Step 1. We prove and make use of the following lemma (see Appendix A for the details), which is a generalization of those in [7] for the line network.

Lemma 4. For any j ∈ A, we have

(1−δ_j)δ_{jE} ∑_{i=1}^n H(X_{ji} | Y_j^{i−1}, Z_j^{i−1}, F_j^{i−1}, W_B, F_{A\j}^n) − (1−δ_{jE}) ∑_{i=1}^n I(Y_j^{i−1}, F_j^{i−1}; X_{ji} | Z_j^{i−1}, F_j^{i−1}, W_B, F_{A\j}^n) = H(Y_j^n | W_B, F_A^n, Z_j^n),   (1a)

(1−δ_j) ∑_{i=1}^n H(X_{ji} | Y_j^{i−1}, F_j^{i−1}, W_B, F_{A\j}^n) = H(Y_j^n | W_B, F_A^n),   (1b)

∑_{i=1}^n I(W_B; X_{ji} | F_{A\j}^n, Z_j^{i−1}, F_j^{i−1}) < ε / (1−δ_{jE}),   (1c)

∑_{i=1}^n I(W_B; X_{ji} | Y_j^{i−1}, Z_j^{i−1}, F_j^{i−1}, F_{A\j}^n) ≥ nR_j / (1−δ_{jE}δ_j),   (1d)

where: (i) for the Y-network A = [1:3], B = {j}, W_3 = W_{[1:2]} and R_3 = R_1+R_2; (ii) for the RY-network A = [1:3], B = [1:2] and R_3 = R_1+R_2; (iii) for the X-network A = [1:5], B = {j}, W_3 = W_4 = W_5 = W_{[1:2]}, R_3 = R_1+R_2, R_4 = R_1 and R_5 = R_2. Moreover, for node D in the Y-network we have

nR_3 ≤ (1−δ_3) ∑_{i=1}^n I(W_1, W_2; X_{3i} | F_{A\3}^n, Y_3^{i−1}, F_3^{i−1}),   (1e)

for node D_j, j ∈ [1:2], in the RY-network we have

nR_j ≤ (1−δ_j) ∑_{i=1}^n I(W_j; X_{ji} | F_{A\j}^n, Y_j^{i−1}, F_j^{i−1}),   (1f)

and for node D_j, j ∈ [1:2], in the X-network we have

nR_j ≤ (1−δ_{j+3}) ∑_{i=1}^n I(W_j; X_{(j+3)i} | F_{A\(j+3)}^n, Y_{j+3}^{i−1}, F_{j+3}^{i−1}).   (1g)

Step 2. We use the correspondences between the terms in (2) below with: (i) A = [1:3], κ = 3, λ ∈ [1:2] and W_3 = W_{[1:2]} for the Y-network; (ii) A = [1:3], j = 3, κ ∈ [1:2], B = [1:2], W_3 = W_{[1:2]} and e_3 = e for the RY-network; (iii) A = [1:5], j = 3, λ ∈ [1:2], κ ∈ [4:5], B = {j}, W_3 = W_4 = W_5 = W_{[1:2]} and e_3 = e for the X-network:

nk_j ↔ (δ_{jE}(1−δ_j) / (δ_j(1−δ_{jE}))) [ (1−δ_jδ_{jE}) ∑_{i=1}^n H(X_{ji} | Y_j^{i−1}, Z_j^{i−1}, F_j^{i−1}, W_B, F_{A\j}^n) − H(Y_j^n | W_B, F_A^n) ],   (2a)

ne ↔ ((1−δ_jδ_{jE}) / (δ_j(1−δ_{jE}))) [ H(Y_j^n | W_B, F_A^n) − (1−δ_j) ∑_{i=1}^n H(X_{ji} | Y_j^{i−1}, Z_j^{i−1}, F_{A\j}^n, F_j^{i−1}, W_B) ],   (2b)

nk_κ ↔ δ_{κE}(1−δ_κ) ∑_{i=1}^n H(X_{κi} | Y_κ^{i−1}, Z_κ^{i−1}, F_κ^{i−1}, W_1, W_2, F_{A\κ}^n),   (2c)

nk_λ ↔ δ_{λE}(1−δ_λ) ∑_{i=1}^n H(X_{λi} | Y_λ^{i−1}, F_λ^{i−1}, W_λ, F_{A\λ}^n).   (2d)

All the quantities in (2) are non-negative (see Appendices B-D for the details).

Step 3. By using the correspondences in (2), Lemma 4 and information-theoretic properties we derive an outer bound on the secure capacity (see Appendices B-D for the details). Each constraint in the LPs in Theorems 1-3 is proved to match an outer bound.

[Fig. 2: Numerical evaluations for the three networks in Fig. 1; each panel compares the TS (Path) and TS (Link) achievable regions with the achievable region of Theorems 1-3. (a) Y-network: (δ1,δ1E)=(0.2,0.05), (δ2,δ2E)=(0.3,0.05), (δ3,δ3E)=(0.25,0.05). (b) RY-network: (δ1,δ1E)=(0.1,0.1), (δ2,δ2E)=(0.2,0.05), (δ3,δ3E)=(0.3,0.15), D0=0.4. (c) X-network: (δ1,δ1E)=(0.1,0.1), (δ2,δ2E)=(0.2,0.05), (δ3,δ3E)=(0.3,0.15), (δ4,δ4E)=(0.4,0.35), (δ5,δ5E)=(0.5,0.2).]

C. Numerical Evaluations

We here compare the secrecy capacity performance of our schemes in Theorems 1-3 with respect to two naive strategies, i.e., the path sharing and the link sharing. In the path sharing the whole communication resources, at each time instant, are used only by one session; for example, for the X-network we have a time-sharing between S1-M1-M2-D1 and S2-M1-M2-D2. Differently, in the link sharing strategy only the shared communication link is time-shared among the two unicast sessions; for example, in the X-network only the M1-M2 link is time-shared. For both these strategies we do not allow the source node that does not participate to act as a source of randomness, e.g., for the X-network the random packets sent by S1 cannot be used to encrypt the message packets of S2. Fig. 2 shows the performance (in terms of secrecy capacity region) of these two time-sharing strategies and of our schemes in Theorems 1-3. From Fig. 2 we observe that our schemes in Theorems 1-3 (solid line) achieve higher rates compared to the two time-sharing strategies. Large rate gains are attained when, for each channel, the eavesdropper receives almost everything and the legitimate node receives almost no information. Under these channel conditions, for the Y-network the individual rates are double those achieved by the link-sharing strategy; for the RY- and X-network the sum-rate is twice that of the link-sharing scheme. In general, these gains follow since: (i) in the Y-network S1 and S2 transmit random packets to M and these can be mixed to create a key on the shared link; (ii) in the RY-network the same set of random packets can be used to generate keys for both the M-D1 and M-D2 links. These factors decrease the number of random packets required to be sent from the source(s) and imply that more message packets can be carried. Finally, (iii) in the X-network we have the benefits of both the Y- and RY-network.
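Because g(R1, R2) in Theorems 1-3 can be any linear function, the boundary of each region in Fig. 2 can be traced by sweeping the weight vector of the objective. The sketch below does this for the Y-network LP of Theorem 1, reusing the same constraint matrix as the earlier sum-rate sketch; parameter values and names are again my own assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def theorem1_boundary(d, dE, weights):
    """Return boundary points of the Theorem-1 region by maximizing
    w1*R1 + w2*R2 for each weight pair. Variables: [R1, R2, k1, k2, k3]."""
    sec = lambda j: (1 - dE[j]) / (1 - d[j] * dE[j])
    gen = lambda j: (1 - d[j]) * dE[j]
    mix = (1 - d[3]) * dE[3] / (1 - d[3] * dE[3])
    A = [[sec(1), 0, -1, 0, 0],
         [0, sec(2), 0, -1, 0],
         [sec(3), sec(3), 0, 0, -1],
         [1 / (1 - d[1]), 0, 1 / gen(1), 0, 0],
         [0, 1 / (1 - d[2]), 0, 1 / gen(2), 0],
         [1 / (1 - d[3]), 1 / (1 - d[3]), 0, 0, 1 / gen(3)],
         [0, 0, -mix / dE[1], -mix / dE[2], 1]]
    b = [0, 0, 0, 1, 1, 1, 0]
    points = []
    for w1, w2 in weights:
        res = linprog(c=[-w1, -w2, 0, 0, 0], A_ub=A, b_ub=b, bounds=[(0, None)] * 5)
        points.append(tuple(res.x[:2]))
    return points

d, dE = {1: 0.2, 2: 0.3, 3: 0.25}, {1: 0.05, 2: 0.05, 3: 0.05}
weights = [(lam, 1 - lam) for lam in np.linspace(0, 1, 21)]
for R1, R2 in theorem1_boundary(d, dE, weights):
    print(f"R1 = {R1:.4f}, R2 = {R2:.4f}")
```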
IV. CONCLUSIONS

We characterized the secret capacity for networks where two unicast sessions share one communication link. This was attained by designing schemes and by deriving outer bounds, both of which were formulated as LPs. Through numerical evaluations we showed that our transmission strategies achieve higher rates compared to schemes where the communication resources are time-shared among the two sessions. These results show that, even in network configurations for which network coding does not offer benefits in absence of security, it can become beneficial under security constraints.

APPENDIX A
PROOF OF LEMMA 4

We here prove the result in Lemma 4. We start by analyzing the Y-network. We have, ∀j ∈ A = [1:3] and with W_3 = W_{[1:2]},

H(Y_j^n | W_j, F_A^n, Z_j^n)
= H(Y_j^n, F_j^n | W_j, F_A^n, Z_j^n)
(a)= H(Y_j^{n−1}, F_j^{n−1} | W_j, F_A^n, Z_j^n) + H(Y_{jn}, F_{jn} | W_j, F_A^n, Z_j^n, Y_j^{n−1})
(b)= H(Y_j^{n−1}, F_j^{n−1} | W_j, F_{A\j}^n, Z_j^{n−1}, F_j^{n−1}) − I(Y_j^{n−1}, F_j^{n−1}; Z_{jn}, F_{jn} | W_j, F_{A\j}^n, Z_j^{n−1}, F_j^{n−1}) + H(Y_{jn} | W_j, F_A^n, Z_j^n, Y_j^{n−1})
(c)= H(Y_j^{n−1}, F_j^{n−1} | W_j, F_{A\j}^n, Z_j^{n−1}, F_j^{n−1}) − I(Y_j^{n−1}, F_j^{n−1}; Z_{jn} | W_j, F_{A\j}^n, Z_j^{n−1}, F_j^{n−1}, F_{jn}) + H(Y_{jn} | W_j, F_A^n, Z_j^n, Y_j^{n−1})
= H(Y_j^{n−1}, F_j^{n−1} | W_j, F_{A\j}^n, Z_j^{n−1}, F_j^{n−1}) − (1−δ_{jE}) I(Y_j^{n−1}, F_j^{n−1}; X_{jn} | W_j, F_{A\j}^n, Z_j^{n−1}, F_j^{n−1}) + (1−δ_j)δ_{jE} H(X_{jn} | W_j, F_{A\j}^n, F_j^{n−1}, Z_j^{n−1}, Y_j^{n−1}),

where: (i) the equality in (a) follows from the chain rule of the entropy; (ii) the equality in (b) is due to the definition of mutual information; (iii) finally, the equality in (c) is because F_{jn} is independent of the rest of the random variables. By recursively proceeding in the same way for H(Y_j^{n−1}, F_j^{n−1} | W_j, F_{A\j}^n, Z_j^{n−1}, F_j^{n−1}), we get the result in (1a).
By recursively proceeding in the same way for (b) ≤ I W ;Yn,Fn +nǫ H Yn−1,Fn−1|W ,Fn ,Zn−1,Fn−1 , we get the result j j A in (cid:16)(1aj). We njow projveA(1\bj) wjhich isjsim(cid:17)ilar to (1a). (=c)I (cid:0)Wj;Yjn,FAn,(cid:1)Zjn −I Wj;Zjn|Yjn,FAn +nǫ We have, ∀j ∈A=[1:3] and with W3 =W[1:2], (≤d)I(cid:0)W ;Yn,Fn,Zn(cid:1) +n(cid:0)ǫ (cid:1) j j A j H Yjn|Wj,FAn (=e)I (cid:0)W ;Yn,Fn,Zn(cid:1)|Fn +nǫ =H Yn,Fn|W ,Fn j j j j A\j (cid:0) j j (cid:1) j A n(cid:16) (cid:17) (=a)H(cid:0)Yjn−1,Fjn−1|W(cid:1)j,FAn (=f) I Wj;Yji,Fji,Zji|Yji−1,Fji−1,Zji−1,FAn\j +nǫ +H (cid:0)Yjn,Fjn|Wj,FAn,Yjn−(cid:1)1 Xi=n1 (cid:16) (cid:17) (=b)H(cid:0) Yn−1,Fn−1|W ,Fn (cid:1),Fn−1 (=g) I Wj;Yji,Zji|Yji−1,Fji−1,Zji−1,FAn\j,Fji +nǫ j j j A\j j Xi=1 (cid:16) (cid:17) −I Y(cid:16)jn−1,Fjn−1;Fjn|Wj,FAn\j,Fjn(cid:17)−1 =(1−δ δ ) n I W ;X |Yi−1,Fi−1,Zi−1,Fn +nǫ, jE j j ji j j j A\j +H(cid:16)Yjn|Wj,FAn,Yjn−1 (cid:17) Xi=1 (cid:16) (cid:17) (=c)H(cid:0) Yn−1,Fn−1|W ,F(cid:1)n ,Fn−1 where:(i)theinequalityin(a)isduetotheFano’sinequality; j j j A\j j (ii)theinequalityin(b)followsfromtheMarkovchainW − j +H (cid:16)Y |W ,Fn,Yn−1 (cid:17) Yn,Fn−Yn for j ∈ [1 : 2]; (iii) the equality in (c) is due jn j A j j A 3 tothe chainrule ofthemutualinformation;(iv)the inequality =H(cid:0)Yn−1,Fn−1|W ,F(cid:1)n ,Fn−1 j j j A\j j in (d) is because the mutual information is a non-negative +(1−(cid:16) δ )H X |W ,Fn ,Fn−1,(cid:17)Yn−1 , quantity;(v)theequalityin(e)followsfromtheindependence j jn j A\j j j of W and Fn ; (vi) the equality in (f) is due to the chain (cid:16) (cid:17) j A\j where: (i) the equality in (a) follows from the chain rule ruleofthemutualinformation;(vii)finally,theequalityin(g) of the entropy; (ii) the equality in (b) is due to the defini- followsfromtheindependenceofWj onFji.Thisproves(1d) tion of mutual information; (iii) finally, the equality in (c) fortheY-network.Bymeansofsimilarsteps,itisnotdifficult is because F is independent of the rest of the random to prove the result in (1d) for the RY- and X-network. jn variables. By recursively proceeding in the same way for Finally, for node D in the Y-network we have H Yjn−1,Fjn−1|Wj,FAn\j,Fjn−1 , we get the result in (1b). n(R1+R2) (cid:16) (cid:17) (≤a)I(W1,W2;Y3n,FAn)+nǫ (≥e)(1−δλE) n H Xλi|Yλi−1,Zλi−1,Fλi−1,FAn\λ " (=b)I W ,W ;Yn,Fn|Fn +nǫ Xi=1 (cid:16) (cid:17) 1 2 3 3 A\3 n (=c) n(cid:16)I W1,W2;Y3i,F3i|Y3(cid:17)i−1,F3i−1,FAn\3 +nǫ −Xi=1I(cid:16)Xλi;Wλ|Zλi−1,Fλi−1,FAn\λ(cid:17) Xi=n1 (cid:16) (cid:17) − n H X |Zi−1,Fi−1,W ,Fn ,Yi−1 (=d) I W ,W ;Y |Yi−1,Fi−1,Fn ,F +nǫ λi λ λ λ A\λ λ # 1 2 3i 3 3 A\3 3i Xi=1 (cid:16) (cid:17) Xi=1 (cid:16) n (cid:17) (=f)(1−δ ) n I X ;W |Yi−1,Zi−1,Fi−1,Fn =(1−δ3) I W1,W2;X3i|Y3i−1,F3i−1,FAn\3 +nǫ, λE "Xi=1 (cid:16) λi λ λ λ λ A\λ(cid:17) Xi=1 (cid:16) (cid:17) n − I X ;W |Zi−1,Fi−1,Fn where:(i)theinequalityin(a)isduetotheFano’sinequality; λi λ λ λ A\λ # (ii) the equality in (b) follows from the independence of the Xi=1 (cid:16) (cid:17) (g) 1−δ pair (W1,W2) on FAn\3; (iii) the equality in (c) is due to the ≥ nRλ1−δ λδE −ǫ, chain rule of the mutual information; (iv) finally, the equality λE λ in(d)followsfromtheindependenceofthepair(W ,W )on where: (i) the inequality in (a) follows since the entropy of 1 2 F . This proves (1e) for the Y-network. By means of similar a discrete random variable is positive; (ii) the equality in (b) 3i steps, it is not difficult to prove the results in (1f) and in (1g) is due to (1a) in Lemma 4; (iii) the equality in (c) is due for the RY- and X-network, respectively. 
APPENDIX B
PROOF OF THE CONVERSE OF THEOREM 1

For the Y-network, in (2) we set A = [1:3], κ = 3 and λ ∈ [1:2]. We use (2c) and (2d) for proving the converse of Theorem 1. The RHS of these two expressions is positive as the entropy of a discrete random variable is positive.

First constraint. From (2d) we have

nk_λ = δ_{λE}(1−δ_λ) ∑_{i=1}^n H(X_{λi} | Y_λ^{i−1}, F_λ^{i−1}, W_λ, F_{A\λ}^n)
(a)≥ δ_{λE}(1−δ_λ) ∑_{i=1}^n H(X_{λi} | Y_λ^{i−1}, Z_λ^{i−1}, F_λ^{i−1}, W_λ, F_{A\λ}^n) − H(Y_λ^n | W_λ, F_A^n, Z_λ^n)
(b)= (1−δ_{λE}) ∑_{i=1}^n I(Y_λ^{i−1}, F_λ^{i−1}; X_{λi} | Z_λ^{i−1}, F_λ^{i−1}, W_λ, F_{A\λ}^n)
(c)= (1−δ_{λE}) [ ∑_{i=1}^n H(X_{λi} | Z_λ^{i−1}, F_λ^{i−1}, W_λ, F_{A\λ}^n) − ∑_{i=1}^n H(X_{λi} | Z_λ^{i−1}, F_λ^{i−1}, W_λ, F_{A\λ}^n, Y_λ^{i−1}) ]
(d)= (1−δ_{λE}) [ ∑_{i=1}^n H(X_{λi} | Z_λ^{i−1}, F_λ^{i−1}, F_{A\λ}^n) − ∑_{i=1}^n I(X_{λi}; W_λ | Z_λ^{i−1}, F_λ^{i−1}, F_{A\λ}^n) − ∑_{i=1}^n H(X_{λi} | Z_λ^{i−1}, F_λ^{i−1}, W_λ, F_{A\λ}^n, Y_λ^{i−1}) ]
(e)≥ (1−δ_{λE}) [ ∑_{i=1}^n H(X_{λi} | Y_λ^{i−1}, Z_λ^{i−1}, F_λ^{i−1}, F_{A\λ}^n) − ∑_{i=1}^n I(X_{λi}; W_λ | Z_λ^{i−1}, F_λ^{i−1}, F_{A\λ}^n) − ∑_{i=1}^n H(X_{λi} | Z_λ^{i−1}, F_λ^{i−1}, W_λ, F_{A\λ}^n, Y_λ^{i−1}) ]
(f)= (1−δ_{λE}) [ ∑_{i=1}^n I(X_{λi}; W_λ | Y_λ^{i−1}, Z_λ^{i−1}, F_λ^{i−1}, F_{A\λ}^n) − ∑_{i=1}^n I(X_{λi}; W_λ | Z_λ^{i−1}, F_λ^{i−1}, F_{A\λ}^n) ]
(g)≥ nR_λ (1−δ_{λE}) / (1−δ_{λE}δ_λ) − ε,

where: (i) the inequality in (a) follows since the entropy of a discrete random variable is positive; (ii) the equality in (b) is due to (1a) in Lemma 4; (iii) the equality in (c) is due to the definition of mutual information; (iv) the equality in (d) follows from the definition of mutual information; (v) the inequality in (e) follows from the 'conditioning reduces the entropy' principle; (vi) the equality in (f) is due to the definition of mutual information; (vii) finally, the inequality in (g) follows by means of (1c) and (1d) in Lemma 4.

Second constraint. From (2c) with W_3 = W_{[1:2]}, we have

nk_3 = δ_{3E}(1−δ_3) ∑_{i=1}^n H(X_{3i} | Y_3^{i−1}, Z_3^{i−1}, F_3^{i−1}, W_3, F_{A\3}^n)
(a)≥ δ_{3E}(1−δ_3) ∑_{i=1}^n H(X_{3i} | Y_3^{i−1}, Z_3^{i−1}, F_3^{i−1}, W_3, F_{A\3}^n) − H(Y_3^n | W_3, F_A^n, Z_3^n)
(b)= (1−δ_{3E}) ∑_{i=1}^n I(Y_3^{i−1}, F_3^{i−1}; X_{3i} | Z_3^{i−1}, F_3^{i−1}, W_3, F_{A\3}^n)
(c)= (1−δ_{3E}) [ ∑_{i=1}^n H(X_{3i} | Z_3^{i−1}, F_3^{i−1}, W_3, F_{A\3}^n) − ∑_{i=1}^n H(X_{3i} | Z_3^{i−1}, F_3^{i−1}, W_3, F_{A\3}^n, Y_3^{i−1}) ]
(d)= (1−δ_{3E}) [ ∑_{i=1}^n H(X_{3i} | Z_3^{i−1}, F_3^{i−1}, F_{A\3}^n) − ∑_{i=1}^n I(X_{3i}; W_3 | Z_3^{i−1}, F_3^{i−1}, F_{A\3}^n) − ∑_{i=1}^n H(X_{3i} | Z_3^{i−1}, F_3^{i−1}, W_3, F_{A\3}^n, Y_3^{i−1}) ]
(e)≥ (1−δ_{3E}) [ ∑_{i=1}^n H(X_{3i} | Y_3^{i−1}, Z_3^{i−1}, F_3^{i−1}, F_{A\3}^n) − ∑_{i=1}^n I(X_{3i}; W_3 | Z_3^{i−1}, F_3^{i−1}, F_{A\3}^n) − ∑_{i=1}^n H(X_{3i} | Z_3^{i−1}, F_3^{i−1}, W_3, F_{A\3}^n, Y_3^{i−1}) ]
(f)= (1−δ_{3E}) [ ∑_{i=1}^n I(X_{3i}; W_3 | Y_3^{i−1}, Z_3^{i−1}, F_3^{i−1}, F_{A\3}^n) − ∑_{i=1}^n I(X_{3i}; W_3 | Z_3^{i−1}, F_3^{i−1}, F_{A\3}^n) ]
(g)≥ n(R_1 + R_2)(1−δ_{3E}) / (1−δ_{3E}δ_3) − ε,

where: (i) the inequality in (a) follows since the entropy of a discrete random variable is positive; (ii) the equality in (b) is due to (1a) in Lemma 4; (iii) the equality in (c) is due to the definition of mutual information; (iv) the equality in (d) follows from the definition of mutual information; (v) the inequality in (e) follows from the 'conditioning reduces the entropy' principle; (vi) the equality in (f) is due to the definition of mutual information; (vii) finally, the inequality in (g) follows by means of (1c) and (1d) in Lemma 4 with R_3 = R_1 + R_2.
Third constraint. From (2d) we get nk_λ/δ_{λE} = (1−δ_λ) ∑_{i=1}^n H(X_{λi} | Y_λ^{i−1}, F_λ^{i−1}, W_λ, F_{A\λ}^n) = H(Y_λ^n | W_λ, F_A^n), where the last equality follows from (1b) in Lemma 4. Using this and Fano's inequality with λ ∈ [1:2] we have

nR_λ + nk_λ/δ_{λE} ≤ I(W_λ; Y_3^n, F_A^n) + H(Y_λ^n | W_λ, F_A^n) + nε
(a)≤ I(W_λ; Y_λ^n, F_A^n) + H(Y_λ^n | W_λ, F_A^n) + nε
(b)= I(W_λ; Y_λ^n | F_A^n) + H(Y_λ^n | W_λ, F_A^n) + nε
(c)= H(Y_λ^n | F_A^n) + nε
(d)≤ n(1−δ_λ) + nε,

where: (i) the inequality in (a) follows because of the Markov chain W_λ − (Y_λ^n, F_A^n) − Y_3^n, ∀λ ∈ [1:2]; (ii) the equality in (b) follows because of the independence between F_A^n and W_λ; (iii) the equality in (c) follows from the definition of mutual information; (iv) finally, the inequality in (d) follows since node M receives at most n(1−δ_λ) packets on channel λ ∈ [1:2].

Fourth constraint. From (1e) in Lemma 4 we have

n(R_1 + R_2) ≤ (1−δ_3) ∑_{i=1}^n I(W_{[1:2]}; X_{3i} | F_{A\3}^n, Y_3^{i−1}, F_3^{i−1})
(a)= (1−δ_3) [ ∑_{i=1}^n H(X_{3i} | F_{A\3}^n, Y_3^{i−1}, F_3^{i−1}) − ∑_{i=1}^n H(X_{3i} | F_{A\3}^n, Y_3^{i−1}, F_3^{i−1}, W_{[1:2]}) ]
(b)≤ (1−δ_3)n − (1−δ_3) ∑_{i=1}^n H(X_{3i} | F_{A\3}^n, Y_3^{i−1}, F_3^{i−1}, W_{[1:2]})
(c)≤ (1−δ_3)n − (1−δ_3) ∑_{i=1}^n H(X_{3i} | F_{A\3}^n, Y_3^{i−1}, F_3^{i−1}, W_{[1:2]}, Z_3^{i−1})
(d)= (1−δ_3)n − nk_3/δ_{3E},

where: (i) the equality in (a) follows from the definition of mutual information; (ii) the inequality in (b) is due to the fact that H(X_{3i}) ≤ 1; (iii) the inequality in (c) is due to the 'conditioning reduces the entropy' principle; (iv) finally, the equality in (d) follows from (2c).

Fifth constraint. From (1b) and (2d) we have

n (k_1/δ_{1E} + k_2/δ_{2E}) (1−δ_3)δ_{3E} / (1−δ_3δ_{3E})
= [ H(Y_1^n | W_1, F_A^n) + H(Y_2^n | W_2, F_A^n) ] (1−δ_3)δ_{3E} / (1−δ_3δ_{3E})
(a)≥ [ H(Y_1^n | W_{[1:2]}, F_A^n) + H(Y_2^n | W_{[1:2]}, F_A^n) ] (1−δ_3)δ_{3E} / (1−δ_3δ_{3E})
(b)≥ H(Y_1^n, Y_2^n | W_{[1:2]}, F_A^n) (1−δ_3)δ_{3E} / (1−δ_3δ_{3E})
(c)≥ I(Y_1^n, Y_2^n; Y_3^n, Z_3^n | W_{[1:2]}, F_A^n) (1−δ_3)δ_{3E} / (1−δ_3δ_{3E})
(d)= ((1−δ_3)δ_{3E} / (1−δ_3δ_{3E})) ∑_{i=1}^n I(Y_1^n, Y_2^n; Y_{3i}, Z_{3i} | W_{[1:2]}, F_A^n, Y_3^{i−1}, Z_3^{i−1})
= (1−δ_3)δ_{3E} ∑_{i=1}^n I(Y_1^n, Y_2^n; X_{3i} | W_{[1:2]}, F_{A\3}^n, Y_3^{i−1}, F_3^{i−1}, Z_3^{i−1}, F_{3i}^n)
(e)= (1−δ_3)δ_{3E} ∑_{i=1}^n H(X_{3i} | W_{[1:2]}, F_{A\3}^n, Y_3^{i−1}, F_3^{i−1}, Z_3^{i−1}, F_{3i}^n)
(f)= (1−δ_3)δ_{3E} ∑_{i=1}^n H(X_{3i} | W_{[1:2]}, F_{A\3}^n, Y_3^{i−1}, F_3^{i−1}, Z_3^{i−1})
(g)= nk_3,

where: (i) the inequality in (a) is due to the 'conditioning reduces the entropy' principle; (ii) the inequality in (b) follows since H(X, Y) ≤ H(X) + H(Y); (iii) the inequality in (c) follows since the entropy of a discrete random variable is a positive quantity; (iv) the equality in (d) is due to the chain rule of the mutual information; (v) the equality in (e) follows since node M does not have any randomness and so X_{3i} is uniquely determined by knowing (Y_1^n, Y_2^n, F_A^n); (vi) the equality in (f) is due to the Markov chain X_{3i} − (W_1, W_2, F_{A\3}^n, Y_3^{i−1}, F_3^{i−1}, Z_3^{i−1}) − F_{3i}^n; (vii) finally, the equality in (g) follows from (2c).

APPENDIX C
PROOF OF THE CONVERSE OF THEOREM 2

For the RY-network, in (2) we set A = [1:3], j = 3, κ ∈ [1:2], B = [1:2] and e_3 = e. We start by proving that the Right-Hand Side (RHS) of the quantities in (2) is positive. We use (2a), (2b) and (2c) for proving the converse of Theorem 2. The RHS of (2c) is positive as the entropy of a discrete random variable is positive.
For the RHS of (2a), we have

(1−δ_3δ_{3E}) ∑_{i=1}^n H(X_{3i} | Y_3^{i−1}, Z_3^{i−1}, F_3^{i−1}, W_{[1:2]}, F_{A\3}^n) − H(Y_3^n | W_{[1:2]}, F_A^n)
(a)= ∑_{i=1}^n H(Y_{3i}, Z_{3i} | Y_3^{i−1}, Z_3^{i−1}, F_3^n, W_{[1:2]}, F_{A\3}^n) − H(Y_3^n | W_{[1:2]}, F_A^n)
(b)= H(Y_3^n, Z_3^n | F_3^n, W_{[1:2]}, F_{A\3}^n) − H(Y_3^n | W_{[1:2]}, F_A^n)
= H(Y_3^n, Z_3^n | W_{[1:2]}, F_A^n) − H(Y_3^n | W_{[1:2]}, F_A^n) ≥ 0,

where: (i) the equality in (a) follows because, given F_{3i}, the pair (Y_{3i}, Z_{3i}) is equal to X_{3i} with probability (1−δ_3δ_{3E}) and null otherwise, and because of the Markov chain X_{3i} − (W_{[1:2]}, F_{A\3}^n, Y_3^{i−1}, Z_3^{i−1}, F_3^{i−1}) − F_{3i}; (ii) finally, the equality in (b) is due to the chain rule of entropy.

For the RHS of (2b), we have

H(Y_3^n | W_{[1:2]}, F_A^n) − (1−δ_3) ∑_{i=1}^n H(X_{3i} | Y_3^{i−1}, Z_3^{i−1}, F_{A\3}^n, F_3^{i−1}, W_{[1:2]})
(a)= ∑_{i=1}^n H(Y_{3i} | W_{[1:2]}, F_A^n, Y_3^{i−1}) − (1−δ_3) ∑_{i=1}^n H(X_{3i} | Y_3^{i−1}, Z_3^{i−1}, F_{A\3}^n, F_3^{i−1}, W_{[1:2]})
(b)≥ ∑_{i=1}^n H(Y_{3i} | W_{[1:2]}, F_A^n, Y_3^{i−1}, Z_3^{i−1}) − (1−δ_3) ∑_{i=1}^n H(X_{3i} | Y_3^{i−1}, Z_3^{i−1}, F_{A\3}^n, F_3^{i−1}, W_{[1:2]})
= (1−δ_3) [ ∑_{i=1}^n H(X_{3i} | W_{[1:2]}, F_{A\3}^n, Y_3^{i−1}, Z_3^{i−1}, F_3^{i−1}, F_{3i}^n) − ∑_{i=1}^n H(X_{3i} | Y_3^{i−1}, Z_3^{i−1}, F_{A\3}^n, F_3^{i−1}, W_{[1:2]}) ]
(c)= (1−δ_3) [ ∑_{i=1}^n H(X_{3i} | W_{[1:2]}, F_{A\3}^n, Y_3^{i−1}, Z_3^{i−1}, F_3^{i−1}) − ∑_{i=1}^n H(X_{3i} | Y_3^{i−1}, Z_3^{i−1}, F_{A\3}^n, F_3^{i−1}, W_{[1:2]}) ] = 0,

where: (i) the equality in (a) follows from the chain rule of the entropy; (ii) the inequality in (b) is due to the 'conditioning reduces the entropy' principle; (iii) finally, the equality in (c) follows because of the Markov chain X_{3i} − (W_{[1:2]}, F_{A\3}^n, Y_3^{i−1}, Z_3^{i−1}, F_3^{i−1}) − F_{3i}^n.

First constraints. From (2a) and (2b) we have

nk_3 + ne (1−δ_3)δ_{3E} / (1−δ_3δ_{3E})
= δ_{3E}(1−δ_3) ∑_{i=1}^n H(X_{3i} | Y_3^{i−1}, Z_3^{i−1}, F_3^{i−1}, W_{[1:2]}, F_{A\3}^n)
(a)≥ δ_{3E}(1−δ_3) ∑_{i=1}^n H(X_{3i} | Y_3^{i−1}, Z_3^{i−1}, F_3^{i−1}, W_{[1:2]}, F_{A\3}^n) − H(Y_3^n | W_{[1:2]}, F_A^n, Z_3^n)
(b)= (1−δ_{3E}) ∑_{i=1}^n I(Y_3^{i−1}, F_3^{i−1}; X_{3i} | Z_3^{i−1}, F_3^{i−1}, W_{[1:2]}, F_{A\3}^n)
(c)= (1−δ_{3E}) [ ∑_{i=1}^n H(X_{3i} | Z_3^{i−1}, F_3^{i−1}, W_{[1:2]}, F_{A\3}^n) − ∑_{i=1}^n H(X_{3i} | Z_3^{i−1}, F_3^{i−1}, W_{[1:2]}, F_{A\3}^n, Y_3^{i−1}) ]
(d)= (1−δ_{3E}) [ ∑_{i=1}^n H(X_{3i} | Z_3^{i−1}, F_3^{i−1}, F_{A\3}^n) − ∑_{i=1}^n I(X_{3i}; W_{[1:2]} | Z_3^{i−1}, F_3^{i−1}, F_{A\3}^n) − ∑_{i=1}^n H(X_{3i} | Z_3^{i−1}, F_3^{i−1}, W_{[1:2]}, F_{A\3}^n, Y_3^{i−1}) ]
(e)≥ (1−δ_{3E}) [ ∑_{i=1}^n H(X_{3i} | Y_3^{i−1}, Z_3^{i−1}, F_3^{i−1}, F_{A\3}^n) − ∑_{i=1}^n I(X_{3i}; W_{[1:2]} | Z_3^{i−1}, F_3^{i−1}, F_{A\3}^n) − ∑_{i=1}^n H(X_{3i} | Z_3^{i−1}, F_3^{i−1}, W_{[1:2]}, F_{A\3}^n, Y_3^{i−1}) ]
(f)= (1−δ_{3E}) [ ∑_{i=1}^n I(X_{3i}; W_{[1:2]} | Y_3^{i−1}, Z_3^{i−1}, F_3^{i−1}, F_{A\3}^n) − ∑_{i=1}^n I(X_{3i}; W_{[1:2]} | Z_3^{i−1}, F_3^{i−1}, F_{A\3}^n) ]
(g)≥ n(R_1 + R_2)(1−δ_{3E}) / (1−δ_{3E}δ_3) − ε,

where: (i) the inequality in (a) follows since the entropy of a discrete random variable is positive; (ii) the equality in (b) is due to (1a) in Lemma 4; (iii) the equality in (c) is due to the definition of mutual information; (iv) the equality in (d) follows from the definition of mutual information; (v) the inequality in (e) follows from the 'conditioning reduces the entropy' principle; (vi) the equality in (f) is due to the definition of mutual information; (vii) finally, the inequality in (g) follows by means of (1c) and (1d) in Lemma 4.
Second constraints. From (2c) we have, for κ ∈ [1:2],

nk_κ = δ_{κE}(1−δ_κ) ∑_{i=1}^n H(X_{κi} | Y_κ^{i−1}, Z_κ^{i−1}, F_κ^{i−1}, W_{[1:2]}, F_{A\κ}^n)
(a)≥ δ_{κE}(1−δ_κ) ∑_{i=1}^n H(X_{κi} | Y_κ^{i−1}, Z_κ^{i−1}, F_κ^{i−1}, W_{[1:2]}, F_{A\κ}^n) − H(Y_κ^n | W_{[1:2]}, F_A^n, Z_κ^n)
(b)= (1−δ_{κE}) ∑_{i=1}^n I(Y_κ^{i−1}, F_κ^{i−1}; X_{κi} | Z_κ^{i−1}, F_κ^{i−1}, W_{[1:2]}, F_{A\κ}^n)
(c)= (1−δ_{κE}) [ ∑_{i=1}^n H(X_{κi} | Z_κ^{i−1}, F_κ^{i−1}, W_{[1:2]}, F_{A\κ}^n) − ∑_{i=1}^n H(X_{κi} | Z_κ^{i−1}, F_κ^{i−1}, W_{[1:2]}, F_{A\κ}^n, Y_κ^{i−1}) ]
(d)= (1−δ_{κE}) [ ∑_{i=1}^n H(X_{κi} | Z_κ^{i−1}, F_κ^{i−1}, F_{A\κ}^n) − ∑_{i=1}^n I(X_{κi}; W_{[1:2]} | Z_κ^{i−1}, F_κ^{i−1}, F_{A\κ}^n) − ∑_{i=1}^n H(X_{κi} | Z_κ^{i−1}, F_κ^{i−1}, W_{[1:2]}, F_{A\κ}^n, Y_κ^{i−1}) ]
(e)≥ (1−δ_{κE}) [ ∑_{i=1}^n H(X_{κi} | Y_κ^{i−1}, Z_κ^{i−1}, F_κ^{i−1}, F_{A\κ}^n) − ∑_{i=1}^n I(X_{κi}; W_{[1:2]} | Z_κ^{i−1}, F_κ^{i−1}, F_{A\κ}^n) − ∑_{i=1}^n H(X_{κi} | Z_κ^{i−1}, F_κ^{i−1}, W_{[1:2]}, F_{A\κ}^n, Y_κ^{i−1}) ]
(f)= (1−δ_{κE}) [ ∑_{i=1}^n I(X_{κi}; W_{[1:2]} | Y_κ^{i−1}, Z_κ^{i−1}, F_κ^{i−1}, F_{A\κ}^n) − ∑_{i=1}^n I(X_{κi}; W_{[1:2]} | Z_κ^{i−1}, F_κ^{i−1}, F_{A\κ}^n) ]
(g)≥ nR_κ (1−δ_{κE}) / (1−δ_{κE}δ_κ) − ε,

where: (i) the inequality in (a) follows since the entropy of a discrete random variable is positive; (ii) the equality in (b) is due to (1a) in Lemma 4; (iii) the equality in (c) is due to the definition of mutual information; (iv) the equality in (d) follows from the definition of mutual information; (v) the inequality in (e) follows from the 'conditioning reduces the entropy' principle; (vi) the equality in (f) is due to the definition of mutual information; (vii) finally, the inequality in (g) follows by means of (1c) and (1d) in Lemma 4.

Third constraint. From (2a) and (2b) we get ne + nk_3/δ_{3E} = H(Y_3^n | W_{[1:2]}, F_A^n). Now, with this and by using Fano's inequality (keeping in mind that the messages are independent) we have

n(R_1 + R_2) + ne + nk_3/δ_{3E}
≤ I(W_{[1:2]}; Y_1^n, Y_2^n, F_A^n) + H(Y_3^n | W_{[1:2]}, F_A^n) + nε
(a)≤ I(W_{[1:2]}; Y_3^n, F_A^n) + H(Y_3^n | W_{[1:2]}, F_A^n) + nε
(b)= I(W_{[1:2]}; Y_3^n | F_A^n) + H(Y_3^n | W_{[1:2]}, F_A^n) + nε
(c)= H(Y_3^n | F_A^n) + nε
(d)≤ n(1−δ_3) + nε,

where: (i) the inequality in (a) follows because of the Markov chain W_{[1:2]} − (Y_3^n, F_A^n) − (Y_1^n, Y_2^n); (ii) the equality in (b) follows because of the independence between F_A^n and (W_1, W_2); (iii) the equality in (c) follows from the definition of mutual information; (iv) finally, the inequality in (d) follows since node M receives at most n(1−δ_3) packets.

Fourth constraint. From (1f) in Lemma 4 we have

nR_κ ≤ (1−δ_κ) ∑_{i=1}^n I(W_κ; X_{κi} | F_{A\κ}^n, Y_κ^{i−1}, F_κ^{i−1})
(a)= (1−δ_κ) [ ∑_{i=1}^n H(X_{κi} | F_{A\κ}^n, Y_κ^{i−1}, F_κ^{i−1}) − ∑_{i=1}^n H(X_{κi} | F_{A\κ}^n, Y_κ^{i−1}, F_κ^{i−1}, W_κ) ]
(b)≤ (1−δ_κ)n − (1−δ_κ) ∑_{i=1}^n H(X_{κi} | F_{A\κ}^n, Y_κ^{i−1}, F_κ^{i−1}, W_κ)
(c)≤ (1−δ_κ)n − (1−δ_κ) ∑_{i=1}^n H(X_{κi} | F_{A\κ}^n, Y_κ^{i−1}, F_κ^{i−1}, W_{[1:2]}, Z_κ^{i−1})
(d)= (1−δ_κ)n − nk_κ/δ_{κE},

where: (i) the equality in (a) follows from the definition of mutual information; (ii) the inequality in (b) is due to the fact that H(X_{κi}) ≤ 1; (iii) the inequality in (c) is due to the 'conditioning reduces the entropy' principle; (iv) finally, the equality in (d) follows from (2c).

Fifth constraint. The node S has a discrete source of randomness U_0 such that

nD_0 = H(U_0)
(a)= H(U_0 | W_{[1:2]}, F_{A\3}^n)
(b)≥ I(U_0; Y_3^n, Z_3^n, F_3^n | W_{[1:2]}, F_{A\3}^n)
