
Information Leakage of Correlated Source Coded Sequences over a Channel with an Eavesdropper

Reevana Balmahoon and Ling Cheng
School of Electrical and Information Engineering, University of the Witwatersrand
Private Bag 3, Wits. 2050, Johannesburg, South Africa
Email: [email protected], [email protected]

arXiv:1401.6264v2 [cs.IT] 2 Oct 2014

Abstract—A new generalised approach for multiple correlated sources over a wiretap network is investigated. A basic model consisting of two correlated sources, where each produces a component of the common information, is initially investigated. Several cases of wiretapped syndromes on the transmission links are considered, and based on these cases a new quantity, the information leakage at the source(s), is determined. An interesting feature of the models described in this paper is this quantification of information leakage. Shannon's cipher system with eavesdroppers is incorporated into the two correlated sources model to minimize key lengths. These aspects of quantifying information leakage and reducing key lengths using Shannon's cipher system are also considered for a multiple correlated source network approach. A new scheme that incorporates masking using common information combinations to reduce the key lengths is presented and applied to the generalised model for multiple sources.

I. INTRODUCTION

Keeping information secure has become a major concern with the advancement of technology. In this work, the information-theoretic aspect of security is analyzed, as entropies are used to measure security. The system also incorporates some traditional ideas from cryptography, namely Shannon's cipher system and adversarial attackers in the form of eavesdroppers. In cryptographic systems, there is usually a message in plaintext that needs to be sent to a receiver. In order to secure it, the plaintext is encrypted so as to prevent eavesdroppers from reading its contents. This ciphertext is then transmitted to the receiver. Shannon's cipher system (mentioned by Yamamoto [1]) incorporates this idea. The definition of Shannon's cipher system has been discussed by Hanawal and Sundaresan [2].

In Yamamoto's [1] development of this model, a correlated source approach is introduced. This gives an interesting view of the problem, and is depicted in Figure 1. Correlated source coding incorporates the lossless compression of two or more correlated data streams. Correlated sources have the ability to decrease the bandwidth required to transmit and receive messages, because a syndrome (a compressed form of the original message) is sent across the communication links instead of the original message. A compressed message has more information per bit, and therefore a higher entropy, because the transmitted information is more unpredictable. This unpredictability of the compressed message is also beneficial in terms of securing the information.

[Figure 1. Yamamoto's development of the Shannon cipher system: a key generator supplies W_k to the encoder; the correlated sources X, Y are encoded into the codeword W, from which the decoder produces X̂, Ŷ; a wiretapper observes W.]

The source sends information for the correlated sources, X and Y, along the main transmission channel. A key, W_k, is produced and used by the encoder when producing the ciphertext. The wiretapper has access to the transmitted codeword, W. The decoded codewords are represented by X̂ and Ŷ. In Yamamoto's scheme the security level was also examined, and found to be (1/K) H(X^K, Y^K | W) (i.e. the joint entropy of X and Y given W, where K is the length of X and Y) when X and Y have equal importance; this is in accordance with traditional Shannon systems, where security is measured by the equivocation. When one source is more important than the other, the security level is measured by the pair of individual uncertainties ((1/K) H(X^K | W), (1/K) H(Y^K | W)).
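The equivocation measure can be made concrete with a toy computation. The following sketch is illustrative only and not part of the original paper: it assumes a hypothetical 2-bit plaintext distribution and a one-time-pad style cipher, and shows that with a uniform key the equivocation H(X|W) equals H(X), i.e. the cryptogram reveals nothing.

```python
from collections import defaultdict
from math import log2

# Hypothetical 2-bit plaintext pmf (values 0..3) and a uniform 2-bit key.
p_x = {0: 0.5, 1: 0.25, 2: 0.125, 3: 0.125}

# Joint pmf of (plaintext, cryptogram) for W = X XOR key.
p_xw = defaultdict(float)
for x, px in p_x.items():
    for k in range(4):                 # uniform key over {0,1,2,3}
        p_xw[(x, x ^ k)] += px * 0.25

p_w = defaultdict(float)
for (x, w), q in p_xw.items():
    p_w[w] += q

h_x = -sum(q * log2(q) for q in p_x.values())
h_x_given_w = -sum(q * log2(q / p_w[w]) for (x, w), q in p_xw.items())
print(f"H(X) = {h_x:.3f} bits, H(X|W) = {h_x_given_w:.3f} bits")
# Both print 1.750: the wiretapper's uncertainty is undiminished.
```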
In Yamamoto’s [1] development on this from which µ bits can be observed by the eavesdropper and model, a correlated source approach is introduced. This gives the maximum secure rate can be shown to equal n−µ bits. aninterestingviewoftheproblem,andisdepictedinFigure1. The security aspect of wiretap networks have been looked Correlated source coding incorporates the lossless compres- at in various ways by Cheng et al. [4], and Cai and Yeung sionoftwoormorecorrelateddatastreams.Correlatedsources [5], emphasising that it is of concern to secure these type of havetheabilitytodecreasethebandwidthrequiredtotransmit channels. and receive messages because a syndrome (compressed form Villard and Piantanida [6] also look at correlated sources oftheoriginalmessage)issentacrossthecommunicationlinks and wireap networks: A source sends information to the instead of the original message. A compressed message has receiver and an eavesdropper has access to information corre- more information per bit, and therefore has a higher entropy latedtothesource,whichisusedassideinformation.Thereis becausethetransmittedinformationismoreunpredictable.The a second encoder that sends a compressed version of its own unpredictability of the compressed message is also beneficial correlation observation of the source privately to the receiver. in terms of securing the information. Here, the authors show that the use of correlation decreases 2 therequiredcommunicationrateandincreasessecrecy.Villard XK YK et al. [7] explore this side information concept further where security using side information at the receiver and eavesdrop- Encoder Encoder perisinvestigated.Sideinformationisgenerallyusedtoassist the decoder to determine the transmitted message. An earlier work involving side information is that by Yang et al. [8]. TX TY The concept can be considered to be generalised in that the side information could represent a source. It is an interesting Decoder problem when one source is more important and Hayashi and Yamamoto [9] consider it in another scheme, where only X is secure against wiretappers and Y must be transmitted to XˆK,YˆK a legitimate receiver. They develop a security criterion based Figure2. Correlatedsourcecodingfortwosources on the number of correct guesses of a wiretapper to retrieve a message. In an extension of the Shannon cipher system, X Y Yamamoto[10]investigatedthesecretsharingcommunication system. VX VY In this case, we generalise a model for correlated sources across a channel with an eavesdropper and the security as- pect is explored by quantifying the information leakage and reducingthekeylengthswhenincorporatingShannon’scipher VCX VCY system. Thispaperinitiallydescribesatwocorrelatedsourcemodel TX=(VX,VCX) TY =(VY,VCY) across wiretapped links, which is detailed in Section II. In Figure3. Therelationbetweenprivateandcommoninformation SectionIII,theinformationleakageisinvestigatedandproven forthis twocorrelatedsourcemodel. Theinformationleakage is quantified to be the equivocation subtracted from the total The decoder determines X and Y only after receiving all of obtained uncertainty. In Section IV the two correlated sources TX andTY.Thecommoninformationbetweenthesourcesare modelislookedataccordingtoShannon’sciphersystem.The transmitted through the portions VCX and VCY. In order to notationcontainedinthetableswillbeclarifiedinthefollow- decode a transmitted message, a source’s private information ingsections.TheproofsforthisShannonciphersystemaspect and both common information portions are necessary. 
Requiring both common portions aids security: it is not possible to determine, for example, X by wiretapping all the contents transmitted along X's channel only. This differs from Yamamoto's [1] model, as here the common information consists of two portions. The aim is to keep the system as secure as possible, and the following sections show how this is achieved by the new model.

We assume that the function F is a one-to-one process with high probability, which means that based on T_X and T_Y we can retrieve X^K and Y^K with minimal error. Furthermore, it reaches the Slepian-Wolf bound, H(T_X, T_Y) = H(X^K, Y^K). Here, we note that the lengths of T_X and T_Y are not fixed, as they depend on the encoding process and the nature of the Slepian-Wolf codes. The process is therefore not ideally one-to-one and reversible, which is another difference between our model and Yamamoto's [1] model.

The code described in this section satisfies the following inequalities for δ > 0 and sufficiently large K:

  Pr{X^K ≠ G(V_X, V_CX, V_CY)} ≤ δ    (1)

  Pr{Y^K ≠ G(V_Y, V_CX, V_CY)} ≤ δ    (2)

  H(V_X, V_CX, V_CY) ≤ H(X^K) + δ    (3)

  H(V_Y, V_CX, V_CY) ≤ H(Y^K) + δ    (4)

  H(V_X, V_Y, V_CX, V_CY) ≤ H(X^K, Y^K) + δ    (5)

  H(X^K | V_X, V_Y) ≥ H(V_CX) + H(V_CY) − δ    (6)

  H(X^K | V_CX, V_CY) ≥ H(V_X) + H(V_CY) − δ    (7)

  H(X^K | V_CX, V_CY, V_Y) ≥ H(V_X) + H(V_CY) − δ    (8)

  H(V_CX) + H(V_X) − δ ≤ H(X^K | V_CY, V_Y) ≤ H(X^K) − H(V_CY) + δ    (9)

where G is a function defining the decoding process at the receiver. It can intuitively be seen from (3) and (4) that X and Y are recovered from the corresponding private information and the common information produced by X^K and Y^K.
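A quick numeric sanity check of these inequalities can be run before the formal proofs. The sketch below is illustrative and not from the paper: it assumes the idealized per-symbol portion entropies that the later lemmas establish (H(V_X)/K = H(X|Y), H(V_Y)/K = H(Y|X), and a split of I(X;Y) between the common portions), plugs them into (6) and (9) for a hypothetical joint pmf, and confirms the bounds are met with equality in this limit.

```python
from math import log2

# Hypothetical joint pmf of one (X, Y) symbol pair.
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def H(pmf):
    return -sum(q * log2(q) for q in pmf.values() if q > 0)

px = {x: sum(q for (a, _), q in p.items() if a == x) for x in (0, 1)}
py = {y: sum(q for (_, b), q in p.items() if b == y) for y in (0, 1)}
h_xy, h_x, h_y = H(p), H(px), H(py)
i_xy = h_x + h_y - h_xy

# Idealized per-symbol portion entropies (delta -> 0, K large):
h_vx, h_vy = h_xy - h_y, h_xy - h_x       # H(X|Y), H(Y|X)
h_vcx = 0.6 * i_xy                        # assumed split of I(X;Y)
h_vcy = i_xy - h_vcx

# (6): H(X^K|V_X,V_Y)/K = H(X) - H(V_X)/K >= [H(V_CX)+H(V_CY)]/K
assert h_x - h_vx >= h_vcx + h_vcy - 1e-12
# (9): H(V_CX)+H(V_X) <= H(X^K|V_CY,V_Y) <= H(X^K)-H(V_CY), per symbol
mid = h_x - h_vcy                         # idealized H(X^K|V_CY,V_Y)/K
assert h_vcx + h_vx - 1e-12 <= mid <= h_x - h_vcy + 1e-12
print("bounds (6) and (9) hold for this idealized example")
```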
Equations (3), (4) and (5) show that the private information and common information produced by each source should contain no redundancy. It is also seen from (7) and (8) that V_Y is asymptotically independent of X^K. Here, V_X, V_Y, V_CX and V_CY are disjoint, which ensures that no redundant information is sent to the decoder.

To recover X the following components are necessary: V_X, V_CX and V_CY. This comes from the property that X^K cannot be derived from V_X and V_CX only, since part of the common information between X^K and Y^K is produced by Y^K.

Yamamoto [1] proved that a common information between X^K and Y^K is represented by the mutual information I(X;Y). Yamamoto [1] also defined two kinds of common information. The first common information is defined as the rate of the attainable minimum core V_C (i.e. V_CX, V_CY in this model), obtained by removing from (X^K, Y^K) as much as possible of each private information, which is independent of the other information. The second common information is defined as the rate of the attainable maximum core V_C such that if we lose V_C then the uncertainty of X and Y becomes H(V_C). Here, we consider the common information that V_CX and V_CY represent.

We begin demonstrating the relationship between the common information portions by constructing the prototype code (W_X, W_Y, W_CX, W_CY) as per Lemma 1.

Lemma 1: For any ε_0 ≥ 0 and sufficiently large K, there exists a code W_X = F_X(X^K), W_Y = F_Y(Y^K), W_CX = F_CX(X^K), W_CY = F_CY(Y^K), (X̂^K, Ŷ^K) = G(W_X, W_Y, W_CX, W_CY), where W_X ∈ I_MX, W_Y ∈ I_MY, W_CX ∈ I_MCX, W_CY ∈ I_MCY, with I_Mα defined as {0, 1, ..., M_α − 1}, that satisfies:

  Pr{(X̂^K, Ŷ^K) ≠ (X^K, Y^K)} ≤ ε_0    (10)

  H(X|Y) − ε_0 ≤ (1/K) H(W_X) ≤ (1/K) log M_X ≤ H(X|Y) + ε_0    (11)

  H(Y|X) − ε_0 ≤ (1/K) H(W_Y) ≤ (1/K) log M_Y ≤ H(Y|X) + ε_0    (12)

  I(X;Y) − ε_0 ≤ (1/K)(H(W_CX) + H(W_CY))
      ≤ (1/K)(log M_CX + log M_CY) ≤ I(X;Y) + ε_0    (13)

  (1/K) H(X^K | W_Y) ≥ H(X) − ε_0    (14)

  (1/K) H(Y^K | W_X) ≥ H(Y) − ε_0    (15)

We can see that (11) - (13) imply

  H(X,Y) − 3ε_0 ≤ (1/K)(H(W_X) + H(W_Y) + H(W_CX) + H(W_CY)) ≤ H(X,Y) + 3ε_0    (16)

Hence, from (10), (16) and the ordinary source coding theorem, (W_X, W_Y, W_CX, W_CY) have no redundancy for sufficiently small ε_0 ≥ 0. It can also be seen that W_X and W_Y are independent of Y^K and X^K respectively.

Proof of Lemma 1: As seen by Slepian and Wolf, and mentioned by Yamamoto [1], there exist M_X codes for the P_{Y|X}(y|x) DMC (discrete memoryless channel) and M_Y codes for the P_{X|Y}(x|y) DMC. The codeword sets exist as C_i^X and C_j^Y, where C_i^X is a subset of the typical sequences of X^K and C_j^Y is a subset of the typical sequences of Y^K. The encoding functions are similar, but we have created one decoding function, as there is one decoder at the receiver:

  f_Xi : I_MCX → C_i^X    (17)

  f_Yj : I_MCY → C_j^Y    (18)

  g : X^K, Y^K → I_MCX × I_MCY    (19)

The relations for M_X, M_Y and the common information remain the same as per Yamamoto's, and will therefore not be proven here.

In this scheme, we use the average (V_CX, V_X, V_CY, V_Y) transmitted over many codewords from X and Y. Thus, at any time either V_CX or V_CY is transmitted. Over time, the split between which common information portion is transmitted is determined, and the protocol is prearranged accordingly. Therefore all the common information is transmitted either as l or as m, and as such Yamamoto's encoding and decoding method may be used. As per Yamamoto's method, the code does exist, and W_X and W_Y are independent of Y and X respectively, as shown by Yamamoto [1].
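The rate targets of Lemma 1 can be tabulated for a concrete source. The snippet below is illustrative only (the same hypothetical pmf as the earlier sketches): it evaluates the per-symbol targets of (11) - (13) and confirms the no-redundancy sum (16), i.e. that the four codeword rates add up to H(X,Y).

```python
from math import log2

p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def H(pmf):
    return -sum(q * log2(q) for q in pmf.values() if q > 0)

px = {0: 0.5, 1: 0.5}            # marginals of the pmf above
py = {0: 0.5, 1: 0.5}
h_xy = H(p)
r_wx = h_xy - H(py)              # target rate for W_X,  eq. (11): H(X|Y)
r_wy = h_xy - H(px)              # target rate for W_Y,  eq. (12): H(Y|X)
r_wc = H(px) + H(py) - h_xy      # joint target for W_CX, W_CY, eq. (13)

print(f"R_WX = {r_wx:.3f}, R_WY = {r_wy:.3f}, R_WCX + R_WCY = {r_wc:.3f}")
assert abs((r_wx + r_wy + r_wc) - h_xy) < 1e-12   # eq. (16)
```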
The common information is important in this model, as the sum of V_CX and V_CY represents a common information between the sources. The following theorem holds for this common information:

Theorem 1:

  (1/K)[H(V_CX) + H(V_CY)] = I(X;Y)    (20)

where V_CX is the common portion between X and Y produced by X^K, and V_CY is the common portion between X and Y produced by Y^K. It is noted that (20) holds asymptotically, and does not hold with equality when K is finite; here, we show the approximation when K is infinitely large. The private portions for X^K and Y^K are represented as V_X and V_Y respectively. As explained in Yamamoto's [1] Theorem 1, two types of common information exist (the first represented by I(X;Y), and the second by min(H(X^K), H(Y^K))). We develop part of this idea to show that the sum of the common information portions produced by X^K and Y^K in this new model is represented by the mutual information between the sources.

Proof of Theorem 1: The first part is to prove that H(V_CX) + H(V_CY) ≥ I(X;Y), which is done as follows. We weaken the conditions (1) and (2) to

  Pr_XY{(X^K, Y^K) ≠ G(V_X, V_Y, V_CX, V_CY)} ≤ δ    (21)

For any (V_X, V_Y, V_CX, V_CY) ∈ C(3ε_0) (which can be seen from (16)), we have from (21) and the ordinary source coding theorem that

  (1/K) H(X^K, Y^K) − δ_1 ≤ (1/K) H(V_X, V_Y, V_CX, V_CY)
      ≤ (1/K)[H(V_X) + H(V_Y) + H(V_CX) + H(V_CY)]    (22)

where δ_1 → 0 as δ → 0. From Lemma 1,

  (1/K) H(V_Y | X^K) ≥ (1/K) H(V_Y) − δ    (23)

  (1/K) H(V_X | Y^K) ≥ (1/K) H(V_X) − δ    (24)

From (22) - (24),

  (1/K)[H(V_CX) + H(V_CY)] ≥ H(X,Y) − (1/K) H(V_X) − (1/K) H(V_Y) − δ_1
      ≥ H(X,Y) − (1/K) H(V_X | Y^K) − (1/K) H(V_Y | X^K) − δ_1 − 2δ    (25)

On the other hand, we can see that

  (1/K) H(X^K, V_Y) ≤ H(X,Y) + δ    (26)

This implies that

  (1/K) H(V_Y | X^K) ≤ H(Y|X) + δ    (27)

and

  (1/K) H(V_X | Y^K) ≤ H(X|Y) + δ    (28)

From (25), (27) and (28) we get

  (1/K)[H(V_CX) + H(V_CY)] ≥ H(X,Y) − H(X|Y) − H(Y|X) − δ_1 − 4δ
      = I(X;Y) − δ_1 − 4δ    (29)

It is possible to see from (13) that H(V_CX) + H(V_CY) ≤ I(X;Y). From this result, (19) and (29), and as δ → 0 and δ_1 → 0, it can be seen that

  (1/K)[H(V_CX) + H(V_CY)] = I(X;Y)    (30)

This model can cater for a scenario where a particular source, say X, needs to be more secure than Y (possibly because of eavesdropping on the X channel). In such a case, the (1/K) H(V_CX) term in (29) needs to be as high as possible. When this uncertainty is increased, the security of X is increased. Another security measure that this model incorporates is that X cannot be determined by wiretapping only X's link.

III. INFORMATION LEAKAGE

In order to determine the security of the system, a measure for the amount of information leaked has been developed. This is a new notation and quantification, which emphasizes the novelty of this work. The obtained information and total uncertainty are used to determine the leaked information. Information leakage is indicated by L^P_Q, where P indicates the source(s) for which information leakage is being quantified, P = {S_1, ..., S_n} with n the number of sources (in this case, n = 2), and Q indicates the syndrome portions that have been wiretapped, Q = {V_1, ..., V_m} with m the number of codewords (in this case, m = 4).

The information leakage bounds are as follows:

  L^{X^K}_{V_X, V_Y} ≤ H(X^K) − H(V_CX) − H(V_CY) + δ    (31)

  L^{X^K}_{V_CX, V_CY} ≤ H(X^K) − H(V_X) − H(V_CY) + δ    (32)

  L^{X^K}_{V_CX, V_CY, V_Y} ≤ H(X^K) − H(V_X) − H(V_CY) + δ    (33)

  H(V_CY) − δ ≤ L^{X^K}_{V_Y, V_CY} ≤ H(X^K) − H(V_CX) − H(V_X) + δ    (34)

Here, V_Y is the private information of source Y^K and is independent of X^K; it therefore does not leak any information about X^K, as shown in (32) and (33).
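These leakage ceilings can be evaluated for a concrete source. The sketch below is illustrative rather than taken from the paper: it assumes the idealized per-symbol portion entropies used in the earlier sketches, with an even split of I(X;Y) between V_CX and V_CY, and prints the per-symbol bounds of (31) - (34).

```python
from math import log2

def H(pmf):
    return -sum(q * log2(q) for q in pmf.values() if q > 0)

p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
h_xy = H(p)
h_x = h_y = 1.0                        # uniform marginals of this pmf
i_xy = h_x + h_y - h_xy
h_vx, h_vy = h_xy - h_y, h_xy - h_x    # idealized H(V_X)/K, H(V_Y)/K
h_vcx = h_vcy = i_xy / 2               # assumed even split of I(X;Y)

bounds = {
    "(31) wiretap V_X, V_Y":         h_x - h_vcx - h_vcy,
    "(32) wiretap V_CX, V_CY":       h_x - h_vx - h_vcy,
    "(33) wiretap V_CX, V_CY, V_Y":  h_x - h_vx - h_vcy,
    "(34) wiretap V_Y, V_CY (max)":  h_x - h_vcx - h_vx,
}
for case, b in bounds.items():
    print(f"{case}: leakage about X <= {b:.3f} bits/symbol")
print(f"(34) minimum leakage >= {h_vcy:.3f} bits/symbol")
```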
Equation (34) gives an indication of the minimum and maximum amount of leaked information for the interesting case where a syndrome has been wiretapped and its information leakage on the alternate source is quantified. The outstanding common information component is the maximum information that can be leaked. For this case, the common information V_CX and V_CY can thus carry added protection to reduce the amount of information leaked. The bounds developed in (31) - (34) are proven in the next section.

The proofs for the above-mentioned information leakage inequalities are now detailed. First, the inequalities in (6) - (9) will be proven, so as to show that the information leakage equations hold.

Lemma 2: The code (V_X, V_CX, V_CY, V_Y) defined at the beginning of Section II, describing the model, together with (1) - (5), satisfies (6) - (9). The information leakage bounds are then given by (31) - (34).

Proof for (6):

  (1/K) H(X^K | V_X, V_Y)
    = (1/K)[H(X^K, V_X, V_Y) − H(V_X, V_Y)]
    = (1/K)[H(X^K, V_Y) − H(V_X, V_Y)]    (35)
    = (1/K)[H(X^K | V_Y) + I(X^K; V_Y) + H(V_Y | X^K)]
      − (1/K)[H(V_X | V_Y) + I(V_X; V_Y) + H(V_Y | V_X)]
    = (1/K)[H(X^K | V_Y) + H(V_Y | X^K) − H(V_X | V_Y) − H(V_Y | V_X)]
    = (1/K)[H(X^K) + H(V_Y) − H(V_X) − H(V_Y)]    (36)
    = (1/K)[H(X^K) − H(V_X)]
    ≥ (1/K)[H(V_X) + H(V_CX) + H(V_CY) − H(V_X)] − δ
    = (1/K)[H(V_CX) + H(V_CY)] − δ    (37)

where (35) holds because V_X is a function of X^K, and (36) holds because X^K is independent of V_Y asymptotically and V_X is independent of V_Y asymptotically.

For the proofs of (7) and (8), the following simplification for H(X^K | V_CY) is used:

  (1/K) H(X^K | V_CY)
    = (1/K)[H(X^K, V_CY) − H(V_CY)]
    = (1/K)[H(X^K) + H(V_CY) − I(X; V_CY) − H(V_CY)]
    = (1/K)[H(X^K) + H(V_CY) − H(V_CY) − H(V_CY)] + δ_1    (38)
    = (1/K)[H(X^K) − H(V_CY)] + δ_1    (39)

where the fact that I(X; V_CY) is approximately equal to H(V_CY) in (38) can be seen intuitively from the Venn diagram in Figure 3. Since it is an approximation, δ_1, which is smaller than the δ in the proofs below, has been added to cater for the tolerance.

Proof for (7):

  (1/K) H(X^K | V_CX, V_CY)
    = (1/K)[H(X^K, V_CX, V_CY) − H(V_CX, V_CY)]
    = (1/K)[H(X^K, V_CY) − H(V_CX, V_CY)]    (40)
    = (1/K)[H(X^K) − H(V_CY) + I(X; V_CY) + H(V_CY | X^K)]
      − (1/K)[H(V_CX | V_CY) + I(V_CX; V_CY) + H(V_CY | V_CX)] + δ_1
    = (1/K)[H(X^K) − H(V_CY) + H(V_CY) − H(V_CX) − H(V_CY)] + δ_1    (41)
    = (1/K)[H(X^K) − H(V_CY) − H(V_CX)] + δ_1
    ≥ (1/K)[H(V_X) + H(V_CX) + H(V_CY) − H(V_CY) − H(V_CX)] − δ
    = (1/K) H(V_X) + δ_1 − δ    (42)

where (40) holds because V_CX is a function of X^K, and (41) holds because X^K is independent of V_CY asymptotically and V_CX is independent of V_CY asymptotically.

The proof for H(X^K | V_CX, V_CY, V_Y) is similar to that for H(X^K | V_CX, V_CY), because V_Y is independent of X^K.

Proof for (8):

  (1/K) H(X^K | V_CX, V_CY, V_Y)
    = (1/K) H(X^K | V_CX, V_CY)    (43)
    = (1/K)[H(X^K, V_CX, V_CY) − H(V_CX, V_CY)]
    = (1/K)[H(X^K, V_CY) − H(V_CX, V_CY)]    (44)
    = (1/K)[H(X^K) − H(V_CY) + I(X; V_CY) + H(V_CY | X^K)]
      − (1/K)[H(V_CX | V_CY) + I(V_CX; V_CY) + H(V_CY | V_CX)] + δ_1
    = (1/K)[H(X^K) − H(V_CY) + H(V_CY) − H(V_CX) − H(V_CY)] + δ_1    (45)
    = (1/K)[H(X^K) − H(V_CY) − H(V_CX)] + δ_1
    ≥ (1/K)[H(V_X) + H(V_CX) + H(V_CY) − H(V_CY) − H(V_CX)] − δ + δ_1
    = (1/K) H(V_X) − δ + δ_1    (46)

where (43) holds because V_Y and X^K are independent, (44) holds because V_CX is a function of X^K, and (45) holds because X^K is independent of V_CY asymptotically and V_CX is independent of V_CY asymptotically.
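Every step in these derivations is a chain-rule decomposition of entropy. As a generic spot check (not tied to the paper's specific code construction), the helper below verifies H(X|V) = H(X,V) − H(V), the identity applied repeatedly in (35) - (46), on a randomly drawn joint pmf.

```python
import random
from math import log2

random.seed(1)
# Random joint pmf over (x, v) with x, v in {0, 1, 2}.
w = [[random.random() for _ in range(3)] for _ in range(3)]
s = sum(map(sum, w))
p = {(x, v): w[x][v] / s for x in range(3) for v in range(3)}

H = lambda pmf: -sum(q * log2(q) for q in pmf.values() if q > 0)
pv = {v: sum(p[x, v] for x in range(3)) for v in range(3)}

h_cond = -sum(q * log2(q / pv[v]) for (x, v), q in p.items())  # H(X|V)
assert abs(h_cond - (H(p) - H(pv))) < 1e-9   # chain rule: H(X,V) - H(V)
print("chain rule H(X|V) = H(X,V) - H(V) verified")
```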
From (52): = [H(XK,V ,V )−H(V ,V )]+δ CY Y CY Y K 1 = K[H(XK,VY)−H(VCY,VY)]+δ (50) LXVYK,VCY ≥ H(XK)−[H(X)−H(VCY)+δ] 1 = [H(XK|VY)+I(XK;VY)+H(VY|XK)] ≥ H(VCY)−δ (59) K 1 and from (55): − [H(V |V )+I(V ;V )+H(V |V )]+δ CY Y CY Y Y CY K 1 LXK ≤ H(XK)−(H(V )+H(V )−δ) = [H(XK)+H(VY)−H(VCY)−H(VY)]+δ (51) VY,VCY X CX K ≤ H(XK)−H(V )−H(V )+δ (60) 1 X CX = [H(XK)−H(V )]+δ (52) CY K Combining these results from (59) and (60) gives (34). where (49) holds from (48), (50) holds because V and CY V are asymptotically independent. Furthermore, (51) holds beYcauseV andV areasymptoticallyindependentandXK IV. SHANNON’SCIPHERSYSTEM CY Y and V are asymptotically independent. Here,wediscussShannon’sciphersystemfortwoindepen- Y Following a similar proof to those done above in this dentcorrelatedsources(depictedinFigure4).Thetwosource section, another bound for H(XK|VCY,VY) can be found as outputs are i.i.d random variables X and Y, taking on values follows: in the finite sets X and Y. Both the transmitter and receiver 1 haveaccesstothekey,arandomvariable,independentofXK H(XK|V ,V ) K CY Y and YK and taking values in I = {0,1,2,...,M −1}. Mk k = 1 [H(XK,V ,V )−H(V ,V )] The sources XK and YK compute the ciphertexts X(cid:48) and CY Y CY Y K Y(cid:48),whicharetheresultofspecificencryptionfunctionsonthe = 1 [H(XK,V )−H(V ,V )] (53) plaintextfromX andY respectively.Theencryptionfunctions K Y CY Y are invertible, thus knowing X(cid:48) and the key, XK can be 1 = [H(XK|V )+I(XK;V )+H(V |X)] retrieved. Y Y Y K Themutualinformationbetweentheplaintextandciphertext 1 − [H(V |V )+I(V ;V )+H(V |V )] should be small so that the wiretapper cannot gain much in- CY Y CY Y Y CY K formation about the plaintext. For perfect secrecy, this mutual 1 = [H(XK)+H(V )−H(V )−H(V )] (54) information should be zero, then the length of the key should Y CY Y K be at least the length of the plaintext. 1 = [H(XK)−H(V )] CY K XK YK 1 ≥ [H(V )+H(V )+H(V )−H(V )]−δ X CX CY CY K = 1 [H(V )+H(V )]−δ (55) k k X CX Encoder Encoder K where (53) and (54) hold for the same reason as (50) and (51) respectively. X0 Y0 Since we consider the information leakage as the total informationobtainedsubtractedfromthetotaluncertainty,the following hold for the four cases considered in this section: k Decoder LXK = H(XK)−H(XK|V ,V ) VX,VY X Y ≤ H(XK)−H(V )−H(V )+δ (56) CX CY XˆK,YˆK which proves (31). Figure4. Shannonciphersystemfortwocorrelatedsources LXK = H(XK)−H(XK|V ,V ) VCX,VCY CX CY The encoder functions for X and Y, (EX and EY respec- ≤ H(XK)−H(V )+δ (57) tively) are given as: X 7 EX :XK ×IMkX → IMX(cid:48) ={0,1,...,MX(cid:48) −1} 1 H(XK,YK|W1)≤hXY +(cid:15) (77) I ={0,1,...,M(cid:48) −1(}61) K MC(cid:48)X CX 1 EY :YK ×IMkY → IMY(cid:48) ={0,1,...,MY(cid:48) −1} KH(XK,YK|W2)≤hXY +(cid:15) (78) I ={0,1,...,M(cid:48) −1}(62) MC(cid:48)Y CY where RX is the the rate of source X’s channel and RY The decoder is defined as: is the the rate of source Y’s channel. Here, R is the rate kX of the key channel at XK and R is the rate of the key kY channel at YK. The security levels, which are measured by D :(I ,I ,I ,I ) × I ,I XY MX(cid:48) MY(cid:48) MC(cid:48)X MC(cid:48)Y MkX MkY the total and individual uncertainties are hXY and (hX,hY) → XK ×YK (63) respectively. The encoder and decoder mappings are below: The cases 1 - 5 are: W1 =FEX(XK,WkX) (64) Case 1: When TX and TY are leaked and both XK and YK need to be kept secret. Case 2: When T and T are leaked and XK needs to be X Y W =F (YK,W ) (65) kept secret. 
The encoder functions for X and Y (E_X and E_Y respectively) are given as:

  E_X : 𝒳^K × I_MkX → I_M′X = {0, 1, ..., M′_X − 1}, I_M′CX = {0, 1, ..., M′_CX − 1}    (61)

  E_Y : 𝒴^K × I_MkY → I_M′Y = {0, 1, ..., M′_Y − 1}, I_M′CY = {0, 1, ..., M′_CY − 1}    (62)

The decoder is defined as:

  D_XY : (I_M′X, I_M′Y, I_M′CX, I_M′CY) × (I_MkX, I_MkY) → 𝒳^K × 𝒴^K    (63)

The encoder and decoder mappings are:

  W_1 = F_EX(X^K, W_kX)    (64)

  W_2 = F_EY(Y^K, W_kY)    (65)

  X̂^K = F_DX(W_1, W_2, W_kX)    (66)

  Ŷ^K = F_DY(W_1, W_2, W_kY)    (67)

or

  (X̂^K, Ŷ^K) = F_DXY(W_1, W_2, W_kX, W_kY)    (68)

The following conditions should be satisfied for cases 1 - 4:

  (1/K) log M_X ≤ R_X + ε    (69)

  (1/K) log M_Y ≤ R_Y + ε    (70)

  (1/K) log M_kX ≤ R_kX + ε    (71)

  (1/K) log M_kY ≤ R_kY + ε    (72)

  Pr{X̂^K ≠ X^K} ≤ ε    (73)

  Pr{Ŷ^K ≠ Y^K} ≤ ε    (74)

  (1/K) H(X^K | W_1) ≤ h_X + ε    (75)

  (1/K) H(Y^K | W_2) ≤ h_Y + ε    (76)

  (1/K) H(X^K, Y^K | W_1) ≤ h_XY + ε    (77)

  (1/K) H(X^K, Y^K | W_2) ≤ h_XY + ε    (78)

where R_X is the rate of source X's channel and R_Y is the rate of source Y's channel. Here, R_kX is the rate of the key channel at X^K and R_kY is the rate of the key channel at Y^K. The security levels, which are measured by the total and individual uncertainties, are h_XY and (h_X, h_Y) respectively.

The cases 1 - 5 are:

Case 1: when T_X and T_Y are leaked and both X^K and Y^K need to be kept secret.
Case 2: when T_X and T_Y are leaked and X^K needs to be kept secret.
Case 3: when T_X is leaked and both X^K and Y^K need to be kept secret.
Case 4: when T_X is leaked and Y^K needs to be kept secret.
Case 5: when T_X is leaked and X^K needs to be kept secret.

Here T_X is the syndrome produced by X, containing V_X and V_CX, and T_Y is the syndrome produced by Y, containing V_Y and V_CY.

The admissible rate region for each case is defined as follows:

Definition 1a: (R_X, R_Y, R_kX, R_kY, h_XY) is admissible for case 1 if there exists a code (F_EX, F_DXY) and (F_EY, F_DXY) such that (69) - (74) and (78) hold for any ε → 0 and sufficiently large K.

Definition 1b: (R_X, R_Y, R_kX, R_kY, h_X) is admissible for case 2 if there exists a code (F_EX, F_DXY) such that (69) - (75) hold for any ε → 0 and sufficiently large K.

Definition 1c: (R_X, R_Y, R_kX, R_kY, h_X, h_Y) is admissible for case 3 if there exists a code (F_EX, F_DXY) and (F_EY, F_DXY) such that (69) - (74) and (76), (78) hold for any ε → 0 and sufficiently large K.

Definition 1d: (R_X, R_Y, R_kX, R_kY, h_Y) is admissible for case 4 if there exists a code (F_EX, F_DXY) such that (69) - (74) and (76) hold for any ε → 0 and sufficiently large K.

Definition 1e: (R_X, R_Y, R_kX, R_kY, h_X) is admissible for case 5 if there exists a code (F_EX, F_DXY) such that (69) - (74) and (75) hold for any ε → 0 and sufficiently large K.

Definition 2: The admissible rate regions R_1, ..., R_5 are defined as:

  R_1(h_XY) = {(R_X, R_Y, R_kX, R_kY) :
      (R_X, R_Y, R_kX, R_kY, h_XY) is admissible for case 1}    (79)

  R_2(h_X) = {(R_X, R_Y, R_kX, R_kY) :
      (R_X, R_Y, R_kX, R_kY, h_X) is admissible for case 2}    (80)

  R_3(h_X, h_Y) = {(R_X, R_Y, R_kX, R_kY) :
      (R_X, R_Y, R_kX, R_kY, h_X, h_Y) is admissible for case 3}    (81)

  R_4(h_Y) = {(R_X, R_Y, R_kX, R_kY) :
      (R_X, R_Y, R_kX, R_kY, h_Y) is admissible for case 4}    (82)

  R_5(h_X) = {(R_X, R_Y, R_kX, R_kY) :
      (R_X, R_Y, R_kX, R_kY, h_X) is admissible for case 5}    (83)

Theorems for these regions have been developed:

Theorem 2: For 0 ≤ h_XY ≤ H(X,Y),

  R_1(h_XY) = {(R_X, R_Y, R_kX, R_kY) :
      R_X ≥ H(X|Y), R_Y ≥ H(Y|X),
      R_X + R_Y ≥ H(X,Y),
      R_kX ≥ h_XY and R_kY ≥ h_XY}    (84)

Theorem 3: For 0 ≤ h_X ≤ H(X),

  R_2(h_X) = {(R_X, R_Y, R_kX, R_kY) :
      R_X ≥ H(X|Y), R_Y ≥ H(Y|X),
      R_X + R_Y ≥ H(X,Y),
      R_kX ≥ h_X and R_kY ≥ h_Y}    (85)

Theorem 4: For 0 ≤ h_X ≤ H(X) and 0 ≤ h_Y ≤ H(Y),

  R_3(h_X, h_Y) = {(R_X, R_Y, R_kX, R_kY) :
      R_X ≥ H(X|Y), R_Y ≥ H(Y|X),
      R_X + R_Y ≥ H(X,Y),
      R_kX ≥ h_X and R_kY ≥ h_Y}    (86)

Theorem 5: For 0 ≤ h_X ≤ H(X),

  R_5(h_X) = {(R_X, R_Y, R_kX, R_kY) :
      R_X ≥ H(X|Y), R_Y ≥ H(Y|X),
      R_X + R_Y ≥ H(X,Y),
      R_kX ≥ h_X and R_kY ≥ 0}    (87)

When h_X = 0, case 5 reduces to the region depicted in (86). Hence, Corollary 1 follows:

Corollary 1: R_4(h_Y) = R_3(0, h_Y)

The security levels, which are measured by the total and individual uncertainties h_XY and (h_X, h_Y) respectively, give an indication of the level of uncertainty in knowing certain information. When the uncertainty increases, less information is known to an eavesdropper, and there is a higher level of security.
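Since the admissible regions of Theorems 2 - 5 are coordinate-wise constraints, membership is straightforward to test. The helper below is illustrative (the rate values are hypothetical); it encodes the R_1(h_XY) region of Theorem 2.

```python
def in_R1(R_X, R_Y, R_kX, R_kY, h_XY, H_X_given_Y, H_Y_given_X, H_XY):
    """Membership test for R_1(h_XY) of Theorem 2, 0 <= h_XY <= H_XY."""
    return (R_X >= H_X_given_Y and
            R_Y >= H_Y_given_X and
            R_X + R_Y >= H_XY and
            R_kX >= h_XY and
            R_kY >= h_XY)

# Hypothetical source: H(X|Y) = H(Y|X) = 0.5, H(X,Y) = 1.5 bits/symbol.
print(in_R1(0.8, 0.8, 1.5, 1.5, 1.5, 0.5, 0.5, 1.5))  # True
print(in_R1(0.8, 0.6, 1.5, 1.5, 1.5, 0.5, 0.5, 1.5))  # False: sum rate fails
```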
V. PROOF OF THEOREMS 2 - 5

This section initially proves the direct parts of Theorems 2 - 5 and thereafter the converse parts.

A. Direct parts

All the channel rates in the theorems above are in accordance with Slepian-Wolf's theorem, hence there is no need to prove them. We construct a code based on the prototype code (W_X, W_Y, W_CX, W_CY) in Lemma 1. In order to include a key in the prototype code, W_X is divided into two parts, as per the method used by Yamamoto [1]:

  W_X1 = W_X mod M_X1 ∈ I_MX1 = {0, 1, 2, ..., M_X1 − 1}    (88)

  W_X2 = (W_X − W_X1) / M_X1 ∈ I_MX2 = {0, 1, 2, ..., M_X2 − 1}    (89)

where M_X1 is a given integer and M_X2 is the ceiling of M_X / M_X1. Here M_X / M_X1 is considered an integer for simplicity, because the difference between the ceiling value and the actual value can be ignored when K is sufficiently large. In the same way, W_Y is divided:

  W_Y1 = W_Y mod M_Y1 ∈ I_MY1 = {0, 1, 2, ..., M_Y1 − 1}    (90)

  W_Y2 = (W_Y − W_Y1) / M_Y1 ∈ I_MY2 = {0, 1, 2, ..., M_Y2 − 1}    (91)

The common information components W_CX and W_CY are already portions and are not divided further. It can be shown that when some of the codewords are wiretapped, the uncertainties of X^K and Y^K are bounded as follows:

  (1/K) H(X^K | W_X2, W_Y) ≥ I(X;Y) + (1/K) log M_X1 − ε′_0    (92)

  (1/K) H(Y^K | W_X, W_Y2) ≥ I(X;Y) + (1/K) log M_Y1 − ε′_0    (93)

  (1/K) H(X^K | W_X, W_Y2) ≥ I(X;Y) − ε′_0    (94)

  (1/K) H(X^K | W_X, W_Y, W_CY) ≥ (1/K) log M_CX − ε′_0    (95)

  (1/K) H(Y^K | W_X, W_Y, W_CY) ≥ (1/K) log M_CX − ε′_0    (96)

  (1/K) H(X^K | W_Y, W_CY) ≥ H(X|Y) + (1/K) log M_CX − ε′_0    (97)

  (1/K) H(Y^K | W_Y, W_CY) ≥ (1/K) log M_CX − ε′_0    (98)

where ε′_0 → 0 as ε_0 → 0. The proofs for (92) - (98) are the same as per Yamamoto's [1] proof in Lemma A1.
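The codeword splits (88) - (91) are simply base-M_X1 (respectively base-M_Y1) digit decompositions, and are therefore invertible. The sketch below illustrates this with hypothetical index sizes, taking M_X a multiple of M_X1 so that the ceiling subtlety disappears.

```python
M_X, M_X1 = 24, 6             # hypothetical sizes with M_X1 dividing M_X
M_X2 = M_X // M_X1

for W_X in range(M_X):
    W_X1 = W_X % M_X1                  # eq. (88): part to be key-masked
    W_X2 = (W_X - W_X1) // M_X1        # eq. (89): remaining part
    assert 0 <= W_X1 < M_X1 and 0 <= W_X2 < M_X2
    assert W_X == W_X2 * M_X1 + W_X1   # the split is invertible
print("W_X <-> (W_X1, W_X2) is a bijection")
```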
The difference from Yamamoto's [1] proof is that W_CX, W_CY, M_CX and M_CY are there described as W_C1, W_C2, M_C1 and M_C2 respectively; here, we consider that W_CX and W_CY are represented by Yamamoto's W_C1 and W_C2 respectively. In addition, some further inequalities are considered here:

  (1/K) H(Y^K | W_X, W_CX, W_CY, W_Y2) ≥ (1/K) log M_Y1 − ε′_0    (99)

  (1/K) H(Y^K | W_X, W_CX, W_CY) ≥ (1/K) log M_Y1 + (1/K) log M_Y2 − ε′_0    (100)

  (1/K) H(X^K | W_X2, W_CY) ≥ (1/K) log M_X1 + (1/K) log M_CX − ε′_0    (101)

  (1/K) H(Y^K | W_X2, W_CY) ≥ (1/K) log M_Y1 + (1/K) log M_Y2 + (1/K) log M_CX − ε′_0    (102)

The inequalities (99) and (100) can be proved in the same way as Yamamoto's [1] Lemma A2, and (101) and (102) in the same way as Yamamoto's [1] Lemma A1.

For each proof we consider cases where a key already exists for either V_CX or V_CY, and the encrypted common information portion is then used to mask the other portions (either V_CX or V_CY, together with the private information portions). Two cases are considered for each: firstly, when the common information portion entropy is greater than the entropy of the portion that needs to be masked, and secondly, when the common information portion entropy is less than the entropy of the portion to be masked. For the latter case, a smaller key will need to be added so as to cover the portion entirely. This has the effect of reducing the required key length, which is explained in greater detail in Section VII.

Proof of Theorem 2: Suppose that (R_X, R_Y, R_kX, R_kY) ∈ R_1 for h_XY ≤ H(X,Y). Without loss of generality, we assume that h_X ≤ h_Y. Then, from (84):

  R_X ≥ H(X^K | Y^K)
  R_Y ≥ H(Y^K | X^K)
  R_X + R_Y ≥ H(X^K, Y^K)    (103)

  R_kX ≥ h_XY, R_kY ≥ h_XY    (104)

Assume a key exists for V_CY. For the first case, consider the following: H(V_CY) ≥ H(V_X), H(V_CY) ≥ H(V_Y) and H(V_CY) ≥ H(V_CX). Take

  M_CY = 2^{K h_XY}    (105)

The codewords W_X and W_Y and their keys W_kX and W_kY are now defined:

  W_X = (W_X1 ⊕ W_kCY, W_X2 ⊕ W_kCY, W_CX ⊕ W_kCY)    (106)

  W_Y = (W_Y1 ⊕ W_kCY, W_Y2 ⊕ W_kCY, W_CY)    (107)

  W_kY = (W_kCY)    (108)

where W_α ∈ I_Mα = {0, 1, ..., M_α − 1}. The wiretapper will not know W_X1, W_X2 and W_CX from W_X, nor W_Y1, W_Y2 and W_CY from W_Y, as these are protected by the key W_kCY.

In this case, R_X, R_Y, R_kX and R_kY satisfy, from (11) - (13) and (103) - (105):

  (1/K) log M_X + (1/K) log M_Y
    = (1/K)(log M_X1 + log M_X2 + log M_CX)
      + (1/K)(log M_Y1 + log M_Y2 + log M_CY)
    ≤ H(X|Y) + H(Y|X) + I(X;Y) + ε_0
    = H(X,Y)
    ≤ R_X + R_Y    (109)

  (1/K) log M_kX = (1/K) log M_CY = h_XY    (110)
    ≤ R_kX    (111)

where (110) comes from (105).

  (1/K) log M_kY = (1/K) log M_CY = h_XY    (112)
    ≤ R_kY    (113)

where (112) comes from (105).

The security levels thus result:

  (1/K) H(X^K | W_X, W_Y)
    = (1/K) H(X^K | W_X1 ⊕ W_kCY, W_X2 ⊕ W_kCY, W_CX ⊕ W_kCY,
                    W_Y1 ⊕ W_kCY, W_Y2 ⊕ W_kCY, W_CY)
    = (1/K) H(X^K)    (114)
    ≥ h_X − ε′_0    (115)

where (114) holds because W_X1, W_X2, W_CX, W_Y1 and W_Y2 are covered by the key W_kCY, and W_CY is covered by an existing random number key. Equations (10) - (16) imply that W_X1, W_X2, W_Y1 and W_Y2 have almost no redundancy and that they are mutually independent.
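The masking in (106) - (108) is mechanically a one-time-pad addition of W_kCY to each component, which the legitimate decoder strips off. The sketch below shows only this bookkeeping with hypothetical 4-bit values; it makes no claim about the scheme's security beyond invertibility.

```python
# Components of W_X and the masking key, as in eq. (106); toy values.
W_X1, W_X2, W_CX, W_kCY = 0b0110, 0b1011, 0b0001, 0b1100

W_X = tuple(w ^ W_kCY for w in (W_X1, W_X2, W_CX))   # masked codeword
recovered = tuple(w ^ W_kCY for w in W_X)            # decoder knows W_kCY

assert recovered == (W_X1, W_X2, W_CX)
print("masking with W_kCY is invertible at the legitimate decoder")
```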
logM = [logM +logM +logM ] CY CX kX k1 k2 k3 K K = logM +logM kCY 1 M =2KhY (132) + logM +logM CY kCY 2 + logM +logM ThecodewordsW andW andtheirkeysW andW kCY 3 X Y kX kY are now defined: = 3logM +logM kCY 1 + logM +logM 2 3 ≥ 3hXY −(cid:15)0 (122) WX =(WX1⊕WkCY,WX2⊕WkCY,WCX ⊕WkCY)(133) ≥ h (123) XY where (122) results from (105). W =(W ⊕W ,W ⊕W ,W ) (134) Y Y1 kCY Y2 kCY CY
