ebook img

No-signaling Quantum Key Distribution: Solution by Linear Programming PDF

0.48 MB·English
Save to my drive
Quick download
Download
Most books are stored in the elastic cloud where traffic is expensive. For this reason, we have a limit on daily download.

Preview No-signaling Quantum Key Distribution: Solution by Linear Programming

No-signaling Quantum Key Distribution: A Straightforward Approach Won-Young Hwang1,2 ∗, Joonwoo Bae3, and Nathan Killoran2 1Department of Physics Education, Chonnam National University, Gwangju 500-757, Republic of Korea 2 Institute for Quantum Computing and Department of Physics & Astronomy, University of Waterloo, Waterloo N2L 3G1, Canada 3 Centre for Quantum Technologies, National University of Singapore, Singapore 117542, Singapore We outline a straightforward approach for obtaining a secret key rate using only no-signaling constraints. Assuming an individual attack, we consider all possible joint probabilities. First we 2 suppose temporarily that Eve (an eavesdropper) is restricted to binary outcomes. We impose con- 1 straints due to the no-signaling principle and given measurement outcomes. Within the remaining 0 spaceofjointprobabilities,byusingconvexoptimization,wefindthemaximummutualinformation 2 Imax(2) between Bob (a user) and a binary-restricted Eve. Then, by considering a certain coarse BE b grainingmapping,weshowhowtogetaboundonImax,themaximalmutualinformationbetween BE e Bob and Eve, whose number of outcomes is not restricted, from the quantity Imax(2). Using the BE F Csisza´r-Ko¨rner formula and the calculated bound, we obtain the key generation rate. 7 1 PACSnumbers: 03.67.Dd ] h I. INTRODUCTION protocol [8] with decoys [9] for source imperfection and p squashing [10] for detector imperfection obtains security. - t (Seealsoarecentinnovativeproposalforauser-detector- n A nonlocal realistic model, the de Broglie-Bohm the- free protocol to get security with high loss [11].) a ory, is not only consistent with quantum theory but u also coherently describes measurement processes includ- In Refs. [6, 7], the security of no-signaling QKD is q ingwave-functioncollapse[1]. Thisraisesaquestionifall elegantly analyzed, in the case of individual attacks, [ realistic models must be nonlocal to be consistent with by using known properties of non-signaling correlations 1 quantum theory, which led to the discovery of Bell’s in- [12, 13]. In this paper, we take a straightforward ap- v equality [2, 3]. proachtono-signalingQKDwithuseofconvexoptimiza- 2 tion. We consider a protocol proposed by Ac´ın, Massar, 2 Recently, the nonlocality involved with Bell’s inequal- and Pironio (AMP) [7]. We analyze the security of the 8 ity and entanglement has entered a new phase of its de- AMPprotocol[14]inthecaseofanindividualattack. In 3 velopment. Itturnedoutthatentanglementisaconcrete order to make the problem feasible, first we make an as- . physical resource for information processing [4]. In the 2 sumptionthatEve(aneavesdropper)isrestrictedtohave 0 same context, interestingly, it was found that with non- onlybinaryoutcomes. Thenweconsiderallpossiblejoint 2 local correlations we can generate a cryptographic key, a probabilitiescompatiblewithagivensetofmeasurement 1 privaterandomsharedsequence,whosesecurityrelieson outcomesandtheno-signalingconstraint. Withinthere- v: only the no-signaling principle [5–7]. For this, no quan- mainingspaceofjointprobabilities,wemaximizethemu- i tumtheoryisusedforthesecurityanalysis. However,the X only currently available way to realize nonlocal correla- tual information between Bob (a user) and Eve, IBE(2), by convex optimization, to obtain Imax(2). Here 2 de- r tions is by using quantum entanglement. So these pro- BE a notes the number of Eve’s outcome. On the other hand, tocols are called no-signaling quantum key distribution we prove two Propositions involving coarse graining of (QKD).Remarkably,whatisusedtoshowsecurityinno- Eve’s outcome. Using these Propositions, we derive a signalingQKDisonlytheoutcomesofmeasurements. As bound on the maximal mutual information, Imax, be- longastheoutcomessatisfyacertaincondition, security BE tween Bob and Eve (whose number of outcomes is un- is provided, no matter how the outcomes are generated. restricted) from the quantity I (2). Then a key gen- Thus,no-signalingQKDhasdevice-independentsecurity. BE eration rate K is obtained by using the Csisz´ar-K¨orner Tosatisfythesecuritycondition,detectorefficiencymust formula [16] in our case, K = I −I . Here I is be much higher than what is currently achievable. If de- AB BE AB the mutual information between Alice and Bob. tectors with that high efficiency are available, however, it is also possible that the Bennett-Brassard 84 QKD The key generation rate obtained here is lower than the one in Ref. [7]. This is because information loss is inevitable in the coarse graining process. However, there can be other problems for which the straightforward ap- ∗Email: [email protected] proach is more feasible. As well, our method to derive a 2 boundonImax fromImax(2)(withsomelossofinforma- shouldbearbitraryinprinciple. Letusassume,however, BE BE tion)isapplicabletoothercases. Forexample, usingthe that Eve’s outcome is binary for now. value of Imax(2) obtained in Ref. [17] from monogamy Let us write constraints for the joint probabilities. BE properties,aboundonImaxcanbeimmediatelyobtained First, they satisfy normalization BE by our method (what can be obtained from Eve’s guess- (cid:88) ing probability on Bob’s outcome is not I but I (2) P(a,b,e|x,y)=1 (2) BE BE [18]). a,b,e foreachx,y. Letusdenotethemarginaldistributionfor II. Imax(2) BY CONVEX OPTIMIZATION Alice and Bob, (cid:80) P(a,b,e|x,y), by P(a,b,(cid:52)|x,y). BE e The marginal distributions corresponding to the state A. AMP protocol in Eq. (1) should be consistent with the measurement outcomes. For the measurement basis choice (x=0,y = 0) we have Twousers,AliceandBob,attempttodistributeaBell √ state, |φ+(cid:105)=(1/ 2)(|0(cid:105) |0(cid:105) +|1(cid:105) |1(cid:105) ), where A and A B A B p 1−p B denote Alice and Bob, respectively, and |0(cid:105) and |1(cid:105) P(0,0,(cid:52)|0,0) = P(1,1,(cid:52)|0,0)= + , 2 4 compose an orthonormal basis of a quantum bit (qubit). 1−p To mimic a realistic case with channel noise, we assume P(0,1,(cid:52)|0,0) = P(1,0,(cid:52)|0,0)= . (3) 4 the Bell state was transformed to a Werner state I For (x=0,y =1), where there is no correlation, ρ=p|φ+(cid:105)(cid:104)φ+|+(1−p) , (1) 4 1 P(a,b,(cid:52)|0,1)= (4) where 0 ≤ p ≤ 1. Although we use the Werner state to 4 model potential data, our method does not rely on this. For each copy of the distributed state, Alice chooses the for each a and b. For (x=1,y =0), (x=1,y =1), and value of an index x among 0,1, and 2 with probabili- (x=2,y =0), ties q, (1−q)/2, and (1−q)/2, respectively. Then she P(0,0,(cid:52)|x,y) = P(1,1,(cid:52)|x,y) performs a measurement M on her qubit. M is a mea- x 0 surement composed of the projections {|+(cid:105)(cid:104)+|,|−(cid:105)(cid:104)−|} p 1−p √ = 0.854 + ≡α where |±(cid:105) = (1/ 2)(|0(cid:105) ± |1(cid:105)). M and M are mea- 2 4 1 2 surements composed of {|π/4(cid:105)(cid:104)π/4|,|5π/4(cid:105)(cid:104)5π/4|} and P(0,1,(cid:52)|x,y) = P(1,0,(cid:52)|x,y) {|−π/4(cid:105)(cid:104)−π/4|,|−5π/4(cid:105)(cid:104)−5π/4|}, respectively. Here, p 1−p √ = 0.146 + ≡β, (5) |φ(cid:105)=(1/ 2)(|0(cid:105)+eiφ|1(cid:105))isastateobtainedbyrotating 2 4 the state |+(cid:105) around the z-axis by an angle φ. Bob also chooses a value of his index y for each copy, either 0 or where the two numerical values, 0.854 and 0.146, are ob- 1, with probabilities q(cid:48) and 1−q(cid:48), respectively. Then he tained from measurement outcomes for the Bell state. performsameasurementN onhisqubit. HereN =M For (x=2,y =1), y 0 0 andN iscomposedof{|π/2(cid:105)(cid:104)π/2|,|3π/2(cid:105)(cid:104)3π/2|}. Next, 1 P(0,0,(cid:52)|2,1)=P(1,1,(cid:52)|2,1) = β both Alice and Bob publicly announce their values x and y for each copy. Measurement outcomes in the case P(0,1,(cid:52)|2,1)=P(1,0,(cid:52)|2,1) = α. (6) x = y = 0 are kept and used to generate the key. Out- comes from other cases are publicly announced to esti- Now we consider no-signaling conditions. Because the mate Eve’s information. Alice and Bob choose q and q(cid:48) marginaldistributionforAliceandEvemustbeindepen- closeto1sothatalmosteventsareinthecasex=y =0. dent of Bob’s basis choice, This does not affect the security in the asymptotic case P(a,(cid:52),e|x,0)=P(a,(cid:52),e|x,1) (7) we consider. foreachx. Hereweuseanotationformarginaldistribu- tions analogous to the previous one. Similarly, B. Constraints on the probability distributions P((cid:52),b,e|0,y)=P((cid:52),b,e|1,y)=P((cid:52),b,e|2,y) (8) We assume an individual attack in which Eve follows the same procedure for each instance. For each choice for each y. Another no-signaling constraint is that Eve’s of measurements x and y by Alice and Bob, there is a marginal distribution is independent of the basis choices jointprobabilityformeasurementoutcomesa,b,eforAl- of Alice and Bob, ice, Bob, and Eve, respectively. The joint probability for a,b,e, conditioned on measurements x and y is denoted P((cid:52),(cid:52),e|x,y)=P((cid:52),(cid:52),e|0,0) (9) by P(a,b,e|x,y). Here, a and b are binary variables ac- cordingtotheprotocol. ThenumberofEve’soutcomese for each x,y. 3 1.1 Proposition-2: The maximal mutual information, Imax, is bounded by the sum of the maximal mutual 1 BE information in the binary case, Imax(2), and χ, namely BE 0.9 Imax ≤Imax(2)+χ. BE BE on 0.8 Proof: Suppose that ati nform 0.7 IBmEax >IBmEax(2)+χ. (10) ual i 0.6 Applying the prescribed coarse graining to the distribu- Mut 0.5 tion P(i,j) corresponding to Imax, we get a probabil- BE 0.4 IEBmax(2) ity distribution Q(i,j) which gives another value of the 0.3 Upper bound on IEBmIAaBx b(1in0)arayn-cdaPserompuotsuitaiolnin-1f,orwmeafitinodn,I(cid:48)IE(cid:48) B(2()2)>. IBmyaxin(2e)q.uTalhitiys 0.2 EB BE 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1 is a contradiction because Imax(2) is maximal. (cid:3) BE p FIG.1: Mutualinformationsdependingonthenoiseparame- IV. DISCUSSION AND CONCLUSION terp. PositivekeyispossibleintheregionwhereImax (dash- BE dotted line) is smaller than I (dashed line). This occurs AB for p (cid:38) 0.936. The maxI curve should actually go higher Using the Csisz´ar-K¨orner formula [16] and BE than1(itisequaltothesolidImax(2)curveoffsetbyχ),but Proposition-2, we can get a a lower bound on the BE wehavetruncateditat1forclarity. Thisdoesnotaffectany key generation rate statements about the key since I ≤1. AB K =I −Imax(2)−χ. (11) AB BE The region where we have non-zero K is 0.936 ≤ p ≤ 1. C. Maximal mutual information in binary case The maximal K, which is obtained in noiseless case, p= 1, is0.295. Thisissmallerthanthecorrespondingvalue, We maximize the mutual information, I (2), within 0.414, in Ref. [6]. This is because of information loss BE the constraints (2)-(9) by convex optimization (see Ap- in the coarse graining. We overestimated the amount of pendixA).AswecanseeinFig. 1,intheregimep< √1 informationthatEvelosesinthecoarsegraining. Bythe 2 same reasoning, the minimal value of p for non-zero K, where the Werner state admits a local realistic model, EvehasfullinformationaboutBob,namelyImax(2)=1, 0.936,isgreaterthanthecorrespondingthresholdinRef. BE [6]. so there can be no secret key. However, in the regime where √1 ≤p≤1,Eve’sinformationisrestricted. When However, let us suppose that Eve is restricted to have 2 onlybinaryoutcomes. ThenwehaveK =I −Imax(2) p=1, Imax(2)(cid:39)0.383 and I is equal to 1. AB BE BE AB which gives a maximal K =0.617. This exceeds the cor- responding value in Ref. [6]. This is not strange because here Eve is restricted. Users can get more key for re- III. DERIVATION OF A BOUND ON IBmEax stricted Eve than for unrestricted Eve. FROM Imax(2) BE Tosummarize,wetookastraightforwardapproachtoa no-signaling QKD protocol, the AMP protocol. Assum- Let us consider a joint probability, P(i,j), between ing an individual attack, we consider all possible joint two parties. Here i is a binary random variable but j ∈ probabilities. Here we supposed temporarily that Eve’s {0,1,...,N}, where N is arbitrary. We consider a coarse strategy is binary to reduce the dimension of the space grainingmappingforj inwhichthenumberofoutcomes, ofjointprobabilities. We imposedconstraints dueto the N, is reduced to 2. The joint probability after coarse no-signalingprincipleandmeasurementoutcomes,which graining is denoted by Q(i,j). Let us denote the mutual reducethedimensionofthespace. Byusingconvexopti- informationduetoP(i,j)beforecoarsegrainingbyI(N), mization,wemaximizedEve’sinformationonBobwithin and denote the mutual information due to Q(i,j) after theremainingspace,toobtainImax(2). Thenbyconsid- BE coarse graining by I(2). The coarse graining reduces the ering a coarse graining mapping, we showed how to get mutual information. We set the amount of reduction to a bound on the mutual information between an unre- be ∆≡I(N)−I(2). stricted Eve and Bob, Imax, from the quantity Imax(2). BE BE Proposition-1: Foraparticularprescribedcoarsegrain- UsingtheCsisz´ar-K¨ornerformula[16]andthebound,we ing,theinformationloss∆isboundedbyacertainquan- got a non-zero key generation rate. tityχ(cid:39)0.322,namely∆≤χ(adescriptionofthechosen coarse graining and the proof of the bound are given in Appendix B) Acknowledgement Because N is arbitrary, we can say that Imax(N) = BE Imax. Now, using Proposition-1, we can get a bound for This study was supported by Basic Science Research BE Imax. Program through the National Research Foundation of BE 4 Korea (NRF) funded by the Ministry of Education, Sci- ence and Technology (2010-0007208), and by National Research Foundation and Ministry of Education, Singa- pore. NK acknowledges the Ontario Graduate Scholar- ship program for support. Appendix A: Maximizing I (2) BE Herewemaximizethemutualinformationforabinary- restricted Eve, I (2), within these constraints (2)- BE (9) by convex optimization. For visual convenience, P(a,b,e|x,y) are denoted as: FIG. 2: I (2) is a convex function with respect to the vari- BE P(a,b,e|0,0) = xabe, P(a,b,e|0,1)=yabe, ables a≡x0+x4 and b≡x2+x6 (upper surface). As well, the given constraints define a convex region (lower surface). P(a,b,e|1,0) = z , P(a,b,e|1,1)=u , abe abe P(a,b,e|2,0) = v , P(a,b,e|2,1)=w . (A1) abe abe where j =0,1,2,3. We can also see that, by Eqs. (A2)- We regard abe as a binary number, for example, (A5), the case when j = 0,2 implies the case when j = P(1,0,1|0,0)=x =x . 101 5 1,3, respectively. Thus the latter cases can be removed. Now let us rewrite the constraints regarding measure- We can verify that Eqs. (A6) and (A7) (or equivalently, ment outcomes. For Eqs. (3) and (4), we have, respec- Eqs. (7) and (8)) lead to Eq. (9), which can thus be tively, removed. As a result, we can remove all variables B i p 1−p where B =x,y,z,u,v,w and i is an odd number. x +x = x +x = + , 0 1 6 7 2 4 Therefore, by non-negativity of each quantity, the 1−p space in which we optimize I (2) is as follows: x +x = x +x = , (A2) BE 2 3 4 5 4 p 1−p 1−p and 0≤x ≤ + , 0≤x ≤ , (A8) k 2 4 l 4 1 y +y =y +y =y +y =y +y = . (A3) 0 1 2 3 4 5 6 7 4 where k =0,6 and l=2,4, For Eq. (5), we have 1 0≤y ≤ , (A9) A +A = A +A =α j 4 0 1 6 7 A +A = A +A =β, (A4) where j=0,2,4,6, and 2 3 4 5 where A=z,u,v. For Eqs. (6), we have 0≤A ≤α, 0≤A ≤β, k l w +w = w +w =β 0≤w ≤β, 0≤w ≤α, (A10) 0 1 6 7 k l w +w = w +w =α. (A5) 2 3 4 5 where A = z,u,v and k = 0,6 and l = 2,4. These We can see that Eqs. (A2)-(A5) make the normalization constraints are those that remain in Eqs. (A6) and (A7) in Eq. (2) satisfied. Thus the normalization condition after removing odd numbered variables. can be removed. To find Imax(2), we use linear programming. Of BE Theno-signalingconditioninEq. (7)canbeexpressed course, the function I (2) is not linear, so we cannot BE as use it directly as the objective function in a linear pro- gram. However, we will show that I (2) is a convex x +x = y +y , BE i i+2 i i+2 function in the relevant variables (see Fig. 2), which will zi+zi+2 = ui+ui+2, allow us to calculate its maximum with the assistance v +v = w +w , (A6) of a linear program. Now, because the key is generated i i+2 i i+2 only from the results where x = y = 0, we need to cal- where i = 0,1 and 4,5. We can see that, by Eqs. (A2)- culate the mutual information of the joint distribution (A5), the case when i = 0,4 implies the case when 1,5, P(∆,b,e|0,0) ≡ R(b,e). It is useful to rewrite the mu- respectively. Thus the latter cases can be removed. The tualinformationI (2)asafunctionofEve’sconditional no-signaling condition in Eq. (8) can be expressed as BE distribution. Forthis,Bob’smarginaldistribution,which x +x = z +y , is fixed by the measurement outcomes, will be denoted j j+4 j j+4 as R(b)≡R(b,∆)= 1 (b=0,1). Let zj +zj+4 = vj +vj+4, 2 yj +yj+4 = uj +uj+4, R(b,e) R(e|b)= =2R(b,e) (A11) uj +uj+4 = wj +wj+4, (A7) R(b) 5 denote the distribution of Eve’s outcomes conditioned of C is equally valid. To this end, we consider the corre- on Bob’s outcomes, all in the 0 basis (this is different lation function fromelsewhere,whereconditioningreferstodifferentba- sis choices). Then the mutual information is given by E(a,b)=a−b. (A16) (cid:16) (cid:17) I (2) =(cid:80) R(b)R(e|b)log R(e|b) Since this function is linear, it is suitable to use as the BE b,e 2 (cid:80)bR(b)R(e|b) objective function of a linear program. Specifically, for (cid:16) (cid:17) = 1(cid:80) R(e|b)log R(e|b) (A12) fixed value of the noise parameter p, we perform the fol- 2 b,e 2 1(cid:80) R(e|b) 2 b lowing optimization: We note the following fundamental property of the (cid:40) mutual information: when a marginal is fixed, the re- max E(a,b) Emax = (A17) maining function (Eq. (A12)) is convex with respect to subject to (a,b)∈C. the conditional distribution R(e|b) [19]. Since the con- ditional probabilities R(e|b) and the joint probabilities Because E is convex, it also achieves its maximum at R(b,e) are related by a simple linear rescaling, the mu- someextremepointofC. Ingeneral,weshouldnotexpect tual information IBE(2) is therefore a convex function IBE(2)andEtoachievetheirmaximalvaluesatthesame of the joint probabilities R(b,e). Furthermore, we can extreme points. However, by using the simpler function reducethenumberofindependentvariablesinthedistri- E, we can deduce that the set C has a very simple set butionR(b,e)fromfourtotwo. FromEq. (A2),wehave of extreme points. Indeed, modulo symmetry, there is the following constraints: only one other extreme point to find. The above linear program yields the following extreme point, for which E R(0|0)+R(1|0) =2(x +x +x +x )=1 0 1 4 5 achieves its maximum: R(0|1)+R(1|1) =2(x +x +x +x )=1 (A13) 2 3 6 7 (cid:40) (1,0) p∈[0,√1 ) Thiscorrespondstotakingatwo-dimensionallinearslice (a∗,b∗)= (12− √p ,0) p∈[√1 ,21] . (A18) through the ostensibly four variable function IBE(2). 2 2 We will choose the two independent variables to be a ≡ x +x (representing correlation) and b ≡ x +x We conclude that when p ∈ [0,√1 ), the set C is simply 0 4 2 6 2 (representinganticorrelation). Thefinalformforthemu- thesquare[0,1]×[0,1]. ThemaximumofI (2)inthis 2 2 BE tual information is case occurs at (1,0) and is exactly 1. This corresponds 2 to the parameter regime where the Werner state admits I (2)=H(B)+H(E)−H(B,E) (A14) BE a local realistic model [7]. From the symmetry of the set C, we conclude that with the Shannon entropies therearethreeothervalidextremepointsat(b ,a ),(1− ∗ ∗ 2 H(B) = 1 a ,1−b ),and(1−b ,1−a ). Aswell,thefunctionE is ∗ 2 ∗ 2 ∗ 2 ∗ constantalongthelineb=a−a ,whichjoinstheextreme H(E) = −[(a+b)log (a+b) ∗ 2 points (a ,b ) and (1 −b ,1 −a ) [20]. Finally, there +(1−(a+b))log (1−(a+b))] ∗ ∗ 2 ∗ 2 ∗ 2 cannot be any extreme points in the region b < a−a , ∗ H(B,E) = −[alog a+(1 −a)log (1 −a) since in this region E(a,b) > E(a ,b ) = Emax. By 2 2 2 2 ∗ ∗ +blog b+(1 −b)log (1 −b)]. (A15) symmetry, this holds as well for the region b > a+a∗. 2 2 2 2 Thus, weconcludethatC hasexactlytheextremepoints Thisisaconvexfunctionoverthedomain(a,b)∈[0,1]× that have already been listed. Since IBE(2)|(0,0) = 0, [0,12]. Moreover, it is symmetric under the transform2 a- IBmEax(2) is therefore found to be IBE(2)|(a∗,b∗). tions (a,b)↔(b,a) and (a,b)↔(1 −a,1 −b). 2 2 Next, we note that the constraints (A8)-(A10) define a convex set. We define C ∈ [0,1]×[0,1] as the pro- Appendix B: Proof of Proposition-1 2 2 jection of this set onto the (a,b)-plane. We notice that C is convex and is symmetric under the same transfor- In this appendix, we prove Proposition-1, namely that mations as I (2) given above. To maximize the convex the information loss ∆ can be bounded by considering BE function I (2) on the convex set C, we need only eval- a particular coarse graining. Let us consider conditional BE uate I (2) at the extreme points of C. Therefore, we probabilities P(0|j) and P(1|j) due to the joint proba- BE must find the extreme points of C. It is easy to see that bility P(i,j). We re-label j such that P(0|j)/P(1|j) ≥ choosing all even parameters (e.g. x ,v , etc.) to be 0 P(0|j+1)/P(1|j+1) for each j. If P(0|j)/P(1|j)≥1/2 2 6 satisfies the constraints. Therefore (0,0) ∈ C. By sym- for all j, then J is defined to be N. Otherwise, there metry, we must also have (1,1) ∈ C. These two points exists a J such that P(0|J)/P(1|J) ≥ 1/2 and 1/2 > 2 2 are clearly extreme points. To find the remaining ex- P(0|J+1)/P(1|J+1). LetP˜(i,j)denotethejointprob- treme points, we will use linear programming. We point abilitiesafterre-labeling. Thesejointprobabilitiescanbe outthatanyothermethodforfindingtheextremepoints writtenasP˜(i,j)=P˜(i|j)P˜(j),whereP˜(j)isamarginal 6 distribution for Bob. We will calculate the mutual infor- where q ≡ P˜(j)/Q(0) and r ≡ P˜(j)/Q(1), which have j j mation for this (N +1)-outcome distribution: the properties (cid:80)J q = 1 and (cid:80)N r = 1. We set j=0 j j=J+1 j P˜(0|j) ≡ m . Let us estimate the first quantity in Eq. I(N) = H(i)−H(i|j) j (B5). By concavity of H and Eq. (B3), it is clear that = H(i)−(cid:88)N H[P˜(0|j)]P˜(j), (B1) µ(cid:80)≡J HH[Q[m(0|]0q)]≥−0(cid:80). LJj=e0tHus[Pi˜n(t0r|ojd)]uqcje=g H(x[)(cid:80)=Jj=2x0.mTjqhje]n− j=0 j=0 j j 0 wherewehavewrittentheconditionalentropyasabinary entropy function H[q]≡−[qlog q+(1−q)log (1−q)]. 2 2 Now let us describe the prescription for the coarse graining. It is a mapping for j in which all j ≤ J are J J mapped to 0 and the others are mapped to 1. The joint (cid:88) (cid:88) µ = H[ m q ]− H[m ]q j j j j probabilities after the mapping are j=0 j=0 Q(0,0) = (cid:88)J P˜(0|j)P˜(j), + {−g0[(cid:88)J mjqj]+(cid:88)J g0[mj]qj} j=0 j=0 j=0 Q(1,0) = (cid:88)J P˜(1|j)P˜(j), = H[m¯0]−g0[m¯0]+(cid:88)J {−H[mj]+g0[mj]}qj, j=0 j=0 ≤ H[m¯ ]−g [m¯ ] N 0 0 0 Q(0,1) = (cid:88) P˜(0|j)P˜(j), ≤ max {H[x]−g [x]}≡χ(cid:39)0.322. (B6) 0 0≤x≤1 j=J+1 2 N Q(1,1) = (cid:88) P˜(1|j)P˜(j). (B2) j=J+1 Here we can calculate conditional probabilities We have used (cid:80)J m q ≡ m¯ , the linearity of g (x), j=0 j j 0 0 Q(i,j) and the bound −H[mj] + g0[mj] ≤ 0 (which can be Q(i|j)= , (B3) seen considering their graphs). Similarly, by introducing Q(j) g (x) = −2x+2 and (cid:80)N m r ≡ x¯ , we can estimate 1 j=J j j 1 where Q(j) is the marginal distribution for Eve after the the second quantity in Eq. (B5) which turns out to be coarse graining; that is, Q(0) = (cid:80)J P˜(j) and Q(1) = the same form as the first one. j=0 (cid:80)N P˜(j). Now we are ready to get j=J+1 By Eq. (B5) and these results, we get I(2) = H(i)−H(i|j) =H(i) − {H[Q(0|0)]Q(0)+H[Q(0|1)]Q(1)}.(B4) From Eqs. (B1) and (B4), we get ∆ = I(N)−I(2) ∆≤χQ(0)+χQ(1)=χ. (B7) J = {H[Q(0|0)]−(cid:88)H[P˜(0|j)]q }Q(0) j j=0 N +{H[Q(0|1)]− (cid:88) H[P˜(0|j)]r }Q(1),(B5) (cid:3) j j=J+1 [1] D. Bohm and B. Hiley, The Undivided Universe, (Rout- [4] M. A. Nielsen and I. L. Chuang, Quantum Computa- ledge, London, UK, 1993); introduced in Ref. [2]. tionandQuantumInformation,(CambridgeUniv.Press, [2] J. S. Bell, Speakable and Unspeakable in Quantum Me- Cambridge, U.K., 2000.) chanics, (Cambridge University Press, Cambridge, UK, [5] J. Barrett, L. Hardy, and A. Kent, Phys. Rev. Lett. 95, 1987). 010503 (2005). [3] J. S. Bell, Physics 1, 195 (1964), reprinted in Ref. [2]. [6] A.Ac´ın,N.Gisin,andL.Masanes,Phys.Rev.Lett.97, 7 010503 (2006). lated, point of view, we give a new name. [7] A. Ac´ın, S. Massar, and S. Pironio, New J. Phys. 8, 126 [15] A. K. Ekert, Phys. Rev. Lett. 67, 661 (1991). (2006). [16] I. Csisza´r and J. Ko¨rner, IEEE Trans. Inf. Theory 24, [8] C.H.BennettandG.Brassard,Proc.IEEEInt.Conf.on 339 (1978). Computers, Systems, and Signal Processing, Bangalore [17] M. Pawlowski, Phys. Rev. A 82, 032313 (2010). (IEEE, New York, 1984), p.175. [18] W.-Y. Hwang and O. Gittsovich, to appear in PRA. [9] W.-Y. Hwang, Phys. Rev. Lett. 91, 057901 (2003). [19] T.M.CoverandJ.A.Thomas,ElementsofInformation [10] X. Ma, T. Moroder, and N. Lu¨tkenhaus, quant- Theory, (John Wiley and Sons, Inc., 1991). ph/0812.4301. [20] Since E is constant on this line, it is possible that a lin- [11] H.-K. Lo, M. Curty, and B. Qi, quant-ph/1109.1473. earprogramwouldnumericallyfindthemaximumnotat [12] J.Barrett,N.Linden,S.Massar,S.Pironio,S.Popescu, the extreme points, but somewhere else along this line. and D. Roberts, Phys. Rev. A 71, 022101 (2005). For us, this was not a problem, since it is easily verified [13] N. S. Jones and L. Masanes, Phys. Rev. A 72, 052312 that the point (a ,b ) is extremal. In general, to avoid ∗ ∗ (2005). misidentifying extreme points, one should check the fea- [14] With respect to physical implementation, the protocol sibility of other points where E is equal to the found is almost the same as the Ekert protocol [15]. However, maximum. because security is analyzed with a different, though re-

See more

The list of books you might like

Most books are stored in the elastic cloud where traffic is expensive. For this reason, we have a limit on daily download.