Exponential Source/Channel Duality

Sergey Tridenski and Ram Zamir
EE - Systems Department, Tel Aviv University, Israel
Email: [email protected], [email protected]

(This research was supported in part by the Israel Academy of Science, grant #676/15.)

Abstract—We propose a source/channel duality in the exponential regime, where success/failure in source coding parallels error/correctness in channel coding, and a distortion constraint becomes a log-likelihood ratio (LLR) threshold. We establish this duality by first deriving exact exponents for lossy coding of a memoryless source P, at distortion D, for a general i.i.d. codebook distribution Q, for both encoding success (R < R(P,Q,D)) and failure (R > R(P,Q,D)). We then turn to maximum likelihood (ML) decoding over a memoryless channel P with an i.i.d. input Q, and show that if we substitute P = Q∘P, Q = Q, and D = 0 under the LLR distortion measure, then the exact exponents for decoding error (R < I(Q,P)) and strict correct-decoding (R > I(Q,P)) follow as special cases of the exponents for source encoding success/failure, respectively. Moreover, by letting the threshold D take general values, the exact random-coding exponents for erasure (D > 0) and list decoding (D < 0) under the simplified Forney decoder are obtained. Finally, we derive the exact random-coding exponent for Forney's optimum tradeoff erasure/list decoder, and show that in the erasure regime it coincides with Forney's lower bound and with the simplified decoder exponent.

Index Terms—erasure/list decoding, random coding exponents, correct-decoding exponent, source coding exponents.

I. INTRODUCTION

The aim of our study is to define an analogy between lossy source coding and coding through a noisy channel, which allows us to translate terms and results between the two branches. We consider an analogy in which encoding for sources corresponds to decoding for channels, encoding success translates to decoding error, and a source translates to a channel input-output pair. Channel coding, in the random coding setting with a fixed generating distribution Q, emerges as a special case of lossy source coding. Although other analogies may be possible, the proposed analogy requires minimal generalization on the source coding part. Generalization of the channel decoder, on the other hand, leads to a broader correspondence between the two branches, such that the correct-decoding event for channels (which becomes a rare event for a sufficiently large codebook) translates to encoding failure for sources.

In other works on source/channel duality, the rate-distortion function of a DMS P(x),

  R(P,D) = \min_{W(\hat{x}|x):\, \mathbb{E}[d(X,\hat{X})] \le D} I(X;\hat{X}),

and the capacity of a DMC P(y|x),

  C(P) = \max_{Q(x)} I(X;Y),

are related directly via the introduction of a cost constraint on the channel input [1], [2], [3], or via covering/packing duality [4]. In order to examine the similarities and the differences between the expressions for the RDF and the capacity more closely, let us rewrite them in a unified fashion, with the help of the Kullback-Leibler divergence D(\cdot\|\cdot), as follows:

  R(P,D) = \min_{Q(\hat{x})} \min_{W(\hat{x}|x):\, d(P \circ W) \le D} D(P \circ W \,\|\, P \times Q),   (1)

  C(P) = \max_{Q(\cdot)} \min_{W(\hat{x}|x,y):\, d((Q \circ P) \circ W) \le 0} D\big( (Q \circ P) \circ W \,\|\, (Q \circ P) \times Q \big),   (2)

where in the capacity case we use a particular distortion measure, defined by the LLR as

  d\big( (x,y), \hat{x} \big) \triangleq \ln \frac{P(y|x)}{P(y|\hat{x})}.   (3)

Note that the distortion constraint in the capacity case (2) is D = 0. More concisely, the expressions (1) and (2) may be written with the help of the function R(P,Q,D), defined as the inner min in (1), [5], which represents the rate in lossy source coding of a source P using an i.i.d. codebook Q under a distortion constraint D:

  R(P,D) = \min_{Q} R(P,Q,D),   (4)

  C(P) = \max_{Q} R(Q \circ P,\, Q,\, 0).   (5)

The expression for the capacity (2), or (5), follows from our new results; it can also be shown directly, by the method of Lagrange multipliers, that for the distortion measure (3), R(Q∘P, Q, 0) = I(Q∘P), where I(Q∘P) is the mutual information. Obviously, min_Q R(P,Q,D) and max_Q R(Q∘P, Q, 0) are two mathematically different problems, and it is difficult to relate one to the other. On the other hand, for a given Q, i.e. before the minimization/maximization over Q, the expression for channels in (2) is just a special case of the expression for sources in (1) or (4). Therefore, in this work we focus on the source/channel analogy for a given Q. In the rest of the paper, Q plays the role of the generating distribution of an i.i.d. random codebook.
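The identity R(Q∘P, Q, 0) = I(Q∘P) is easy to probe numerically. The inner minimization defining R(T,Q,D) is a convex program in W, whose Lagrange dual is the one-dimensional concave maximization max_{s≥0} { −sD − Σ_x T(x) ln Σ_x̂ Q(x̂) e^{−s d(x,x̂)} }; this parametric form is standard in rate-distortion theory and is assumed here without proof. The following Python sketch (not from the paper; channel, input distribution, and grid are arbitrary illustrative choices) evaluates this dual for the LLR distortion (3) at D = 0 and compares it with the mutual information:

```python
import numpy as np

def rate_function(T, Q, d, D, s_grid=np.linspace(0.0, 20.0, 4001)):
    # R(T,Q,D) via its (assumed) Lagrange dual:
    #   max_{s>=0} [ -s*D - sum_x T(x) * ln sum_xh Q(xh) * exp(-s*d(x,xh)) ].
    return max(-s * D - T @ np.log(np.exp(-s * d) @ Q) for s in s_grid)

# Arbitrary DMC P(y|x) and input distribution Q(x), for illustration only.
Q = np.array([0.5, 0.5])
P = np.array([[0.80, 0.10, 0.10],
              [0.05, 0.15, 0.80]])      # rows: x, columns: y

# The "source" of the analogy is the joint distribution Q∘P over pairs (x,y).
T = (Q[:, None] * P).ravel()
# LLR distortion (3): d((x,y), xh) = ln P(y|x) - ln P(y|xh).
d = np.array([[np.log(P[x, y]) - np.log(P[xh, y]) for xh in range(len(Q))]
              for x in range(len(Q)) for y in range(P.shape[1])])

Py = Q @ P                                    # output marginal
I = np.sum(Q[:, None] * P * np.log(P / Py))   # mutual information I(Q∘P)
print(rate_function(T, Q, d, 0.0), I)         # the two printed values agree
```

The maximizing multiplier turns out to be s = 1, at which the dual objective evaluates exactly to I(Q∘P).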
We extend the described analogy to the framework of random coding exponents. In our terminology, source encoding failure is the same as the source coding error defined in [6]. We derive asymptotically exact exponents of source encoding failure and success, which are similar in form to the best lossy source coding exponents given in [6] and [7, p. 158], respectively, but correspond to i.i.d. random code ensembles generated according to an arbitrary (not necessarily optimal) Q. Such ensembles prove to be useful for adaptive source and channel coding [5], [8].

Next, we modify the ML channel decoder with a threshold D. The resulting decoder can be identified as a simplified erasure/list decoder [9, eq. 11a], suggested by Forney as an approximation to his optimum tradeoff decoder [9, eq. 11]. Exact random coding exponents for the simplified decoder, via the source/channel analogy, then become corollaries of our source coding exponents, and Gallager's random coding error exponent of the ML decoder is obtained as the special case D = 0.

The fixed composition version of the random coding error exponent for the simplified decoder was derived recently in [10]. In comparison, the i.i.d. random coding error exponent, derived here, can be expressed similarly to Forney's random coding bound for the optimum tradeoff decoder [9, eq. 24], and therefore can be easily compared to it. We show that the exact i.i.d. random coding exponent of the simplified decoder coincides with Forney's lower bound on the random coding exponent [9, eq. 24] for T ≡ D ≥ 0. It follows that Forney's lower bound is tight for random codes, also with respect to the optimum tradeoff decoder, for T ≥ 0.

Finally, we derive an exact random coding exponent for Forney's optimum tradeoff decoder [9, eq. 11] for all values of the threshold T. The resulting expression is also similar to Forney's original random coding bound [9, eq. 24], and we show directly that they coincide for thresholds T ≥ 0. This proves a conjecture in [11], and also extends/improves the results in [12], [11] for list decoding.

In what follows, we present our results for sources, then translate them into results for channels. We discuss briefly the relation of the new channel exponents to the error/correct exponents of the ML decoder. Finally, we present our result for Forney's optimum tradeoff decoder. In the remainder of the paper, a selected proof is given; the rest of the proofs can be found in [13].

II. EXPONENTS FOR SOURCES

We assume that the source alphabet X = {x : P(x) > 0} and the reproduction alphabet X̂ = {x̂ : Q(x̂) > 0} are finite. Assume also an additive distortion measure, of arbitrary sign, d : X × X̂ → ℝ, such that the distortion in a source-reproduction pair (x, x̂) of length n is given by d(x, x̂) = Σ_{i=1}^n d(x_i, x̂_i). For an arbitrary distortion constraint D and a distribution T(x) over X, let us define the function

  R(T,Q,D) \triangleq \min_{W(\hat{x}|x):\, d(T \circ W) \le D} D(T \circ W \,\|\, T \times Q),   (6)

where d(T∘W) ≜ Σ_{x,x̂} T(x)W(x̂|x)d(x,x̂). If the set {W(x̂|x) : d(T∘W) ≤ D} is empty, then R(T,Q,D) ≜ +∞. For brevity, we also define the function

  E_0(s,\rho,Q,D) \triangleq -\ln \sum_x P(x) \Big[ \sum_{\hat{x}} Q(\hat{x})\, e^{-s[d(x,\hat{x})-D]} \Big]^{\rho}.   (7)

Consider a reproduction codebook of M = e^{nR} random codewords X̂_m of length n, generated i.i.d. according to the distribution Q. Let X be a random source sequence of length n from the DMS P. Let us define encoding success as the event

  E_s \triangleq \{ \exists m : d(X, \hat{X}_m) \le nD \}.   (8)

Then our results for the encoding success exponent can be formulated as follows:

Definition 1 (Implicit formula):

  E_s(Q,R,D) \triangleq \min_{T(x)} \big\{ D(T\|P) + | R(T,Q,D) - R |^+ \big\}.   (9)

Note that this expression is zero for R ≥ R(P,Q,D).

Theorem 1 (Explicit formula):

  E_s(Q,R,D) = \sup_{0 \le \rho \le 1} \sup_{s \ge 0} \big\{ E_0(s,\rho,Q,D) - \rho R \big\}.   (10)

Theorem 2 (Encoding success exponent):

  \lim_{n\to\infty} \Big\{ -\frac{1}{n} \ln \Pr\{E_s\} \Big\} = E_s(Q,R,D),   (11)

except possibly for D = D_min = min_{x,x̂} d(x,x̂), when the right-hand side is a lower bound.
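Theorem 1 reduces the success exponent to a two-parameter optimization of the function (7). A minimal sketch of evaluating (10) by brute-force grid search follows; the source, codebook distribution, Hamming distortion, rate, and grid bounds are all arbitrary illustrative choices (in particular, the unbounded range s ≥ 0 must be truncated at some finite value):

```python
import numpy as np

def E0(s, rho, P, Q, d, D):
    # E_0(s,rho,Q,D) of (7): -ln sum_x P(x) [ sum_xh Q(xh) exp(-s(d(x,xh)-D)) ]^rho
    return -np.log(P @ (np.exp(-s * (d - D)) @ Q) ** rho)

def Es_explicit(P, Q, d, D, R,
                rho_grid=np.linspace(0.0, 1.0, 101),
                s_grid=np.linspace(0.0, 30.0, 1001)):
    # Theorem 1: E_s(Q,R,D) = sup_{0<=rho<=1} sup_{s>=0} { E_0(s,rho,Q,D) - rho*R },
    # approximated on finite grids.
    return max(E0(s, rho, P, Q, d, D) - rho * R
               for rho in rho_grid for s in s_grid)

# Illustration: binary source and reproduction, Hamming distortion.
P = np.array([0.5, 0.5])
Q = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0],
              [1.0, 0.0]])
# Below R(P,Q,D) the success exponent is positive; at R >= R(P,Q,D) it is zero.
print(Es_explicit(P, Q, d, D=0.1, R=0.05))
```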
Let us define encoding failure as the complementary event E_f ≜ E_s^c. Then our results for the encoding failure exponent can be formulated as follows:

Definition 2 (Implicit formula):

  E_f(Q,R,D) \triangleq \min_{T(x):\, R(T,Q,D) \ge R} D(T\|P).   (12)

Note that this expression is zero for R ≤ R(P,Q,D) and is considered +∞ if R > R_max(D) = max_{T(x)} R(T,Q,D).

We give an explicit formula, which does not always coincide with the implicit formula (12) for all R, but gives the best convex (∪) lower bound for (12) as a function of R, for a sufficiently lax distortion constraint D:

Theorem 3 (Explicit formula): For distortion constraint D ≥ max_x min_x̂ d(x,x̂),

  \mathrm{l.c.e.}\{E_f\}(R) = \sup_{\rho \ge 0} \inf_{s \ge 0} \big\{ E_0(s,-\rho,Q,D) + \rho R \big\}.   (13)

For D < max_x min_x̂ d(x,x̂), the right-hand side expression gives zero, which is strictly lower than E_f(Q,R,D) if R > R(P,Q,D).

Note that the above explicit formula is similar to the lower bound on the failure exponent given in [3] (except that here it is without maximization over Q and pertains to the random code ensemble of distribution Q). Our result is also more specific about the relationship (convex envelope) between the lower bound and the true exponent, given by (12), according to

Theorem 4 (Encoding failure exponent):

  \lim_{n\to\infty} \Big\{ -\frac{1}{n} \ln \Pr\{E_f\} \Big\} = E_f(Q,R,D),   (14)

with the possible exception of points of discontinuity of the function E_f(R,D).

III. EXPONENTS FOR CHANNELS

We assume a DMC with finite input and output alphabets X and Y. For simplicity, we assume also that for any (x,y) ∈ X×Y the channel probability P(y|x) > 0. Consider a codebook of M = e^{nR} + 1 random codewords X_m of length n, generated i.i.d. according to a distribution Q over X. Without loss of generality, assume that message m is transmitted. Let Y be a response, of length n, of the DMC P to the input X_m. Let us define decoding error as the event

  E_e \triangleq \Big\{ \exists m' \neq m : \ln \frac{P(Y|X_m)}{P(Y|X_{m'})} \le nD \Big\},   (15)

corresponding to a simplified erasure/list decoder [9, eq. 11a]. Observe, from a comparison of the events (8) and (15), that the latter can be seen as a special case of the former. In (15), the channel input-output pair (X_m, Y) plays a role analogous to the source sequence X in (8), and the incorrect codeword X_{m'} plays a role analogous to the reproduction sequence X̂_m in (8). In the proposed analogy, the reproduction alphabet is the alphabet of incorrect codewords, which is X, and the alphabet of the source is the product alphabet of the channel input-output pair, X×Y. We make the following substitutions:

  \hat{X} = X \;\longrightarrow\; \hat{X}   (16)
  X \times Y \;\longrightarrow\; X   (17)
  Q(x)P(y|x) \;\longrightarrow\; P(x)   (18)
  d\big( (x,y), \hat{x} \big) = \ln \frac{P(y|x)}{P(y|\hat{x})} \;\longrightarrow\; d(x,\hat{x})   (19)

Definition (7) now acquires the specific form

  E_0(s,\rho,Q,D) \triangleq -\ln \sum_{x,y} Q(x)P(y|x) \Big[ \sum_{\hat{x}} Q(\hat{x}) \Big( \frac{P(y|x)\, e^{-D}}{P(y|\hat{x})} \Big)^{-s} \Big]^{\rho}.   (20)

Note that the minimal distortion now depends on the support of the distribution Q:

  D_{\min}(Q) \triangleq \min_y \min_{x,\hat{x}:\, Q(x) Q(\hat{x}) > 0} \ln \frac{P(y|x)}{P(y|\hat{x})}.   (21)
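The substitutions (16)-(19) can be checked mechanically: flattening the pair (x,y) into a single source letter and taking the LLR distortion makes the generic source-side function (7) numerically identical to the channel-specific form (20). A sketch of this check (arbitrary channel, input distribution, and parameter values; the function names echo the earlier sketches):

```python
import numpy as np

def E0_source(s, rho, Psrc, Q, d, D):
    # Generic source-side E_0 of (7).
    return -np.log(Psrc @ (np.exp(-s * (d - D)) @ Q) ** rho)

def E0_channel(s, rho, Q, P, D):
    # Channel-specific E_0 of (20), written directly.
    inner = ((P[:, None, :] * np.exp(-D) / P[None, :, :]) ** (-s)
             * Q[None, :, None]).sum(axis=1)        # sum over xh
    return -np.log(((Q[:, None] * P) * inner ** rho).sum())

# Arbitrary DMC and input distribution for the check.
Q = np.array([0.3, 0.7])
P = np.array([[0.6, 0.3, 0.1],
              [0.1, 0.2, 0.7]])

# Substitutions (16)-(19): source letters are pairs (x,y), distortion is the LLR.
Psrc = (Q[:, None] * P).ravel()
d = np.array([[np.log(P[x, y] / P[xh, y]) for xh in range(2)]
              for x in range(2) for y in range(3)])

s, rho, D = 0.7, 0.5, 0.2
print(E0_source(s, rho, Psrc, Q, d, D), E0_channel(s, rho, Q, P, D))  # equal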
The results for decoding error follow as simple corollaries of the results for encoding success. The definition of the implicit expression for the decoding error exponent parallels Definition 1:

  E_e(Q,R,D) \triangleq \min_{T(x,y)} \big\{ D(T\|Q \circ P) + | R(T,Q,D) - R |^+ \big\},   (22)

where R(T,Q,D) is defined with W(x̂|x,y), as in (2). Note that E_e(Q,R,D) is zero for R ≥ R(Q∘P, Q, D).

Corollary 1 (Explicit formula):

  E_e(Q,R,D) = \sup_{0 \le \rho \le 1} \sup_{s \ge 0} \big\{ E_0(s,\rho,Q,D) - \rho R \big\}.   (23)

Corollary 2 (Decoding error exponent):

  \lim_{n\to\infty} \Big\{ -\frac{1}{n} \ln \Pr\{E_e\} \Big\} = E_e(Q,R,D),   (24)

except possibly for D = D_min(Q), given by (21), when the right-hand side is a lower bound.

The best random coding exponent is given by

Theorem 5 (Maximal decoding error exponent):

  \sup_{Q(x)} \lim_{n\to\infty} \Big\{ -\frac{1}{n} \ln \Pr\{E_e\} \Big\} = \sup_{Q(x)} E_e(Q,R,D),   (25)

for all (R,D). This result can be contrasted with the fixed composition exponent [10, eq. 29], and (together with the explicit form (23)) can be easily compared with Forney's random coding bound [9, eq. 24].

Similarly, the results for the correct decoding event E_c ≜ E_e^c follow as simple corollaries of the results for encoding failure. The definition of the implicit expression for the correct decoding exponent parallels Definition 2:

  E_c^*(Q,R,D) \triangleq \min_{T(x,y):\, R(T,Q,D) \ge R} D(T \| Q \circ P).   (26)

The superscript * serves to indicate that this exponent is different from the correct decoding exponent of the ML decoder for D = 0, as here the receiver declares an error also when there is only equality in (15), i.e. no tie-breaking. This distinction is important in the case of the correct-decoding exponent, but not in the case of the decoding error exponent.

The following explicit formula gives the best convex (∪) lower bound for (26) as a function of R, for a nonnegative distortion constraint D:

Corollary 3 (Explicit formula): For distortion constraint D ≥ 0,

  \mathrm{l.c.e.}\{E_c^*\}(R) = \sup_{\rho \ge 0} \inf_{s \ge 0} \big\{ E_0(s,-\rho,Q,D) + \rho R \big\}.   (27)

For D < 0, the right-hand side expression gives zero, which is strictly lower than E_c^*(Q,R,D) if R > R(Q∘P, Q, D).

Corollary 4 (Correct decoding exponent):

  \lim_{n\to\infty} \Big\{ -\frac{1}{n} \ln \Pr\{E_c\} \Big\} = E_c^*(Q,R,D),   (28)

with the possible exception of points of discontinuity of the function E_c^*(R,D).

IV. RELATION TO THE EXPONENTS OF THE ML DECODER

The maximum likelihood decoder has the same error exponent as the decoder (15) with D = 0. The Gallager exponent [14] is obtained from the explicit formula (23) with D = 0.
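Indeed, substituting D = 0 and s = 1/(1+ρ) into (20) collapses the double sum, giving E_0(1/(1+ρ), ρ, Q, 0) = −ln Σ_y [Σ_x Q(x) P(y|x)^{1/(1+ρ)}]^{1+ρ}, which is exactly Gallager's function; by the Hölder argument used in the proof of Lemma 1 below, this choice of s dominates all s ≥ 1/(1+ρ) when D = 0. A numerical sanity check of the reduction, with an arbitrary channel and an illustrative grid for the supremum over s:

```python
import numpy as np

def E0_channel(s, rho, Q, P, D=0.0):
    # E_0(s,rho,Q,D) of (20).
    inner = ((P[:, None, :] * np.exp(-D) / P[None, :, :]) ** (-s)
             * Q[None, :, None]).sum(axis=1)
    return -np.log(((Q[:, None] * P) * inner ** rho).sum())

def E0_gallager(rho, Q, P):
    # Gallager's E_0(rho,Q) = -ln sum_y [ sum_x Q(x) P(y|x)^{1/(1+rho)} ]^{1+rho}.
    return -np.log((((Q[:, None] * P ** (1 / (1 + rho))).sum(axis=0))
                    ** (1 + rho)).sum())

Q = np.array([0.4, 0.6])
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])

rho = 0.5
s_grid = np.linspace(0.0, 5.0, 5001)
best = max(E0_channel(s, rho, Q, P) for s in s_grid)
# All three values agree (up to the grid resolution around s = 1/(1+rho)).
print(best, E0_channel(1 / (1 + rho), rho, Q, P), E0_gallager(rho, Q, P))
```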
On the other hand, the correct-decoding exponent of the ML decoder is given by

  E_c(Q,R) = \min_{T(x,y)} \big\{ D(T \| Q \circ P) + | R - R(T,Q,0) |^+ \big\}   (29)

  \overset{(*)}{=} \sup_{0 \le \rho < 1} \Big\{ E_0\Big( \frac{1}{1-\rho},\, -\rho,\, Q,\, 0 \Big) + \rho R \Big\}   (30)

  = \min_{U(x), W(y|x)} \big\{ D(U \circ W \,\|\, Q \circ P) + | R - D(U\|Q) - I(U \circ W) |^+ \big\}   (31)

  \le \min_{W(y|x)} \big\{ D(Q \circ W \,\|\, Q \circ P) + | R - I(Q \circ W) |^+ \big\},   (32)

where the equality (*) is shown in [13], (31) is another implicit formula, which is more convenient to derive (as well as convenient for the derivation of (30)), and (32) is the fixed composition exponent of Dueck and Körner [15]. Note that the correct-decoding exponent of the ML decoder (29) is different from the corresponding exponent of the decoder (15), i.e. (26) with D = 0. The difference is the result of the tie-breaking the ML decoder performs. Without tie-breaking, the decoding can be termed strict. The two nondecreasing curves, given by (29) and (26) with D = 0, both as a function of R, coincide for slopes < 1. Then, for greater R, the correct-decoding exponent of the ML decoder continues to increase linearly, with constant slope = 1, while the exponent of the strict decoder (26) with D = 0 eventually becomes +∞.

V. RANDOM CODING ERROR EXPONENT OF THE ERASURE/LIST OPTIMUM TRADEOFF DECODER

In [9, eq. 11] the decoding error event, given that message m is transmitted, is defined as

  E_m \triangleq \Big\{ \ln \frac{P(Y|X_m)}{\sum_{m' \neq m} P(Y|X_{m'})} < nD \Big\},   (33)

which is different from the error event of the simplified decoder (15). We derive an exact i.i.d. random coding error exponent for this decoder, for all values of the threshold D. The result is given by the minimum of two terms. One of the terms is given by (23) and corresponds to the error exponent of the source/channel duality decoder; the other term is very similar, defined as

  E^e(Q,R,D) \triangleq \sup_{\rho \ge 0} \sup_{0 \le s \le 1} \big\{ E_0(s,\rho,Q,D) - \rho R \big\}.   (34)

Theorem 6 (Error exponent for the optimum tradeoff decoder):

  \sup_{Q(x)} \lim_{n\to\infty} \Big\{ -\frac{1}{n} \ln \Pr\{E_m\} \Big\} = \sup_{Q(x)} \min \big\{ E_e(Q,R,D),\, E^e(Q,R,D) \big\},   (35)

for all (R,D), with the possible exception of points with R = −D, and points with 0 > D ∈ {D_min(Q)}_Q (for R > −D), where {D_min(Q)}_Q is a finite set, at which still

  \sup_{Q(x)} \liminf_{n\to\infty} \Big\{ -\frac{1}{n} \ln \Pr\{E_m\} \Big\} \ge \sup_{Q(x)} \min \big\{ E_e(Q,R,D),\, E^e(Q,R,D) \big\},

  \sup_{Q(x)} \limsup_{n\to\infty} \Big\{ -\frac{1}{n} \ln \Pr\{E_m\} \Big\} \le \sup_{Q(x)} \lim_{\epsilon \to 0} \min \big\{ E_e(Q, R-\epsilon, D),\, E^e(Q, R, D-\epsilon) \big\},

with E_e(Q,R,D) and E^e(Q,R,D) given explicitly by (23) and (34).

For comparison, the random coding lower bound given by [9, eq. 24] can be written, without maximization over Q, in our present terms (with D in place of T and a differently defined s, not scaled by ρ) as

  E_{bound}(Q,R,D) = \sup_{0 \le \rho \le 1} \sup_{0 \le s \le 1} \big\{ E_0(s,\rho,Q,D) - \rho R \big\}.   (36)

Observe that indeed

  \min \big\{ E_e(Q,R,D),\, E^e(Q,R,D) \big\} \ge E_{bound}(Q,R,D).

The next lemma shows that the exact exponents (25), (35), and Forney's lower bound, given by the maximum of (36) over Q, coincide for D ≥ 0 (the erasure regime).

Lemma 1: For D ≥ 0,

  E_e(Q,R,D) = E_{bound}(Q,R,D).

Proof:

  E_0\Big( s = \frac{1}{1+\rho},\, \rho,\, Q,\, D \Big) = -\ln \sum_{x,y} Q(x)P(y|x) \Big[ \sum_{\hat{x}} Q(\hat{x}) \Big( \frac{P(y|x)}{P(y|\hat{x})} \Big)^{-\frac{1}{1+\rho}} \Big]^{\rho} - \frac{\rho}{1+\rho} D

  \overset{(*)}{\ge} -\ln \sum_{x,y} Q(x)P(y|x) \Big[ \sum_{\hat{x}} Q(\hat{x}) \Big( \frac{P(y|x)}{P(y|\hat{x})} \Big)^{-s} \Big]^{\rho} - s\rho D = E_0(s,\rho,Q,D), \quad s \ge \frac{1}{1+\rho},

where (*) holds by Hölder's inequality for s ≥ 1/(1+ρ) and D ≥ 0. We conclude that

  \sup_{0 \le \rho \le 1} \sup_{s \ge 0} \big\{ E_0(s,\rho,Q,D) - \rho R \big\} = \sup_{0 \le \rho \le 1} \sup_{0 \le s \le 1} \big\{ E_0(s,\rho,Q,D) - \rho R \big\}. \;\; \blacksquare

As for the D < 0 case (the list decoding regime), we note that the exponent (25) becomes +∞ for D < 0, and the exponent (35) becomes +∞ for 0 < R < −D, while Forney's lower bound stays finite. For details see [13].
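Lemma 1 can be probed numerically: for D ≥ 0, enlarging the range of s beyond [0,1] in (23) should not increase the exponent, so (23) and (36) coincide, while (34) can only exceed them through its larger ρ-range. A sketch comparing the three quantities on truncated grids (the channel, rate, threshold, and truncation bounds are arbitrary illustrative choices):

```python
import numpy as np

def E0_channel(s, rho, Q, P, D):
    inner = ((P[:, None, :] * np.exp(-D) / P[None, :, :]) ** (-s)
             * Q[None, :, None]).sum(axis=1)
    return -np.log(((Q[:, None] * P) * inner ** rho).sum())

def exponent(Q, P, D, R, rho_grid, s_grid):
    # sup over the given (rho, s) grids of { E_0(s,rho,Q,D) - rho*R }.
    return max(E0_channel(s, rho, Q, P, D) - rho * R
               for rho in rho_grid for s in s_grid)

Q = np.array([0.5, 0.5])
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
R, D = 0.05, 0.1

rho01 = np.linspace(0, 1, 51)
s01 = np.linspace(0, 1, 51)
s_big = np.linspace(0, 10, 501)      # stand-in for s >= 0
rho_big = np.linspace(0, 5, 251)     # stand-in for rho >= 0

Ee = exponent(Q, P, D, R, rho01, s_big)       # (23): 0<=rho<=1, s>=0
Ee_sup = exponent(Q, P, D, R, rho_big, s01)   # (34): rho>=0, 0<=s<=1
Ebound = exponent(Q, P, D, R, rho01, s01)     # (36): 0<=rho<=1, 0<=s<=1
print(Ee, Ee_sup, Ebound)  # for D >= 0, Ee matches Ebound (Lemma 1)
```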
VI. A SELECTED PROOF

Here we derive a lower bound on the encoding failure exponent, which, together with an upper bound derived in [13], results in Theorem 4. Due to the lack of space, all the other proofs are deferred to [13].

The derivation uses a generic auxiliary lemma:

Lemma 2: Let Z_m ∼ i.i.d. Bernoulli(e^{−nI}), m = 1, 2, ..., e^{nR}. If I ≤ R − ε, with ε > 0, then

  \Pr\Big\{ \sum_{m=1}^{e^{nR}} Z_m = 0 \Big\} < \exp\big\{ -e^{n\epsilon} \big\}.   (37)
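The bound (37) is the usual double-exponential estimate: with p = e^{−nI} and M = e^{nR}, Pr{Σ Z_m = 0} = (1−p)^M ≤ e^{−Mp} = exp(−e^{n(R−I)}) ≤ exp(−e^{nε}). A small numeric illustration (the parameter values are arbitrary):

```python
import numpy as np

# Lemma 2, numerically: all M = e^{nR} i.i.d. Bernoulli(e^{-nI}) variables
# vanish with probability (1-p)^M <= exp(-e^{n(R-I)}).
n, R, I = 40, 0.5, 0.3              # eps = R - I = 0.2
M = np.exp(n * R)
p = np.exp(-n * I)
log_prob = M * np.log1p(-p)         # ln (1-p)^M, computed stably
print(log_prob, -np.exp(n * (R - I)))  # ln Pr <= -e^{n*eps}: first <= second
```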
We use the method of types [7], with notation P_x, T(P_x) for types and type classes, and P_{x̂|x}, T(P_{x̂|x}, X) for conditional types and conditional type classes, respectively. We upper-bound the probability of encoding failure as follows:

  \Pr\{E_f\} \le \sum_{P_x:\, R^{types}(P_x,Q,D) \le R-2\epsilon_1} \underbrace{\Pr\{ X \in T(P_x) \}}_{\le 1} \cdot \min_{P_{\hat{x}|x}:\, d(P_{x,\hat{x}}) \le D} \Pr\Big\{ \sum_m 1\{ \hat{X}_m \in T(P_{\hat{x}|x}, X) \} = 0 \,\Big|\, P_x \Big\}
  \quad + \sum_{P_x:\, R^{types}(P_x,Q,D) \ge R-2\epsilon_1} \Pr\{ X \in T(P_x) \} \;=\; S_1 + S_2.   (38)

  S_1 \overset{(a)}{\le} \sum_{P_x:\, R^{types}(P_x,Q,D) \le R-2\epsilon_1} \; \min_{P_{\hat{x}|x}:\, d(P_{x,\hat{x}}) \le D} \Pr\Big\{ \sum_{m=1}^{e^{nR}} Z_m = 0 \Big\}
  \overset{(b)}{\le} \sum_{P_x:\, R^{types}(P_x,Q,D) \le R-2\epsilon_1} \Pr\Big\{ \sum_{m=1}^{e^{nR}} B_m = 0 \Big\}
  \overset{(c)}{\le} \sum_{P_x} \exp\big\{ -e^{n\epsilon_1} \big\} \le (n+1)^{|X|} \exp\big\{ -e^{n\epsilon_1} \big\}.   (39)

  S_2 \overset{(d)}{\le} \sum_{P_x:\, R^{types}(P_x,Q,D) \ge R-2\epsilon_1} \exp\{ -n D(P_x \| P) \}
  \overset{(e)}{\le} \sum_{P_x:\, R(P_x,Q,D-\epsilon_2) \ge R-2\epsilon_1-\epsilon_2} \exp\{ -n D(P_x \| P) \}
  \overset{(f)}{\le} (n+1)^{|X|} \exp\big\{ -n E_f^{types}(R-2\epsilon_1-\epsilon_2,\, D-\epsilon_2) \big\}
  \overset{(g)}{\le} (n+1)^{|X|} \exp\big\{ -n E_f(R-2\epsilon_1-\epsilon_2,\, D-\epsilon_2) \big\}.   (40)

Explanation of steps:

(a) holds for sufficiently large n, when

  \Pr\big\{ \hat{X}_m \in T(P_{\hat{x}|x}, X) \mid X \in T(P_x) \big\} \ge \exp\big\{ -n \big[ D(P_{x,\hat{x}} \| P_x \times Q) + \epsilon_1 \big] \big\},

with Z_m ∼ Bernoulli(exp{−n[D(P_{x,x̂} ‖ P_x × Q) + ε_1]}).

(b) holds for B_m ∼ Bernoulli(exp{−n[R^{types}(P_x,Q,D) + ε_1]}), where

  R^{types}(P_x,Q,D) \triangleq \min_{P_{\hat{x}|x}:\, d(P_{x,\hat{x}}) \le D} D\big( P_{x,\hat{x}} \,\|\, P_x \times Q \big).   (41)

(c) holds by Lemma 2 for I = R^{types}(P_x,Q,D) + ε_1 ≤ R − 2ε_1 + ε_1 = R − ε_1.

(d) uses the upper bound on the probability of a type.

(e) Let W* denote the conditional distribution achieving R(P_x, Q, D−ε_2) < +∞ for some ε_2 > 0. This implies

  D(P_x \circ W^* \,\|\, P_x \times Q) = R(P_x, Q, D-\epsilon_2),   (42)
  d(P_x \circ W^*) \le D - \epsilon_2.

Let W*_n denote a quantized version of the conditional distribution W* with variable precision 1/(nP_x(x)), i.e. a set of types with denominators nP_x(x), such that the joint distribution P_x∘W*_n is a type with denominator n. Observe that the differences between P_x∘W*_n and P_x∘W* do not exceed 1/n. Therefore, since the divergence, as a function of P_x∘W, has bounded derivatives, and also the distortion measure d(x,x̂) is bounded, for any ε_2 > 0 there exists n large enough such that the quantized distribution W*_n satisfies

  D(P_x \circ W^*_n \,\|\, P_x \times Q) \le D(P_x \circ W^* \,\|\, P_x \times Q) + \epsilon_2,   (43)
  d(P_x \circ W^*_n) \le D.

The last inequality implies

  D(P_x \circ W^*_n \,\|\, P_x \times Q) \ge R^{types}(P_x,Q,D).   (44)

The relations (44), (43), (42) together give

  R(P_x, Q, D-\epsilon_2) + \epsilon_2 \ge R^{types}(P_x,Q,D).   (45)

This explains (e).

(f) uses the definition

  E_f^{types}(R,D) \triangleq \min_{P_x:\, R(P_x,Q,D) \ge R} D(P_x \| P).   (46)

(g) E_f^{types}(R,D) is bounded from below by E_f(R,D) defined in (12).

We conclude from (38), (39), (40):

  \liminf_{n\to\infty} \Big\{ -\frac{1}{n} \ln \Pr\{E_f\} \Big\} \ge \lim_{\epsilon \to 0} E_f(R-\epsilon, D-\epsilon). \;\; \blacksquare

REFERENCES

[1] A. Gupta and S. Verdú, "Operational duality between lossy compression and channel coding," IEEE Trans. on Information Theory, vol. 57, no. 6, pp. 3171–3179, Jun. 2011.
[2] S. Pradhan, J. Chou, and K. Ramchandran, "Duality between source coding and channel coding and its extension to the side information case," IEEE Trans. on Information Theory, vol. 49, no. 5, pp. 1181–1203, May 2003.
[3] R. Blahut, "Hypothesis testing and information theory," IEEE Trans. on Information Theory, vol. 20, no. 4, pp. 405–417, Jul. 1974.
[4] S. Jana and R. Blahut, "Insight into source/channel duality and more based on an intuitive covering/packing lemma," in IEEE International Symposium on Information Theory, Jul. 9–14, 2006, pp. 2329–2333.
[5] R. Zamir and K. Rose, "Natural type selection in adaptive lossy compression," IEEE Trans. on Information Theory, vol. 47, no. 1, pp. 99–111, Jan. 2001.
[6] K. Marton, "Error exponent for source coding with a fidelity criterion," IEEE Trans. on Information Theory, vol. 20, no. 2, pp. 197–199, Mar. 1974.
[7] I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems. Academic Press, 1981.
[8] S. Tridenski and R. Zamir, "Stochastic interpretation for the Arimoto algorithm," in IEEE Information Theory Workshop, Apr. 2015.
[9] G. D. Forney, Jr., "Exponential error bounds for erasure, list, and decision feedback schemes," IEEE Trans. on Information Theory, vol. 14, no. 2, pp. 206–220, Mar. 1968.
[10] N. Weinberger and N. Merhav, "Simplified erasure/list decoding," in IEEE International Symposium on Information Theory, Jun. 2015, pp. 2226–2230.
[11] A. S. Baruch and N. Merhav, "Exact random coding exponents for erasure decoding," IEEE Trans. on Information Theory, vol. 57, no. 10, pp. 6444–6454, Oct. 2011.
[12] N. Merhav, "Error exponents of erasure/list decoding revisited via moments of distance enumerators," IEEE Trans. on Information Theory, vol. 54, no. 10, pp. 4439–4447, Oct. 2008.
[13] S. Tridenski and R. Zamir, "Analogy and duality between random channel coding and lossy source coding," arXiv.
[14] R. G. Gallager, "The random coding bound is tight for the average code," IEEE Trans. on Information Theory, vol. 19, no. 2, pp. 244–246, Mar. 1973.
[15] G. Dueck and J. Körner, "Reliability function of a discrete memoryless channel at rates above capacity," IEEE Trans. on Information Theory, vol. 25, no. 1, pp. 82–85, Jan. 1979.
