Information Loss due to Finite Block Length in a Gaussian Line Network: An Improved Bound

Ramanan Subramanian, Badri N. Vellambi, and Ingmar Land
Institute for Telecommunications Research, University of South Australia, Australia.
{ramanan.subramanian, badri.vellambi, ingmar.land}@unisa.edu.au

Abstract—A bound on the maximum information transmission rate through a cascade of Gaussian links is presented. The network model consists of a source node attempting to send a message drawn from a finite alphabet to a sink through a cascade of Additive White Gaussian Noise links, each having an input power constraint. Intermediate nodes are allowed to perform arbitrary encoding/decoding operations, but the block length and the encoding rate are fixed. The bound presented in this paper is fundamental and depends only on the design parameters, namely, the network size, block length, transmission rate, and signal-to-noise ratio.

I. INTRODUCTION

Transmission of messages through a series of links corrupted by noise is a situation that occurs frequently in communication networks. When the transmission block length is allowed to be arbitrarily large, it is quite simple to show (using the data-processing inequality) that the maximum information transfer rate is equal to the capacity of the weakest link. The possibilities in the finite-block-length regime are far less clear. Past work by Niesen et al. in [1] and by us in [2] has addressed this question for the Discrete Memoryless Channel (DMC) case and the Additive White Gaussian Noise (AWGN) case, respectively. These results are asymptotic and provide scaling laws for the block length in terms of the number of nodes.

In this paper, we provide a universal non-asymptotic bound on the maximum rate of information transfer for a line network consisting of a cascade of AWGN links. This complements and improves the asymptotic scaling results derived in [2]. The bound derived here is universal in the following sense:
1) While we assume that the block length and encoding rates are constant for all the nodes, we do not assume any particular structure for the channel codes and decision rules employed at any of the nodes.
2) In addition, no assumption is made on the absolute/relative magnitudes of the network size and the block length.
It is to be noted that the analysis in [2] was found to be unsuitable for our requirement that the bound be non-asymptotic, and hence we take a totally new approach here.

The rest of the paper is organized as follows. In Section II, we introduce the notations and definitions used in the rest of the paper. In Section III, we introduce the network and signal transmission models. We then provide our main result followed by its derivation in Section IV, followed by a short discussion in Section V that includes a comparison of our current results with our previous results in [2].

II. NOTATIONS AND DEFINITIONS

Let ℝ be the set of all real numbers and ℕ be the set of all natural numbers. Natural logarithms are assumed unless the base is specified. The notation ‖·‖ represents the ℓ2 norm throughout. S_{M×M} denotes the set of all M×M row-stochastic matrices, and S*_{M×M} denotes the set of all M×M row-stochastic matrices whose rows are identical.

Let N ∈ ℕ denote the code length or block length of the transmission scheme. A code rate R > 0 is a real number such that 2^{NR} is an integer. Let M ≜ {1, 2, 3, ..., 2^{NR}} be the message alphabet.

Definition 1: For a certain P_0 ≥ 0, a rate R length N code C with power constraint P_0 is an ordering of M = 2^{NR} elements of ℝ^N, called codewords, such that the power of every codeword is at most P_0:

C = (c_1, c_2, c_3, ..., c_M)  s.t.  ∀ w ∈ M,  \frac{1}{N}\|c_w\|^2 \le P_0.

Definition 2: A rate R length N decision rule R = (R_1, R_2, ..., R_M) is an ordered partition of ℝ^N of size M = 2^{NR}.

Definition 3: The encoding function ENC_C : M → ℝ^N for a code C is defined by ENC_C(w) = c_w, where c_w is the w-th codeword in C.

Definition 4: The decoding function DEC_R : ℝ^N → M for a decision rule R is defined by

DEC_R(y) = w  iff  y ∈ R_w,

where R_w is the w-th cell in R.

Let Ω_0 = \frac{2\pi^{N/2}}{\Gamma(N/2)}, the solid angle of an N-sphere. Here, Γ(·) is the standard gamma function, given by Γ(z) = \int_0^{\infty} t^{z-1} e^{-t}\,dt.
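To make Definitions 1–4 concrete, the following minimal Python sketch (an illustration only, not part of the original analysis; the random Gaussian codebook and the nearest-codeword decision rule are arbitrary choices of ours, and all function names are assumptions) places M = 2^{NR} codewords on the power-constraint sphere, implements ENC_C and DEC_R for one admissible decision rule, and evaluates Ω_0.

```python
import numpy as np
from scipy.special import gammaln

def omega_0(N):
    # Solid angle of the full N-sphere: Omega_0 = 2 * pi^(N/2) / Gamma(N/2).
    return 2.0 * np.exp(0.5 * N * np.log(np.pi) - gammaln(N / 2.0))

def random_code(N, R, P0, rng):
    """A rate-R, length-N code (Definition 1): M = 2^(NR) codewords, here scaled
    onto the sphere ||c||^2 = N*P0 so the power constraint holds with equality."""
    M = int(round(2.0 ** (N * R)))
    C = rng.standard_normal((M, N))
    return C * np.sqrt(N * P0) / np.linalg.norm(C, axis=1, keepdims=True)

def enc(C, w):
    # Encoding function ENC_C of Definition 3 (messages indexed 0..M-1 here).
    return C[w]

def dec_min_distance(C, y):
    # A decoding function DEC_R of Definition 4, for one particular decision
    # rule: the ordered partition of R^N into nearest-codeword cells.
    return int(np.argmin(np.linalg.norm(C - y, axis=1)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, R, P0 = 8, 0.5, 1.0
    C = random_code(N, R, P0, rng)
    print("M =", C.shape[0], ", max codeword power =", np.max(np.sum(C**2, axis=1)) / N)
    print("Omega_0 for N =", N, "is", omega_0(N))
    w = 3
    print("DEC(ENC(w)) =", dec_min_distance(C, enc(C, w)), "for w =", w)
```

Any other ordered partition of ℝ^N would serve equally well as a decision rule; the nearest-codeword rule is used here only because it is easy to state.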
We also define the following functions.

Definition 5: Let Z_1, ..., Z_N be i.i.d. zero-mean unit-variance Gaussian random variables. For any γ ≥ 0, let

Φ_γ ≜ \begin{cases} \cot^{-1}\!\left(\frac{\sqrt{N\gamma}+Z_1}{\sqrt{\sum_{l=2}^{N} Z_l^2}}\right), & \sum_{l=2}^{N} Z_l^2 > 0, \\ 0, & \text{otherwise.} \end{cases}

Also, let, for all v ∈ [0, π],

g(v) ≜ \frac{(N-1)\pi^{\frac{N-1}{2}}}{\Gamma\!\left(\frac{N+1}{2}\right)} \int_0^{v} (\sin\theta)^{N-2}\,d\theta.

We then define the following function for x ∈ [0, Ω_0]:

Q(x, N, γ) ≜ Pr[g(Φ_γ) ≥ x].   (1)

The above function is the same as Q*(·) defined and used by Shannon in [3]. In other words, Q(x, N, γ) gives the probability that a signal point on the power-constraint sphere ‖x‖² = NP_0 is displaced by a noise vector, consisting of i.i.d. zero-mean unit-variance Gaussian random variables in each dimension, outside an infinite right-circular cone of solid angle x whose apex is at the origin and whose axis runs through the original signal point. Note that Φ_γ is a random variable whose probability distribution function has N and γ as parameters. The inverse cotangent function is assumed to have [0, π] as its range so that it is continuous. Noting that cot^{-1} x is then a decreasing function of x, we have the following remark about the monotonicity of the Q-function w.r.t. γ:

Remark 1: For any γ_1, γ_2 > 0 s.t. γ_1 ≥ γ_2 and x ∈ [0, Ω_0],

Q(x, N, γ_2) ≥ Q(x, N, γ_1).

The term g(Φ_γ) is equal to the solid angle of the cone formed by rotating the line joining the origin and the displaced signal point about the line joining the origin and the original signal point as the axis.
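The function Q(x, N, γ) has no simple closed form, but Definition 5 translates directly into a Monte Carlo estimate. The sketch below is our own construction (the bisection inversion of g and all names are assumptions): it samples Φ_γ from its defining Gaussian variables and uses the monotonicity of g to turn the event g(Φ_γ) ≥ x into a threshold on Φ_γ. For x = Ω_0/2 the event reduces to Z_1 ≤ −√(Nγ), which gives an exact value to check against.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln
from scipy.stats import norm

def omega_0(N):
    # Solid angle of the full N-sphere: 2 * pi^(N/2) / Gamma(N/2).
    return 2.0 * np.exp(0.5 * N * np.log(np.pi) - gammaln(N / 2.0))

def g(v, N):
    # g(v) = (N-1) * pi^((N-1)/2) / Gamma((N+1)/2) * int_0^v sin^(N-2)(t) dt
    coeff = (N - 1) * np.exp(0.5 * (N - 1) * np.log(np.pi) - gammaln((N + 1) / 2.0))
    integral, _ = quad(lambda t: np.sin(t) ** (N - 2), 0.0, v)
    return coeff * integral

def estimate_Q(x, N, gamma, trials=200_000, seed=0):
    """Monte Carlo estimate of Q(x, N, gamma) = Pr[g(Phi_gamma) >= x]."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((trials, N))
    radial = np.sqrt((Z[:, 1:] ** 2).sum(axis=1))
    # cot^{-1} with range (0, pi): Phi = arctan2(r, sqrt(N*gamma) + Z1).
    phi = np.arctan2(radial, np.sqrt(N * gamma) + Z[:, 0])
    # g is increasing, so g(Phi) >= x  <=>  Phi >= g^{-1}(x); invert once by bisection.
    lo, hi = 0.0, np.pi
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid, N) < x else (lo, mid)
    return float(np.mean(phi >= hi))

if __name__ == "__main__":
    N, gamma = 4, 0.5
    x = 0.5 * omega_0(N)   # half the sphere's solid angle
    # For x = Omega_0/2 the event is simply Phi >= pi/2, i.e. Z1 <= -sqrt(N*gamma),
    # so the Monte Carlo figure can be checked against the Gaussian tail.
    print("Monte Carlo Q :", estimate_Q(x, N, gamma))
    print("Exact check   :", norm.cdf(-np.sqrt(N * gamma)))
```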
III. NETWORK MODEL

The line network model to be considered is given in Fig. 1. There are n+1 nodes in the network, identified by the indices {0, 1, 2, ..., n}. The n hops in the network are each associated with noise variances σ_i² ≥ σ_0² > 0, 1 ≤ i ≤ n. In other words, the noise variances can be different for each link, but they are equal to or greater than a certain minimum σ_0² that is strictly positive. Nodes 0, 1, ..., n−1 choose codes C_0, C_1, ..., C_{n−1}, respectively, to transmit, and Nodes 1, 2, ..., n choose decision rules R_1, R_2, ..., R_n for reception. From here on, for the sake of simplicity, we let ENC_i and DEC_i denote ENC_{C_i} and DEC_{R_i}, respectively. All the codes and decision rules have the same rate R and block length N. Node 0 generates a random message W ∈ M with probability distribution p_W(w) and intends to convey the same to Node n through the noisy multihop path in the network. Each node estimates the message sent by the node in the previous hop from its noisy observation, encodes the estimate as a codeword, and transmits the resulting codeword to the next hop. The codeword transmitted by Node i, for any 0 ≤ i ≤ n−1, is given by X_i = ENC_i(Ŵ_i), where Ŵ_i is the estimate of the message at Node i after decoding (note that Ŵ_0 = W for notational consistency). This estimate is obtained by Node i, for any 1 ≤ i ≤ n, from its noisy observation Y_i, which follows a conditional density function that depends on the codeword X_{i−1} sent by the previous node:

p_{Y_i|X_{i-1}}(y|x) = \frac{1}{(2\pi\sigma_i^2)^{N/2}}\, e^{-\frac{\|y-x\|^2}{2\sigma_i^2}}.   (2)

The above density function follows from the assumptions of AWGN noise and memorylessness of the channel. The message Ŵ_i decoded by Node i is given by Ŵ_i = DEC_i(Y_i). Note that the random variable Ŵ_n represents the message decoded by the final sink.

Fig. 1: Line network model (W → ENC_0 → X_0 → p_{Y_1|X_0} → Y_1 → ... → Y_i → DEC_i → Ŵ_i → ENC_i → X_i → ... → p_{Y_n|X_{n-1}} → Y_n → DEC_n → Ŵ_n).
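The transmission model of this section is easy to simulate. The following sketch is an illustration under assumptions of ours (the Gaussian codebook and the nearest-codeword decision rule are our own choices, since the model deliberately leaves codes and decision rules unspecified): it passes a message through n hops, applying the channel density (2) at each hop, and estimates the end-to-end probability that Ŵ_n = W.

```python
import numpy as np

def random_code(M, N, P0, rng):
    # Codewords on the power-constraint sphere ||c||^2 = N*P0 (Definition 1).
    C = rng.standard_normal((M, N))
    return C * np.sqrt(N * P0) / np.linalg.norm(C, axis=1, keepdims=True)

def dec_min_distance(C, y):
    # One admissible decision rule (Definition 2): nearest-codeword cells.
    return int(np.argmin(np.linalg.norm(C - y, axis=1)))

def run_line_network(n_hops, C, sigmas, w, rng):
    """One pass through the cascade: X_i = ENC_i(W_hat_i),
    Y_{i+1} = X_i + AWGN with variance sigma^2 per dimension as in (2),
    W_hat_{i+1} = DEC_{i+1}(Y_{i+1})."""
    N = C.shape[1]
    w_hat = w                       # W_hat_0 = W
    for i in range(n_hops):
        x = C[w_hat]                # every node re-encodes with the same code here
        y = x + sigmas[i] * rng.standard_normal(N)
        w_hat = dec_min_distance(C, y)
    return w_hat

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    N, R, P0, n_hops = 16, 0.25, 1.0, 8
    M = int(round(2.0 ** (N * R)))
    C = random_code(M, N, P0, rng)
    sigmas = np.full(n_hops, 0.5)   # per-hop noise std, so sigma_i^2 >= sigma_0^2
    trials, correct = 2000, 0
    for _ in range(trials):
        w = int(rng.integers(M))
        correct += (run_line_network(n_hops, C, sigmas, w, rng) == w)
    print(f"end-to-end P[W_hat_n = W] ~ {correct / trials:.3f} over {n_hops} hops")
```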
IV. THE MAIN RESULT AND ANALYSIS

The following theorem summarizes our main result.

Theorem 1: In a line network employing any choice of rate R length N codes C_0, C_1, ..., C_{n−1} and rate R dimension N decision rules R_1, R_2, ..., R_n,

I(W; Ŵ_n) ≤ NR\left[1 - M\,Q\!\left(\frac{M-1}{M}\Omega_0,\, N,\, \frac{P_0}{\sigma_0^2}\right)\right]^{n}.

We now delve into the proof of Theorem 1. Let p_{Ŵ_i|Ŵ_{i−1}}(k|j), ∀ j, k ∈ M, denote the conditional probabilities induced by channel encoding, noisy reception, and decoding at the i-th hop. For each hop i, let P_i be the M×M row-stochastic matrix whose entry in row j and column k is p_{Ŵ_i|Ŵ_{i−1}}(k|j). Note that the j-th row of P_i gives the conditional probability mass function of the estimate Ŵ_i of the original message W at hop i, given that the message sent by Node i−1 is j. Let

P ≜ \prod_{i=1}^{n} P_i.

Then P clearly represents the row-stochastic probability transition matrix between the original message W and the message decoded at the sink, Ŵ_n. The transition matrix P together with p_W (the probability mass function of the original message W) induces a joint distribution between W and Ŵ_n. Our goal is to find an upper bound on I(W; Ŵ_n), subject to the constraints given in Section III.

For any M×M row-stochastic matrix Q, define ψ(Q | p_W) ≜ I(W; W̃), where W̃ is a random variable conditionally dependent on W according to the probability transition matrix Q, and W is drawn according to the distribution p_W (the distribution of the message at Node 0). For simplicity, we write ψ(Q) instead of ψ(Q | p_W) for the rest of the paper, assuming throughout that the specific distribution p_W is used. Ultimately, our final bound is independent of p_W. We now have

I(W; Ŵ_n) = ψ\!\left(\prod_{i=1}^{n} P_i\right).   (3)

Before proceeding further to bound I(W; Ŵ_n), we introduce the following useful lemma.

Lemma 1: For any Q_1 ∈ S*_{M×M} and any Q_2 ∈ S_{M×M}, ψ(Q_1 Q_2) = 0.

Proof: The result follows from noting that Q_1 Q_2 ∈ S*_{M×M} for any Q_1 ∈ S*_{M×M} and Q_2 ∈ S_{M×M}, and that ψ(Q) = 0 for Q ∈ S*_{M×M}.

Now, for each i, consider β_i ∈ [0, 1] such that

P_i = β_i P_{β_i} + β̄_i P_{β̄_i},   (4)

where β̄_i = 1 − β_i, P_{β_i} ∈ S_{M×M}, and P_{β̄_i} ∈ S*_{M×M}. In other words, for each i, P_i is to be expressed as a convex combination of two row-stochastic matrices, one of them being a steady-state matrix. From (3),

I(W; Ŵ_n) = ψ\!\left(\prod_{i=1}^{n} P_i\right) = ψ\!\left(\left(β_1 P_{β_1} + β̄_1 P_{β̄_1}\right)\prod_{i=2}^{n} P_i\right)
  \overset{(a)}{\le} β_1\,ψ\!\left(P_{β_1}\prod_{i=2}^{n} P_i\right) + β̄_1\,ψ\!\left(P_{β̄_1}\prod_{i=2}^{n} P_i\right)
  \overset{(b)}{\le} β_1\,ψ\!\left(\prod_{i=2}^{n} P_i\right),

where (a) follows from the convexity of mutual information w.r.t. the probability transition function, and (b) follows from applying the data-processing inequality to the first term and Lemma 1 to the second term. By induction, we have

I(W; Ŵ_n) ≤ \left(\prod_{i=1}^{n} β_i\right) ψ(I_M) = NR \prod_{i=1}^{n} β_i,   (5)

where I_M is the M×M identity matrix. The above procedure is based on a key idea developed in the proof of Theorem V.1 in [1] in a different context; we have applied the same idea to obtain the useful intermediate result (5). The remaining portion of the analysis, which yields the final bound, involves novel steps.

We now need to determine how each P_i is to be split in the form given by (4) in an optimal manner, so as to obtain the best possible bound from this approach. Specifically, we need β_i to be as small as possible for each i. Consider the following choice:

β_i = 1 − \sum_{k=1}^{M} \min_j p_{Ŵ_i|Ŵ_{i−1}}(k|j),
P_{β̄_i; j,k} = \frac{1}{1-β_i} \min_{j'} p_{Ŵ_i|Ŵ_{i−1}}(k|j'),   (6)

where P_{β̄_i; j,k} denotes the element in the j-th row and k-th column of the matrix P_{β̄_i}. Note that this matrix consists of identical rows, where each entry in any row is equal to the smallest element of the corresponding column of P_i scaled by a normalizing factor. The other matrix, P_{β_i}, is determined by substituting these expressions for β_i and P_{β̄_i} into (4). Note that the two matrices P_{β_i} and P_{β̄_i} determined thus are stochastic for any i, and that β_i ∈ [0, 1]. Hence, we can obtain P_i as a convex combination of two stochastic matrices in this manner for any i. The following lemma shows that the value of β_i provided in (6) is the best possible value for the purpose of the bound in (5).

Lemma 2: Let Q = [Q_{jk}] ∈ S_{M×M}, Q_1 ∈ S_{M×M}, Q_2 ∈ S*_{M×M}, and β ∈ [0, 1] be chosen such that Q = β Q_1 + (1−β) Q_2. Then β ≥ 1 − \sum_{k=1}^{M} \min_j Q_{jk}.

Proof: Since Q = β Q_1 + (1−β) Q_2, every element of Q must be at least (1−β) times the corresponding element of Q_2. Consider any column k of Q_2. All the elements in that column are equal to, say, q_k. It then follows that (1−β) q_k ≤ \min_j Q_{jk}. Summing over all k and noting that \sum_{k=1}^{M} q_k = 1, we obtain the desired result.

Let C_{i−1} be the code used by Node i−1 and let R_i be the decision rule used by Node i. As per the argument above, the optimal choice of β_i for this link is

1 − β_i = \sum_{k=1}^{M} \min_j p_{Ŵ_i|Ŵ_{i−1}}(k|j) = \sum_{R \in R_i} \min_{c \in C_{i−1}} \int_{R} \frac{e^{-\|y-c\|^2/2\sigma_i^2}}{(2\pi\sigma_i^2)^{N/2}}\,dy.   (7)

In other words, we can write

β_i = 1 − μ_{σ_i}(C_{i−1}, R_i),   (8)

where, for any σ > 0, rate R length N code C, and rate R dimension N decision rule R,

μ_σ(C, R) ≜ \sum_{R \in R} \min_{c \in C} \int_{R} \frac{e^{-\|y-c\|^2/2\sigma^2}}{(2\pi\sigma^2)^{N/2}}\,dy.   (9)

We would like to find a lower bound on μ_{σ_i}(C_{i−1}, R_i) that depends only on the parameters N, R, P_0, and σ_0. To do so, we need the following three lemmas. Lemma 3 removes the dependency of the bound on the choice of the decision rule. Lemma 4 shows that we can restrict our choice of codes to those having all codewords satisfying the power constraint with equality. For this class of codes, Lemma 5 gives a bound of the desired form, depending solely on N, R, P_0, and σ_0. From now on, we denote (2πσ²)^{N/2} by η for brevity.
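The decomposition (4)–(6) is straightforward to verify numerically. The sketch below is our own illustration (the matrix P is a generic row-stochastic matrix rather than one induced by an actual code and channel): it computes β from the column minima as in (6), forms the identical-row matrix P_β̄ and the complementary matrix P_β, and checks that both are row-stochastic and recombine to P.

```python
import numpy as np

def optimal_split(P):
    """Split a row-stochastic P as P = beta*P_beta + (1-beta)*P_betabar, eqs. (4)-(6):
    P_betabar has identical rows (the normalized column minima), and beta is the
    smallest value achievable by any such split (Lemma 2)."""
    col_min = P.min(axis=0)                    # min_j P[j, k] for each column k
    beta = 1.0 - col_min.sum()                 # eq. (6)
    P_betabar = np.tile(col_min / col_min.sum(), (P.shape[0], 1))
    P_beta = (P - (1.0 - beta) * P_betabar) / beta
    return beta, P_beta, P_betabar

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = 4
    P = rng.random((M, M))
    P /= P.sum(axis=1, keepdims=True)          # a generic row-stochastic matrix
    beta, P_beta, P_betabar = optimal_split(P)
    print("beta =", beta)
    print("rows of P_beta sum to 1:", np.allclose(P_beta.sum(axis=1), 1.0))
    print("P_beta is nonnegative:  ", bool(np.all(P_beta >= -1e-12)))
    print("recombination matches P:", np.allclose(beta * P_beta + (1 - beta) * P_betabar, P))
```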
Lemma 3: Let C = (c_1, c_2, ..., c_M) be a given rate R length N code. Further, let R*(C) be the decision rule (R*_1, R*_2, ..., R*_M) given by, for 1 ≤ i ≤ M,

R*_i = \left\{ y ∈ ℝ^N \;\middle|\; i = \arg\max_{i'} \|y - c_{i'}\| \right\}.

Then, for any rate R dimension N decision rule R,

μ_σ(C, R) ≥ μ_σ(C, R*(C)).

Proof: For any code C and decision rule R, we have

η μ_σ(C, R) = \sum_{R \in R} \min_j \int_{R} e^{-\frac{\|y-c_j\|^2}{2\sigma^2}}\,dy
  ≥ \sum_{R \in R} \int_{R} \min_j e^{-\frac{\|y-c_j\|^2}{2\sigma^2}}\,dy
  = \int_{ℝ^N} \min_j e^{-\frac{\|y-c_j\|^2}{2\sigma^2}}\,dy
  = \sum_{k=1}^{M} \int_{R*_k} \min_j e^{-\frac{\|y-c_j\|^2}{2\sigma^2}}\,dy
  \overset{(a)}{=} \sum_{k=1}^{M} \int_{R*_k} e^{-\frac{\|y-c_k\|^2}{2\sigma^2}}\,dy
  \overset{(b)}{=} \sum_{k=1}^{M} \min_j \int_{R*_k} e^{-\frac{\|y-c_j\|^2}{2\sigma^2}}\,dy
  = η μ_σ(C, R*(C)).   (10)

Here, (a) and (b) follow from the definition of R*(C): for any y ∈ R*_k, \arg\min_j e^{-\|y-c_j\|^2/2\sigma^2} = k.

It is useful to note that the decision rule R*(C) given by the above lemma is the same as the (M−1)-th-order Voronoi partitioning (called "farthest-point Voronoi partitioning"; see Section 3.3 in [4]) of ℝ^N w.r.t. C.

Lemma 4: Let C be a code satisfying ‖c‖² ≤ NP_0 ∀ c ∈ C. Then there exists a code C′ such that ∀ c′ ∈ C′, ‖c′‖² = NP_0 and μ_σ(C, R*(C)) ≥ μ_σ(C′, R*(C′)).

Proof: Let C = (c_1, ..., c_M) and let R*(C) = (R*_1, ..., R*_M). Consider a codeword that lies strictly inside the ball, i.e., ‖x‖² < NP_0. If no such codeword exists, the statement of the lemma is trivially true with C′ = C. For the non-trivial case, we can assume that such a codeword exists; let c_{k_0} be that codeword. Consider the decision region R*_{k_0} = { y ∈ ℝ^N | k_0 = \arg\max_{i'} ‖y − c_{i'}‖ }. The region R*_{k_0} (if non-empty) is convex, since Voronoi cells of any order are convex regions (see Property OK.1 in Section 3.2 of [4]). Hence, there exists a unique point z_{k_0} in R*_{k_0} nearest to the codeword c_{k_0}. By moving the codeword c_{k_0} away from z_{k_0} along the line joining z_{k_0} and c_{k_0}, the distance from the codeword to every point in R*_{k_0} is increased. We continue thus until the codeword is moved to the surface of the power-constraint sphere, at say c′_{k_0}. Let us call the resulting code C_1. Note that C \ {c_{k_0}} = C_1 \ {c′_{k_0}}. Now consider

η μ_σ(C, R*(C)) = \sum_{k=1}^{M} \min_j \int_{R*_k} e^{-\frac{\|y-c_j\|^2}{2\sigma^2}}\,dy
  \overset{(c)}{=} \sum_{k=1}^{M} \int_{R*_k} e^{-\frac{\|y-c_k\|^2}{2\sigma^2}}\,dy
  = \sum_{k \ne k_0} \int_{R*_k} e^{-\frac{\|y-c_k\|^2}{2\sigma^2}}\,dy + \int_{R*_{k_0}} e^{-\frac{\|y-c_{k_0}\|^2}{2\sigma^2}}\,dy
  \overset{(d)}{\ge} \sum_{k \ne k_0} \int_{R*_k} e^{-\frac{\|y-c_k\|^2}{2\sigma^2}}\,dy + \int_{R*_{k_0}} e^{-\frac{\|y-c'_{k_0}\|^2}{2\sigma^2}}\,dy
  ≥ \sum_{k \ne k_0} \min_{c \in C_1} \int_{R*_k} e^{-\frac{\|y-c\|^2}{2\sigma^2}}\,dy + \min_{c \in C_1} \int_{R*_{k_0}} e^{-\frac{\|y-c\|^2}{2\sigma^2}}\,dy
  = η μ_σ(C_1, R*(C))
  \overset{(e)}{\ge} η μ_σ(C_1, R*(C_1)).   (11)

Here, (c) follows from (b) in the proof of Lemma 3, (d) follows from the construction of c′_{k_0}, which ensures that ‖y − c_{k_0}‖ ≤ ‖y − c′_{k_0}‖ for every y ∈ R*_{k_0}, and (e) follows from Lemma 3. Note also that we have only treated the case where R*_{k_0} is non-empty. If, on the other hand, the decision region R*_{k_0} is empty, we can move the codeword at c_{k_0} along any arbitrary direction; in this case inequality (d) becomes an equality, since the integrals over R*_{k_0} are zero. From a given code C, we can thus obtain a code C_1 having one more codeword on the surface of the power-constraint sphere and satisfying μ_σ(C, R*(C)) ≥ μ_σ(C_1, R*(C_1)). Repeating this process several times, we eventually obtain a code C′ with all codewords on the power-constraint sphere.
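The quantity μ_σ(C, R) of (9) and the farthest-point rule R*(C) of Lemma 3 can both be estimated by simulation. The sketch below is a Monte Carlo illustration of our own (the spherical codebook and all names are assumptions): for each cell it estimates the Gaussian mass placed there by every codeword, takes the least favourable codeword per cell as in (9), and compares the value under R*(C) with the value under an ordinary nearest-codeword rule, which by Lemma 3 cannot be smaller.

```python
import numpy as np

def sq_dists(C, y):
    # Pairwise squared distances ||y_t - c_j||^2, shape (num_samples, M).
    return (y * y).sum(axis=1, keepdims=True) - 2.0 * y @ C.T + (C * C).sum(axis=1)

def farthest_point(C, y):
    # R*(C) of Lemma 3: assign each y to the codeword it is FARTHEST from.
    return sq_dists(C, y).argmax(axis=1)

def nearest_point(C, y):
    # A competing (ordinary nearest-codeword) decision rule, for comparison.
    return sq_dists(C, y).argmin(axis=1)

def mu_sigma(C, sigma, assign, trials=100_000, seed=0):
    """Monte Carlo estimate of mu_sigma(C, R), eq. (9): for each cell of the
    partition induced by `assign`, the Gaussian mass that the least favourable
    codeword places on that cell, summed over all cells."""
    rng = np.random.default_rng(seed)
    M, N = C.shape
    mass = np.zeros((M, M))          # mass[j, k] ~ P[c_j + sigma*Z lands in cell k]
    for j in range(M):
        y = C[j] + sigma * rng.standard_normal((trials, N))
        mass[j] = np.bincount(assign(C, y), minlength=M) / trials
    return mass.min(axis=0).sum()    # sum_k min_j mass[j, k]

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    N, M, P0, sigma = 6, 8, 1.0, 1.0
    C = rng.standard_normal((M, N))
    C *= np.sqrt(N * P0) / np.linalg.norm(C, axis=1, keepdims=True)
    print("mu_sigma(C, R*(C))     ~", round(mu_sigma(C, sigma, farthest_point), 4))
    print("mu_sigma(C, nearest R) ~", round(mu_sigma(C, sigma, nearest_point), 4),
          "(Lemma 3 says this cannot be smaller than the line above)")
```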
Lemma 5: For any rate R length N code C satisfying the power constraint ∀ c ∈ C, ‖c‖² = NP_0,

μ_σ(C, R*(C)) ≥ M\,Q\!\left(\frac{M-1}{M}\Omega_0,\, N,\, \frac{P_0}{\sigma^2}\right).

Proof: For any code C that satisfies the requirements of the lemma, the decision regions in R*(C) are semi-infinite pyramids with their apices at the origin, extending out to infinity (see Appendix A for a proof). Assume that each of these regions {R*_k}_{k=1}^{M} cuts out a surface of area Ω_k on the unit N-sphere centered at the origin. Note that each decision region R*_k contains the point −c_k, the antipode of the corresponding codeword c_k. Consider the k-th term in the summation in the expression for μ_σ(C, R*(C)):

\min_j \int_{R*_k} \frac{e^{-\|y-c_j\|^2/2\sigma^2}}{(2\pi\sigma^2)^{N/2}}\,dy = \int_{R*_k} \frac{e^{-\|y-c_k\|^2/2\sigma^2}}{(2\pi\sigma^2)^{N/2}}\,dy.

The right-hand side of the above equation is equal to the probability of the event E_1 that the transmitted codeword c_k, located on the sphere ‖x‖² = NP_0, is displaced by the noise vector into the specific region R*_k. Now consider the probability of the event E_2 that the same transmitted codeword is displaced into the N-dimensional circular cone C*_k whose apex is at the origin, whose axis runs through −c_k, and which cuts out a surface of area Ω_k on the unit sphere centered at the origin (i.e., the solid angle of the N-dimensional circular cone is Ω_k). We claim that the probability of E_1 cannot be smaller than the probability of E_2; a proof of this claim is provided in Appendix B. The probability of the event E_2 is equal to Q(Ω_0 − Ω_k, N, P_0/σ²), from the definition of the Q-function in Definition 5. Hence,

μ_σ(C, R*(C)) = \sum_{k=1}^{M} \min_j \int_{R*_k} \frac{e^{-\|y-c_j\|^2/2\sigma^2}}{(2\pi\sigma^2)^{N/2}}\,dy ≥ \sum_{k=1}^{M} Q\!\left(\Omega_0 - \Omega_k,\, N,\, \frac{P_0}{\sigma^2}\right).   (12)

Noting that Q is a convex function of Ω_0 − Ω_k (see Section III in [3]), we apply Jensen's inequality to (12):

μ_σ(C, R*(C)) ≥ M \cdot \frac{1}{M}\sum_{k=1}^{M} Q\!\left(\Omega_0 - \Omega_k,\, N,\, \frac{P_0}{\sigma^2}\right) ≥ M\,Q\!\left(\frac{\sum_{k=1}^{M}(\Omega_0-\Omega_k)}{M},\, N,\, \frac{P_0}{\sigma^2}\right) = M\,Q\!\left(\frac{M-1}{M}\Omega_0,\, N,\, \frac{P_0}{\sigma^2}\right),   (13)

since \sum_{k=1}^{M} \Omega_k = \Omega_0.

We are now ready to prove Theorem 1.

Proof of Theorem 1: Consider any link i. For the code C_{i−1} satisfying ‖c‖² ≤ NP_0 ∀ c ∈ C_{i−1}, we can apply Lemma 4 to construct another code C′_{i−1} such that ∀ c ∈ C′_{i−1}, ‖c‖² = NP_0 and μ_{σ_i}(C_{i−1}, R*(C_{i−1})) ≥ μ_{σ_i}(C′_{i−1}, R*(C′_{i−1})). We then have

μ_{σ_i}(C_{i−1}, R_i) \overset{(a)}{\ge} μ_{σ_i}(C_{i−1}, R*(C_{i−1})) \overset{(b)}{\ge} μ_{σ_i}(C′_{i−1}, R*(C′_{i−1})) \overset{(c)}{\ge} M\,Q\!\left(\frac{M-1}{M}\Omega_0,\, N,\, \frac{P_0}{\sigma_i^2}\right) \overset{(d)}{\ge} M\,Q\!\left(\frac{M-1}{M}\Omega_0,\, N,\, \frac{P_0}{\sigma_0^2}\right).   (14)

In the above chain of inequalities, (a), (b), and (c) follow from Lemma 3, Lemma 4 (as discussed above), and Lemma 5, respectively. Inequality (d) follows from Remark 1, since σ_i² ≥ σ_0². Recalling that β_i = 1 − μ_{σ_i}(C_{i−1}, R_i) and applying (14) to (5), we have the desired result:

I(W; Ŵ_n) ≤ NR\left[1 - M\,Q\!\left(\frac{M-1}{M}\Omega_0,\, N,\, \frac{P_0}{\sigma_0^2}\right)\right]^{n}.

V. DISCUSSION

We mentioned in Section I that the bound presented in the current paper improves and complements the bound provided by [2]. In this section, we demonstrate this fact with a comparison plot. The bound given by [2] is

I(W; Ŵ_n) ≤ 2^{NR}\left(1 - e^{-N E(P_0/\sigma_0^2)}\right)^{n(1-\epsilon)},   (15)

for asymptotically large n, where for any S ≥ 0,

E(S) ≜ \frac{(S+2)+\sqrt{(S+2)^2-4}}{4} + \frac{1}{2}\log\!\left[\frac{(S+2)+\sqrt{(S+2)^2-4}}{2}\right].

The above bound decays with n as \left(1 - e^{-N E(P_0/\sigma_0^2)}\right)^{n} for any code rate R, while the bound given by Theorem 1 decays as \left(1 - M\,Q\!\left(\frac{M-1}{M}\Omega_0, N, \frac{P_0}{\sigma_0^2}\right)\right)^{n}. Now, let

E_{as}(R, S) ≜ -\lim_{N\to\infty} \frac{\log\!\left[M\,Q\!\left(\frac{M-1}{M}\Omega_0,\, N,\, S\right)\right]}{N}.

We now investigate how these two bounds compare when N is asymptotically large, by comparing the values of E(S) and E_{as}(R, S), where S = P_0/\sigma_0^2. To do so, we obtain E_{as}(R, S) as a function of R and S using the asymptotic analysis in [3]. The expression for E_{as}(R, S) is given below, with a justification in Appendix C:

E_{as}(R, S) = \frac{S}{2} - \frac{1}{2}\sqrt{S}\,G\cos\theta - \log(G\sin\theta) - R\log 2,

where

G = \frac{1}{2}\left(\sqrt{S}\cos\theta + \sqrt{4 + S\cos^2\theta}\right)

and θ = π − sin^{-1} 2^{-R}. Shown in Fig. 2 is a plot of E_{as}(R, S)/E(S) as a function of S, for various values of R. As can be seen from the plot, E_{as}(R, S) is always smaller than E(S) for any R and S, thus showing that the bound obtained in Theorem 1 is tighter than the one given by (15). The improvement is most pronounced when the SNR S is not very high.

Fig. 2: Comparison of exponents for large N (plot of E_{as}(R, S)/E(S) versus the signal-to-noise ratio S in dB, for R = 0.25, 0.5, 0.75, 1.0, and 2.0).

REFERENCES
[1] U. Niesen, C. Fragouli, and D. Tuninetti, "On capacity of line networks," IEEE Trans. Inform. Theory, vol. 53, no. 11, pp. 4039–4058, Nov. 2007.
[2] R. Subramanian, "The relation between block length and reliability for a cascade of AWGN links," in Proc. 2012 International Zurich Seminar on Communications (IZS), pp. 71–74, Feb. 29–Mar. 2, 2012.
[3] C. E. Shannon, "Probability of error for optimal codes in a Gaussian channel," in Claude Elwood Shannon: Collected Papers, pp. 279–324, 1993.
[4] A. Okabe, B. Boots, and K. Sugihara, Spatial Tessellations: Concepts and Applications of Voronoi Diagrams, ch. 3. New York, NY, USA: John Wiley & Sons, Inc., 1992.
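The comparison behind Fig. 2 can be reproduced numerically. The sketch below is an implementation of our own; it uses E(S) and E_{as}(R, S) exactly as reconstructed in this section, so it inherits any uncertainty in that reconstruction, and all function names are assumptions.

```python
import numpy as np

def E(S):
    # Exponent of the earlier bound (15), as a function of S = P0 / sigma0^2.
    t = (S + 2.0) + np.sqrt((S + 2.0) ** 2 - 4.0)
    return t / 4.0 + 0.5 * np.log(t / 2.0)

def E_as(R, S):
    # Asymptotic exponent of the Theorem 1 bound, evaluated via Shannon's
    # cone-angle form with theta = pi - arcsin(2^-R) (see Appendix C).
    theta = np.pi - np.arcsin(2.0 ** (-R))
    G = 0.5 * (np.sqrt(S) * np.cos(theta) + np.sqrt(4.0 + S * np.cos(theta) ** 2))
    return S / 2.0 - 0.5 * np.sqrt(S) * G * np.cos(theta) \
        - np.log(G * np.sin(theta)) - R * np.log(2.0)

if __name__ == "__main__":
    for R in (0.25, 0.5, 1.0, 2.0):
        for snr_db in (-5.0, 0.0, 5.0, 10.0, 20.0):
            S = 10.0 ** (snr_db / 10.0)
            print(f"R={R:4.2f}  S={snr_db:5.1f} dB  E_as/E = {E_as(R, S) / E(S):.3f}")
```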
APPENDIX A
THE FARTHEST-POINT VORONOI TESSELLATION FOR POINTS ON A SPHERE

Lemma 6: Given any M, N ∈ ℕ, the non-empty cells in the farthest-point Voronoi tessellation of ℝ^N w.r.t. any set of M points on an N-sphere of radius A > 0 are all semi-infinite pyramids.

Proof: Consider the farthest-point Voronoi tessellation of ℝ^N w.r.t. a code C = (c_1, ..., c_M) s.t. ‖c_i‖² = A², 1 ≤ i ≤ M. Consider any x ∈ ℝ^N and assume without loss of generality that it is contained in R*_1, the farthest-point Voronoi cell for c_1. In that case, we have, for all i s.t. 1 ≤ i ≤ M,

‖c_1 − x‖² ≥ ‖c_i − x‖²
⇒ ‖c_1‖² + ‖x‖² − 2⟨c_1, x⟩ ≥ ‖c_i‖² + ‖x‖² − 2⟨c_i, x⟩
⇒ −⟨c_1, x⟩ ≥ −⟨c_i, x⟩.   (16)

Consider any α ≥ 0. We claim that αx is also contained in R*_1. This can be shown to be true by applying (16) to the expansion of ‖c_1 − αx‖²:

‖c_1 − αx‖² = ‖c_1‖² + α²‖x‖² − 2α⟨c_1, x⟩ ≥ ‖c_i‖² + α²‖x‖² − 2α⟨c_i, x⟩ = ‖c_i − αx‖²,

for all i s.t. 1 ≤ i ≤ M. Hence, αx is also contained in R*_1. Generalizing this, we have shown that any non-empty Voronoi cell that contains a point x also contains the point αx for any α ≥ 0. Such a region is a semi-infinite pyramid by definition.
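Lemma 6 can be spot-checked numerically: if x lies in the farthest-point cell of a codeword, then so should αx for every α ≥ 0. The sketch below is our own check (not part of the proof); the particular sphere radius and sample sizes are arbitrary choices.

```python
import numpy as np

def farthest_index(C, x):
    # Index of the codeword farthest from x, i.e. the farthest-point cell containing x.
    return int(np.argmax(np.linalg.norm(C - x, axis=1)))

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    M, N, A = 10, 5, 2.0
    C = rng.standard_normal((M, N))
    C *= A / np.linalg.norm(C, axis=1, keepdims=True)   # all points on a radius-A sphere
    ok = True
    for _ in range(1000):
        x = rng.standard_normal(N)
        k = farthest_index(C, x)
        for alpha in (0.25, 1.0, 4.0, 50.0):
            ok = ok and (farthest_index(C, alpha * x) == k)
    print("cell membership preserved under scaling x -> alpha*x:", ok)
```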
APPENDIX B
PROOF OF THE CLAIM IN LEMMA 5

Consider the (N−1)-dimensional cross-section of the pyramid R*_k cut out by a sphere of radius R centered at the origin. This will be an arbitrary spherical polygon. The cross-section of the cone C*_k by the same sphere will be a spherical cap; the axis of the cone cuts through the spherical cap at its center. The non-overlapping regions of such a spherical cap and polygon are illustrated in Fig. 3. Since both cross-sections subtend the same solid angle Ω_k, the surface areas of the non-overlapping parts of the two cross-sections (indicated as A_1 and A_2 and by two different shadings in Fig. 3) are equal. Now, every point in the shaded region A_1 on the polygon is nearer to c_k than any point in A_2 is to c_k. This is because the former lies outside the spherical cap centered in the direction of −c_k while the latter lies inside it, so the angle θ_2 ∈ [0, π] between the axis of the cone and the line joining any point y_2 in A_2 to the origin is smaller than the angle θ_1 ∈ [0, π] between the axis and the line joining any point y_1 in A_1 to the origin, as shown on the right-hand side of Fig. 3. This in turn implies that y_1 is closer to c_k than y_2 is, as shown below:

‖c_k − y_1‖² = NP_0 + R² + 2R\sqrt{NP_0}\cos\theta_1 ≤ NP_0 + R² + 2R\sqrt{NP_0}\cos\theta_2 = ‖c_k − y_2‖².

This in turn means that the integral of the density function of the Gaussian noise vector centered at c_k over the volume R*_k is greater than the integral over the volume C*_k. The former is the probability of the event E_1 and the latter is the probability of the event E_2.

Fig. 3: Illustration of the fact that the non-overlapping portion of the pyramidal cross-section is nearer to the original codeword than the non-overlapping portion of the conical cross-section.

APPENDIX C
ASYMPTOTIC EXPONENTIAL DECAY OF THE Q-FUNCTION WITH N

Though it is hard to express Q in terms of elementary functions, it is easy to obtain asymptotic approximations when the block length N is very large. The idea is to use Shannon's computation of the sphere-packing exponent. Shannon derives a bound on Q(·) as a function of the cone angle θ ∈ [0, π] instead of the solid angle Ω, since this makes asymptotic analysis easier. This results in a bound for Q(·) that decays exponentially in N, with the exponent being

E_L(θ) = \frac{P_0}{2\sigma_0^2} - \frac{1}{2}\sqrt{\frac{P_0}{\sigma_0^2}}\,G\cos\theta - \log(G\sin\theta),

where

G = \frac{1}{2}\left(\sqrt{\frac{P_0}{\sigma_0^2}}\cos\theta + \sqrt{4 + \frac{P_0}{\sigma_0^2}\cos^2\theta}\right).

The bound on Q(·) for a given solid angle Ω can then be evaluated numerically or by any other means, since there is a one-to-one correspondence between the cone angle θ_0 and the solid angle Ω (see Fig. 4):

\Omega(\theta_0) = \frac{(N-1)\pi^{\frac{N-1}{2}}}{\Gamma\!\left(\frac{N+1}{2}\right)} \int_0^{\theta_0} (\sin\theta)^{N-2}\,d\theta.

Fig. 4: Relation between the solid angle Ω(θ_0) and the cone angle θ_0.

The particular case of interest in [3] is Q\!\left(\frac{1}{M}\Omega_0, N, \frac{P_0}{\sigma_0^2}\right), which corresponds to the cone angle θ = sin^{-1} 2^{-R}, and for which the sphere-packing lower bound is obtained (see pages 620 and 625 in [3]). Our bound involves Q\!\left(\frac{M-1}{M}\Omega_0, N, \frac{P_0}{\sigma_0^2}\right) instead, and hence we have to evaluate the exponent E_L(θ) at θ = π − sin^{-1} 2^{-R}, giving us the following result:

I(W; Ŵ_n) ≲ \left(1 - e^{-N E_{as}(R,\, P_0/\sigma_0^2)}\right)^{n},   (17)

where E_{as}(R, S) is as shown in (18):

E_{as}(R, S) = \frac{S}{2^{2R+2}}\left[(2^{2R}+1) + (2^{2R}-1)\sqrt{1+\frac{2^{2R+2}}{(2^{2R}-1)S}}\right] + \frac{1}{2}\log\!\left[2^{2R} + \frac{S}{2}(2^{2R}-1)\left(1+\sqrt{1+\frac{2^{2R+2}}{(2^{2R}-1)S}}\right)\right] - R\log 2.   (18)
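The pieces of Appendix C can be tied together numerically. The sketch below is a check of our own, using the formulas as reconstructed above (in particular the closed form (18), whose exact arrangement we could not fully recover from the source): it computes the solid angle Ω(θ_0), evaluates E_L(θ) at θ = π − sin^{-1} 2^{-R}, and confirms that E_L(θ) − R log 2 agrees with (18).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def solid_angle(theta0, N):
    # Omega(theta_0) = (N-1)*pi^((N-1)/2)/Gamma((N+1)/2) * int_0^theta0 sin^(N-2)(t) dt
    coeff = (N - 1) * np.exp(0.5 * (N - 1) * np.log(np.pi) - gammaln((N + 1) / 2.0))
    val, _ = quad(lambda t: np.sin(t) ** (N - 2), 0.0, theta0)
    return coeff * val

def E_L(theta, S):
    # Shannon's sphere-packing exponent as a function of the cone angle theta.
    G = 0.5 * (np.sqrt(S) * np.cos(theta) + np.sqrt(4.0 + S * np.cos(theta) ** 2))
    return S / 2.0 - 0.5 * np.sqrt(S) * G * np.cos(theta) - np.log(G * np.sin(theta))

def E_as_closed_form(R, S):
    # Eq. (18), as reconstructed above.
    a = 2.0 ** (2 * R)
    root = np.sqrt(1.0 + 4.0 * a / ((a - 1.0) * S))
    return S / (4.0 * a) * ((a + 1.0) + (a - 1.0) * root) \
        + 0.5 * np.log(a + 0.5 * S * (a - 1.0) * (1.0 + root)) - R * np.log(2.0)

if __name__ == "__main__":
    N, R, S = 20, 0.5, 4.0
    theta = np.pi - np.arcsin(2.0 ** (-R))
    omega0 = 2.0 * np.pi ** (N / 2) / np.exp(gammaln(N / 2.0))
    print("Omega(pi) / Omega_0 (should be 1):", solid_angle(np.pi, N) / omega0)
    print("fraction of Omega_0 subtended by theta:", solid_angle(theta, N) / omega0)
    print("E_L(theta) - R*log 2 =", E_L(theta, S) - R * np.log(2.0))
    print("closed form (18)     =", E_as_closed_form(R, S))
```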
