Blind decoding of Linear Gaussian channels with ISI, capacity, error exponent, universality

Farkas Lóránt

arXiv:0801.0540v1 [cs.IT] 3 Jan 2008

Abstract—A new, straightforward universal blind detection algorithm for linear Gaussian channels with ISI is given. A new error exponent is derived, which is better than Gallager's random coding error exponent.

I. INTRODUCTION

In this paper, the discrete Gaussian channel with intersymbol interference (ISI)

    y_i = \sum_{j=0}^{l} x_{i-j} h_j + z_i    (1)

will be considered, where the vector h = (h_0, h_1, ..., h_l) represents the ISI, and {z_i} is white Gaussian noise with variance σ².

A similar continuous-time model has been studied by Gallager [6]. He showed that it can be reduced to the form

    y_n = v_n x_n + w_n    (2)

where the v_n are eigenvalues of the correlation operator. The same is true for the discrete model (1), but the reduction requires knowledge of the covariance matrix R(i,j) = \sum_{k=0}^{l} h(k-i) h(k-j), whose eigenvectors should be used as the new basis vectors.

Here, however, such knowledge will not be assumed; our goal is to study universal coding for the class of ISI channels of the form (1).

As a motivation, note that the alternative method of first identifying the channel by transmitting a known "training sequence" has some drawbacks. Because the length of the training sequence is limited, the channel estimate can be imprecise, and the data sequence is then decoded according to an incorrect likelihood function. This results in an increase in error rate [2], [3] and in a decrease in capacity [4]. Moreover, as the training sequence carries no useful information, the longer it is, the fewer information bits can be carried.

One might think this problem could be solved by choosing the training sequence long enough to ensure precise channel estimation and then choosing the data block sufficiently long, but this solution seldom works, due to the delay constraint and to the slow change of the channel in time.

So we will give a straightforward method of coding and decoding without any channel side information (CSI). To achieve this, we generalise the result of Csiszár and Körner [5] to the Gaussian channel with white noise and ISI, using an analogue of the maximum mutual information (MMI) decoder of [5].

We will show that our new method is not only universal, i.e. it does not depend on the channel, but that its error exponent is in many cases better than Gallager's [6] lower bound for the case when complete CSI is available to the receiver. Gallager's error exponent has previously been improved for some channels using an MMI decoder, e.g. for discrete memoryless multiple-access channels [7].

We do not use generalised maximum likelihood decoding [9], but a generalised version of MMI decoding: we first approximate the channel parameters by maximum likelihood estimation, and then adopt the message whose mutual information with the estimated parameters is maximal. Using an extension of the powerful method of types, we can then derive the capacity and the random coding error exponent in a simple way. At the end we obtain a more general result: we show how the method of types can be extended to a continuous, non-memoryless environment.

The structure of this correspondence is as follows. In Section II we generalise typical sequences to the ISI channel environment. The main goal of Section III is to give a new method of blind detection. In Section IV we show by numerical results that for some parameters the new error exponent is better than Gallager's random coding error exponent. In Section V we discuss the results and give a general formula for channels with fading.

(Thanks to Imre Csiszár for his numerous corrections to this work.)

II. DEFINITIONS

Let γ_n be a sequence of positive numbers with limit 0. A sequence x ∈ R^n is γ_n-typical for an n-dimensional continuous distribution P, denoted x ∈ T_P, if

    | -\log p(x) - H(P) | / n < γ_n    (3)

where p(x) denotes the density function and H(P) the differential entropy of P.
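A small numerical illustration of definition (3) may help fix ideas. The sketch below (Python with NumPy; the function name is ours, and γ_n = n^{-1/4} is the coarse graining chosen later in this section) draws i.i.d. standard normal vectors and checks how often the normalised log-density deviates from the differential entropy by more than γ_n.

```python
import numpy as np

def is_typical(x, gamma_n):
    """Check definition (3) for the i.i.d. standard normal distribution P:
    |(-ln p(x)) - H(P)| / n < gamma_n, working in nats."""
    n = x.size
    neg_log_p = 0.5 * n * np.log(2 * np.pi) + 0.5 * np.sum(x ** 2)  # -ln p(x)
    h_P = 0.5 * n * np.log(2 * np.pi * np.e)                        # H(P)
    return abs(neg_log_p - h_P) / n < gamma_n

rng = np.random.default_rng(0)
n = 4096
gamma_n = n ** (-0.25)
hits = sum(is_typical(rng.standard_normal(n), gamma_n) for _ in range(200))
print(f"fraction typical: {hits / 200:.2f}")   # approaches 1 as n grows
```

The empirical fraction is essentially 1 already at moderate n, which is the behaviour the typicality machinery below relies on.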
Similarly, sequences x ∈ R^n, y ∈ R^{n+l} are jointly γ_n-typical for the (2n+l)-dimensional joint distribution P_{X,Y}, denoted (x,y) ∈ T_{P_{XY}}, if

    | -\log p_{X,Y}(y,x) - H(P_{X,Y}) | < n γ_n.

In the same way, a sequence y is γ_n-typical for the conditional distribution P_{Y|X}, given that X = x, denoted y ∈ T_{P_{Y|X}}(x), if

    | -\log p_{Y|X}(y|x) - H(Y|X) | < n γ_n.

To simplify the proofs, in the following P = P_X is always the n-dimensional i.i.d. standard normal distribution, the optimal input distribution of a Gaussian channel with power constraint 1. The conditional distribution will be chosen as P^{(h,σ)}_{Y|X} with density

    \frac{1}{σ^n (2π)^{(n+l_n)/2}} \exp\!\left( -\frac{\|y - h*x\|^2}{2σ^2} \right)

where h = (h_0, h_1, h_2, ..., h_l) and (h*x)_i = \sum_{j=0}^{l} x_{i-j} h_j, with x_k = 0 understood for k < 0. So in this case

    H_{h,σ}(Y|X) = H(h*X + Z | X) = H(Z) = n [ \ln(2πe)/2 + \ln σ ].

The limit of the entropy of Y = X*h + Z as n → ∞ is

    \lim_{n→∞} H_{h,σ}(Y)/n = \frac{1}{2} \ln(2πe) + \frac{1}{2π} \int_0^{2π} \ln( σ + f(ξ) ) dξ

where f(λ) = \sum_{k=-∞}^{∞} ( \sum_{j=0}^{l-|k|} h_j h_{j+|k|} ) e^{ikλ}, see [1] (here R_{m,n} = r(m-n) = r(k) = \sum_{j=0}^{l-|k|} h_j h_{j+|k|} is the correlation matrix). So the limit of the average mutual information per symbol,

    \lim_{n→∞} I_n(h,σ) = \lim_{n→∞} \frac{ H_{h,σ}(Y) - H_{h,σ}(Y|X) }{ n },

is equal to

    I(h,σ) ≜ \frac{1}{2π} \int_0^{2π} \ln\!\left( 1 + \frac{f(ξ)}{σ} \right) dξ;

moreover, the sequence I_n(h,σ) is non-increasing (see [1]).
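As a concrete illustration of the spectral quantities above, the following sketch (Python/NumPy; the helper names are ours, and we read the σ appearing in the formula as the noise variance) evaluates f(ξ) = |Σ_j h_j e^{ijξ}|² from the ISI taps and approximates I(h,σ) by the trapezoidal rule.

```python
import numpy as np

def spectral_density(h, xi):
    """f(xi) = sum_k (sum_j h_j h_{j+|k|}) e^{i k xi} = |H(e^{i xi})|^2 for real FIR taps h."""
    H = np.polyval(np.asarray(h)[::-1], np.exp(1j * xi))   # H(e^{i xi}) = sum_j h_j e^{i j xi}
    return np.abs(H) ** 2

def mutual_information(h, sigma, num_points=4096):
    """I(h, sigma) = (1/2pi) * integral_0^{2pi} ln(1 + f(xi)/sigma) d xi, in nats per symbol."""
    xi = np.linspace(0.0, 2.0 * np.pi, num_points)
    integrand = np.log1p(spectral_density(h, xi) / sigma)
    return np.trapz(integrand, xi) / (2.0 * np.pi)

h = np.array([1.0, 0.5, 0.3, 0.1])   # example ISI taps, not taken from the paper
print(mutual_information(h, sigma=1.0))
```

The same routine is reused in the later sketches for the decoder and for the error-exponent comparison.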
We will consider a finite set of channels that grows subexponentially with n and is, in the limit, dense in the set of all ISI channels. To this end, define the set of approximating ISI vectors as

    H_n = { h ∈ R^{l_n} : h_i = k_i γ_n, |h_i| < P_n, k_i ∈ Z, ∀ i ∈ {1,2,...,l_n} }

where l_n is the length of the ISI, P_n is the power constraint per symbol, and γ_n is the "coarse graining", intuitively the precision of the detection. Similarly, we define the set of approximating variances as

    V_n = { σ ∈ R : σ = k γ_n, 1/2 < |σ| < P_n, k ∈ Z^+ }.

These two sets form the approximating set of parameters, denoted S_n = H_n × V_n. Below we set l_n = [\log_2(n)], P_n = n^{1/16}, γ_n = n^{-1/4}.

Definition: The ISI type of a pair (x,y) ∈ R^n × R^{n+l_n} is the pair (h_n, σ_n) ∈ S_n defined by

    h(i) = \arg\min_{h(i) ∈ H_n} \| y - h(i)*x_i \|,

    σ(i)^2 = \arg\min_{σ(i) ∈ V_n} \left| σ(i)^2 - \min_{h(i) ∈ H_n} \frac{ \| y - h(i)*x_i \|^2 }{ n } \right|.

Note that this type concept does not apply to separate input or output sequences, only to pairs (x,y).
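In practice the ISI type can be computed by a least-squares fit of the taps followed by quantisation onto the grids H_n and V_n. The sketch below (Python/NumPy) is a simplified reading of the definition: it solves the unconstrained problem min_h ‖y − h∗x‖ and then rounds to multiples of γ_n; searching the discrete grid H_n directly, as the definition literally requires, should agree up to the coarse graining — an assumption on our part. (The variance is quantised here, whereas the definition quantises σ itself; the difference is of the order of the coarse graining.)

```python
import numpy as np

def isi_type(x, y, l_n, gamma_n):
    """Estimate the ISI type (h(i), sigma(i)^2) of the pair (x, y):
    least-squares tap estimate, then projection onto the gamma_n grid."""
    n = x.size
    # Convolution matrix X with X[i, j] = x_{i-j} (and x_k = 0 for k < 0), shape (len(y), l_n+1).
    X = np.zeros((y.size, l_n + 1))
    for j in range(l_n + 1):
        X[j:j + n, j] = x
    h_ls, *_ = np.linalg.lstsq(X, y, rcond=None)          # argmin_h ||y - h*x||
    h_hat = gamma_n * np.round(h_ls / gamma_n)             # project onto the grid H_n
    resid = y - X @ h_hat
    sigma2_hat = gamma_n * np.round((resid @ resid / n) / gamma_n)  # coarse-grained variance
    return h_hat, sigma2_hat

rng = np.random.default_rng(1)
n, l_n = 2048, 3
gamma_n = n ** (-0.25)
x = rng.standard_normal(n)
h_true = np.array([1.0, 0.4, 0.2, 0.1])
y = np.convolve(x, h_true) + 0.5 * rng.standard_normal(n + l_n)
print(isi_type(x, y, l_n, gamma_n))
```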
III. LEMMAS, THEOREM

We summarise the results of this section. The first lemma shows that the above definition of the ISI type is consistent, in the sense that y is conditionally P^{h,σ}_{Y|X}-typical given x, at least when ‖y − h∗x‖² is not too large. Lemma 2 gives the properties which we need for our method and proves that almost all randomly generated sequences have these properties. Lemma 4 gives an upper bound on the set of output signals which are "problematic", i.e. typical with two codewords: they can be the result of two different codewords over two different channels. Lemma 5 shows that if the channel parameters are estimated via maximum likelihood (ML), the codewords and the noise cannot be strongly correlated. Lemma 6 gives a formula for the probability of the event that an output sequence is typical with another codeword with respect to another channel. All lemmas are used in Theorem 1, which gives the main result and defines the detection method precisely.

Lemma 1: If

    \left| \frac{ \|y - h(i)*x_i\|^2 }{ n } - σ(i)^2 \right| < γ_n,

i.e. the detected variance is in the interior of the set of approximating variances, then

    y ∈ T_{P^{h(i),σ(i)}_{Y|X}}(x_i).

Proof: Indeed, if

    \left| \frac{ \|y - h(i)*x_i\|^2 }{ n } - σ(i)^2 \right| < γ_n

then

    \left| \frac{ \|y - h(i)*x_i\|^2 }{ 2σ(i)^2 } - \frac{n}{2} \right| < n γ_n.

With

    -\log P^{h(i),σ(i)}_{Y|X}(y|x_i) = \frac{n}{2} \log(2π σ(i)^2) + \frac{ \|y - h(i)*x_i\|^2 }{ 2σ(i)^2 }

we get

    \left| -\log P^{h(i),σ(i)}_{Y|X}(y|x_i) - H_{P^{h(i),σ(i)}_{Y|X}}(Y|X) \right| = \left| \frac{ \|y - h(i)*x_i\|^2 }{ 2σ(i)^2 } - \frac{n}{2} \right|,

and by the definition y ∈ T_{P^{h(i),σ(i)}_{Y|X}}(x_i) if

    \left| -\log P^{h(i),σ(i)}_{Y|X}(y|x_i) - H_{P^{h(i),σ(i)}_{Y|X}}(Y|X) \right| < n γ_n.  ∎

Lemma 2: For arbitrarily small δ > 0, if n is large enough, there exists a set A ⊂ T_P with P^n(A) > 1 − δ, where P is the n-dimensional standard normal distribution, such that for all x ∈ A and k, l ∈ {0,1,...,l_n}, k ≠ l,

    \left| \frac{ \sum_{j=0}^{n} x_{j-k} x_{j-l} }{ n } \right| < γ_n    (4)

    \left| \frac{ \sum_{j=0}^{n} x_{j-k} x_{j-k} }{ n } - 1 \right| < γ_n    (5)

    \left| -\frac{1}{n} \ln p(x) - \frac{1}{n} H(P) \right| < γ_n    (6)

Proof: Take n i.i.d. standard Gaussian random variables X_1, X_2, ..., X_n. Fix k, l with 1 ≤ k, l ≤ n. By Chebyshev's inequality,

    \Pr\left\{ \left| \frac{ \sum_{i=1}^{n} X_{i-k} X_{i-l} }{ n } \right| > \frac{ξ}{\sqrt{n}} \right\} < \frac{1}{ξ^2}.

From this, with ξ = γ_n n^{1/2},

    \Pr\left\{ \left| \frac{ \sum_{i=1}^{n} X_{i-k} X_{i-l} }{ n } \right| > γ_n \right\} < \frac{1}{ (γ_n n^{1/2})^2 } = δ_n.

This means that there exists a set in R^n whose P^n-measure is at least 1 − δ_n such that for every sequence from this set

    \left| \frac{ \sum_{j=0}^{n} x_{j-k} x_{j-l} }{ n } \right| < γ_n.

Similarly, there exist such sets for all k ≠ l in {0,1,...,l_n} × {0,1,...,l_n}. By a completely analogous procedure we can construct sets which satisfy (5) and (6). The intersection of these sets has P-measure at least 1 − 2δ_n(l_n^2 + 1). As δ_n l_n^2 → 0, this proves the lemma.  ∎

The Lebesgue measure will be denoted by λ; its dimension is not specified, as it will always be clear from the context.

Lemma 3: If n is large enough then the set A in Lemma 2 satisfies

    2^{H(P) - 2nγ_n} < λ(A) < 2^{H(P) + nγ_n},

and for any m-dimensional continuous distribution Q(·),

    λ(T_Q) < 2^{H(Q) + nγ_n},

where T_Q is the set of sequences typical for Q, see (3).

Proof: Since

    1 > P(A) = \int_A p(x) λ(dx) > 1 − δ

by the previous lemma, and using 2^{-(H(P_X) - nγ_n)} > p(x) > 2^{-(H(P_X) + nγ_n)} on T_{P_X} together with A ⊂ T_{P_X}, we get

    2^{H(P) + nγ_n} > λ(A) > (1 − δ) 2^{H(P) - nγ_n} > 2^{H(P) - 2nγ_n}

if n is large enough. Similarly, from

    1 > Q(T_Q) = \int_{T_Q} q(x) λ(dx)

we get λ(T_Q) < 2^{H(Q) + nγ_n}.  ∎
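Properties (4) and (5) are concentration statements that are easy to probe numerically. The sketch below (Python/NumPy; a construction of ours, not from the paper) estimates by simulation how often an i.i.d. standard normal sequence violates the empirical-correlation bounds with γ_n = n^{-1/4} and l_n = [log₂ n], mirroring the Chebyshev argument in the proof of Lemma 2.

```python
import numpy as np

def violates_correlation_bounds(x, l_n, gamma_n):
    """Return True if x violates (4) or (5) for some lags k, l in {0,...,l_n}."""
    n = x.size
    shifted = [np.concatenate([np.zeros(k), x[:n - k]]) for k in range(l_n + 1)]  # x_{j-k}
    for k in range(l_n + 1):
        for l in range(l_n + 1):
            c = shifted[k] @ shifted[l] / n
            target = 1.0 if k == l else 0.0
            if abs(c - target) >= gamma_n:
                return True
    return False

rng = np.random.default_rng(2)
n = 4096
l_n = int(np.log2(n))
gamma_n = n ** (-0.25)
trials = 200
bad = sum(violates_correlation_bounds(rng.standard_normal(n), l_n, gamma_n) for _ in range(trials))
print(f"violation rate: {bad / trials:.2f}")   # small, and shrinking as n grows
```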
The next lemma is an analogue of the packing lemma in [5].

Lemma 4: For all R > 0, δ > 0, there exist at least 2^{n(R-δ)} different sequences in R^n which are elements of the set A from Lemma 2 such that for each pair of ISI channels with h, ĥ ∈ H_n, σ, σ̂ ∈ V_n, and for all i ∈ {1,2,...,M},

    λ\left( T_{P^{h,σ}_{Y|X}}(x_i) ∩ \bigcup_{j ≠ i} T_{P^{ĥ,σ̂}_{Y|X}}(x_j) \right) ≤ 2^{-[ n |I(ĥ,σ̂) - R|^+ - H_{h,σ}(Y|X) ]}    (7)

provided that n ≥ n_0(m, δ).

Proof: We shall use the method of random selection. For fixed constants n, m, let C_m be the family of all ordered collections C = {x_1, x_2, ..., x_m} of m not necessarily different sequences in A. Notice that if some C ∈ C_m satisfies (7) for every i and for every pair of channels (h,σ), (ĥ,σ̂), then the x_i's are necessarily distinct. For any pair of channels (h, ĥ), let u_i(C, h, ĥ) denote the left-hand side of (7). Since for x ∈ T_P

    λ\{ T_{P^{h,σ}_{Y|X}}(x) \} ≤ 2^{H_{h,σ}(Y|X) + nγ_n}

from Lemma 3, a C ∈ C_m satisfies (7) if, for every i, h, ĥ,

    u_i(C) ≜ \sum_{h,ĥ ∈ H_n} u_i(C, h, ĥ) \cdot 2^{ n[I(ĥ,σ̂) - R] - H_{h,σ}(Y|X) }

is at most 1.

Notice that if C ∈ C_m and

    \frac{1}{m} \sum_{i=1}^{m} u_i(C) ≤ 1/2    (8)

then u_i(C) ≤ 1 for at least m/2 indices i. Further, if C′ is the subcollection consisting of these indices, then u_i(C′) ≤ u_i(C) ≤ 1 for every such index i. Hence the lemma will be proved if, for an m with

    2 \cdot 2^{n(R-δ)} ≤ m ≤ 2^{n(R-δ/2)},    (9)

we find a C ∈ C_m which satisfies (8).

Choose C ∈ C_m at random, according to the uniform distribution over A. In other words, let W^m = (W_1, W_2, ..., W_m) be independent RVs, each uniformly distributed over A. In order to prove that (8) holds for some C ∈ C_m, it suffices to show that

    E u_i(W^m) ≤ 1/2,   i = 1,2,...,m.    (10)

To this end, we bound E u_i(W^m, h, ĥ). Recalling that u_i(C, h, ĥ) denotes the left-hand side of (7), we have

    E u_i(W^m, h, ĥ) =    (11)

    \int \Pr\left\{ y ∈ T_{P^{h,σ}_{Y|X}}(W_i) ∩ \bigcup_{j ≠ i} T_{P^{ĥ,σ̂}_{Y|X}}(W_j) \right\} dy.    (12)

As the W_j are independent and identically distributed, the probability under the integral is bounded above by

    \sum_{j: j ≠ i} \Pr\{ y ∈ T_{P^{h,σ}_{Y|X}}(W_i) ∩ T_{P^{ĥ,σ̂}_{Y|X}}(W_j) \} =    (13)

    (m-1) \cdot \Pr\{ y ∈ T_{P^{h,σ}_{Y|X}}(W_i) \} \cdot \Pr\{ y ∈ T_{P^{ĥ,σ̂}_{Y|X}}(W_j) \}.    (14)

As the W_j's are uniformly distributed over A, we have for every fixed y

    \Pr\{ y ∈ T_{P^{h,σ}_{Y|X}}(W_i) \} = \frac{ λ\{ x : x ∈ T_{P_X}, y ∈ T_{P^{h,σ}_{Y|X}}(x) \} }{ λ\{A\} }.

The set in the numerator is non-void only if y ∈ T_{P^{h,σ}_Y}. In this case it can be written as T_{\bar{P}_{X|Y}}(y), where \bar{P} is the conditional distribution for which

    P_X(a) P^{h,σ}_{Y|X}(b|a) = P^{h,σ}_Y(b) \bar{P}_{X|Y}(a|b).

Thus by Lemma 3 and Lemma 2,

    \Pr\{ y ∈ T_{P^{h,σ}_{Y|X}}(W_i) \} ≤ \frac{ 2^{H_{h,σ}(X|Y) + nγ_n} }{ 2^{H(X) - 2nγ_n} } = 2^{-n(I(h,σ) - 3γ_n)}

if y ∈ T_{P^{h,σ}_Y}, and \Pr\{ y ∈ T_{P^{h,σ}_{Y|X}}(W_i) \} = 0 otherwise. So, upper bounding λ(T_{P^{h,σ}_Y}) by 2^{H_{h,σ}(Y) + nγ_n} — with the use of Lemma 3 — from (14), (12) and (9) we get

    E u_i(W^m, h, ĥ) ≤ λ(T_{P^{h,σ}_Y}) (m-1) 2^{-n[ I(h,σ) + I(ĥ,σ̂) - 6γ_n ]} ≤ 2^{-n[ I(ĥ,σ̂) - R + δ - 7γ_n ] + H_{h,σ}(Y|X)}.

Let n be so large that 7γ_n < δ/2; then we get

    E u_i(W^m) ≤ |H_n|^2 |V_n|^2 \, 2^{-n(δ/2)},

which proves (10).  ∎

Lemma 5: For x ∈ A from Lemma 2 and y as in (1), let h̃ = \arg\min_{h ∈ H_n} \|y - h*x\| and z = y - h̃*x. Then

    \left| \frac{ \sum_{j=1}^{n} z_j x_{j-k} }{ n } \right| < γ_n,   k ∈ {0, ..., l_n}.

Proof (indirect): Suppose that

    \frac{ \sum_{j=1}^{n} z_j x_{j-k} }{ n } = λ_k > γ_n

for some k ∈ {0, ..., l_n}. Then let

    ĥ_j = h̃_j  if j ≠ k,    ĥ_j = h̃_j + γ_n  if j = k.

We will show that \|y - ĥ*x_i\| < \|y - h̃*x_i\|, which contradicts the definition of h̃. Now,

    \|y - ĥ*x_i\|^2 = \sum_{j=1}^{n} \left( y_j - \sum_{g=0}^{l_n} ĥ_g x_{j-g} \right)^2
        = \sum_{j=1}^{n} \left( y_j - \sum_{g=0}^{l_n} h̃_g x_{j-g} - γ_n x_{j-k} \right)^2
        = \sum_{j=1}^{n} ( z_j - γ_n x_{j-k} )^2
        = \sum_{j=1}^{n} ( z_j^2 - 2γ_n z_j x_{j-k} + γ_n^2 x_{j-k}^2 ).

On account of (4) and (5), this is

    ≤ \|z_i\|^2 - 2nγ_n λ_k + γ_n^2 (n + γ_n) ≤ \|z_i\|^2 - (n - γ_n) γ_n^2
    = \|y - h̃*x_i\|^2 - nγ_n^2 + γ_n^3 < \|y - h̃*x_i\|^2.  ∎

Lemma 6: Let δ > 0, and let x ∈ A be from Lemma 2. Let (h,σ) ∈ H_n × V_n and (h°,σ°) ∈ H_n × V_n be two arbitrary (ISI vector, variance) pairs. Let y and x be such that

    y ∈ T^{h,σ}_{Y|X}(x),    (15)

    h = \arg\min_{h ∈ H_n} \|y - h*x\|,    (16)

    σ^2 = \|y - h*x\|^2 / n.    (17)

Then

    p^{h°,σ°}_{Y|X}(y|x) ≤ 2^{-n[ d((h,σ) \| (h°,σ°)) - δ ] - H_{h,σ}(Y|X)}.

Here

    d((h,σ) \| (h°,σ°)) = -\frac{1}{2} \log\frac{σ^2}{σ°^2} - \frac{1}{2} + \frac{ σ^2 + \|h - h°\|^2 }{ 2σ°^2 }

is the information divergence for Gaussian distributions; it is positive if (h,σ) ≠ (h°,σ°).

Proof: We can write

    P^{h°,σ°}_{Y|X}(y|x_i) = 2^{ -n\left[ -\frac{1}{n} \log \frac{ P^{h°,σ°}_{Y|X}(y|x_i) }{ P^{h,σ}_{Y|X}(y|x_i) } \right] + \log P^{h,σ}_{Y|X}(y|x_i) }.    (18)

Since y ∈ T^{h,σ}_{Y|X}(x) by definition,

    -\log P^{h,σ}_{Y|X}(y|x_i) ≥ H_{h,σ}(Y|X) - nγ_n > H_{h,σ}(Y|X) - nδ/3    (19)

if n is large enough. Furthermore,

    \log \frac{ P^{h°,σ°}_{Y|X}(y|x) }{ P^{h,σ}_{Y|X}(y|x) }
      = \log \frac{ (2π)^{-n/2} σ°^{-n} \exp( -\|y - h°*x\|^2 / (2σ°^2) ) }{ (2π)^{-n/2} σ^{-n} \exp( -\|y - h*x\|^2 / (2σ^2) ) }
      = n \log\frac{σ}{σ°} + \frac{ \|y - h*x\|^2 }{ 2σ^2 } - \frac{ \|y - h°*x\|^2 }{ 2σ°^2 }
      ≤ \frac{n}{2} \log\frac{σ^2}{σ°^2} + \frac{ n + nγ_n }{ 2 } - \frac{ \|y - h°*x\|^2 }{ 2σ°^2 }.    (20)

Introduce the notation z = y - h*x. Then

    \|y - h°*x\|^2 = \|z + h*x - h°*x\|^2
      = \sum_{j=1}^{n} \left( z_j + \sum_{k=0}^{l_n} (h_k - h°_k) x_{j-k} \right)^2
      = \sum_{j=1}^{n} \left( z_j^2 + 2 z_j \sum_{k=0}^{l_n} (h_k - h°_k) x_{j-k} + \Big( \sum_{k=0}^{l_n} (h_k - h°_k) x_{j-k} \Big)^2 \right)
      = \|z\|^2 + 2 \sum_{k=0}^{l_n} (h_k - h°_k) \sum_{j=1}^{n} z_j x_{j-k}
        + \sum_{k=0}^{l_n} (h_k - h°_k)^2 \sum_{j=0}^{n} x_{j-k}^2
        + \sum_{j=0}^{n} \sum_{k ≠ m} (h_m - h°_m)(h_k - h°_k) x_{j-k} x_{j-m}.

Using Lemma 5, | \sum_{j} z_j x_{j-k} | < nγ_n, together with (4) and (5), this gives

    \|y - h°*x\|^2 ≥ \|z\|^2 - 4 n l_n P_n γ_n + (n - nγ_n) \|h - h°\|^2 - n (l_n P_n)^2 γ_n.

With this we can bound (20): since \|z\|^2 = nσ^2 by (17),

    -\frac{1}{n} \log \frac{ P^{h°,σ°}_{Y|X}(y|x_i) }{ P^{h,σ}_{Y|X}(y|x_i) }
      ≥ -\frac{1}{2} \log\frac{σ^2}{σ°^2} - \frac{1}{2} + \frac{ σ^2 + \|h - h°\|^2 }{ 2σ°^2 } - \frac{γ_n}{2} - 4 l_n P_n γ_n - γ_n l_n^4 P_n^2 - (l_n P_n)^2 γ_n.    (21)

If n is large enough then

    \max( 4 l_n P_n γ_n, \; 4 γ_n l_n P_n^2, \; (l_n P_n)^2 γ_n ) < δ/6,

since \lim_{n→∞} P_n^2 l_n^4 γ_n = 0. So, continuing from (21) with the definition of the divergence,

    -\frac{1}{n} \log \frac{ P^{h°,σ°}_{Y|X}(y|x_i) }{ P^{h,σ}_{Y|X}(y|x_i) } ≥ d((h,σ) \| (h°,σ°)) - δ/2.

Substituting this and (19) into (18) gives the desired result.  ∎
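For later use, the per-symbol Gaussian divergence of Lemma 6, which also appears in the exponent (23) below, is straightforward to evaluate. The helper below (Python/NumPy; the name is ours, and σ², σ°² are read as variances) implements the displayed formula directly.

```python
import numpy as np

def gaussian_divergence(h, sigma2, h0, sigma2_0):
    """d((h,sigma)||(h0,sigma0)) = -0.5*log(sigma^2/sigma0^2) - 0.5
       + (sigma^2 + ||h - h0||^2) / (2*sigma0^2), per symbol, in nats."""
    h, h0 = np.asarray(h, float), np.asarray(h0, float)
    return (-0.5 * np.log(sigma2 / sigma2_0) - 0.5
            + (sigma2 + np.sum((h - h0) ** 2)) / (2.0 * sigma2_0))

# Sanity check: the divergence vanishes only at the true parameters.
h0 = np.array([1.0, 0.4, 0.2, 0.1])
print(gaussian_divergence(h0, 1.0, h0, 1.0))        # 0.0
print(gaussian_divergence(h0 + 0.1, 1.2, h0, 1.0))  # > 0
```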
Now we can state and prove our main theorem.

Theorem 1: For arbitrarily given R > 0, ε > 0, and blocklength n > n_0(R, ε), there exists a code (f, φ) (a coding/decoding function pair) with rate ≥ R − ε such that for all ISI channels with parameters h° ∈ R^{l_n}, |h°_i| < P_n, σ° < P_n, σ° ≠ 0, the average error probability satisfies

    P_e(h°, σ°, f, φ) ≤ 2^{-n( E_r(R, h°, σ°) - ε )}.    (22)

Here

    E_r(R, h°, σ°) ≜ \min_{h,σ} \{ d((h,σ) \| (h°,σ°)) + |I(h,σ) - R|^+ \},    (23)

where d((h,σ) \| (h°,σ°)) is the information divergence defined in Lemma 6.

Remark 1: The expression minimised above is a continuous function of h°, σ°, h, σ, R.

Proof: Let δ = ε/3, and let

    C = {x_1, x_2, ..., x_M}

be the set of codesequences from Lemma 4, so M ≥ 2^{n(R-δ)}. The coding function sends the i-th codeword for message i, f(i) = x_i.

The decoding happens as follows. Denote the ISI type of the pair (x_i, y) by (h(i), σ(i)) for all i ∈ {1, 2, ..., M}. Using these parameters, we define the decoding rule as

    φ(y) = i  ⟺  i = \arg\max_j I(h(j), σ(j));

in case of non-uniqueness, we declare an error.

Now we bound the error:

    P_e = P_{out} + \frac{1}{M} \sum_{i=1}^{M} P^{h°,σ°}_{Y|X}( φ(y) ≠ i \mid x_i, E^c ),    (24)

where P_{out} denotes the probability of the event E that the detected variance, for some i ∈ {1,2,...,M}, does not satisfy | σ(i)^2 - \|y - h(i)*x_i\|^2 / n | < γ_n. We bound the probability of this event. If E occurs then σ(i) is an extremal point of the approximating set of parameters, so \|y - h(i)*x_i\|^2 > n P_n. Since h = (0, 0, ..., 0) is an element of the approximating set of ISI vectors, this means that the power of the incoming sequence is greater than n P_n. The probability of this is

    P_{out} ≤ \left( 2 \int_{P_n l_n}^{∞} \frac{1}{σ \sqrt{2π}} e^{-z^2/(2σ^2)} dz \right)^n = \left( 2 \, \mathrm{erfc}\!\left( \frac{P_n l_n}{σ} \right) \right)^n \ll e^{-n^{1+1/8}},

where erfc(·) is the complementary normal error function. This probability converges to 0 faster than exponentially, which — as we will see — means that the second term in (24) is the dominant one.

Consider the second term of (24). If we sent i, then φ(y) ≠ i occurs only if

    I(h(j), σ(j)) ≥ I(h(i), σ(i))

for some j ≠ i. We know that in that case

    y ∈ T_{P^{h(i),σ(i)}_{Y|X}}(x_i) ∩ T_{P^{h(j),σ(j)}_{Y|X}}(x_j),

while we supposed that we are not in the event E. So the probability in the second term of (24) is

    P^{h°,σ°}_{Y|X}( φ(y) ≠ i \mid x_i, E^c ) = \int_{B_i} P^{h°,σ°}_{Y|X}(y | x_i) \, dy,

where B_i is the union, over all pairs (ĥ, σ̂), (h, σ) ∈ S_n with I(ĥ, σ̂) ≥ I(h, σ), of the sets T_{P^{ĥ,σ̂}_{Y|X}}(x_j) ∩ T_{P^{h,σ}_{Y|X}}(x_i).

With Lemma 6 (substituting (h,σ) = (h(j),σ(j)), (ĥ,σ̂) = (h(i),σ(i)), δ = ε/3) and from Lemma 4 we get

    P_e ≤ \sum_{ (ĥ,σ̂),(h,σ) ∈ (H_n,V_n)^2 : \, I(h(j),σ(j)) ≥ I(h(i),σ(i)) } 2^{-n[ d((h(i),σ(i)) \| (h°,σ°)) + |I(h(j),σ(j)) - R|^+ - 2ε/3 ]}

if n is large enough. From this — since the number of approximating channel parameters grows subexponentially — we get

    P_e ≤ 2^{-n \min_{h,σ} [ d((h,σ) \| (h°,σ°)) + |I(h,σ) - R|^+ - ε ]}.  ∎
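A minimal sketch of the decoding rule φ(y) = argmax_i I(h(i), σ(i)) follows (Python/NumPy). It reuses the hypothetical helpers isi_type and mutual_information from the earlier sketches, replaces the discrete search over H_n × V_n by the quantised least-squares estimate, and ignores the outage event E, so it illustrates the rule rather than reproducing the exact construction used in the proof.

```python
import numpy as np

def blind_mmi_decode(y, codebook, l_n, gamma_n):
    """phi(y) = argmax_i I(h(i), sigma(i)), where (h(i), sigma(i)) is the ISI type
    of the pair (x_i, y); a non-unique maximiser is declared an error (None)."""
    scores = []
    for x_i in codebook:
        h_i, sigma2_i = isi_type(x_i, y, l_n, gamma_n)               # blind channel estimate
        scores.append(mutual_information(h_i, max(sigma2_i, 1e-9)))  # guard against sigma2 = 0
    scores = np.asarray(scores)
    best = int(np.argmax(scores))
    if np.sum(np.isclose(scores, scores[best])) > 1:
        return None                                                  # tie -> error
    return best

# Usage: a small random Gaussian codebook, one transmission over an unknown ISI channel.
rng = np.random.default_rng(3)
n, l_n, M = 2048, 3, 8
gamma_n = n ** (-0.25)
codebook = [rng.standard_normal(n) for _ in range(M)]
h_true, sigma_true = np.array([1.0, 0.4, 0.2, 0.1]), 0.7
y = np.convolve(codebook[5], h_true) + sigma_true * rng.standard_normal(n + l_n)
print(blind_mmi_decode(y, codebook, l_n, gamma_n))   # 5, with high probability
```

Note that the decoder never sees h_true or sigma_true, which is the universality claimed by the theorem.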
IV. NUMERICAL RESULT

We compare the new error exponent with Gallager's error exponent. Gallager derived a method to send digital information through a channel with continuous time and alphabet, with a given channel function. This result can easily be modified to discrete time, as in e.g. [10], [8]. The linear Gaussian channel with discrete time parameter and fading vector h = (h_0, h_1, ..., h_l) can be formulated as follows:

    y = H x + z    (25)

where x is the input vector and

    H = \begin{pmatrix}
      h_0 & 0 & 0 & \cdots & 0 \\
      h_1 & h_0 & 0 & \cdots & 0 \\
      \vdots & \vdots & \vdots & \ddots & \vdots \\
      h_l & h_{l-1} & h_{l-2} & \cdots & 0 \\
      0 & h_l & h_{l-1} & \cdots & 0 \\
      \vdots & \vdots & \vdots & \ddots & \vdots \\
      0 & 0 & 0 & \cdots & h_l
    \end{pmatrix}.    (26)

From [6] we take the idea of defining a right and a left eigenbasis for the matrix H. The right eigenbasis r_1, r_2, ..., r_n is nothing else than the eigenbasis of H^T H, where T denotes the transpose. And l_1, l_2, ..., l_m is the left eigenbasis, where l_i = H r_i / |H r_i| (equivalently, a basis of eigenvectors of H H^T); denote λ_i = |H r_i|. As in the work of Gallager, the r_i and the l_i each form an orthonormal basis, and l_i^T l_j = r_i^T H^T H r_j / (|H r_i| |H r_j|) = δ_{i,j}, because the r_i are eigenvectors of H^T H.

So, writing x in the basis r_1, ..., r_n, we get x = \sum_{i=1}^{n} x̃_i r_i — and we know that the x̃_i also form an i.i.d. Gaussian sequence. Writing the output as y = \sum_{i=1}^{m} ỹ_i l_i, we get

    ỹ_i = λ_i x̃_i + z̃_i    (27)

for all i, where z̃ is the Gaussian noise in the basis l_1, ..., l_m (in which it is also white). In many works this equation is used as a channel with fading, where the λ_i are i.i.d. random variables. This is a false approach: if the receiver knows the ISI h, then these constants can be computed, and if the ISI is a random vector with, e.g., i.i.d. components, then the λ_i are not necessarily i.i.d.

If we interlace our codewords, with this formula we can get n′ = n + l parallel channels, each with SNR λ_i/σ. We know that the error exponent given by Gallager for the i-th channel is

    E_r(ρ, λ_i) = -\frac{1}{n′} \ln \left[ \int_y \left( \int_x q(x) \, p(y|x, λ_i)^{1/(1+ρ)} dx \right)^{1+ρ} dy \right].

If the input distribution q(x) is the optimal, Gaussian, distribution, then with the use of [8] the above expression can be rewritten as

    E_r(ρ) = -\frac{1}{n′} \sum_{i=1}^{n′} \ln \left( 1 + \frac{λ_i}{σ(1+ρ)} \right)^{-ρ}.

Now we can use the Szegő theorem from [1], and we get that the average of the exponents, i.e. the exponent of the system, is

    E_r(ρ) = \frac{1}{2π} \int_0^{2π} -\ln \left( 1 + \frac{f(x)}{σ(1+ρ)} \right)^{-ρ} dx,

where f(x) = \sum_{k=-∞}^{∞} ( \sum_{j=0}^{l-|k|} h_j h_{j+|k|} ) e^{ikx}, the same as in [10], [8].

In the simulation we used a 4-dimensional fading vector whose components were randomly generated with a uniform distribution on [0,1]. For other randomly generated vectors we got similar results. The two error exponents were positive in the same region, but, to our surprise, the new error exponent was better (higher) than Gallager's. Figure 1 shows that the new error exponent is always as good as, or better than, Gallager's.

Fig. 1. Difference of the new error exponent and Gallager's error exponent (difference in dB, plotted against the rate R and the SNR).

The new method gives a better error exponent; however, it can hardly be computed. We could estimate the difference only in dimension 4, because of the computational hardness of finding the global optimum of a 4-dimensional function.
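A crude numerical version of the comparison in this section can be written directly from the two formulas (Python/NumPy). It reuses mutual_information and gaussian_divergence from the earlier sketches; the grid resolutions, the search ranges, and the reading of σ as the noise variance are our assumptions. Gallager's exponent is obtained, in the usual way, by maximising E_0(ρ) − ρR over ρ ∈ [0,1] with E_0 taken from the Szegő-integral form above; the new exponent (23) is minimised by brute force over a small grid around the true parameters, so it only upper-bounds the true minimum.

```python
import itertools
import numpy as np

def gallager_exponent(h, sigma, R, num_rho=101, num_xi=2048):
    """Random coding exponent max_{0<=rho<=1} [E_0(rho) - rho*R], with
    E_0(rho) = (rho/2pi) * integral_0^{2pi} ln(1 + f(xi)/(sigma*(1+rho))) d xi."""
    xi = np.linspace(0.0, 2.0 * np.pi, num_xi)
    f = np.abs(np.polyval(np.asarray(h)[::-1], np.exp(1j * xi))) ** 2
    rhos = np.linspace(0.0, 1.0, num_rho)
    e0 = np.array([rho * np.trapz(np.log1p(f / (sigma * (1 + rho))), xi) / (2 * np.pi)
                   for rho in rhos])
    return float(np.max(e0 - rhos * R))

def new_exponent(h0, sigma0, R, step=0.05, sigma_steps=9):
    """E_r(R,h0,sigma0) = min_{h,sigma} [ d((h,sigma)||(h0,sigma0)) + |I(h,sigma)-R|^+ ],
    minimised here over a small grid around (h0, sigma0) -- illustration only."""
    h0 = np.asarray(h0, float)
    best = np.inf
    tap_choices = [(t - step, t, t + step) for t in h0]
    for sigma in np.linspace(0.5 * sigma0, 1.5 * sigma0, sigma_steps):
        for taps in itertools.product(*tap_choices):
            h = np.array(taps)
            val = (gaussian_divergence(h, sigma, h0, sigma0)
                   + max(mutual_information(h, sigma) - R, 0.0))
            best = min(best, val)
    return float(best)

h0, sigma0, R = np.array([1.0, 0.4, 0.2, 0.1]), 1.0, 0.3
print(gallager_exponent(h0, sigma0, R), new_exponent(h0, sigma0, R))
```

The full comparison reported above requires a global optimisation over the 4-dimensional tap space, which is exactly the computational difficulty mentioned in the text.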
V. DISCUSSION

Firstly, our result can be used as a new lower bound on error exponents with no CSI at the transmitter. Note that our error exponent (23) is positive if the rate is smaller than the capacity.

Secondly, it gives a new idea for decoding incoming signals without any CSI: it may be worth performing a more difficult maximisation and not dealing with channel estimation at all. This can be done because of the universality of the code, which means that the detection method does not depend on the channel.

We have proved that if the ISI fulfils some criteria (see Theorem 1), then the message can be detected with exponentially small error probability. However, these criteria can be relaxed, because in the theorem P_n → ∞ and l_n → ∞, so any ISI with finite length and finite energy, and any finite white noise variance, can be approximated well by the approximating set of parameters.

It can easily be seen that the lemmas and the theorem remain true, with small changes, if the input distribution is an arbitrarily chosen i.i.d. (absolutely continuous or discrete) distribution. Only the functional form of the mutual information I(·) and of the entropy of the output variable H_{(·)}(Y) changes. So this result can be used for lower bounding the error exponent when non-Gaussian i.i.d. random variables are used for the random selection of the codebook. However, in these cases the entropy of the output can hardly be expressed in closed form.

With the result of Theorem 1 we can define the channel capacity of compound fading channels. If the fading remains unchanged during the transmission, and the fading length satisfies l_n << n, we can state the following theorem:

Theorem 2: Let F be an arbitrarily given, not necessarily finite, set of channel parameters. Then the capacity of the ISI channel without any CSI, with channel parameter from F, is

    C(F) = \inf_{(h,σ) ∈ F} I(h,σ).

Proof: In the limit the set H_n is dense in the space of real sequences of any length, so for every (h,σ) ∈ F there exists a sequence (h_n, σ_n) ∈ H_n × V_n such that (h_n, σ_n) → (h,σ). We know from Remark 1 that the error exponent in Theorem 1 is a continuous function, so Theorem 1 proves that C(F) is an achievable rate. Given a linear Gaussian channel with ISI h and variance σ, the capacity is I(h,σ) if the transmitter has no CSI.  ∎

For some (h,σ) our error exponent gives a better numerical result than the random coding error exponent derived by Gallager [6] and improved by Kaplan and Shamai [8] (the random coding error exponent used here was deduced in Section IV). This result is not so surprising if we recall that the maximum mutual information (MMI) decoder gives a better exponent in some cases (such as the multiple-access environment) than the random coding error exponent derived by Gallager.

This work does not contradict [9]. We know that in the discrete case the MMI decoder is nothing else than the generalised maximum likelihood (GML) decoder [9], and in [9] it was also shown that the GML decoder is not optimal in the non-memoryless case. However, this is not so in the continuous case, where the entropy of the incoming signal depends on the used parameters (h,σ).

REFERENCES

[1] G. Szegő, "Beiträge zur Theorie der Toeplitzschen Formen I," Mathematische Zeitschrift, vol. 6, pp. 167–202, 1920.
[2] J. K. Omura and B. K. Levitt, "Coded error probability evaluation for antijam communication systems," IEEE Trans. Commun., vol. COM-30, pp. 869–903, May 1982.
[3] A. Lapidoth and S. Shamai, "A lower bound on the bit-error-rate resulting from mismatched Viterbi decoding," Europ. Trans. Telecommun., ??
[4] N. Merhav, G. Kaplan, A. Lapidoth and S. Shamai, "On information rates for mismatched decoders," IEEE Trans. Inform. Theory, vol. 40, pp. 1953–1967, Nov. 1994.
[5] I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems. Akadémiai Kiadó, 1986.
[6] R. Gallager, Information Theory and Reliable Communication. Wiley, 1968.
[7] J. Pokorny and H. M. Wallmeier, "Random coding bound and codes produced by permutations for the multiple-access channel," IEEE Transactions on Information Theory, vol. IT-31, Nov. 1985.
[8] G. Kaplan and S. Shamai, "Achievable performance over the correlated Rician channel," IEEE Trans. Commun., vol. 42, pp. 2967–2978, Nov. 1994.
[9] A. Lapidoth and J. Ziv, "On the universality of the LZ-based decoding algorithm," IEEE Transactions on Information Theory, vol. 44, no. 5, Sep. 1998.
[10] W. Ahmed and P. McLane, "Random coding error exponents for two-dimensional flat fading channels with complete channel state information," IEEE Transactions on Information Theory, vol. 45, no. 4, May 1999.
