Delayed Feedback Capacity of Stationary Sources over Linear Gaussian Noise Channels

Shaohua Yang                              Aleksandar Kavčić
Marvell Semiconductor Inc.                University of Hawaii
Santa Clara, CA 95054                     Honolulu, HI 96822

Abstract—We consider a linear Gaussian noise channel used with delayed feedback. The channel noise is assumed to be an ARMA (autoregressive and/or moving-average) process. We reformulate the Gaussian noise channel into an intersymbol interference channel with white noise, and show that the delayed feedback of the original channel is equivalent to the instantaneous feedback of the derived channel. By generalizing results previously developed for Gaussian channels with instantaneous feedback and applying them to the derived intersymbol interference channel, we show that, conditioned on the delayed feedback, a conditional Gauss-Markov source achieves the feedback capacity, and that its Markov memory length is determined by the noise spectral order and the feedback delay. A Kalman-Bucy filter is shown to be optimal for processing the feedback. The maximal information rate for stationary sources is derived in terms of the channel input power constraint and the steady-state solution of the Riccati equation of the Kalman-Bucy filter used in the feedback loop.

I. INTRODUCTION

For Gaussian noise channels used with feedback, the channel capacity has been characterized in various aspects. For memoryless channels, Shannon [1] showed that feedback does not increase the capacity, and Schalkwijk and Kailath [2] proposed a capacity-achieving feedback code. For channels with memory, bounds have been developed for the feedback capacity [3], [4], [5], [6], [7]. In [8], the optimal feedback source distribution is derived in terms of a state-space channel representation and Kalman filtering. The maximal information rate for stationary sources is derived in an analytically explicit form in [9]. For first-order moving-average (MA) Gaussian noise channels, the feedback capacity is achieved by stationary sources, as shown in [10].

Here we consider a Gaussian noise channel used with delayed feedback under an average-input-power constraint. Compared to the instantaneous-feedback case, fewer results have been obtained for channels with delayed feedback. Yanagi [11] derived an upper bound on the finite-block-length delayed-feedback capacity. In [12], it was shown that the delayed-feedback capacity of finite-state machine channels can be determined by a method developed for instantaneous feedback, by augmenting the channel state to account for the feedback delay.

We first reformulate the Gaussian noise channel with delayed feedback into an equivalent state-space channel model with instantaneous feedback and white noise. The delayed-feedback information rate of the original Gaussian noise channel equals the instantaneous-feedback information rate of the derived state-space channel. By generalizing the methodology and results derived in [9], [8], we show that

1) a feedback-dependent Gauss-Markov source is optimal for achieving the delayed-feedback capacity, and the necessary Markov memory length equals the larger of
   a) the moving-average (MA) noise spectral order, and
   b) the sum of the feedback delay and the autoregressive (AR) noise spectral order;
2) a state estimator (Kalman-Bucy filter) for the derived state-space channel model is optimal for processing the (delayed) feedback information, and the solution of its steady-state Riccati equation delivers the maximal information rate for stationary sources.

Notation: Random variables are denoted by upper-case letters, e.g., X_t, and their realizations by lower-case letters, e.g., x_t. A sequence x_i, x_{i+1}, ..., x_j is denoted by x_i^j. The letter E stands for expectation. The differential entropy of a random variable X is denoted by h(X). Bold upper-case letters stand for matrices (e.g., K), while b, c, d denote column vectors.

II. CHANNEL MODEL REFORMULATION

Let X_t be the channel input at time t and R_t the channel output at time t. We start by considering the Gaussian noise channel

  R_t = X_t + N_t.  (1)

The noise N_t is assumed to be an autoregressive moving-average (ARMA) Gaussian random process with the rational power spectrum

  S_N(\omega) = \sigma_W^2 \, \frac{\left(1 - \sum_{m=1}^{M} a_m e^{-jm\omega}\right) \left(1 - \sum_{m=1}^{M} a_m e^{jm\omega}\right)}{\left(1 + \sum_{k=1}^{K} c_k e^{-jk\omega}\right) \left(1 + \sum_{k=1}^{K} c_k e^{jk\omega}\right)}.  (2)

The coefficients a_m and c_k determine the spectral zeros and poles, and M and K indicate the orders of the moving-average (MA) and autoregressive (AR) noise power spectral components, respectively. Since the poles and zeros of (2) appear in pairs symmetric with respect to the unit circle [13], without loss of generality we may assume that |a_m| < 1 and |c_k| < 1. Hence, the filter

  H(z) = \frac{1 - \sum_{m=1}^{M} a_m z^{-m}}{1 + \sum_{k=1}^{K} c_k z^{-k}}  (3)

and its inverse are both causal, stable, and invertible.
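As a concrete illustration of this whitening step (all coefficient values below are hypothetical, and the use of NumPy/SciPy is our own choice rather than anything specified in the paper), the following sketch synthesizes ARMA noise with spectrum (2) by driving H(z) of (3) with white Gaussian samples, and then recovers the white innovations by applying H^{-1}(z):

```python
import numpy as np
from scipy.signal import lfilter

# Hypothetical ARMA(1,1) parameters for (2): |a_m| < 1 and |c_k| < 1
a = [0.5]            # MA coefficients a_m (spectral zeros)
c = [-0.3]           # AR coefficients c_k (spectral poles)
sigma_W = 1.0

rng = np.random.default_rng(0)
w = sigma_W * rng.standard_normal(10_000)        # white innovations

# H(z) = (1 - sum_m a_m z^{-m}) / (1 + sum_k c_k z^{-k}), as in (3)
num = np.concatenate(([1.0], -np.asarray(a)))    # numerator taps
den = np.concatenate(([1.0], np.asarray(c)))     # denominator taps
n = lfilter(num, den, w)                         # colored ARMA noise N_t

# Since H(z) is minimum phase, H^{-1}(z) = den/num is causal and stable,
# so filtering the colored noise by it recovers the white innovations.
v = lfilter(den, num, n)
print(np.allclose(v, w))                         # True
```

This round trip is the operational content of (4) below: applying H^{-1}(z) to the channel output converts the colored-noise channel into an ISI channel with white noise, without changing the capacity.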
We make the following assumptions on the channel usage:

1) The power of the channel input process is constrained: lim_{n→∞} E[\sum_{t=1}^{n} X_t^2]/n = P. (Since the feedback capacity has been shown to be a concave function of P [14], it is not necessary to consider the inequality constraint lim_{n→∞} E[\sum_{t=1}^{n} X_t^2]/n ≤ P.)
2) Let ν ≥ 1 be the feedback delay. The prior channel outputs R_{−∞}^{t−ν} are known to the transmitter (via the feedback loop) before the transmission of X_t.
3) Transmission starts at time t = 1, i.e., X_t = 0 for t ≤ 0. Thus, the noise history N_{−∞}^{−ν} is known to both the transmitter and the receiver.

Since the filter H(z) is invertible, we may apply H^{−1}(z) to the channel output R_t without changing the channel capacity. The equivalent intersymbol interference (ISI) channel has X_t as the channel input, U_t as the channel output, and white Gaussian noise V_t (with power σ_W^2):

  U(z) = H^{-1}(z) \left( X(z) + N(z) \right) = H^{-1}(z) X(z) + V(z).  (4)

The original channel outputs R_{−∞}^t can be determined from U_{−∞}^t using the filter H(z).

To simplify notation when deriving the delayed information rate, we change variables Y_t := U_{t−ν} and W_t := V_{t−ν}, and further reformulate the ISI channel in terms of X_t and Y_t as

  Y(z) = z^{-\nu} U(z) = z^{-\nu} H^{-1}(z) X(z) + W(z).  (5)

The ISI channel (5) with input X_t and output Y_t = U_{t−ν} is depicted in Fig. 1. The channel is completely characterized by the coefficients a_m, c_k and ν. Without loss of generality, we can assume that M = K + ν. (If M < K + ν, we let a_{M+1} = ... = a_{K+ν} = 0 and then redefine M to be K + ν; if M > K + ν, we let c_{K+1} = ... = c_{M−ν} = 0 and then redefine K to be M − ν.) We denote

  c := \bigl[\underbrace{0, \dots, 0}_{\nu-1 \text{ zeros}},\ 1,\ c_1,\ c_2,\ \dots,\ c_K\bigr]^T.  (6)

Fig. 1. Equivalent state-space model for Gaussian channels with delayed feedback. (Schematic omitted: the source x_t drives the state delay line s_t(1), ..., s_t(M) with taps a_1, ..., a_M; the output y_t is formed with taps 1, c_1, ..., c_K and white noise w_t; a noiseless feedback path returns the outputs to the source.)
The channel depicted in Fig. 1 has a state-space representation. Let the vector of values stored in the channel memory, i.e., S_t := [S_t(1), S_t(2), ..., S_t(M)]^T, be the channel state vector. The state-space channel equations are

  S_t = A S_{t-1} + b X_t,  (7)
  Y_t = c^T S_{t-1} + W_t,  (8)

where W_t is white Gaussian noise with variance σ_W^2. The constant square matrix A and the vector b are defined as

  A := \begin{bmatrix} a_1 & a_2 & \cdots & a_{M-1} & a_M \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix}, \qquad b := \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.  (9)

From channel assumptions 1)–3), we have the following:

I) Since X_t = 0 for t ≤ 0, the initial channel state s_0 = 0 is known to both the transmitter and the receiver.

II) The sequences S_1^t and X_1^t determine each other uniquely according to equation (7).

III) Given the channel state S_{t−1} = s_{t−1}, the channel output Y_t is statistically independent of the channel states S_0^{t−2} and the outputs Y_1^{t−1}, that is,

  P_{Y_t | S_0^t, Y_1^{t-1}}(y_t \mid s_0^t, y_1^{t-1}) = P_{Y_t | S_{t-1}}(y_t \mid s_{t-1}).  (10)

Since the variance of the process W_t is σ_W^2, the conditional differential entropy of the channel output equals

  h(Y_t \mid S_0^t, Y_1^{t-1}) = h(Y_t \mid S_{t-1}) = \frac{1}{2} \log(2\pi e \sigma_W^2).  (11)

IV) The instantaneous feedback of Y_t in the above derived channel is equivalent to the delayed feedback R_{t−ν} in the original channel. Thus, we only need to consider encoders of the form X_t = X_t(M, Y_1^{t−1}), where M is the message to transmit. For the source distribution, the channel input X_t is causally dependent on all previous channel states S_0^{t−1} and channel outputs Y_1^{t−1}:

  P_t(x_t \mid s_0^{t-1}, y_1^{t-1}) := P_{X_t | S_0^{t-1}, Y_1^{t-1}}(x_t \mid s_0^{t-1}, y_1^{t-1}),  (12)

or equivalently, in terms of the channel states,

  P_t(s_t \mid s_0^{t-1}, y_1^{t-1}) := P_{S_t | S_0^{t-1}, Y_1^{t-1}}(s_t \mid s_0^{t-1}, y_1^{t-1}).  (13)

We only need to consider Gaussian sources [4].
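To make the state-space construction concrete, here is a minimal sketch (with hypothetical ARMA coefficients and delay, not taken from the paper) that assembles A, b and c according to (6) and (9), including the padding convention M = K + ν, and runs the recursion (7)-(8) with a placeholder i.i.d. input:

```python
import numpy as np

def build_state_space(a, c, nu):
    """Assemble A, b, c of (6) and (9) from MA coefficients a_m,
    AR coefficients c_k, and feedback delay nu, padding so M = K + nu."""
    K = len(c)
    M = max(len(a), K + nu)
    a_pad = np.pad(np.asarray(a, dtype=float), (0, M - len(a)))
    A = np.zeros((M, M))
    A[0, :] = a_pad                    # first row: a_1, ..., a_M
    A[1:, :-1] = np.eye(M - 1)         # shift register underneath
    b = np.zeros(M)
    b[0] = 1.0
    cvec = np.concatenate((np.zeros(nu - 1), [1.0], np.asarray(c, float)))
    return A, b, cvec

# Hypothetical channel: ARMA(1,1) noise, feedback delay nu = 2
A, b, cvec = build_state_space(a=[0.5], c=[-0.3], nu=2)

# Exercise the state recursion (7)-(8) with an i.i.d. stand-in input
rng = np.random.default_rng(1)
s = np.zeros(A.shape[0])               # s_0 = 0, per assumption I)
for t in range(1, 6):
    x = rng.standard_normal()                  # channel input X_t
    y = cvec @ s + rng.standard_normal()       # Y_t = c^T S_{t-1} + W_t
    s = A @ s + b * x                          # S_t = A S_{t-1} + b X_t
    print(f"t={t}: y={y:+.3f}")
```

Note that Y_t is formed from S_{t−1}, so the current input X_t first reaches the output ν steps later, mirroring the feedback delay of the original channel.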
III. INFORMATION RATE AND OPTIMAL SOURCES

We note that in the derived state-space channel model, the first channel output that carries a non-zero signal is Y_{1+ν} = U_1. The information rate equals

  I(M; Y) := \lim_{n\to\infty} \frac{1}{n-\nu} I(M; Y_1^n \mid s_0)  (14)
  = \lim_{n\to\infty} \frac{1}{n} I(M; Y_1^n \mid s_0)  (15)
  = \lim_{n\to\infty} \frac{1}{n} \left[ h(Y_1^n \mid s_0) - h(W_1^n) \right]  (16)
  = \lim_{n\to\infty} \frac{1}{n} h(Y_1^n \mid s_0) - \frac{1}{2} \log(2\pi e \sigma_W^2)  (17)
  = \lim_{n\to\infty} \frac{1}{n} \sum_{t=1}^{n} \left[ h(Y_t \mid s_0, Y_1^{t-1}) - h(Y_t \mid s_0, Y_1^{t-1}, S_{t-1}) \right]  (18)
  = \lim_{n\to\infty} \frac{1}{n} \sum_{t=1}^{n} I(S_{t-1}; Y_t \mid s_0, Y_1^{t-1}).  (19)

In the following analysis, since the initial channel state s_0 is known according to the channel assumptions in Section II, for notational simplicity we will not explicitly write the dependence on s_0 when obvious.

We consider all feedback-dependent Gaussian sources defined in (12) or (13),

  P := \{ P_t(s_t \mid s_0^{t-1}, y_1^{t-1}),\ t = 1, 2, \dots \},  (20)

where the channel input is subject to the input power constraint

  \lim_{n\to\infty} E\left[ \frac{1}{n} \sum_{t=1}^{n} X_t^2 \right] = P.  (21)

The following two theorems can be conveniently generalized from [8], where they were originally derived for Gaussian channels used with instantaneous feedback.

Theorem 1 (Gauss-Markov sources are optimal): For the power-constrained linear Gaussian channel, a feedback-dependent Gauss-Markov source

  P^{GM} := \{ P_t(s_t \mid s_{t-1}, y_1^{t-1}),\ t = 1, 2, \dots \}  (22)

achieves the delayed-feedback channel capacity. (Proof in Appendix.)

By Theorem 1, without loss of optimality, in the sequel we only consider feedback-dependent Gauss-Markov sources as in (22).

Definition 1: We use α_t(·) as shorthand notation for the posterior pdf of the channel state S_t, that is,

  \alpha_t(\mu) := P_{S_t | S_0, Y_1^t}(\mu \mid s_0, y_1^t),  (23)

which is Gaussian due to the Gaussian channel inputs.

For a feedback-dependent Gauss-Markov source P^{GM}, the functions α_t(·) can be computed recursively as

  \alpha_t(\mu) = \frac{\int \alpha_{t-1}(v)\, P_t(\mu \mid v, y_1^{t-1})\, P_{Y_t | S_{t-1}, S_t}(y_t \mid v, \mu)\, dv}{\iint \alpha_{t-1}(v)\, P_t(u \mid v, y_1^{t-1})\, P_{Y_t | S_{t-1}, S_t}(y_t \mid v, u)\, du\, dv}.  (24)

The Gaussian function α_t(·) is completely characterized by the conditional mean m_t (a vector of dimension M) and the conditional covariance matrix K_t (of dimension M × M):

  m_t = E\left[ S_t \mid s_0, y_1^t \right],  (25)
  K_t = E\left[ (S_t - m_t)(S_t - m_t)^T \mid s_0, y_1^t \right].  (26)

We note that the recursion (24) can be implemented by a Kalman-Bucy filter.

Theorem 2: For the power-constrained linear Gaussian channel, the delayed-feedback capacity is achieved by a feedback-dependent Gauss-Markov source P_α^{GM} defined as

  P_\alpha^{GM} := \{ P_t(s_t \mid s_{t-1}, \alpha_{t-1}(\cdot)),\ t = 1, 2, \dots \},  (27)

where the Markov transition probability depends only on the posterior distribution α_{t−1}(·) of the derived channel state instead of on all prior channel outputs. (Proof in Appendix.)

Theorem 2 suggests that, for the task of constructing the next signal to be transmitted, all the "knowledge" contained in the vector of prior channel outputs is captured by the posterior distribution α_{t−1}(·) of the channel state.

By Theorem 1 and Theorem 2, we only need to consider a feedback-dependent Gauss-Markov source P_α^{GM} as defined in (27).
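To make the Kalman-Bucy remark concrete, the following sketch (our own reconstruction with our own variable names, anticipating the linear source form (28) introduced in Section IV below) performs one step of the recursion (24)-(26). Because Y_t = c^T S_{t−1} + W_t observes the previous state, the step first refines the estimate of S_{t−1} with the new output, and then propagates through S_t = Q S_{t−1} + b(e_t Z_t + g_t), where Q = A + b d_t^T:

```python
import numpy as np

def kalman_step(m_prev, K_prev, y, A, b, cvec, d, e, g, sigma_W2):
    """One step of (24)-(26): map the posterior (m_{t-1}, K_{t-1}) of S_{t-1}
    and the new output y_t to the posterior (m_t, K_t) of S_t, for the
    state-space channel (7)-(8) driven by X_t = d^T S_{t-1} + e Z_t + g."""
    # Measurement update: y_t = c^T S_{t-1} + W_t refines the S_{t-1} estimate
    s_var = cvec @ K_prev @ cvec + sigma_W2       # innovation variance, cf. (33)
    gain = K_prev @ cvec / s_var
    m_upd = m_prev + gain * (y - cvec @ m_prev)
    K_upd = K_prev - np.outer(K_prev @ cvec, K_prev @ cvec) / s_var
    # Time update: S_t = Q S_{t-1} + b (e Z_t + g), with Q = A + b d^T
    Q = A + np.outer(b, d)
    m_t = Q @ m_upd + b * g
    K_t = Q @ K_upd @ Q.T + np.outer(b, b) * e**2
    return m_t, K_t
```

Iterating this step with fixed (d, e) drives K_t toward a fixed point; that fixed point is exactly the algebraic Riccati equation (36) of the next section, which is how the numerical sketch after Theorem 3 evaluates the stationary rate.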
IV. FEEDBACK CAPACITY COMPUTATION

The delayed-feedback capacity can thus be derived in a similar way as in [8], though the results differ slightly due to the feedback delay.

A. Source Parameterization

Without loss of generality, a feedback-dependent Gauss-Markov source P_α^{GM} can be expressed as

  X_t = d_t^T S_{t-1} + e_t Z_t + g_t,  (28)

where Z_t is a Gaussian random variable with zero mean and unit variance, independent of Z_1^{t−1}, X_1^{t−1} and Y_1^{t−1}, and the vector d_t is of length M. The coefficients d_t, e_t and g_t all depend on the Gaussian pdf α_{t−1}(·), or alternatively on its mean m_{t−1} and covariance matrix K_{t−1}. The set of coefficients {d_t, e_t, g_t} completely determines the transition probabilities of the feedback-dependent Gauss-Markov source P_α^{GM} defined in (27).

Lemma 1: For the feedback-dependent Gauss-Markov source parameterized as in (28), we have

  h(Y_t \mid s_0, y_1^{t-1}) - \frac{1}{2} \log(2\pi e \sigma_W^2) = \frac{1}{2} \log\left( 1 + \frac{c^T K_{t-1} c}{\sigma_W^2} \right),  (29)

and

  E\left[ (X_t)^2 \mid s_0, y_1^{t-1} \right] = \left( d_t^T m_{t-1} + g_t \right)^2 + d_t^T K_{t-1} d_t + (e_t)^2,  (30)

where the values of d_t, e_t, g_t depend on m_{t−1} and K_{t−1}.

Proof: The first- and second-order moments of the channel input X_t and output Y_t can be computed as

  E\left[ (X_t)^2 \mid s_0, y_1^{t-1} \right] = \left( d_t^T m_{t-1} + g_t \right)^2 + d_t^T K_{t-1} d_t + (e_t)^2,  (31)
  E\left[ Y_t \mid s_0, y_1^{t-1} \right] = c^T m_{t-1},  (32)
  E\left[ \left( Y_t - E[Y_t \mid s_0, y_1^{t-1}] \right)^2 \mid s_0, y_1^{t-1} \right] = c^T K_{t-1} c + \sigma_W^2.  (33)

Conditioned on s_0 and y_1^{t−1}, the variable Y_t has a Gaussian distribution with variance (33); thus we obtain (29).

Lemma 2: The parameters of the optimal feedback-dependent Gauss-Markov source must satisfy

  g_t = - d_t^T m_{t-1}.  (34)

Proof: By Lemma 1 and equation (17), the value of g_t does not affect the information rate, but choosing g_t as in (34) minimizes the average input power for given d_t and e_t.

We note that this essentially follows the center-of-gravity necessary condition for optimal sources derived in [15].

B. Feedback Capacity for Stationary Sources

Definition 2 (Stationary sources): A stationary feedback-dependent (Gauss-Markov) source is a source that induces stationary channel input and output processes. An asymptotically stationary feedback-dependent (Gauss-Markov) source induces, in its limit as t → ∞, stationary channel input and output processes.

Lemma 3: For a stationary (or asymptotically stationary) feedback-dependent Gauss-Markov source, the covariance matrix K_t and the source coefficients d_t and e_t converge, i.e.,

  \lim_{t\to\infty} K_t = K, \qquad \lim_{t\to\infty} d_t = d, \qquad \lim_{t\to\infty} e_t = e.  (35)

Here, the matrix K satisfies the stationary Kalman-Bucy filter equation (the algebraic Riccati equation)

  K = Q K Q^T + b b^T e^2 - \frac{Q K c c^T K Q^T}{c^T K c + \sigma_W^2},  (36)

where the matrix Q is defined as Q := A + b d^T. The instantaneous channel input power converges as

  \lim_{t\to\infty} E\left[ (X_t)^2 \mid s_0, y_1^{t-1} \right] = d^T K d + e^2.  (37)

Proof: Since the (asymptotically) stationary source induces, in its limit as t → ∞, stationary channel input and output processes, by definition the Kalman-Bucy filter has a steady state, and thus the sequences K_t, d_t and e_t converge. The Riccati equation (36) is obtained as the stationary form of the covariance recursion of the Kalman-Bucy filter. The limit in (37) follows from (30) and (35), with g_t chosen as in (34).

Theorem 3 (Feedback capacity for stationary sources): For a power-constrained Gaussian channel used with ν-time delayed feedback, the maximal information rate for stationary sources equals

  C_{fb}^{\nu} = \max_{d, e} \frac{1}{2} \log\left( 1 + \frac{c^T K c}{\sigma_W^2} \right),  (38)

where the maximization in (38) is taken under the constraints

  d^T K d + e^2 = P,  (39)
  K = Q K Q^T + b b^T e^2 - \frac{Q K c c^T K Q^T}{c^T K c + \sigma_W^2}.  (40)

The matrix Q is defined as Q := A + b d^T, and the matrix K is constrained to be non-negative definite.

Proof: By Lemma 3, for any (asymptotically) stationary Gauss-Markov source, the sequences K_t, d_t and e_t converge as t → ∞, so (17) and (29) turn into (38) as n → ∞. Constraint (40) is the algebraic Riccati equation (36). Constraint (39) is the input power of the stationary source, obtained by subsequently utilizing Lemmas 2 and 3.

In general, the optimization problem in Theorem 3 involves O(M^2) variables and can be conveniently solved analytically for small M or numerically for large M.
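Theorem 3 reduces the capacity computation to a finite-dimensional constrained optimization. As one hypothetical numerical approach (a heuristic sketch, not the authors' procedure), the code below evaluates the objective (38) for a candidate d by iterating the covariance recursion underlying (40) with e^2 pinned to the power constraint (39), and then maximizes over d with a derivative-free search; the channel matrices are the illustrative ARMA(1,1), ν = 2 ones from the earlier sketches:

```python
import numpy as np
from scipy.optimize import minimize

def stationary_rate(d, A, b, cvec, P, sigma_W2, iters=500):
    """Iterate the covariance recursion behind (36)/(40), with e^2 set by
    the power constraint (39) at each step, and return the objective (38)
    in nats per channel use. A heuristic fixed-point iteration, not a
    certified Riccati solver."""
    Q = A + np.outer(b, d)
    K = np.zeros((len(d), len(d)))
    for _ in range(iters):
        e2 = max(P - d @ K @ d, 0.0)   # (39); d with d^T K d > P is infeasible
        s = cvec @ K @ cvec + sigma_W2
        K = Q @ K @ Q.T + np.outer(b, b) * e2 \
            - np.outer(Q @ K @ cvec, Q @ K @ cvec) / s      # (40)
        if not np.isfinite(K).all():
            return -1e9                # Q unstable for this d: penalize
    return 0.5 * np.log(1.0 + (cvec @ K @ cvec) / sigma_W2)

# Illustrative channel from the earlier sketches: ARMA(1,1) noise, nu = 2
A = np.array([[0.5, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
b = np.array([1.0, 0.0, 0.0])
cvec = np.array([0.0, 1.0, -0.3])
P, sigma_W2 = 1.0, 1.0

res = minimize(lambda d: -stationary_rate(d, A, b, cvec, P, sigma_W2),
               x0=np.zeros(3), method="Nelder-Mead")
print("C_fb^nu (nats/use, numerical):", -res.fun)
print("optimal d:", res.x)
```

For small M the same stationarity conditions can be worked out in closed form, as the remark above notes; the search here merely illustrates that, with e eliminated by (39) and K determined by the fixed point of (40), the free variables reduce to the M entries of d.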
V. CONCLUSION

In this paper, we derived the delayed-feedback capacity of power-constrained stationary sources over linear Gaussian channels with ARMA Gaussian noise. We first reformulated the linear Gaussian noise channel into a state-space form that is suitable for manipulating the delayed-feedback information rate. Then we obtained the delayed-feedback capacity for stationary sources by generalizing and applying a method that was originally developed for computing the instantaneous-feedback capacity. We showed that a feedback-dependent Gauss-Markov source achieves the delayed-feedback channel capacity and that the Kalman-Bucy filter is optimal for processing the feedback. The delayed-feedback capacity is expressible as an optimization problem with constraints on the conditional state covariance matrix of the Kalman-Bucy filter.

APPENDIX

A. Sketch of Proof for Theorem 1

Let P_1 be any valid feedback-dependent Gaussian source distribution (not necessarily Markov) defined as

  P_1 := \{ P_t(s_t \mid s_0^{t-1}, y_1^{t-1}),\ t = 1, 2, \dots \}.  (41)

From P_1, we construct a Markov (not necessarily stationary) source distribution P_2 as

  P_2 = \{ Q_t(s_t \mid s_{t-1}, y_1^{t-1}),\ t = 1, 2, \dots \},  (42)

where the functions Q_t(s_t | s_{t−1}, y_1^{t−1}) are defined as the conditional marginal pdfs computed from P_1:

  Q_t(s_t \mid s_{t-1}, y_1^{t-1}) := P^{(P_1)}_{S_t | S_{t-1}, Y_1^{t-1}}(s_t \mid s_{t-1}, y_1^{t-1}).  (43)

We next show by induction that the sources P_1 and P_2 induce the same distribution of S_{t−1}^t and Y_1^t, i.e.,

  P^{(P_1)}_{S_{t-1}^t, Y_1^t | S_0}(s_{t-1}^t, y_1^t \mid s_0) = P^{(P_2)}_{S_{t-1}^t, Y_1^t | S_0}(s_{t-1}^t, y_1^t \mid s_0).  (44)

For t = 1, by the definition of the source P_2 we have

  P^{(P_2)}_{S_1, Y_1 | S_0}(s_1, y_1 \mid s_0) = Q_1(s_1 \mid s_0)\, P_{Y_1 | S_0^1}(y_1 \mid s_0^1)  (45)
  = P_1(s_1 \mid s_0)\, P_{Y_1 | S_0^1}(y_1 \mid s_0^1)  (46)
  = P^{(P_1)}_{S_1, Y_1 | S_0}(s_1, y_1 \mid s_0).  (47)

Since s_0 is known, this directly implies

  P^{(P_2)}_{S_0^1, Y_1 | S_0}(s_0^1, y_1 \mid s_0) = P^{(P_1)}_{S_0^1, Y_1 | S_0}(s_0^1, y_1 \mid s_0).  (48)

Now, assume that the equality (44) holds up to time t − 1, where t > 1; in particular,

  P^{(P_2)}_{S_{t-2}^{t-1}, Y_1^{t-1} | S_0}(s_{t-2}^{t-1}, y_1^{t-1} \mid s_0) = P^{(P_1)}_{S_{t-2}^{t-1}, Y_1^{t-1} | S_0}(s_{t-2}^{t-1}, y_1^{t-1} \mid s_0)  (49)
  = \int \prod_{\tau=1}^{t-1} P_\tau(s_\tau \mid s_0^{\tau-1}, y_1^{\tau-1})\, P_{Y_\tau | S_{\tau-1}^{\tau}}(y_\tau \mid s_{\tau-1}^{\tau})\, ds_1^{t-3}.  (50)

The induction step for time t is shown as follows:

  P^{(P_2)}_{S_{t-1}^t, Y_1^t | S_0}(s_{t-1}^t, y_1^t \mid s_0)
  = Q_t(s_t \mid s_{t-1}, y_1^{t-1}) \times P_{Y_t | S_{t-1}^t}(y_t \mid s_{t-1}^t) \times \int P^{(P_2)}_{S_{t-2}^{t-1}, Y_1^{t-1} | S_0}(s_{t-2}^{t-1}, y_1^{t-1} \mid s_0)\, ds_{t-2}  (51)

  \overset{(a)}{=} \frac{\int \left[ \prod_{\tau=1}^{t-1} P_\tau(s_\tau \mid s_0^{\tau-1}, y_1^{\tau-1})\, P_{Y_\tau | S_{\tau-1}^{\tau}}(y_\tau \mid s_{\tau-1}^{\tau}) \right] P_t(s_t \mid s_0^{t-1}, y_1^{t-1})\, ds_1^{t-2}}{\int \left[ \prod_{\tau=1}^{t-1} P_\tau(s_\tau \mid s_0^{\tau-1}, y_1^{\tau-1})\, P_{Y_\tau | S_{\tau-1}^{\tau}}(y_\tau \mid s_{\tau-1}^{\tau}) \right] ds_1^{t-2}} \times P_{Y_t | S_{t-1}^t}(y_t \mid s_{t-1}^t) \times \int \left[ \prod_{\tau=1}^{t-1} P_\tau(s_\tau \mid s_0^{\tau-1}, y_1^{\tau-1})\, P_{Y_\tau | S_{\tau-1}^{\tau}}(y_\tau \mid s_{\tau-1}^{\tau}) \right] ds_1^{t-2}  (52)
  \overset{(b)}{=} \int \prod_{\tau=1}^{t} P_\tau(s_\tau \mid s_0^{\tau-1}, y_1^{\tau-1})\, P_{Y_\tau | S_{\tau-1}^{\tau}}(y_\tau \mid s_{\tau-1}^{\tau})\, ds_1^{t-2}  (53)
  = P^{(P_1)}_{S_{t-1}^t, Y_1^t | S_0}(s_{t-1}^t, y_1^t \mid s_0),  (54)

where (a) is the result of expanding the definition in (43) for the source P_2 and the induction assumption (50) using the Bayes rule and substituting them into (51), and (b) is obtained by simplifying the expression in (52).

Thus, we have shown that the channel states S_{t−1}^t and outputs Y_1^t induced by the sources P_1 and P_2 have the same distribution. It is therefore clear that the non-Markov source P_1 and the Markov source P_2 induce the same information rate according to equality (19).

B. Sketch of Proof for Theorem 2

Suppose that two different feedback vectors \tilde{y}_1^{t−1} and y_1^{t−1} (\tilde{y}_1^{t−1} ≠ y_1^{t−1}) induce the same posterior channel state pdf α_{t−1}(·), i.e., for any possible state value s_{t−1} = µ we have

  P_{S_{t-1} | S_0, Y_1^{t-1}}(\mu \mid s_0, \tilde{y}_1^{t-1}) = P_{S_{t-1} | S_0, Y_1^{t-1}}(\mu \mid s_0, y_1^{t-1}).  (55)

Now consider two distributions for the source S_τ, for τ ≥ t: the first conditioned on y_1^{t−1}, and the second conditioned on \tilde{y}_1^{t−1}. If we let these two distributions be equal to each other for τ ≥ t, that is, if

  \{ P_\tau(s_\tau \mid s_{\tau-1}, \tilde{y}_1^{t-1}, y_t^{\tau-1}),\ \tau \ge t \} = \{ P_\tau(s_\tau \mid s_{\tau-1}, y_1^{t-1}, y_t^{\tau-1}),\ \tau \ge t \},  (56)

then we have, for any k ≥ t,

  P_{Y_t^k, S_{t-1}^k | S_0, Y_1^{t-1}}(y_t^k, s_{t-1}^k \mid s_0, \tilde{y}_1^{t-1})
  = \alpha_{t-1}(s_{t-1}) \prod_{\tau=t}^{k} P_\tau(s_\tau \mid s_{\tau-1}, y_1^{\tau-1})\, P_{Y_\tau | S_{\tau-1}^{\tau}}(y_\tau \mid s_{\tau-1}^{\tau})
  = P_{Y_t^k, S_{t-1}^k | S_0, Y_1^{t-1}}(y_t^k, s_{t-1}^k \mid s_0, y_1^{t-1}).  (57)

This shows that for any k ≥ t the entropies are equal,

  h(Y_t^k \mid s_0, \tilde{y}_1^{t-1}) = h(Y_t^k \mid s_0, y_1^{t-1}),  (58)

and that for any τ ≥ t the powers are equal,

  E\left[ (X_\tau)^2 \mid s_0, \tilde{y}_1^{t-1} \right] = E\left[ (X_\tau)^2 \mid s_0, y_1^{t-1} \right].  (59)

Therefore, the optimal source distribution for time τ ≥ t when y_1^{t−1} is the feedback vector must also be optimal when \tilde{y}_1^{t−1} is the feedback vector, and vice versa. Since time t is arbitrary, we conclude that, for any t > 0, the function α_{t−1}(·) extracts from y_1^{t−1} all that is necessary for formulating the optimal source distribution functions P_t(s_t | s_{t−1}, y_1^{t−1}).

REFERENCES

[1] C. E. Shannon, "The zero-error capacity of a noisy channel," IRE Trans. Inform. Theory, vol. IT-2, pp. 112–124, Sept. 1956.
[2] J. Schalkwijk and T. Kailath, "A coding scheme for additive noise channels with feedback–I: No bandwidth constraint," IEEE Trans. Inform. Theory, vol. 12, pp. 172–182, Apr. 1966.
[3] S. Butman, "A general formulation of linear feedback communication systems with solutions," IEEE Trans. Inform. Theory, vol. 15, pp. 392–400, May 1969.
[4] T. M. Cover and S. Pombra, "Gaussian feedback capacity," IEEE Trans. Inform. Theory, vol. 35, pp. 37–43, Jan. 1989.
[5] M. Pinsker, "Talk delivered at the Soviet Information Theory Meeting," unpublished, 1969.
[6] P. Ebert, "The capacity of the Gaussian channel with feedback," Bell Syst. Tech. J., vol. 49, pp. 1705–1712, Oct. 1970.
[7] L. H. Ozarow, "Upper bounds on the capacity of Gaussian channels with feedback," IEEE Trans. Inform. Theory, vol. 36, pp. 156–161, Jan. 1990.
[8] S. Yang, A. Kavčić, and S. Tatikonda, "On the feedback capacity of power-constrained Gaussian noise channels with memory," accepted for publication in IEEE Trans. Inform. Theory.
[9] S. Yang, A. Kavčić, and S. Tatikonda, "Feedback capacity of stationary sources over Gaussian intersymbol interference channels," in Proc. IEEE GLOBECOM 2006, San Francisco, USA, Nov. 2006.
[10] Y.-H. Kim, "Feedback capacity of the first-order moving average Gaussian channel," in Proc. IEEE ISIT, Adelaide, Australia.
[11] K. Yanagi, "On the capacity of the discrete-time Gaussian channel with delayed feedback," IEEE Trans. Inform. Theory, vol. 41, pp. 1051–1059, Jul. 1995.
[12] S. Yang, A. Kavčić, and S. Tatikonda, "The feedback capacity of finite-state machine channels," IEEE Trans. Inform. Theory, vol. 51, pp. 799–810, Mar. 2005.
[13] A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing. Englewood Cliffs, NJ: Prentice Hall, 1989.
[14] K. Yanagi, H. W. Chen, and J. W. Yu, "Operator inequality and its application to capacity of Gaussian channel," Taiwanese Journal of Mathematics, vol. 4, Sept. 2000.
[15] J. Schalkwijk, "Center-of-gravity information feedback," IEEE Trans. Inform. Theory, vol. 14, pp. 324–331, Mar. 1968.
