Problems of Information Transmission, vol. 50, no. 3, pp. 19–34, 2014.

M. V. Burnashev¹, H. Yamamoto²

ON USING FEEDBACK IN A GAUSSIAN CHANNEL

For information transmission a discrete-time channel with independent additive Gaussian noise is used. There is also another channel with independent additive Gaussian noise (the feedback channel), and the transmitter observes without delay all outputs of the forward channel via that channel. Transmission of a nonexponential number of messages is considered (i.e., the transmission rate equals zero), and the achievable decoding error exponent for such a combination of channels is investigated. The transmission method strengthens the method used by the authors earlier for the BSC and Gaussian channels. In particular, for small feedback noise it allows a gain of 33.3% (instead of 23.6% earlier in the similar case of the Gaussian channel).

§1. Introduction and main result

In this paper the results of [1] are strengthened and the proofs are simplified. We consider the discrete-time channel with independent additive Gaussian noise; i.e., if $x = (x_1,\ldots,x_n)$ is the input codeword, then the received block $y = (y_1,\ldots,y_n)$ is

$$y_i = x_i + \xi_i, \qquad i = 1,\ldots,n, \qquad (1)$$

where $\xi = (\xi_1,\ldots,\xi_n)$ are independent $\mathcal{N}(0,1)$ Gaussian random variables, i.e., $E\xi_i = 0$, $E\xi_i^2 = 1$. There is a noisy feedback channel, and the transmitter observes (without delay) all outputs $\{z_i\}$ of the forward channel via that noisy feedback channel:

$$z_i = y_i + \sigma\eta_i, \qquad i = 1,\ldots,n, \qquad (2)$$

where $\eta = (\eta_1,\ldots,\eta_n)$ are independent (and independent of $\xi$) $\mathcal{N}(0,1)$ Gaussian random variables, i.e., $E\eta_i = 0$, $E\eta_i^2 = 1$. The value $\sigma > 0$, characterizing the feedback channel noise intensity, is given. No coding is used in the feedback channel (i.e., the receiver simply re-transmits all received outputs to the transmitter). In other words, the feedback channel is "passive".

¹ Supported in part by the Russian Foundation for Basic Research, project nos. 12-01-00905a and 13-01-12458 ofi_m2.
² Supported in part by the Japanese Fund of JSPS KAKENHI, grant no. 25289111.

We assume that the input block $x$ satisfies the constraint

$$\sum_{i=1}^{n} x_i^2 \le nA, \qquad (3)$$

where $A$ is a given constant. We denote by AWGN($A$) the channel (1) with constraint (3) and without feedback, and by AWGN($A,\sigma$) that channel with noisy feedback (2). The capacity of both channels equals $C(A) = [\ln(1+A)]/2$.

We consider the case when the overall transmission time $n$ and $M = e^{o(n)}$, $n \to \infty$, equiprobable messages $\{\theta_1,\ldots,\theta_M\}$ are given. After the moment $n$, the receiver makes a decision $\hat{\theta}$ on the message transmitted. We are interested in the best possible decoding error exponent (and whether it exceeds the similar exponent of the channel without feedback).
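To make the channel model (1)–(3) concrete, here is a minimal Python simulation sketch (not from the paper; the block length, signal level, and variable names are illustrative assumptions):

```python
import numpy as np

def simulate_block(x, sigma, rng):
    """Forward channel (1) and passive noisy feedback channel (2) for one block x."""
    xi = rng.standard_normal(x.shape)       # forward noise, N(0,1)
    eta = rng.standard_normal(x.shape)      # feedback noise, N(0,1), independent of xi
    y = x + xi                              # block seen by the receiver, eq. (1)
    z = y + sigma * eta                     # block seen by the transmitter, eq. (2)
    return y, z

rng = np.random.default_rng(0)
n, A, sigma = 1000, 4.0, 0.1
x = rng.standard_normal(n)
x *= np.sqrt(n * A) / np.linalg.norm(x)     # enforce power constraint (3): sum x_i^2 = nA
y, z = simulate_block(x, sigma, rng)
print("capacity C(A) =", 0.5 * np.log(1 + A))   # = [ln(1+A)]/2
```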
It is well known [2] that even noiseless feedback does not increase the capacity of the Gaussian channel (or of any other memoryless channel). However, feedback allows one to improve the decoding error exponent (channel reliability function) with respect to the no-feedback channel. The possibility of such an improvement stimulated considerable interest in this topic in the 1960s–80s, and a good number of interesting results were obtained during that period (e.g., [3–10]). Unfortunately, all those papers had a common drawback: their methods relied heavily on the assumption that the feedback is noiseless. This was necessary in order to have perfect mutual coordination between the transmitter and the receiver. Essentially, any noise in the feedback link destroyed that coordination and all hypothetical improvements. It was not clear whether it is possible to improve communication characteristics using more realistic noisy feedback.

That uncertainty with noisy feedback remained until 2008, when in [11]–[14] it was shown (for the BSC) how to use such feedback in order to improve the decoding error exponent. Although the improvement was not large (approximately 14.3% for small feedback noise), it was the first method that worked for noisy feedback. Later results ([1] and this paper) are developments of [11]–[14].

In order to explain what is new in this paper, we briefly recall what was done in the earlier papers [11]–[14] and [1]. For that purpose we first explain why noiseless feedback allows one to improve the decoding error exponent. For a channel without feedback that exponent is determined (for small transmission rates $R$) by the code distance of the code used (i.e., by the minimal distance among codewords). Noiseless feedback allows the code (coding function) to be changed during transmission, e.g., increasing the distances among the most probable codewords. That feature allowed the decoding error exponent to be improved. But for that purpose ideal coordination between the transmitter and the receiver is required.

In all the papers [11]–[14], [1] and in this one, the coding function can be changed only at one fixed moment (the "switching moment"). In [11]–[14] such a change took place only if the two most probable codewords were much more probable than all the remaining codewords. It was shown that if the noise in the feedback channel is less than a certain critical value $p_{\mathrm{crit}}$, then it is possible to choose transmission parameters such that the probability of miscoordination between the transmitter and the receiver becomes smaller than the decoding error probability. That fact allowed the decoding error exponent to be improved with respect to the no-feedback channel.

Later, in the paper [1], for the Gaussian channel that method was strengthened by taking into account not two, but three most probable codewords. Moreover, the decoding method was improved. This allowed not only an improvement of the gain (23.6% instead of 14.3% in [12]), but also showed that for any noise intensity $\sigma^2 < \infty$ it is possible to improve the best error exponent of the AWGN($A$) no-feedback channel. Of course, if $\sigma$ is not small then the gain is small, but it is strictly positive. In other words, in the problem considered there is no critical level $\sigma_{\mathrm{crit}}$ beyond which it is impossible to improve the error exponent of the no-feedback channel. It should also be noted that the investigation method with optimal decoding in [1] was rather tedious.

The method of the papers [11, 12] was applied to the Gaussian channel AWGN($A,\sigma$) in [15] with results similar to those of [11, 12] (in particular, with the same asymptotic gain of 14.3%).

The aim of this paper is to strengthen the transmission method of [1] (in particular, by using up to four most probable codewords) and also to simplify its analysis. This allows the gain to be improved up to 33.3% (instead of 23.6% in [1]).

Remark 1. We consider the case when the value $\sigma^2 > 0$ is fixed and does not depend on the number of messages $M$.

For $x, y \in \mathbf{R}^n$ denote

$$(x,y) = \sum_{i=1}^{n} x_i y_i, \qquad \|x\|^2 = (x,x), \qquad d(x,y) = \|x - y\|^2.$$

A subset $\mathcal{C} = \{x_1,\ldots,x_M\}$ with $\|x_i\|^2 = An$, $i = 1,\ldots,M$, is called an $(M,A,n)$-code of length $n$.
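The notation above and the equal-energy property of an $(M,A,n)$-code can be checked directly; a small sketch, where the scaled orthogonal construction is just one convenient example and not the paper's code:

```python
import numpy as np

def sq_dist(x, y):
    """d(x, y) = ||x - y||^2 with (x, y) the usual inner product."""
    return float(np.dot(x - y, x - y))

# An orthogonal set, scaled so that ||x_i||^2 = A*n, is one example of an (M, A, n)-code.
n, A, M = 8, 2.0, 4
code = np.sqrt(A * n) * np.eye(n)[:M]        # rows x_1, ..., x_M
assert np.allclose([np.dot(x, x) for x in code], A * n)
print(sq_dist(code[0], code[1]))             # = 2*A*n for distinct orthogonal codewords
```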
For a code $\mathcal{C} = \{x_i\}$ denote by $P_e(\mathcal{C})$ the minimal possible decoding error probability

$$P_e(\mathcal{C}) = \min \max_i P(e \mid x_i),$$

where $P(e \mid x_i)$ is the conditional decoding error probability provided $x_i$ was transmitted, and the minimum is taken over all decoding methods (it will be convenient to denote the transmitted message as $\theta_i$ and also as $x_i$).

In this paper we consider the case when $M = M_n \to \infty$, but $M_n = e^{o(n)}$ as $n \to \infty$ (it corresponds to zero rate of transmission). For $M$ messages and the AWGN($A$) channel, denote by $P_e(M,A,n)$ the minimal possible decoding error probability for the best $(M,A,n)$-code and introduce the exponent (in $n$) of that function [16]

$$E(A) = \limsup_{\substack{n \to \infty \\ M \to \infty,\ \ln M = o(n)}} \frac{1}{n}\ln\frac{1}{P_e(M,A,n)} = \frac{A}{4}. \qquad (4)$$

Similarly, for the AWGN($A,\sigma$) channel with noisy feedback, denote by $P_e(M,A,\sigma,n)$ the minimal possible decoding error probability and introduce the function

$$F(A,\sigma) = \limsup_{\substack{n \to \infty \\ M \to \infty,\ \ln M = o(n)}} \frac{1}{n}\ln\frac{1}{P_e(M,A,\sigma,n)}.$$

It is also known that if $\sigma = 0$ (i.e., noiseless feedback), then [7]

$$F(A,0) = \frac{A}{2}. \qquad (5)$$

For the AWGN($A,\sigma$) channel denote by $F_1(A,\sigma)$ the best error exponent for the transmission method with one switching moment, described in §2. Then $F_1(A,\sigma) \le F(A,\sigma)$ for all $A,\sigma$. The main result of the paper is as follows.

T h e o r e m. Let $M \to \infty$ and $\ln M = o(n)$, $n \to \infty$. Then

$$F_1(A,\sigma) \ge \frac{A(1-\sigma^2)}{3}. \qquad (6)$$

For small $\sigma$ the formula (6) gives a 33.3% improvement with respect to the no-feedback channel (see (4)). It is given in a simplified form oriented to small values of $\sigma$. A more general formula (following from the results of §4) would be too bulky.

Remark 2. The method described in the paper and its analysis can be generalized to a slowly growing number $N = N(\sigma)$ of switches. This allows one to prove the following result:

$$F_{N(\sigma)}(A,\sigma) = \frac{A(1+o(\sigma))}{2}, \qquad \sigma \to 0. \qquad (7)$$

In other words, for small $\sigma$ the formula (7) gives an improvement of 100% with respect to the no-feedback channel (see (4)), and it coincides with the similar result (5) for noiseless feedback. This will be done in another paper.

In §2 the transmission method with one switching moment and its decoding are described. In §§3–4 its analysis is performed and the Theorem is proved. Greek letters $\xi, \eta, \zeta, \xi_1, \ldots$ in the paper designate $\mathcal{N}(0,1)$ Gaussian random variables.
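For orientation, the exponents (4)–(6) can be compared numerically; a sketch with arbitrary illustrative values of $A$ and $\sigma$ (not taken from the paper):

```python
# Error exponents at zero rate: no feedback (4), one switching moment (6), noiseless feedback (5).
def E_no_feedback(A):
    return A / 4

def F1_lower_bound(A, sigma):
    return A * (1 - sigma**2) / 3          # achievable bound (6) of the Theorem

def F_noiseless(A):
    return A / 2

A = 4.0
for sigma in (0.05, 0.2, 0.5):
    gain = F1_lower_bound(A, sigma) / E_no_feedback(A) - 1
    print(f"sigma={sigma}: bound (6) = {F1_lower_bound(A, sigma):.3f}, "
          f"gain over A/4 = {100 * gain:.1f}%")
# As sigma -> 0 the gain approaches 1/3, i.e. 33.3%; with noiseless feedback (5) it would be 100%.
```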
§2. Transmission/decoding method

We use the transmission strategy with one fixed switching moment, at which the code used will be changed. Denote $n_1 = n/2$ and partition the total transmission time $[1,n]$ into two phases: $[1,n_1]$ (phase I) and $[n_1+1,n]$ (phase II). After the moment $n$ the receiver makes a decision in favor of the most probable message $\theta_i$ (based on all signals received on $[1,n]$).

Each of the $M$ codewords $\{x_i\}$ of length $n$ has the form $x_i = (x_i', x_i'')$, where both $x_i'$ (to be used on phase I) and $x_i''$ (to be used on phase II) have length $n_1$. Similarly, the received block $y$ has the form $y = (y', y'')$, where $y'$ is the block received on phase I and $y''$ is the block received on phase II. Denote by $z'$ the block received (by the transmitter) on phase I. The codewords' first parts $\{x_i'\}$ are fixed, while the second parts $\{x_i''\}$ will depend on the block $z'$ received by the transmitter on phase I.

We set two positive constants $A_1, A_2$ such that

$$A_1 + A_2 = nA, \qquad (8)$$

and denote

$$\beta = \frac{A_2}{A_1}. \qquad (9)$$

Then $A = (1+\beta)A_1/n$. At the end of the proof of the Theorem we set $\beta = 1/2$.

Denoting

$$d_i = d(x_i', y') = \|y' - x_i'\|^2,$$

arrange the distances $\{d_i\}$, $i = 1,\ldots,M$, for the receiver after phase I in increasing order, and denote

$$d^{(1)} = \min_i d_i \le d^{(2)} \le \ldots \le d^{(M)} = \max_i d_i$$

(the case of a tie has zero probability). Let also $x'^{(1)},\ldots,x'^{(M)}$ be the corresponding ranking of the codewords $\{x_i'\}$ after phase I for the receiver, i.e., $x'^{(1)}$ is the codeword closest to $y'$, etc.

Similarly, denoting

$$d_i^{(t)} = d(x_i', z') = \|z' - x_i'\|^2,$$

arrange the distances $\{d_i^{(t)}\}$, $i = 1,\ldots,M$, for the transmitter after phase I in increasing order, denoting

$$d^{(1)t} = \min_i d_i^{(t)} \le d^{(2)t} \le \ldots \le d^{(M)t} = \max_i d_i^{(t)}.$$

Let also $x'^{(1)t},\ldots,x'^{(M)t}$ be the corresponding ranking of the codewords $\{x_i'\}$ after phase I for the transmitter, i.e., $x'^{(1)t}$ is the codeword closest to $z'$, etc.

Transmission method with one switching moment. We choose a set $\mathcal{K}$ of codes which the transmitter may use on phase II. The code $\mathcal{C} \in \mathcal{K}$ used on phase II depends on the received block $z'$. Based on $y'$, the receiver finds the probability distribution $P_r(\mathcal{C} \mid y')$, $\mathcal{C} \in \mathcal{K}$, of the code $\mathcal{C}$ used by the transmitter on phase II, and uses that distribution for optimal decoding. This is the crucial point of the whole method.

Transmission. In order to simplify the exposition, it is sufficient to consider the case $M \le (n+2)/2$. Then on both phases we will be able to use orthogonal codes of length $n_1 = n/2$. The case of arbitrary $M$ such that $M = e^{o(n)}$, $n \to \infty$, can be treated by replacing orthogonal codes with "almost" equidistant codes. Then all calculations remain essentially the same (see details in [1]).

Phase I. The transmitter uses the orthogonal code of $M$ codewords $\{x_i'\}$ of length $n_1$ such that $\|x_i'\|^2 = A_1$.

Phase II. We set nonnegative numbers $\tau_2$ and $\tau_3$. Based on the received block $z'$ and the numbers $\tau_2, \tau_3$, the transmitter chooses the $k = k(z', \tau_2, \tau_3)$ most probable (for it) messages, $k = 2,3,4$. Denote that set of messages as

$$\mathcal{S}_k = \{x'^{(1)t},\ldots,x'^{(k)t}\}, \qquad k = 2,3,4. \qquad (10)$$

The code length $n_1$ for phase II is partitioned into two parts: of length 3 for the $k \in \{2,3,4\}$ selected messages and of length $n_1 - 3$ for the remaining $M - k$ messages, respectively. The transmitter uses the following code $\mathcal{C}'' = \mathcal{C}''(z')$ with $\|x_j''\|^2 = A_2$, $j = 1,\ldots,M$ (an illustrative sketch of this selection rule is given after case 3 below).

1) If $d^{(3)t} - d^{(2)t} \ge 2A_1\tau_2$, then the transmitter selects the two most probable (for it) messages $\theta_i, \theta_j$ (i.e., $k = 2$) and uses for them opposite codewords $x_i'' = -x_j''$ that have nonzero coordinates only at time instant $n_1+1$. For the remaining $M-2$ messages $\{\theta_s\}$ the orthogonal code of $M-2$ codewords $\{x_s''\}$ of length $n_1-3$ is used. That code has zero components at time instants $n_1+1, n_1+2, n_1+3$, and all its codewords $\{x_s''\}$ are orthogonal to the first two codewords $(x_i'', x_j'')$.

2) If $d^{(3)t} - d^{(2)t} < 2A_1\tau_2$ and $d^{(4)t} - d^{(3)t} \ge 2A_1\tau_3$, then the transmitter selects the three most probable (for it) messages $\theta_i, \theta_j, \theta_m$ (i.e., $k = 3$) and uses for them the 3-simplex code occupying time instants $n_1+1, n_1+2$. For the remaining $M-3$ messages $\{\theta_s\}$ the orthogonal code of codewords $\{x_s''\}$ of length $n_1-3$ is used. That code has zero components at time instants $n_1+1, n_1+2, n_1+3$, and all its codewords $\{x_s''\}$ are orthogonal to the first three codewords $(x_i'', x_j'', x_m'')$.

3) If $d^{(3)t} - d^{(2)t} < 2A_1\tau_2$ and $d^{(4)t} - d^{(3)t} < 2A_1\tau_3$, then the transmitter selects the four most probable (for it) messages $\theta_i, \theta_j, \theta_m, \theta_l$ (i.e., $k = 4$) and uses for them the 4-simplex code occupying time instants $n_1+1, n_1+2, n_1+3$. For the remaining $M-4$ messages $\{\theta_s\}$ the orthogonal code of codewords $\{x_s''\}$ of length $n_1-3$ is used. That code has zero components at time instants $n_1+1, n_1+2, n_1+3$, and all its codewords $\{x_s''\}$ are orthogonal to the first four codewords $(x_i'', x_j'', x_m'', x_l'')$.
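A minimal sketch of the transmitter's group-selection rule (10) and of cases 1)–3), assuming the transmitter's phase-I distances $d_i^{(t)}$ are already available; function and parameter names are illustrative, not from the paper:

```python
import numpy as np

def select_group(d_t, A1, tau2, tau3):
    """Choose k in {2,3,4} and the set S_k of the k most probable messages
    from the transmitter's phase-I distances d_t (smaller distance = more probable)."""
    order = np.argsort(d_t)                  # ranking x'^(1)t, x'^(2)t, ...
    d_sorted = d_t[order]
    if d_sorted[2] - d_sorted[1] >= 2 * A1 * tau2:        # case 1): d^(3)t - d^(2)t large
        k = 2
    elif d_sorted[3] - d_sorted[2] >= 2 * A1 * tau3:      # case 2): d^(4)t - d^(3)t large
        k = 3
    else:                                                 # case 3)
        k = 4
    return k, order[:k]                      # indices of the selected messages S_k

# Example with M = 6 messages and arbitrary thresholds.
rng = np.random.default_rng(1)
d_t = rng.uniform(0.0, 10.0, size=6)
k, S_k = select_group(d_t, A1=1.0, tau2=0.3, tau3=0.3)
print(k, S_k)
```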
This transmission method strengthens the method used in [1], [11]–[14], where only two or three messages were selected. Note also that the set $\mathcal{S}_k$ of selected messages should be such that, with high probability, the true message $\theta_{\mathrm{true}} \in \mathcal{S}_k$, while the number $k$ is as small as possible.

Remark 3. By introducing additional parameters $\tau_4,\ldots$ it is possible to strengthen the method further, but that does not give a large improvement of the results obtained. A much larger improvement can be obtained using an increasing number of switches $N = N(\sigma)$ (see Remark 2).

Decoding. Due to the noise in the feedback channel, the receiver does not know exactly the codewords $x'^{(1)t}, x'^{(2)t},\ldots$ and therefore it does not know the code used on phase II. But based on the received block $y'$ it may evaluate the probabilities of all possible codewords $x'^{(1)t}, x'^{(2)t},\ldots$ and find the probabilities with which any code $\mathcal{C}''$ was used on phase II. This allows the receiver, based on the full received block $y = (y', y'')$, to find the posterior probabilities $\{p(y \mid x_i)\}$ and to make a decision in favor of the most probable message $\theta_i$. Such full decoding is described in detail in the next section.

§3. Full decoding and error probability $P_e$

Since $\|x_i\|^2 = nA$, $i = 1,\ldots,M$, for the likelihood ratio we have

$$\ln\frac{p(y \mid x_i)}{p(y \mid x_1)} = (x_i - x_1, y).$$

If $x_{\mathrm{true}}$ is the true codeword, then $y = x_{\mathrm{true}} + \xi$ and $\xi = (\xi', \xi'') = (\xi_1,\ldots,\xi_n)$, where all $\{\xi_i\}$ are independent $\mathcal{N}(0,1)$ Gaussian random variables. If $x_{\mathrm{true}} = x_1$, then

$$\ln\frac{p(y \mid x_i)}{p(y \mid x_1)} = (x_i - x_1, \xi) - \frac{1}{2}\|x_i - x_1\|^2$$

and

$$\ln\frac{p(y \mid x_3)}{p(y \mid x_2)} = (x_3 - x_2, \xi) + (x_3 - x_2, x_1),$$

where $(x,\xi)$ is a $\mathcal{N}(0, \|x\|^2)$ Gaussian random variable.

For the decoding error probability $P_e$ we have

$$P_e \le \frac{1}{M}\sum_{k=1}^{M} P_{ek}, \qquad (11)$$

where

$$P_{ek} = P\Bigl\{\max_{i \ne k}\,\ln\frac{p(y \mid \theta_i)}{p(y \mid \theta_k)} \ge 0 \,\Big|\, \theta_k\Bigr\}, \qquad k = 1,\ldots,M. \qquad (12)$$

Denote (using $(x_i', x_1') = 0$, $i \ge 2$)

$$X_i = \ln\frac{p(y' \mid \theta_i)}{p(y' \mid \theta_1)} = (x_i' - x_1', y') = (x_i' - x_1', \xi') - A_1, \qquad Y_i = \ln\frac{p(y'' \mid y', \theta_i)}{p(y'' \mid y', \theta_1)}. \qquad (13)$$

It is sufficient to investigate the value $P_{e1}$, for which we have from (12)–(13)

$$P_{e1} = P\Bigl\{\max_{i \ge 2}(X_i + Y_i) \ge 0 \,\Big|\, \theta_1\Bigr\} \le \sum_{i \ge 2} P\bigl\{X_i + Y_i \ge 0 \mid \theta_1\bigr\} = E_{y'}\sum_{i \ge 2} P\bigl\{X_i + Y_i \ge 0 \mid y', \theta_1\bigr\}. \qquad (14)$$

We can express the value $Y_i$ via $y'$ as follows. Since $x_i'' = x_i''(z')$ and $y'' = x_1'' + \xi''$, then

$$e^{Y_i} = \frac{p(y'' \mid y', \theta_i)}{p(y'' \mid y', \theta_1)} = E_{z'|y'}\,\frac{p(y'' \mid z', y', x_i'')}{p(y'' \mid z', y', x_1'')} = E_{z'|y'}\, e^{(y'', x_i'' - x_1'')} = E_{z'|y'}\, e^{(x_1'', x_i'' - x_1'') + (\xi'', x_i'' - x_1'')}, \qquad (15)$$

where the second equality is based on the fact that in both the numerator and the denominator the same code is used.

Remark 4. In order to apply formula (15) it is necessary to know only the difference $\|x_i'' - x_1''\|$ (depending on $z'$). We do not need to know the whole code used on phase II. The selected group of messages of the code for phase II may consist of 2, 3, or 4 messages. For example, 3 messages are selected if the 3 most probable messages are approximately equiprobable and all remaining messages are well separated from them (in the metric $d_i^{(t)}$).
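The likelihood-ratio identity at the beginning of §3 (valid for equal-energy codewords) can be checked numerically; a small sketch with arbitrary dimensions and energies, not taken from the paper:

```python
import numpy as np

def log_density(y, x):
    """log p(y | x) for y = x + xi with i.i.d. N(0,1) noise, up to the common constant."""
    return -0.5 * np.sum((y - x) ** 2)

rng = np.random.default_rng(2)
n, energy = 16, 8.0
# Two equal-energy codewords.
x1 = rng.standard_normal(n); x1 *= np.sqrt(energy) / np.linalg.norm(x1)
x2 = rng.standard_normal(n); x2 *= np.sqrt(energy) / np.linalg.norm(x2)
y = x1 + rng.standard_normal(n)               # message 1 transmitted

lhs = log_density(y, x2) - log_density(y, x1)  # ln p(y|x2)/p(y|x1)
rhs = np.dot(x2 - x1, y)                       # (x2 - x1, y), valid since ||x1|| = ||x2||
print(np.isclose(lhs, rhs))                    # True
```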
We develop the right-hand side of formula (15). The difference $\|x_i'' - x_1''\|$ (depending on $z'$) takes one of 4 possible values (defined by the partition groups to which those messages belong on phase II). It is convenient to separate those cases. Note that for all codewords of the $k$-simplex code we have

$$d_{ij} = \|x_i'' - x_j''\|^2 = \frac{2A_2 k}{k-1}, \qquad i \ne j.$$

Then denote

$$\delta_k = \frac{2A_2 k}{k-1}, \quad k = 2,\ldots,K, \qquad \delta_0 = 2A_2. \qquad (16)$$

In other words, $\delta_k$, $k \ge 2$, is the codeword distance for the $k$-simplex code, while $\delta_0$ is that distance for the orthogonal code. If $x_1'', x_i''$ belong to the $k$-simplex code, then

$$(x_1'', x_i'' - x_1'') = -\delta_k/2 = -\frac{A_2 k}{k-1}, \quad k = 2,\ldots,K, \qquad (x_1'', x_i'' - x_1'') = -A_2 = -\delta_0/2, \quad k = 0.$$

The difference $\|x_i'' - x_1''\|^2$ may take the values $2A_2$ (which corresponds to $k = 0$) and $\delta_k$, $k = 2,3,4$. Each value $\delta_k$, $k = 2,3,4$, appears on phase II if a group of $k$ messages was selected and both messages $x_1', x_i'$ belong to that group. In all other cases the value $\delta_0$ is used.

Assuming $\theta_{\mathrm{true}} = \theta_1$, introduce the non-overlapping sets of random events

$$\mathcal{Z}_{i,k} = \bigl\{z' : \|x_i'' - x_1''\|^2 = \delta_k\bigr\}, \qquad k = 0,2,3,4. \qquad (17)$$

Denoting formally $\mathcal{Z}_{i,1} = \emptyset$, $i \ge 2$, we have $\{z'\} = \sum_{k=0}^{4} \mathcal{Z}_{i,k}$ (here $\sum$ means the union of non-overlapping sets, and $\{z'\}$ is the set of all possible outputs $z'$).

We may continue (15) as follows:

$$e^{Y_i} = \sum_{k=0}^{4} E\bigl[e^{(x_1'', x_i'' - x_1'') + (\xi'', x_i'' - x_1'')};\, \mathcal{Z}_{i,k} \,\big|\, y'\bigr] = \sum_{k=0}^{4} p_k\, e^{-\delta_k/2 + (\xi'', x_i'' - x_1'')},$$

where $E[\xi; \mathcal{A}] = E(\xi \cdot I_{\{\mathcal{A}\}})$, $p_1 = 0$, and

$$p_k = p_k(y') = p_k(\xi') = P\bigl(\mathcal{Z}_{i,k} \mid y'\bigr), \qquad k = 0,2,3,4. \qquad (18)$$

Then using (13) we have

$$e^{X_i + Y_i} = \sum_{k=0}^{4} p_k\, e^{-A_1 - \delta_k/2 + (x_i' - x_1', \xi') + (\xi'', x_i'' - x_1'')},$$

and therefore (since $k$ takes one of four possible values)

$$P\{X_i + Y_i \ge 0 \mid \theta_1\} = E\,P\bigl\{e^{X_i + Y_i} \ge 1 \mid y', \theta_1\bigr\} = E\,P\Bigl\{\sum_{k=0}^{4} p_k\, e^{-A_1 - \delta_k/2 + (x_i' - x_1', \xi') + (x_i'' - x_1'', \xi'')} \ge 1 \,\Big|\, y', \theta_1\Bigr\}$$
$$\le \sum_{k=0}^{4} E\,P\Bigl\{\bigl[p_k\, e^{-A_1 - \delta_k/2 + (x_i' - x_1', \xi') + (x_i'' - x_1'', \xi'')} \ge 1/4\bigr] \cap \mathcal{Z}_{i,k} \,\Big|\, y', \theta_1\Bigr\} \qquad (19)$$
$$= \sum_{k=0}^{4} P\Bigl\{\bigl[(x_i' - x_1', \xi') + (x_i'' - x_1'', \xi'') + \ln p_k(\xi') \ge A_1 + \delta_k/2 - \ln 4\bigr] \cap \mathcal{Z}_{i,k} \,\Big|\, \theta_1\Bigr\},$$

where $\|x_i'' - x_1''\|^2 = \delta_k$ on the set $\mathcal{Z}_{i,k}$. Denote

$$(x_i', \xi') = \sqrt{A_1}\,\xi_i', \qquad (x_i', \eta') = \sqrt{A_1}\,\eta_i', \qquad i = 1,\ldots,M, \qquad (20)$$

where all $\{\xi_i', \eta_i'\}$ are independent $\mathcal{N}(0,1)$ Gaussian random variables. Since $(x_i'' - x_1'', \xi'') \sim \sqrt{\delta_k}\,\xi''$ on the set $\mathcal{Z}_{i,k}$, we get from (19) and (20)

$$P\{X_i + Y_i \ge 0 \mid \theta_1\} \le e^{o(1)}\sum_{k=0}^{4} P_{ik}, \qquad P_{ik} = P\bigl\{\sqrt{A_1}(\xi_i' - \xi_1') + \sqrt{\delta_k}\,\xi'' + \ln p_k(\xi') \ge A_1 + \delta_k/2\bigr\}, \qquad (21)$$

where $\xi''$ does not depend on $\xi'$ and $o(1) \to 0$ as $A_1 \to \infty$.

The probabilities $p_k(\xi')$ and the values $\{P_{ik}\}$ from (21) are evaluated in the next section.
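The simplex distances $\delta_k = 2A_2k/(k-1)$ in (16) can be verified with a standard centered-orthonormal simplex construction; this construction is an assumption for illustration, not the paper's explicit one:

```python
import numpy as np
from itertools import combinations

def simplex_code(k, A2):
    """k codewords with ||x||^2 = A2 forming a regular k-simplex (they span k-1 dimensions)."""
    e = np.eye(k)
    v = e - e.mean(axis=0)                   # center the orthonormal basis: ||v_i||^2 = (k-1)/k
    v *= np.sqrt(A2 * k / (k - 1))           # rescale so every codeword has energy A2
    return v

A2 = 3.0
for k in (2, 3, 4):
    code = simplex_code(k, A2)
    dists = {round(float(np.sum((code[i] - code[j]) ** 2)), 6)
             for i, j in combinations(range(k), 2)}
    print(k, dists, "expected:", 2 * A2 * k / (k - 1))   # pairwise distance delta_k
```

For $k = 2$ this reduces to a pair of opposite codewords on a single coordinate, which matches case 1) of the transmission method.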
§4. Probabilities $p_k(\xi')$ and values $P_{ik}$. Proof of the Theorem

Let $\xi$ be a $\mathcal{N}(0,1)$ Gaussian random variable. We will regularly use the simple inequality

$$P(\xi \ge z) = \frac{1}{\sqrt{2\pi}}\int_{z}^{\infty} e^{-u^2/2}\,du \le e^{-z_+^2/2}, \qquad z \in \mathbf{R}^1, \qquad (22)$$

where $z_+ = \max(z,0)$, and its natural generalization.

L e m m a 1. 1) Let $(\xi_1,\ldots,\xi_K)$ be independent $\mathcal{N}(0,1)$ Gaussian random variables and $\mathcal{A} \subseteq \mathbf{R}^K$. Then (with $x = (x_1,\ldots,x_K)$, $\|x\|^2 = x_1^2 + \ldots + x_K^2$)

$$P\bigl((\xi_1,\ldots,\xi_K) \in \mathcal{A}\bigr) \le \exp\Bigl\{-\frac{1}{2}\inf_{x \in \mathcal{A}}\|x\|^2\Bigr\}. \qquad (23)$$

2) Let $\xi, \eta$ be $\mathcal{N}(0,1)$ Gaussian random variables with $E(\xi\eta) = \rho$. Then:

a) if $A - B\rho \ge 0$ and $B - A\rho \ge 0$, then

$$P(\xi \ge A, \eta \ge B) \le P\left(\xi \ge \sqrt{\frac{A^2 + B^2 - 2AB\rho}{1 - \rho^2}}\right); \qquad (24)$$

b) otherwise

$$P(\xi \ge A, \eta \ge B) \le \min\bigl\{P(\xi \ge A),\, P(\eta \ge B)\bigr\}. \qquad (25)$$

P r o o f. 1) Let $\inf_{x \in \mathcal{A}}\|x\| = r > 0$. Then $\mathcal{A} \subseteq \mathbf{R}^K \setminus S(r)$, where $S(r)$ is the ball of radius $r$. Therefore

$$P\bigl((\xi_1,\ldots,\xi_K) \in \mathcal{A}\bigr) \le P\bigl\{(\xi_1,\ldots,\xi_K) \in \mathbf{R}^K \setminus S(r)\bigr\}.$$

Evaluating the last probability (using spherical coordinates), we get formula (23).

2) We have

$$P(\xi \ge A, \eta \ge B) \le \inf_{a \ge 0} P(\xi + a\eta \ge A + aB) = \inf_{a \ge 0} P\left(\xi \ge \frac{A + aB}{\sqrt{1 + a^2 + 2a\rho}}\right).$$

Minimizing the last expression over $a \ge 0$, we get formulas (24)–(25). □

Inequalities (23)–(25) give the exact logarithmic asymptotics in the natural asymptotic case.

In order to apply formula (21), we consider sequentially the cases $k = 2, 0, 3, 4$.

1. Case $k = 2$, $\delta_2 = 4A_2$. It is the simplest case, and it takes place with probability close to 1. In that case $x_1'', x_i''$ compose the group $\mathcal{S}_2$ of two selected messages. Neglecting the term $p_2$, we get from (21)–(22)

$$P_{i2} \le P\bigl\{(x_i' - x_1', \xi') + 2\sqrt{A_2}\,\xi'' \ge A_1 + 2A_2 - \ln 3\bigr\} = P\bigl\{\sqrt{2A_1 + 4A_2}\,\xi \ge A_1 + 2A_2 - \ln 3\bigr\}$$
$$\le \exp\Bigl\{-\frac{[A_1 + 2A_2 - \ln 3]_+^2}{4(A_1 + 2A_2)}\Bigr\} \le \sqrt{3}\, e^{-(A_1 + 2A_2)/4} = \sqrt{3}\, e^{-A_1(1 + 2\beta)/4}. \qquad (26)$$

Cases $k \ne 2$ are more computationally involved, and in order to investigate them we will need the definition (10).

2. Case $k = 0$, $\delta_0 = 2A_2$. It is the most computationally involved case. It takes place when the selected group of messages $\mathcal{S}_m$ contains not more than one of the messages $x_1'', x_i''$. Then

$$P_{i0} = \sum_{m=2}^{4} P_{i0m}, \qquad (27)$$

where $P_{i0m} = P\{k = 0, \mathcal{S}_m\}$, $m = 2,3,4$. We consider sequentially the probabilities $P_{i0m}$, $m = 2,3,4$, starting with $P_{i02}$. Denote

$$d_{ij}' = \|x_i' - x_j'\|^2.$$

If $\theta_{\mathrm{true}} = \theta_1$, then the following formulas hold:

$$d_i - d_j = d_{1i}' - d_{1j}' + 2(x_j' - x_i', \xi'), \qquad i,j = 1,\ldots,M,$$
$$d_i - d_1 = d_{1i}' + 2(x_1' - x_i', \xi'),$$
$$d_i^{(t)} - d_j^{(t)} = d_{1i}' - d_{1j}' + 2(x_j' - x_i', \xi' + \sigma\eta') = d_i - d_j + 2\sigma(x_j' - x_i', \eta'), \qquad (28)$$
$$d_i^{(t)} - d_1^{(t)} = d_{1i}' + 2(x_1' - x_i', \xi' + \sigma\eta') = d_i - d_1 + 2\sigma(x_1' - x_i', \eta').$$

If, in particular, $x'^{(1)t} = x_1'$, $x'^{(2)t} = x_2'$, and $x'^{(3)t} = x_i'$, $i \ge 3$, then in the case $\mathcal{S}_2$ it is necessary to have

$$d^{(3)t} - d^{(2)t} = 2(x_2' - x_i', \xi' + \sigma\eta') \ge 2A_1\tau_2. \qquad (29)$$
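A quick numerical sanity check of the bounds (22) and (24); the particular values of $A$, $B$, and $\rho$ below are arbitrary illustrative choices:

```python
import numpy as np
from math import erfc, sqrt, exp

def Q(z):
    """P(xi >= z) for a standard Gaussian xi."""
    return 0.5 * erfc(z / sqrt(2))

# Bound (22): P(xi >= z) <= exp(-z^2/2) for z >= 0.
for z in (0.5, 1.0, 2.0, 3.0):
    assert Q(z) <= exp(-z * z / 2)

# Bound (24): for correlated standard Gaussians with E(xi*eta) = rho,
# if A - B*rho >= 0 and B - A*rho >= 0 then
# P(xi >= A, eta >= B) <= P(xi >= sqrt((A^2 + B^2 - 2*A*B*rho) / (1 - rho^2))).
rng = np.random.default_rng(3)
rho, A, B = 0.3, 1.0, 1.2
xi = rng.standard_normal(200_000)
eta = rho * xi + sqrt(1 - rho**2) * rng.standard_normal(200_000)
lhs = np.mean((xi >= A) & (eta >= B))
rhs = Q(sqrt((A**2 + B**2 - 2 * A * B * rho) / (1 - rho**2)))
print(f"Monte Carlo P(xi>=A, eta>=B) = {lhs:.5f} <= bound {rhs:.5f}")
```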
