On Rényi Entropy Power Inequalities

Eshed Ram and Igal Sason

Abstract — This paper gives improved Rényi entropy power inequalities (R-EPIs). Consider a sum S_n = Σ_{k=1}^n X_k of n independent continuous random vectors taking values on R^d, and let α ∈ [1, ∞]. An R-EPI provides a lower bound on the order-α Rényi entropy power of S_n that, up to a multiplicative constant (which may depend in general on n, α, d), is equal to the sum of the order-α Rényi entropy powers of the n random vectors {X_k}_{k=1}^n. For α = 1, the R-EPI coincides with the well-known entropy power inequality by Shannon. The first improved R-EPI is obtained by tightening the recent R-EPI by Bobkov and Chistyakov, which relies on the sharpened Young's inequality. A further improvement of the R-EPI also relies on convex optimization and results on rank-one modification of a real-valued diagonal matrix.

Keywords: Rényi entropy, entropy power inequality, Rényi entropy power.

E. Ram and I. Sason are with the Andrew and Erna Viterbi Faculty of Electrical Engineering, Technion–Israel Institute of Technology, Haifa 32000, Israel. E-mails: {s6eshedr@tx, sason@ee}.technion.ac.il. This work has been supported by the Israeli Science Foundation (ISF) under Grant 12/12. It has been presented in part at the 2016 IEEE International Symposium on Information Theory, Barcelona, Spain, July 10–15, 2016.

I. INTRODUCTION

One of the well-known inequalities in information theory is the entropy power inequality (EPI), which has been introduced by Shannon [41, Theorem 15]. Let X be a d-dimensional random vector with a probability density function, let h(X) be its differential entropy, and let N(X) = exp((2/d) h(X)) be the entropy power of X. The EPI states that for independent random vectors {X_k}_{k=1}^n, the following inequality holds:

  N(Σ_{k=1}^n X_k) ≥ Σ_{k=1}^n N(X_k)   (1)

with equality in (1) if and only if {X_k}_{k=1}^n are Gaussian random vectors with proportional covariances.

The EPI has proved to be an instrumental tool in proving converse theorems for the capacity region of the Gaussian broadcast channel [6], the Gaussian wire-tap channel [30], the capacity region of the Gaussian broadcast multiple-input multiple-output (MIMO) channel [49], and a converse theorem in multi-terminal lossy compression [35]. Due to its importance, the EPI has been proved with information-theoretic tools in several insightful ways (see, e.g., [7], [18], [22], [27, Appendix D], [37], [44], [46]); e.g., the proof in [46] relies on fundamental relations between information and estimation measures ([21], [23]), together with the simple fact that for estimating a sum of two random variables, it is preferable to have access to the individual noisy measurements rather than to their sum. More studies on the theme include EPIs for discrete random variables and some analogies [24], [25], [26], [29], [40], [42], [50], generalized EPIs [31], [32], [52], reverse EPIs [10], [11], [34], [51], related inequalities to the EPI in terms of rearrangements [47], and some refined versions of the EPI for specialized distributions [15], [16], [25], [45]. An overview on EPIs is provided in [1]; we also refer the reader to a preprint of a recent survey paper by Madiman et al. [34] which addresses forward and reverse EPIs with Rényi measures, and their connections with convex geometry.
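Since the entropy power of a Gaussian vector is available in closed form (h(X) = (1/2) log((2πe)^d det Σ), hence N(X) = 2πe (det Σ)^{1/d}), the EPI (1) is easy to probe numerically. The following minimal sketch is an illustration added here (not part of the original paper); it assumes NumPy and checks (1) for independent Gaussian vectors, including the equality case of proportional covariances.

```python
import numpy as np

def gaussian_entropy_power(cov):
    """N(X) = exp((2/d) h(X)) for X ~ N(0, cov), where h(X) = 0.5*log((2*pi*e)^d det(cov))."""
    d = cov.shape[0]
    return 2 * np.pi * np.e * np.linalg.det(cov) ** (1.0 / d)

rng = np.random.default_rng(0)
d = 3
# Two independent Gaussian vectors with arbitrary (non-proportional) covariances.
A = rng.standard_normal((d, d)); cov1 = A @ A.T + np.eye(d)
B = rng.standard_normal((d, d)); cov2 = B @ B.T + np.eye(d)

lhs = gaussian_entropy_power(cov1 + cov2)        # N(X1 + X2): the sum is Gaussian with cov1 + cov2
rhs = gaussian_entropy_power(cov1) + gaussian_entropy_power(cov2)
print(lhs >= rhs)                                # EPI (1): True

# Equality holds when the covariances are proportional, e.g. cov2 = 2 * cov1.
print(np.isclose(gaussian_entropy_power(3 * cov1),
                 gaussian_entropy_power(cov1) + gaussian_entropy_power(2 * cov1)))
```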
The Rényi entropy and divergence have been introduced in [36], and they evidence a long track record of usefulness in information theory and its applications. Recent studies of the properties of these Rényi measures have been provided in [19], [20] and [43]. In the following, the differential Rényi entropy and the Rényi entropy power are introduced.

Definition 1 (Differential Rényi entropy): Let X be a random vector which takes values in R^d, and assume that it has a probability density function which is designated by f_X. The differential Rényi entropy of X of order α ∈ (0,1) ∪ (1,∞), denoted by h_α(X), is given by

  h_α(X) = (1/(1−α)) log( ∫_{R^d} f_X^α(x) dx )   (2)
         = (α/(1−α)) log ‖f_X‖_α.   (3)

The differential Rényi entropies of orders α = 0, 1, ∞ are defined by the continuous extension of h_α(X) for α ∈ (0,1) ∪ (1,∞), which yields

  h_0(X) = log λ(supp(f_X)),   (4)
  h_1(X) = h(X) = −∫_{R^d} f_X(x) log f_X(x) dx,   (5)
  h_∞(X) = −log( ess sup(f_X) )   (6)

where λ in (4) is the Lebesgue measure in R^d.

Definition 2 (Rényi entropy power): For a d-dimensional random vector X with density, the Rényi entropy power of order α ∈ [0,∞] is given by

  N_α(X) = exp( (2/d) h_α(X) ).   (7)

Since h_α(X) is specialized to the Shannon entropy h(X) for α = 1, the possibility of generalizing the EPI with Rényi entropy powers has emerged. This question is stated as follows:

Question 1: Let {X_k} be independent d-dimensional random vectors with probability density functions, and let α ∈ [0,∞] and n ∈ N. Does a Rényi entropy power inequality (R-EPI) of the form

  N_α(Σ_{k=1}^n X_k) ≥ c_α^{(n,d)} Σ_{k=1}^n N_α(X_k)   (8)

hold for some positive constant c_α^{(n,d)} (which may depend on the order α, dimension d, and number of summands n)?

In [28, Theorem 2.4], a sort of an R-EPI for the Rényi entropy of order α ≥ 1 has been derived with some analogy to the classical EPI; this inequality, however, does not apply the usual convolution unless α = 1. In [47, Conjectures 4.3, 4.4], Wang and Madiman conjectured an R-EPI for an arbitrary finite number of independent random vectors in R^d for α > d/(d+2).

Question 1 has been recently addressed by Bobkov and Chistyakov [9], showing that (8) holds with

  c_α = (1/e) α^{1/(α−1)},  ∀α > 1   (9)

independently of the values of n, d. It is the purpose of this paper to derive some improved R-EPIs for α > 1 (the case of α = 1 refers to the EPI (1)). A study of Question 1 for α ∈ (0,1) is currently an open problem (see [9, p. 709]).

In view of the close relation in (3) between the (differential) Rényi entropy and the L_α norm, the sharpened version of Young's inequality plays a key role in [9] for the derivation of an R-EPI, as well as in our paper for the derivation of some improved R-EPIs. The sharpened version of Young's inequality was also used by Dembo et al. [18] for proving the EPI.

For α ∈ (1,∞), let α′ = α/(α−1) be Hölder's conjugate. For α > 1, Theorem 1 provides a new tighter constant in comparison to (9), which gets the form

  c_α^{(n)} = α^{1/(α−1)} (1 − 1/(nα′))^{nα′−1}   (10)

independently of the dimension d. The new R-EPI with the constant in (10) asymptotically coincides with the tight bound by Rogozin [38] when α → ∞ and n = 2, and it also asymptotically coincides with the R-EPI in [9] when n → ∞. Moreover, the R-EPI with the new constant in (10) is further improved in Theorem 2 by a more involved analysis which relies on convex analysis and some interesting results from matrix theory; the latter result yields a closed-form solution for n = 2.
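To see the improvement of (10) over (9) numerically, the following short sketch (added here for illustration; it is not from the original paper and assumes NumPy) evaluates both constants for a few values of α and n:

```python
import numpy as np

def c_bc(alpha):
    """Bobkov-Chistyakov constant (9): (1/e) * alpha^(1/(alpha-1)), for alpha > 1."""
    return np.exp(-1.0) * alpha ** (1.0 / (alpha - 1.0))

def c_new(alpha, n):
    """New constant (10): alpha^(1/(alpha-1)) * (1 - 1/(n*alpha'))^(n*alpha' - 1)."""
    ap = alpha / (alpha - 1.0)                 # Hoelder conjugate alpha'
    return alpha ** (1.0 / (alpha - 1.0)) * (1.0 - 1.0 / (n * ap)) ** (n * ap - 1.0)

for alpha in (1.5, 2.0, 10.0, 1000.0):
    for n in (2, 3, 10):
        assert c_new(alpha, n) > c_bc(alpha)   # the new constant (10) exceeds (9)
    # for large n, (10) approaches the n-independent constant (9)
    print(alpha, c_bc(alpha), c_new(alpha, 2), c_new(alpha, 1000))
```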
This paper is organized as follows: In Section II, preliminary material and notation are introduced. A new R-EPI is derived in Section III for α > 1, and special cases of this improved bound are studied. Section IV derives a strengthened R-EPI for a sum of n ≥ 2 random variables; for n = 2, it is specialized to a bound which is expressed in a closed form; its computation for n > 2 requires a numerical optimization which is easy to perform. Section V exemplifies numerically the tightness of the new R-EPIs in comparison to some previously reported bounds, and finally Section VI summarizes the paper.

II. ANALYTICAL TOOLS

This section includes notation and tools which are essential to the analysis in this paper. It starts with the sharpened Young's inequality, followed by results on rank-one modification of a symmetric eigenproblem [14]. We also include here some properties of the differential Rényi entropy and Rényi entropy power which are useful to the analysis in this paper.

A. Basic Inequalities

The derivation of the R-EPIs in this work partially relies on the sharpened Young's inequality and the monotonicity of the Rényi entropy in its order. For completeness, we introduce these results in the following.

Notation 1: For α > 0, let α′ = α/(α−1), i.e., 1/α + 1/α′ = 1. Note that α > 1 if and only if α′ > 0; if α = 1, we define α′ = ∞. This notation is known as Hölder's conjugate.

Fact 1 (Monotonicity of the Rényi entropy): The Rényi entropy, h_α(X), is monotonically non-increasing in α.

From (3), it follows that for α ∈ (0,1) ∪ (1,∞), if f is a probability density function of a d-dimensional vector X, then

  h_α(X) = −log( ‖f‖_α^{α′} ).   (11)

A useful consequence of Fact 1 and (11) is the following result (a weaker version of it is given in [9, Lemma 1]):

Corollary 1: Let α ∈ (0,1) ∪ (1,∞), and let f ∈ L^α(R^d) be a probability density function (i.e., f is a non-negative function with ‖f‖_1 = 1). Then, for every β ∈ (0,α) with β ≠ 1,

  ‖f‖_β^{β′} ≤ ‖f‖_α^{α′}.   (12)

Notation 2: For every t ∈ (0,1) ∪ (1,∞), let

  A_t = t^{1/t} |t′|^{−1/|t′|}   (13)

and let A_1 = A_∞ = 1. Note that for t ∈ [0,∞]

  A_{t′} = 1/A_t.   (14)

The sharpened Young's inequality, first derived by Beckner [4] and re-derived with alternative proofs in, e.g., [3] and [13], is given as follows:

Fact 2 (Sharpened Young's inequality): Let p, q, r > 0 satisfy

  1/p + 1/q = 1 + 1/r,   (15)

let f ∈ L^p(R^d) and g ∈ L^q(R^d) be non-negative functions, and let f ∗ g denote their convolution.

• If p, q, r > 1, then

  ‖f ∗ g‖_r ≤ (A_p A_q / A_r)^{d/2} ‖f‖_p ‖g‖_q.   (16)

• If p, q, r < 1, then

  ‖f ∗ g‖_r ≥ (A_p A_q / A_r)^{d/2} ‖f‖_p ‖g‖_q.   (17)

Furthermore, (16) and (17) hold with equalities if and only if f and g are Gaussian probability density functions.

Note that the condition in (15) can be expressed in terms of the Hölder conjugates as follows:

  1/p′ + 1/q′ = 1/r′.   (18)

By using (18) and mathematical induction, the sharpened Young's inequality can be extended to more than two functions as follows:

Corollary 2: Let ν, {ν_k}_{k=1}^n > 0 satisfy Σ_{k=1}^n 1/ν_k′ = 1/ν′, let

  A = ( (1/A_ν) Π_{k=1}^n A_{ν_k} )^{d/2}   (19)

where the right side in (19) is defined by (13), and let f_k ∈ L^{ν_k}(R^d) be non-negative functions.

• If ν, {ν_k}_{k=1}^n > 1, then

  ‖f_1 ∗ … ∗ f_n‖_ν ≤ A Π_{k=1}^n ‖f_k‖_{ν_k}.   (20)

• If ν, {ν_k}_{k=1}^n < 1, then

  ‖f_1 ∗ … ∗ f_n‖_ν ≥ A Π_{k=1}^n ‖f_k‖_{ν_k}   (21)

with equalities in (20) and (21) if and only if the f_k are scaled versions of Gaussian probability densities for all k.
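As a quick sanity check of Fact 2, one can verify (16) numerically; the sketch below is an illustration added here (not from the original paper). It assumes d = 1, a grid discretization with NumPy, and the exponents p = q = 4/3, r = 2, which satisfy (15); the densities are uniform (non-Gaussian), so the inequality holds with slack.

```python
import numpy as np

def A(t):
    """Constant A_t from (13): t^(1/t) * |t'|^(-1/|t'|), with t' = t/(t-1)."""
    tp = t / (t - 1.0)
    return t ** (1.0 / t) * abs(tp) ** (-1.0 / abs(tp))

def lp_norm(f, dx, p):
    """Riemann-sum approximation of the L^p norm of a function sampled on a uniform grid."""
    return (np.sum(f ** p) * dx) ** (1.0 / p)

p = q = 4.0 / 3.0
r = 2.0                              # 1/p + 1/q = 1 + 1/r, as required in (15)

dx = 1e-3
x = np.arange(0.0, 1.0, dx)
f = np.ones_like(x)                  # uniform density on [0, 1]
g = np.ones_like(x)                  # another uniform density on [0, 1]

conv = np.convolve(f, g) * dx        # (f * g) on the grid (triangular density on [0, 2])

lhs = lp_norm(conv, dx, r)
rhs = (A(p) * A(q) / A(r)) ** 0.5 * lp_norm(f, dx, p) * lp_norm(g, dx, q)   # d = 1
print(lhs, "<=", rhs)                # (16) holds; strict since f, g are not Gaussian
```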
B. Rank-One Modification of a Symmetric Eigenproblem

This section is based on a paper by Bunch et al. [14] which addresses the eigenvectors and eigenvalues (a.k.a. eigensystem) of a rank-one modification of a real-valued diagonal matrix. We use in this paper the following result [14]:

Fact 3: Let D ∈ R^{n×n} be a diagonal matrix with the eigenvalues d_1 ≤ d_2 ≤ … ≤ d_n. Let z ∈ R^n be such that ‖z‖_2 = 1, and let ρ ∈ R. Let λ_1 ≤ λ_2 ≤ … ≤ λ_n be the eigenvalues of the rank-one modification of D which is given by C = D + ρ z z^T. Then,

1) λ_i = d_i + ρ μ_i, where Σ_{i=1}^n μ_i = 1 and μ_i ≥ 0 for all i ∈ {1,…,n}.
2) If ρ > 0, then the following interlacing property holds:
     d_1 ≤ λ_1 ≤ d_2 ≤ λ_2 ≤ … ≤ d_n ≤ λ_n   (22)
   and, if ρ < 0, then
     λ_1 ≤ d_1 ≤ λ_2 ≤ d_2 ≤ … ≤ λ_n ≤ d_n.   (23)
3) If all the eigenvalues of D are different, all the entries of z are non-zero, and ρ ≠ 0, then the inequalities in (22) and (23) are strict. For i ∈ {1,…,n}, the eigenvalue λ_i is a zero of
     W(x) = 1 + ρ Σ_{j=1}^n z_j² / (d_j − x).   (24)

Note that the requirement ‖z‖_2 = 1 can be relaxed to z ≠ 0 by letting ẑ = z/‖z‖_2 and ρ̂ = ρ‖z‖_2².

C. Rényi Entropy Power

We present some properties of the differential Rényi entropy and Rényi entropy power which are useful in this paper.

• In view of (3) and (7), for α ∈ (0,1) ∪ (1,∞),

  N_α(X) = ( ‖f_X‖_α )^{−2α′/d}.   (25)

• The differential Rényi entropy h_α(X) is monotonically non-increasing in α, and so is N_α(X).

• If Y = AX + b where A ∈ R^{d×d}, |A| ≠ 0, b ∈ R^d, then for all α ∈ [0,∞]

  h_α(Y) = h_α(X) + log|A|,   (26)
  N_α(Y) = |A|^{2/d} N_α(X).   (27)

This implies that the Rényi entropy power is a homogeneous functional of order 2 and it is translation invariant, i.e.,

  N_α(λX) = λ² N_α(X), ∀λ ∈ R,   (28)
  N_α(X + b) = N_α(X), ∀b ∈ R^d.   (29)

In view of (28) and (29), N_α(X) has some similar properties to the variance of X. However, if we consider a sum of independent random vectors, then Var(Σ_{k=1}^n X_k) = Σ_{k=1}^n Var(X_k), whereas the Rényi entropy power of a sum of independent random vectors is not equal, in general, to the sum of the Rényi entropy powers of the individual random vectors (unless these independent vectors are Gaussian with proportional covariances).

The continuation of this paper considers R-EPIs for orders α ∈ (1,∞]. The case where α = 1 refers to the EPI by Shannon [41, Theorem 15].

III. A NEW RÉNYI EPI

In the following, a new R-EPI is derived. This inequality, which is expressed in closed form, is tighter than the R-EPI in [9, Theorem I.1].

Theorem 1: Let {X_k}_{k=1}^n be independent random vectors with densities defined on R^d, and let n ∈ N, α > 1, α′ = α/(α−1), and S_n = Σ_{k=1}^n X_k. Then, the following R-EPI holds:

  N_α(S_n) ≥ c_α^{(n)} Σ_{k=1}^n N_α(X_k)   (30)

with

  c_α^{(n)} = α^{1/(α−1)} (1 − 1/(nα′))^{nα′−1}.   (31)

Furthermore, the R-EPI in (30) has the following properties:

1) Eq. (30) improves the R-EPI in [9, Theorem I.1] for every α > 1 and n ∈ N,
2) For all α > 1, it asymptotically coincides with the R-EPI in [9, Theorem I.1] as n → ∞,
3) In the other limiting case where α ↓ 1, it coincides with the EPI (similarly to [9]),
4) If n = 2 and α → ∞, the constant c_α^{(n)} in (31) tends to 1/2, which is optimal; this constant is achieved when X_1 and X_2 are independent random vectors which are uniformly distributed in the cube [0,1]^d.
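The sharpness claim in Item 4 can be checked directly in one dimension (and extends to [0,1]^d by the product structure): the density of the sum of two independent Uniform[0,1] variables is triangular with peak 1, so the order-∞ entropy powers satisfy N_∞(S_2) = (1/2)(N_∞(X_1) + N_∞(X_2)). The following is a minimal numeric illustration added here (not from the paper); it assumes NumPy and a grid discretization of the densities.

```python
import numpy as np

dx = 1e-3
x = np.arange(0.0, 1.0, dx)
f = np.ones_like(x)                       # density of Uniform[0, 1]

conv = np.convolve(f, f) * dx             # density of X1 + X2 (triangular on [0, 2])

def n_inf(density):
    """N_inf(X) = (ess sup f_X)^(-2/d) with d = 1; see (6)-(7)."""
    return np.max(density) ** (-2.0)

lhs = n_inf(conv)                         # N_inf(S_2) = 1 (peak of the triangular density is 1)
rhs = 0.5 * (n_inf(f) + n_inf(f))         # c_inf^(2) * (N_inf(X1) + N_inf(X2)) = 1
print(np.isclose(lhs, rhs, atol=1e-2))    # equality, up to discretization error
```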
Proof: In the first stage of this proof, we assume that

  N_α(X_k) > 0,  k ∈ {1,…,n},   (32)

which, in view of (25), implies that f_{X_k} ∈ L^α(R^d), where f_{X_k} is the density of X_k for all k ∈ {1,…,n}. In [9, (12)] it is shown that for α > 1,

  N_α(S_n) ≥ B Π_{k=1}^n N_α^{t_k}(X_k)   (33)

with

  B = ( A_{ν_1} ⋯ A_{ν_n} A_{α′} )^{−α′},   (34)
  ν_k > 1, ∀k ∈ {1,…,n},   (35)
  ν′ = ν/(ν−1), ∀ν ∈ R,   (36)
  Σ_{k=1}^n 1/ν_k′ = 1/α′,   (37)
  t_k = α′/ν_k′, ∀k ∈ {1,…,n}.   (38)

Consequently, (35)–(38) yield

  t_k > 0, ∀k ∈ {1,…,n},   (39)
  Σ_{k=1}^n t_k = 1.   (40)

The proof of (33), which relies on Corollaries 1 and 2, is introduced in Appendix A.

Similarly to [9, (14)], in view of the homogeneity of the entropy power functional (see (28)), it can be assumed without any loss of generality that

  Σ_{k=1}^n N_α(X_k) = 1.   (41)

Hence, to prove (30), it is sufficient to show that under the assumption in (41)

  N_α(S_n) ≥ c_α^{(n)}.   (42)

From this point, we deviate from the proof of [9, Theorem I.1]. Taking logarithms on both sides of (33) and assembling (13), (34)–(40) and (41) yield

  log N_α(S_n) ≥ f_0(t),   (43)

where t = (t_1,…,t_n), and

  f_0(t) = (log α)/(α−1) − D(t‖N_α) + α′ Σ_{k=1}^n (1 − t_k/α′) log(1 − t_k/α′),   (44)
  N_α = ( N_α(X_1),…,N_α(X_n) ),   (45)
  D(t‖N_α) = Σ_{k=1}^n t_k log( t_k / N_α(X_k) ).   (46)

In view of (39) and (40), the bound in (43) holds for every t ∈ R_+^n such that Σ_{k=1}^n t_k = 1. Consequently, the R-EPI in [9, Theorem I.1] can be tightened by maximizing the right side of (43), leading to the following optimization problem:

  maximize f_0(t)
  subject to t_k ≥ 0, k ∈ {1,…,n},   (47)
             Σ_{k=1}^n t_k = 1.

Note that the convexity of the function

  f(x) = (1 − x/α′) log(1 − x/α′), x ∈ [0, α′]   (48)

yields that the third term on the right side of (44) is convex in t. Since the relative entropy D(t‖N_α) is also convex in t, the objective function f_0 in (44) is expressed as a difference of two convex functions in t. In order to get an analytical closed-form lower bound on the solution of the optimization problem in (47), we take the sub-optimal choice t = N_α (similarly to the proof of [9, Theorem I.1]) which yields that D(t‖N_α) = 0; however, our proof derives an improved lower bound on the third term of f_0(t) which needs to be independent of N_α. Let

  t̂_k = N_α(X_k), 1 ≤ k ≤ n;   (49)

then, in view of (43) and (49),

  log N_α(S_n) ≥ f_0(t̂)   (50)
              = (log α)/(α−1) + α′ Σ_{k=1}^n (1 − t̂_k/α′) log(1 − t̂_k/α′).   (51)

Due to the convexity of f in (48), for all k ∈ {1,…,n},

  f(t̂_k) ≥ f(x) + f′(x)(t̂_k − x).   (52)

Choosing x = 1/n in the right side of (52) yields

  (1 − t̂_k/α′) log(1 − t̂_k/α′) ≥ log(1 − 1/(nα′)) + (log e)/(nα′) − (t̂_k/α′) [ log e + log(1 − 1/(nα′)) ]   (53)

and, in view of (41) and (49) which yield Σ_{k=1}^n t̂_k = 1, summing over k ∈ {1,…,n} on both sides of (53) implies that

  α′ Σ_{k=1}^n (1 − t̂_k/α′) log(1 − t̂_k/α′) ≥ (nα′ − 1) log(1 − 1/(nα′)).   (54)

Finally, assembling (50), (51) and (54) yields (42) with c_α^{(n)} in (31) as required.

In the sequel, we no longer assume that condition (32) holds.
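The optimization problem (47) underlies the strengthened bound mentioned for Section IV; the closed-form constant (31) only lower-bounds its value. The sketch below is an illustration added here (not from the paper); it assumes SciPy, natural logarithms, and an arbitrary example vector of Rényi entropy powers normalized as in (41). It maximizes f_0(t) over the simplex, starting from the sub-optimal choice (49), and compares the result with log c_α^{(n)}.

```python
import numpy as np
from scipy.optimize import minimize

def f0(t, N, alpha):
    """Objective (44): log(alpha)/(alpha-1) - D(t||N) + alpha' * sum((1 - t_k/a') log(1 - t_k/a'))."""
    ap = alpha / (alpha - 1.0)                       # alpha'
    t = np.clip(t, 1e-12, None)
    kl = np.sum(t * np.log(t / N))                   # relative entropy D(t || N_alpha), eq. (46)
    u = np.clip(1.0 - t / ap, 1e-12, None)
    return np.log(alpha) / (alpha - 1.0) - kl + ap * np.sum(u * np.log(u))

alpha, n = 2.0, 3
N = np.array([0.5, 0.3, 0.2])                        # example values, normalized as in (41)

res = minimize(lambda t: -f0(t, N, alpha),
               x0=N.copy(),                          # the sub-optimal feasible choice (49)
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda t: np.sum(t) - 1.0}])

ap = alpha / (alpha - 1.0)
log_c = np.log(alpha) / (alpha - 1.0) + (n * ap - 1.0) * np.log(1.0 - 1.0 / (n * ap))
print(-res.fun, ">=", log_c)    # the maximized lower bound (43) is at least log c_alpha^(n)
```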
Define

  K_0 = { k ∈ {1,…,n} : N_α(X_k) = 0 },   (55)

and note that

  h_α(S_n) = h_α( Σ_{k∉K_0} X_k + Σ_{k∈K_0} X_k )   (56)
           ≥ h_α( Σ_{k∉K_0} X_k + Σ_{k∈K_0} X_k | {X_k}_{k∈K_0} )   (57)
           = h_α( Σ_{k∉K_0} X_k )   (58)

where the conditional Rényi entropy is defined according to Arimoto's proposal in [2] (see also [20, Section 4]), (57) is due to the monotonicity property of the conditional Rényi entropy (see [20, Theorem 2]), and (58) is due to the independence of X_1,…,X_n. Since N_α(X_k) > 0 for every k ∉ K_0, it follows from the previous analysis that

  N_α( Σ_{k∉K_0} X_k ) ≥ c_α^{(l)} Σ_{k∉K_0} N_α(X_k),   (59)

where l = n − |K_0|. In view of (31), it can be verified that c_α^{(n)} is monotonically decreasing in n; hence, (58), (59) and c_α^{(l)} ≥ c_α^{(n)} yield

  N_α(S_n) ≥ c_α^{(n)} Σ_{k=1}^n N_α(X_k).   (60)

We now turn to prove Items 1)–4).

• To prove Item 1), note that (9) and (31) yield that c_α^{(n)} > c_α for all α > 1 and n ∈ N.

• Item 2) holds since from (31)

  lim_{n→∞} c_α^{(n)} = (1/e) α^{1/(α−1)}   (61)

where the right side of (61) coincides with the constant c_α in [9, (3)] (see (9)).

• Item 3) holds since α ↓ 1 yields α′ → ∞, which implies that for every n ∈ N

  lim_{α↓1} c_α^{(n)} = lim_{α↓1} c_α = 1.   (62)

Hence, by letting α tend to 1, (30) and (62) yield the EPI in (1).

• To prove Item 4), note that from (31)

  lim_{α→∞} c_α^{(n)} = (1 − 1/n)^{n−1}   (63)

which is monotonically decreasing in n for n ≥ 2, being equal to 1/2 for n = 2 and to 1/e by letting n tend to ∞. Let X be a d-dimensional random vector with density f_X, and let

  M(X) := ess sup(f_X).   (64)

From (6), (7) and (64), it follows that

  N_∞(X) := lim_{α→∞} N_α(X)   (65)
          = M^{−2/d}(X).   (66)

By assembling (30) and (66), it follows that if X_1,…,X_n are independent d-dimensional random vectors with densities, then

  M^{−2/d}(S_n) ≥ (1 − 1/n)^{n−1} Σ_{k=1}^n M^{−2/d}(X_k).   (67)

This improves the tightness of the inequality in [8, Theorem 1], where the coefficient (1 − 1/n)^{n−1} on the right side of (67) has been loosened to 1/e (note, however, that they coincide when n → ∞). For n = 2, the coefficient 1/2 on the right side of (67) is tight, and it is achieved when X_1 and X_2 are independent random vectors which are uniformly distributed in the cube [0,1]^d [8, p. 103].

[Fig. 1. A plot of c_α^{(n)} in (31), as a function of α, for n = 2, 3, 10 and n → ∞.]

Figure 1 plots c_α^{(n)} as a function of α, for some values of n, verifying numerically Items 1)–4) in Theorem 1. In [9, Theorem I.1], the constant is independent of n, and it is equal to c_α in (9), which is the limit of c_α^{(n)} in (31) by letting n → ∞ (the solid curve in Figure 1).

Remark 1: For independent random variables {X_k}_{k=1}^n with densities on R, the result in (67) with d = 1 can be strengthened to (see [8, p. 105] and [38])

  1/M²(S_n) ≥ (1/2) Σ_{k=1}^n 1/M²(X_k)   (68)

where S_n := Σ_{k=1}^n X_k. Note that (67) and (68) coincide if n = 2 and d = 1.
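As a small illustration of (67) and (68) (added here; not from the paper), take d = 1 and n = 3 independent Uniform[0,1] variables. The maximum of the Irwin–Hall density of their sum can be obtained by repeated numerical convolution on a grid with NumPy:

```python
import numpy as np

dx = 1e-3
f = np.ones(int(1.0 / dx))                  # density of Uniform[0, 1] on a grid

n = 3
density = f.copy()
for _ in range(n - 1):
    density = np.convolve(density, f) * dx  # density of the running sum of uniforms

M_sum = np.max(density)                     # M(S_n): ess sup of the density of S_n (about 3/4)
M_each = np.max(f)                          # M(X_k) = 1 for Uniform[0, 1]

lhs = M_sum ** (-2.0)                       # left side of (67)/(68) with d = 1
print(lhs >= (1 - 1 / n) ** (n - 1) * n * M_each ** (-2.0))   # (67): True
print(lhs >= 0.5 * n * M_each ** (-2.0))                      # (68): True, and tighter
```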
Example 1: Let X and Y be d-dimensional random vectors with densities f_X and f_Y, respectively, and assume that the entries of X are i.i.d., as well as those of Y. Let X_1, X_2, Y_1, Y_2 be independent d-dimensional random vectors, where X_1, X_2 are independent copies of X, and Y_1, Y_2 are independent copies of Y. Assume that

  Pr[X_{1,k} = X_{2,k}] = α,  Pr[Y_{1,k} = Y_{2,k}] = β   (69)

for all k ∈ {1,…,d}. We wish to obtain an upper bound on the probability that X_1 + Y_1 and X_2 + Y_2 are equal. From (3), (7) (with α = 2), and (69),

  N_2(X) = exp( (2/d) h_2(X) )   (70)
         = ( ∫_{R^d} f_X²(x) dx )^{−2/d}   (71)
         = P^{−2/d}[X_1 = X_2]   (72)
         = Π_{k=1}^d P^{−2/d}[X_{1,k} = X_{2,k}]   (73)
         = α^{−2},   (74)

  N_2(Y) = β^{−2},   (75)

  N_2(X + Y) = P^{−2/d}[X_1 + Y_1 = X_2 + Y_2].   (76)

Assembling (30) with n = α = 2, (74), (75) and (76) yields

  P(X_1 + Y_1 = X_2 + Y_2) ≤ ( (27/32)(α^{−2} + β^{−2}) )^{−d/2}.   (77)

The factor 27/32 on the base of the exponent on the right side of (77), instead of the looser factor c_2 = 2/e which follows from (9) with α = 2 (see [9, Theorem I.1]), improves the exponential decay rate of the upper bound in (77) as a function of the dimension d. The optimal bound has to be with a coefficient of (α^{−2} + β^{−2}) on the base of the exponent in the right side of (77) which is less than or equal to 1; this can be verified since, if X and Y are independent Gaussian random variables, then

  N_2(X + Y) = N_2(X) + N_2(Y),   (78)

so

  P(X_1 + Y_1 = X_2 + Y_2) = (α^{−2} + β^{−2})^{−d/2}.   (79)

This provides a reference for comparing the exponential decay which is implied by c_2 in (9), c_2^{(2)} in (30), and the case where X and Y are independent Gaussian random variables:

  2/e < 27/32 < 1.   (80)
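For a concrete feel of the gain in (77), the short sketch below (added for illustration; the per-coordinate collision probabilities α = β = 0.1 and the dimension d = 20 are arbitrary choices, not from the paper) evaluates the bound with the three coefficients compared in (80):

```python
import numpy as np

p_coll = 0.1                              # per-coordinate collision probabilities alpha = beta in (69)
d = 20
base = p_coll ** (-2) + p_coll ** (-2)    # alpha^(-2) + beta^(-2)

for name, coeff in [("Bobkov-Chistyakov, 2/e", 2 / np.e),
                    ("Theorem 1, 27/32", 27 / 32),
                    ("Gaussian reference, 1", 1.0)]:
    # (77) with coefficient coeff; the last line is the exact Gaussian value (79)
    print(name, (coeff * base) ** (-d / 2))
```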
