Hypothesis Test for Upper Bound on the Size of Random Defective Set

A.G. D'yachkov, I.V. Vorobyev, N.A. Polyanskii, V.Yu. Shchukin
Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Department of Probability Theory, Moscow, Russia
[email protected], [email protected], [email protected], [email protected]

arXiv:1701.02601v1 [cs.IT] 22 Jan 2017

Abstract. Let 1 ≤ s < t, N ≥ 1 be fixed integers. A complex electronic circuit of size t is said to be s-active, s ≪ t, and can work as a system block if at most s elements of the circuit are defective. Otherwise, the circuit is said to be s-defective and should be replaced by a similar s-active circuit. Suppose that it is possible to run N non-adaptive group tests to check the s-activity of the circuit. As usual, we say that a (disjunctive) group test yields a positive response if the group contains at least one defective element. In this paper, we interpret the unknown set of defective elements as a random set and discuss upper bounds on the error probability of the hypothesis test for the null hypothesis {H_0 : the circuit is s-active} versus the alternative hypothesis {H_1 : the circuit is s-defective}. Along with the conventional decoding algorithm based on the known random set of positive responses and disjunctive s-codes, we consider a T-weight decision rule which is based on the simple comparison of a fixed threshold T, 1 ≤ T ≤ N−1, with the known random number of positive responses p, 0 ≤ p ≤ N.

Keywords: hypothesis test, group testing, random set of positive responses, disjunctive codes, maximal error probability, error exponent, random coding bounds.

1 Statement of Problem

Let N ≥ 2, t ≥ 2, s and T be integers, where 1 ≤ s < t and 1 ≤ T < N. The symbol ≜ denotes equality by definition, |A| the size of the set A, and [N] ≜ {1, 2, ..., N} the set of integers from 1 to N.
A binary (N×t)-matrix

    X = ‖x_i(j)‖,  x_i(j) = 0,1,  i ∈ [N], j ∈ [t],
    x(j) ≜ (x_1(j), ..., x_N(j)),  x_i ≜ (x_i(1), ..., x_i(t)),

with t columns (codewords) x(j), j ∈ [t], and N rows x_i, i ∈ [N], is called a binary code of length N and size t = ⌊2^{RN}⌋, where the fixed parameter R > 0 is called the rate of the code X. The number of 1's in a binary column x = (x_1, ..., x_N) ∈ {0,1}^N, i.e., |x| ≜ Σ_{i=1}^N x_i, is called the weight of x. A code X is called a constant-weight binary code of weight w, 1 ≤ w < N, if for any j ∈ [t] the weight |x(j)| = w. The conventional symbol u ∨ v will be used to denote the disjunctive (Boolean) sum of binary columns u, v ∈ {0,1}^N. We say that a column u covers a column v if u ∨ v = u.

1.1 Disjunctive and Threshold Disjunctive Codes

Definition 1 [4]. A code X is called a disjunctive s-code, s ∈ [t−1], if the disjunctive sum of any s-subset of codewords of X covers those and only those codewords of X which are the terms of the given disjunctive sum.

Let S, S ⊂ [t], be an arbitrary fixed collection of defective elements of size |S|. For a binary code X and a collection S, define the binary response vector of length N, namely:

    x(S) ≜ ⋁_{j∈S} x(j) if S ≠ ∅,  and  x(S) ≜ (0, 0, ..., 0) if S = ∅.

In the classical problem of non-adaptive group testing, we describe N tests as a binary (N×t)-matrix X = ‖x_i(j)‖, where a column x(j) corresponds to the j-th element, a row x_i corresponds to the i-th test, and x_i(j) ≜ 1 if and only if the j-th element is included in the i-th testing group. The result of each test equals 1 if at least one defective element is included in the testing group and 0 otherwise, so the column of results is exactly equal to the response vector x(S). Definition 1 of a disjunctive s-code X gives an important sufficient condition for the unambiguous identification of any unknown collection of defective elements S if the number of defective elements |S| ≤ s.
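Definition 1 and the covering relation are easy to check by brute force for toy parameters. The sketch below is ours, not part of the paper; the function names are illustrative, and the enumeration over all s-subsets is feasible only for small codes.

```python
import itertools

def response(X, S):
    """Disjunctive (Boolean) sum of the columns of X indexed by S, i.e. x(S)."""
    N = len(X)
    r = [0] * N
    for j in S:
        r = [ri | X[i][j] for i, ri in enumerate(r)]
    return r

def covers(u, v):
    """Column u covers column v iff u OR v equals u."""
    return all(ui | vi == ui for ui, vi in zip(u, v))

def is_disjunctive_s_code(X, s):
    """Brute-force check of Definition 1: the disjunctive sum of any
    s-subset of codewords covers no codeword outside that subset."""
    t = len(X[0])
    for S in itertools.combinations(range(t), s):
        r = response(X, S)
        for j in range(t):
            if j not in S and covers(r, X[j]):
                return False
    return True
```

For instance, the 3×3 identity matrix (rows are tests) is a disjunctive 1-code, while a code whose second column covers the first is not.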
In this case, the identification of the unknown S is equivalent to the discovery of all codewords of code X covered by x(S), and its complexity is equal to the code size t. Note that this algorithm also allows us to check the s-activity of the circuit defined in the abstract. Moreover, it is easy to prove by contradiction that every code X which allows checking the s-activity of the circuit without error is a disjunctive s-code. Indeed, if a code X is not a disjunctive s-code, then there exist a set S ⊂ [t], |S| = s, and a number j ∈ [t]∖S such that x(S) = x(S ∪ {j}).

Proposition 1. The results of non-adaptive group tests specified by a code X allow one to check the s-activity of the circuit if and only if X is a disjunctive s-code.

Definition 2. Let s, s ∈ [t−1], and T, T ∈ [N−1], be arbitrary fixed integers. A disjunctive s-code X of length N and size t is said to be a disjunctive s-code with threshold T (or, briefly, an sT-code) if the disjunctive sum of any ≤ s codewords of X has weight ≤ T and the disjunctive sum of any ≥ s+1 codewords of X has weight ≥ T+1.

Obviously, for any s and T, the definition of an sT-code gives a sufficient condition for a code X applied to the group testing problem described in the abstract of our paper. In this case, only on the basis of the known number of positive responses |x(S)|, we decide that the controllable circuit identified by an unknown collection S, S ⊂ [t], is s-active, i.e., the unknown size |S| ≤ s (s-defective, i.e., the unknown size |S| ≥ s+1) if |x(S)| ≤ T (|x(S)| ≥ T+1).

Remark 1. The concept of sT-codes was motivated by troubleshooting in complex electronic circuits using a non-adaptive identification scheme which was considered in [7].

Remark 2. A similar model of special disjunctive s-codes was considered in [3], where the conventional disjunctive s-code is supplied with an additional condition: the weight |x(S)| of the response vector of any subset S, S ⊂ [t], |S| ≤ s, is at most T.
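The weight condition of Definition 2 can likewise be verified exhaustively for small codes. A minimal sketch of ours: since the weight of a disjunctive sum can only grow when columns are added, it suffices to inspect subsets of sizes exactly s and s+1.

```python
import itertools

def or_columns(X, S):
    """Disjunctive sum x(S) of the columns of X indexed by S."""
    r = [0] * len(X)
    for j in S:
        r = [ri | X[i][j] for i, ri in enumerate(r)]
    return r

def satisfies_threshold(X, s, T):
    """Weight condition of Definition 2.  By monotonicity of |x(S)| in S,
    unions of < s columns are covered by the size-s check and unions of
    > s+1 columns by the size-(s+1) check."""
    t = len(X[0])
    if any(sum(or_columns(X, S)) > T
           for S in itertools.combinations(range(t), s)):
        return False
    if any(sum(or_columns(X, S)) <= T
           for S in itertools.combinations(range(t), s + 1)):
        return False
    return True
```

(This checks only the threshold part of Definition 2; the disjunctive-s-code part can be checked separately as in Definition 1.)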
Note that these codes satisfy a weaker condition than our sT-codes. In [3] the authors motivate their group testing model with bounded weight of the response vector by the risk, in some contexts, to the safety of the persons who perform the tests when the number of positive test results is too large.

1.2 Hypothesis Test

Let a circuit of size t be identified by an unknown collection S_un, S_un ⊂ [t], of defective elements of an unknown size |S_un|, and let X be an arbitrary code of size t and length N. Given any fixed integer parameters s, 1 ≤ s < t, and T, 1 ≤ T < N, introduce the null hypothesis {H_0 : |S_un| ≤ s} (the circuit is s-active) versus the alternative {H_1 : |S_un| ≥ s+1} (the circuit is s-defective), and consider the following threshold decision rule motivated by Definition 2, namely:

    accept {H_0 : |S_un| ≤ s} if |x(S_un)| ≤ T,
    accept {H_1 : |S_un| > s} if |x(S_un)| > T.   (1)

For a reasonable probabilistic interpretation of the decision rule (1), we assume that different collections of defective elements of the same size are equiprobable. That is why we set that the probability distribution of the random collection S_un, S_un ⊂ [t], is identified by an unknown probability vector p ≜ (p_0, p_1, ..., p_t), p_k ≥ 0, k = 0, 1, ..., t, Σ_{k=0}^t p_k = 1, as follows:

    Pr{S_un = S} ≜ p_{|S|} / \binom{t}{|S|}  for any S ⊆ [t].   (2)

Introduce the maximal error probability of the decision rule (1):

    ε_s(T, p, X) ≜ max{ Pr{accept H_1 | H_0}, Pr{accept H_0 | H_1} },   (3)

where the conditional probabilities in the right-hand side of (3) are defined by (1)-(2). Note that ε_s(T, p, X) = 0 if and only if the code X is an sT-code. Denote by t_s(N, T) the maximal size of sT-codes of length N. For a parameter τ, 0 < τ < 1, introduce the rate of s⌊τN⌋-codes:

    R_s(τ) ≜ lim_{N→∞} log₂ t_s(N, ⌊τN⌋) / N ≥ 0.

Definition 3. Let τ, 0 < τ < 1, and a parameter R, R > R_s(τ), be fixed.
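For toy parameters, the maximal error probability (3) under the distribution (2) can be computed exactly by enumerating all subsets S. A hypothetical sketch (function name and interface are ours; exponential in t, so only for illustration):

```python
import itertools
from math import comb

def or_columns(X, S):
    """Disjunctive sum x(S) of the columns of X indexed by S."""
    r = [0] * len(X)
    for j in S:
        r = [ri | X[i][j] for i, ri in enumerate(r)]
    return r

def max_error_probability(X, s, T, p):
    """Maximal error probability (3) of the threshold rule (1) under (2).
    p = (p_0, ..., p_t) must put positive mass on both hypotheses."""
    t = len(X[0])
    err_H0 = 0.0   # Pr{accept H1 and H0}: |S| <= s but |x(S)| > T
    err_H1 = 0.0   # Pr{accept H0 and H1}: |S| >= s+1 but |x(S)| <= T
    pr_H0 = sum(p[k] for k in range(s + 1))
    pr_H1 = sum(p[k] for k in range(s + 1, t + 1))
    for k in range(t + 1):
        for S in itertools.combinations(range(t), k):
            w = sum(or_columns(X, S))
            prob = p[k] / comb(t, k)   # distribution (2)
            if k <= s and w > T:
                err_H0 += prob
            if k > s and w <= T:
                err_H1 += prob
    return max(err_H0 / pr_H0, err_H1 / pr_H1)
```

For the 3×3 identity code with s = 1 and T = 1 the rule is error-free (it is an sT-code), while raising T to 2 makes every 2-subset indistinguishable from the s-active case.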
For the maximal error probability ε_s(T, p, X), defined by (1)-(3), consider the function

    ε_s^N(τ, R) ≜ max_p min_X ε_s(⌊τN⌋, p, X),   (4)

where the minimum is taken over all codes X of length N and size t = ⌊2^{RN}⌋. The number ε_s^N(τ, R) > 0 does not depend on the unknown probability vector p and can be called the universal error probability of the decision rule (1). The corresponding error exponent

    E_s(τ, R) ≜ lim_{N→∞} [−log₂ ε_s^N(τ, R)] / N,  s ≥ 1,   (5)

identifies the asymptotic behavior of the maximal error probability of the decision rule (1): 2^{−N[E_s(τ,R)+o(1)]}, N → ∞, if E_s(τ, R) > 0.

Along with (1) we introduce the disjunctive decision rule based on the conventional algorithm:

    accept H_0 if x(S_un) covers ≤ s columns of X,
    accept H_1 if x(S_un) covers > s columns of X.

For a fixed code rate R, R > 0, the error exponent E_s(R) of the disjunctive decision rule is defined similarly to (3)-(5). The function E_s(R) was first introduced in our paper [4], where we proved

Theorem 1 [4]. If R ≥ 1/s, then E_s(R) = 0.

Remark 3. In this paper we focus on the test of hypotheses H_0 and H_1, provided that the unknown defective set S_un is a random set with probability distribution (2). A similar statistical problem of constructing a confidence interval for the size |S_un| of an unknown (nonrandom) defective set S_un was considered in [2, 6]. The authors of [2] present a randomized algorithm that uses G(ε, c) log₂ t non-adaptive tests and produces a statistic ŝ that satisfies the following properties: the probability Pr{ŝ < |S_un|} is upper bounded by a small parameter ε ≪ 1, and the expected value of ŝ/|S_un| is upper bounded by a number c > 1. In [6] an adaptive randomized algorithm is proposed (an algorithm is called adaptive if the next test is constructed based on the results of the previous tests). It uses at most 2 log₂ log₂ |S_un| + O((1/δ²) log₂(1/ε)) adaptive tests and estimates |S_un| up to a multiplicative factor of 1 ± δ with error probability ≤ ε.
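The disjunctive decision rule amounts to counting how many columns of X the response vector covers. A minimal sketch (ours, with illustrative names):

```python
def covered_count(X, r):
    """Number of columns of X covered by the response vector r,
    i.e. columns x(j) with x_i(j) <= r_i for every test i."""
    N, t = len(X), len(X[0])
    return sum(all(X[i][j] <= r[i] for i in range(N)) for j in range(t))

def disjunctive_rule(X, r, s):
    """Accept H0 iff the response vector covers at most s columns of X."""
    return 'H0' if covered_count(X, r) <= s else 'H1'
```

For the 3×3 identity code, the response of two defectives covers exactly their two columns, so the rule accepts H_0 for s = 2 and H_1 for s = 1.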
Note that estimating |S_un| is a subtask of a non-standard group testing problem in which there is no restriction |S_un| ≤ s. Another approach to this problem was considered in paper [1], where the authors propose to run a fixed set of non-adaptive tests at the first stage and then, at the second stage, to test individually each element left unresolved after stage 1. For the probability distribution

    Pr{j ∈ S_un} = p,  ∀ j ∈ [t],

and some dependencies p = p(t), lower and upper bounds on the asymptotics of the expected number of tests in the described 2-stage procedure are obtained in [1].

2 Lower Bounds on Error Exponents

In this section, we formulate and compare random coding lower bounds for both error exponents E_s(R) and E_s(τ, R). These bounds were proved by applying the random coding method based on the ensemble of constant-weight codes. The parameter Q in the formulations of Theorems 2-3 is the relative weight of codewords of constant-weight codes. Introduce the standard notations

    h(Q) ≜ −Q log₂ Q − (1−Q) log₂ [1−Q],  [x]⁺ ≜ max{x, 0}.

In [4], we established

Theorem 2 [4]. The error exponent E_s(R) ≥ \underline{E}_s(R), where the random coding lower bound

    \underline{E}_s(R) ≜ max_{0<Q<1} min_{Q≤q<min{1,sQ}} { A(s, Q, q) + [h(Q) − q h(Q/q) − R]⁺ },

    A(s, Q, q) ≜ (1−q) log₂ (1−q) + q log₂ [Q y^s / (1−y)] + sQ log₂ [(1−y)/y] + s h(Q),   (6)

and y is the unique root of the equation

    q = Q (1−y^s)/(1−y),  0 < y < 1.   (7)

In addition, as s → ∞ and R ≤ (ln 2 / s)(1+o(1)), the lower bound \underline{E}_s(R) > 0.

In Section 3 we prove

Theorem 3. 1. The error exponent E_s(τ, R) ≥ \underline{E}_s(τ, R), where the random coding bound \underline{E}_s(τ, R) does not depend on R > 0 and has the form:

    \underline{E}_s(τ, R) ≜ max_{1−(1−τ)^{1/(s+1)} < Q < 1−(1−τ)^{1/s}} min{ A′(s, Q, τ), A(s+1, Q, τ) },   (8)

    A′(s, Q, q) ≜ A(s, Q, q) if Q ≤ q ≤ sQ, and A′(s, Q, q) ≜ ∞ otherwise.   (9)

2. As s → ∞ the optimal value of \underline{E}_s(τ, R) satisfies the inequality:

    E_Thr(s) ≜ max_{0<τ<1} \underline{E}_s(τ, R) ≥ [log₂ e / (4s²)] (1 + o(1)),  s → ∞.   (10)

It is possible to use the decision rule (1) with any value of the parameter T.
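Since the right-hand side of (7) increases in y from Q (as y → 0) to sQ (as y → 1), the root y, and hence A(s, Q, q) of (6), can be evaluated numerically. A sketch of ours using bisection (the tolerance is an arbitrary choice); as a sanity check, A(k, Q, q) vanishes at q = 1−(1−Q)^k, the typical relative weight of the disjunctive sum of k codewords.

```python
import math

def y_root(s, Q, q, tol=1e-12):
    """Bisection for the unique root y of (7): q = Q(1 - y^s)/(1 - y),
    0 < y < 1.  The RHS increases from Q to sQ, so a root exists
    whenever Q < q < sQ."""
    lo, hi = tol, 1.0 - tol
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if Q * (1.0 - mid**s) / (1.0 - mid) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def h(Q):
    """Binary entropy h(Q) = -Q log2 Q - (1-Q) log2(1-Q)."""
    return -Q * math.log2(Q) - (1.0 - Q) * math.log2(1.0 - Q)

def A(s, Q, q):
    """The exponent A(s, Q, q) of (6)."""
    y = y_root(s, Q, q)
    return ((1.0 - q) * math.log2(1.0 - q)
            + q * math.log2(Q * y**s / (1.0 - y))
            + s * Q * math.log2((1.0 - y) / y)
            + s * h(Q))
```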
The numerical values of the optimal error exponent E_Thr(s), along with the corresponding optimal threshold parameter τ = τ(s) and the constant-weight code ensemble parameter Q = Q(s), are presented in Table 1. Table 1 also contains the numbers E_s(0) ≜ lim_{R→0} \underline{E}_s(R) and R_Thr(s) ≜ sup{R : \underline{E}_s(R) > E_Thr(s)}. Theorems 1-3 show that, for large values of the rate parameter R, R > R_Thr(s), the threshold decision rule (1) has an advantage over the disjunctive decision rule as N → ∞.

Table 1: The numerical values of E_Thr(s) and R_Thr(s)

    s         |      2 |      3 |      4 |      5 |      6
    E_Thr(s)  | 0.1380 | 0.0570 | 0.0311 | 0.0196 | 0.0135
    τ(s)      | 0.2065 | 0.1365 | 0.1021 | 0.0816 | 0.0679
    Q(s)      | 0.1033 | 0.0455 | 0.0255 | 0.0163 | 0.0113
    E_s(0)    | 0.3651 | 0.2362 | 0.1754 | 0.1397 | 0.1161
    R_Thr(s)  | 0.2271 | 0.1792 | 0.1443 | 0.1201 | 0.1027

3 Proof of Theorem 3

Proof of Statement 1. For a fixed code X and parameters s and T, introduce the sets B^i_k(T, X), i = 1, 2, k = 0, 1, ..., t, of collections S, S ⊂ [t], |S| = k, as follows:

    B^1_k(T, X) ≜ {S : S ⊂ [t], |S| = k, |x(S)| ≥ T+1},
    B^2_k(T, X) ≜ {S : S ⊂ [t], |S| = k, |x(S)| ≤ T}.   (11)

Then the probability (3) is represented as

    ε_s(T, p, X) = max{ Σ_{k=0}^{s} [p_k / Σ_{l=0}^{s} p_l] · |B^1_k(T, X)| / \binom{t}{k},
                        Σ_{k=s+1}^{t} [p_k / Σ_{l=s+1}^{t} p_l] · |B^2_k(T, X)| / \binom{t}{k} }.   (12)

One can see that, for the sets (11) and any k, 0 ≤ k < t, the inequalities

    |B^1_{k+1}(T, X)| ≥ [(t−k)/(k+1)] |B^1_k(T, X)|  and  |B^2_k(T, X)| ≥ [(k+1)/(t−k)] |B^2_{k+1}(T, X)|

hold. Therefore, from (12) it follows that for any code X, the maximum max_p ε_s(T, p, X) in the right-hand side of (4) is attained at the probability distribution p = (p_0, p_1, ..., p_t) such that p_s = p_{s+1} = 1/2 and p_j = 0, j ∉ {s, s+1}. Therefore, the definition (4) is equivalent to

    ε_s^N(τ, R) ≜ min_{X : t=⌊2^{RN}⌋} ε_s(⌊τN⌋, X),  R > R_s(τ),   (13)

where

    ε_s(T, X) ≜ max{ |B^1_s(T, X)| / \binom{t}{s},  |B^2_{s+1}(T, X)| / \binom{t}{s+1} }.   (14)
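The inequalities above say that the normalized fractions |B^1_k(T,X)| / \binom{t}{k} are non-decreasing in k for every code X (each S in B^1_k has t−k supersets of size k+1, all again in B^1_k+1, and each superset is counted at most k+1 times). This can be spot-checked numerically; the sketch below (ours) does so for one small random matrix.

```python
import itertools
import random
from math import comb

def response_weight(X, S):
    """Weight |x(S)| of the disjunctive sum of the columns indexed by S."""
    return sum(any(X[i][j] for j in S) for i in range(len(X)))

def B1_size(X, k, T):
    """|B^1_k(T, X)| of (11): k-subsets whose response weight exceeds T."""
    t = len(X[0])
    return sum(1 for S in itertools.combinations(range(t), k)
               if response_weight(X, S) >= T + 1)

random.seed(1)
N, t, T = 6, 8, 3
X = [[random.randint(0, 1) for _ in range(t)] for _ in range(N)]

# Monotonicity used in the proof: |B^1_k| / binom(t, k) non-decreasing in k.
fracs = [B1_size(X, k, T) / comb(t, k) for k in range(1, t + 1)]
assert all(a <= b + 1e-12 for a, b in zip(fracs, fracs[1:]))
```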
Fix s ≥ 2, 0 < τ < 1, R > R_s(τ) and a parameter Q, 0 < Q < 1. The bound (8) is obtained by the method of random coding over the ensemble of binary constant-weight codes [5], defined as the ensemble E(N, t, Q) of binary codes X of length N and size t = ⌊2^{RN}⌋, where the codewords are chosen independently and equiprobably from the set consisting of all \binom{N}{⌊QN⌋} codewords of a fixed weight ⌊QN⌋.

For the ensemble E(N, t, Q), denote the expectation of the error probability (14) by

    ℰ_s^N(τ, Q, R) ≜ E[ ε_s(⌊τN⌋, X) ].   (15)

Note that there exists a code X of length N and rate R such that its maximal error probability (14) is upper bounded by ℰ_s^N(τ, Q, R), and due to (13) the following lower bound on the error exponent (5) of the decision rule (1) holds:

    E_s(τ, R) ≥ max_{0<Q<1} lim_{N→∞} [−log₂ ℰ_s^N(τ, Q, R)] / N.   (16)

Further we show that the limit in the right-hand side of (16) exists and its maximum over Q equals (8). The cardinality of the set B^1_s(⌊τN⌋, X) can be represented through indicator functions:

    |B^1_s(⌊τN⌋, X)| = Σ_{S⊂[t], |S|=s} 1{ S ∈ B^1_s(⌊τN⌋, X) }.

Therefore, the expectation of the cardinality |B^1_s(⌊τN⌋, X)| (and, similarly, |B^2_{s+1}(⌊τN⌋, X)|) equals

    E[ |B^1_s| ] = \binom{t}{s} Pr{ S ∈ B^1_s | |S| = s },
    E[ |B^2_{s+1}| ] = \binom{t}{s+1} Pr{ S ∈ B^2_{s+1} | |S| = s+1 }.   (17)

For the ensemble E(N, t, Q), denote the probabilities Pr{S ∈ B^1_s(⌊τN⌋, X) | |S| = s} and Pr{S ∈ B^2_{s+1}(⌊τN⌋, X) | |S| = s+1} by P^1_s(τ, Q, N) and P^2_{s+1}(τ, Q, N), respectively. Obviously, these probabilities depend only on s, τ, Q, N and do not depend on R.
The formulas (17) yield that the expectation (15) satisfies the inequalities:

    max{ P^1_s(τ, Q, N), P^2_{s+1}(τ, Q, N) } ≤ ℰ_s^N(τ, Q, R) ≤ P^1_s(τ, Q, N) + P^2_{s+1}(τ, Q, N).   (18)

Given a code X, for a fixed subset S ⊂ [t], |S| = k, of size k and a fixed integer w, consider the probability

    P_k^N(Q, w) ≜ Pr{ |⋁_{j∈S} x(j)| = w }.

Note that the probability P_k^N(Q, w) does not depend on the choice of the set S and depends only on k, w, N and Q. To compute the logarithmic asymptotics of the probabilities in (18), we represent them in the following forms:

    P^1_s(τ, Q, N) = Σ_{w=⌊max{τ,Q}N⌋+1}^{min{N, s⌊QN⌋}} P_s^N(Q, w),
    P^2_{s+1}(τ, Q, N) = Σ_{w=⌊QN⌋}^{min{⌊τN⌋, (s+1)⌊QN⌋}} P_{s+1}^N(Q, w).   (19)

The logarithmic asymptotics of the probability P_k^N(Q, w) was calculated in [4]; it equals

    lim_{N→∞} [−log₂ P_k^N(Q, ⌊qN⌋)] / N = A(k, Q, q),   (20)

where the function A(k, Q, q) is defined by (6)-(7). Note that P^1_s(τ, Q, N) = 0 if τ > sQ and P^2_{s+1}(τ, Q, N) = 0 if τ < Q. This remark, (19) and (20) yield

    lim_{N→∞} [−log₂ P^1_s(τ, Q, N)] / N = min_{q∈(i1)} A′(s, Q, q),
    lim_{N→∞} [−log₂ P^2_{s+1}(τ, Q, N)] / N = min_{q∈(i2)} A′(s+1, Q, q),   (21)

    (i1) ≜ [max{τ, Q}, 1],  (i2) ≜ [0, min{τ, (s+1)Q}],

where the function A′(k, Q, q) is defined by (9). Therefore, (18) and (21) yield the existence of the limit

    lim_{N→∞} [−log₂ ℰ_s^N(τ, Q, R)] / N = min{ min_{q∈(i1)} A′(s, Q, q), min_{q∈(i2)} A′(s+1, Q, q) }.   (22)

Let us recall some analytical properties of the function A(k, Q, q).

Lemma 1 [4]. The function A(k, Q, q), as a function of the parameter q, decreases in the interval q ∈ [Q, 1−(1−Q)^k], increases in the interval q ∈ [1−(1−Q)^k, min{1, kQ}] and equals 0 at the point q = 1−(1−Q)^k.

Hence,

    min_{q∈(i1)} A′(s, Q, q) = 0 if τ ≤ 1−(1−Q)^s,
    min_{q∈(i2)} A′(s+1, Q, q) = 0 if τ ≥ 1−(1−Q)^{s+1}.

This establishes the equivalence of the bound (16)-(22) and the bound (8).

Proof of Statement 2.
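For finite N the probability P_k^N(Q, w) can also be computed exactly: the union of k independent uniformly random ⌊QN⌋-subsets of [N] grows by a hypergeometric number of fresh positions with each new codeword. A sketch of this recursion (ours, not from the paper):

```python
from math import comb

def union_weight_dist(N, w, k):
    """Exact distribution of |x(j_1) v ... v x(j_k)| when each x(j) is a
    uniformly random weight-w column of length N, i.e. the probabilities
    P_k^N(Q, w') with w = floor(QN).  Returns {weight: probability}."""
    dist = {w: 1.0}                      # one column: union size is exactly w
    for _ in range(k - 1):
        new = {}
        for v, pv in dist.items():
            # the next column hits `a` positions outside the current union,
            # a hypergeometric count
            for a in range(max(0, w - v), min(w, N - v) + 1):
                pa = comb(N - v, a) * comb(v, w - a) / comb(N, w)
                new[v + a] = new.get(v + a, 0.0) + pv * pa
        dist = new
    return dist
```

For example, two random 2-subsets of a 4-set coincide with probability 1/6, are disjoint with probability 1/6, and overlap in one point with probability 2/3.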
Our aim is to offer a lower bound for the asymptotic behaviour of the expression

    E_Thr(s) ≜ max_{0<τ<1} max_{1−(1−τ)^{1/(s+1)} < Q < 1−(1−τ)^{1/s}} min{ A′(s, Q, τ), A(s+1, Q, τ) },  s → ∞.   (23)

For any fixed τ, 0 < τ < 1, and any fixed Q, 1−(1−τ)^{1/(s+1)} < Q < 1−(1−τ)^{1/s}, let us denote the solutions of equation (7) for A(s, Q, τ) and A(s+1, Q, τ) by y_1(Q, τ) and y_2(Q, τ). Note that y_1 can be greater than 1. It follows from (7) that the parameter τ can be expressed in two forms:

    τ = Q (1−y_1^s)/(1−y_1) = Q (1−y_2^{s+1})/(1−y_2).

That is why the inequality 1−(1−τ)^{1/(s+1)} < Q ⇔ τ < 1−(1−Q)^{s+1} is equivalent to

    (1−y_2^{s+1})/(1−y_2) < (1−(1−Q)^{s+1})/(1−(1−Q)),

where, for any integer n ≥ 2, the function f(x) = (1−x^n)/(1−x) increases in the interval x ∈ (0, +∞). Hence, we have

    1−(1−τ)^{1/(s+1)} < Q ⇔ Q < 1−y_2,

and, similarly,

    Q < 1−(1−τ)^{1/s} ⇔ Q > 1−y_1.

In conclusion, the pair of parameters (y_1, Q), y_1 > 0, 0 < Q < 1, uniquely defines the parameters τ and y_2. Moreover, if the inequalities

    0 < τ < 1,  Q < 1−y_2,  Q > 1−y_1   (24)

hold, then the parameters τ and Q are in the region in which the maximum (23) is searched.

Let some constant c > 0 be fixed, s → ∞ and y_1 ≜ 1 − c/s² + o(1/s³). Then the asymptotic behavior of τ/Q equals

    τ/Q = (1−y_1^s)/(1−y_1) = (1−y_2^{s+1})/(1−y_2) = s − c/2 + o(1),

and, therefore,

    y_2 = 1 − (c+2)/(s+1)² + o(1/s³) = 1 − (c+2)/s² + 2(c+2)/s³ + o(1/s³).

To satisfy the inequalities (24), the parameter Q should be in the interval

    ( 1−y_1, 1−y_2 ) = ( c/s² + o(1/s³),  (c+2)/s² − 2(c+2)/s³ + o(1/s³) ).

Let us define the parameter Q as Q ≜ d/s², where d, c < d < c+2, is some constant; hence, Q satisfies the previous inequalities.

The full list of the asymptotic behaviors of the parameters is presented below:

    τ = d/s − cd/(2s²) + o(1/s²),
    Q = d/s²,
    y_1 = 1 − c/s² + o(1/s²),
    y_2 = 1 − (c+2)/s² + o(1/s²),  s → ∞,   (25)

where c and d are arbitrary constants such that c > 0, c < d < c+2.
The parameters defined by (25) satisfy the inequalities (24), and, therefore, the substitution of the asymptotic behaviors (25) into (23) leads to some lower bound on E_Thr(s).

Let us calculate the asymptotics of

    A(s, Q, τ) / log₂ e = (1−τ) ln(1−τ) + (sQ−τ) ln[(1−y_1)/Q] + s(τ−Q) ln y_1 − s(1−Q) ln(1−Q).

The first two terms of the asymptotic expansion of the summands equal

    (1−τ) ln(1−τ) = −d/s + cd/(2s²) + d²/(2s²) + o(1/s²),
    (sQ−τ) ln[(1−y_1)/Q] = [cd/(2s²)] ln[c/d] + o(1/s²),
    s(τ−Q) ln y_1 = −cd/s² + o(1/s²),
    s(1−Q) ln(1−Q) = −d/s + o(1/s²).

Therefore,

    A(s, Q, τ) / log₂ e = d(d − c + c ln[c/d]) / (2s²) + o(1/s²).

Further, let us calculate the asymptotics of

    A(s+1, Q, τ) / log₂ e = (1−τ) ln(1−τ) + (sQ−τ) ln[(1−y_2)/Q] + s(τ−Q) ln y_2 − s(1−Q) ln(1−Q)
                            + Q ln[(1−y_2)/Q] + (τ−Q) ln y_2 − (1−Q) ln(1−Q).

The first two terms of the asymptotic expansion of the new summands equal

    (sQ−τ) ln[(1−y_2)/Q] = [cd/(2s²)] ln[(c+2)/d] + o(1/s²),
    s(τ−Q) ln y_2 = −(c+2)d/s² + o(1/s²),
    Q ln[(1−y_2)/Q] = [d/s²] ln[(c+2)/d] + o(1/s²),
    (τ−Q) ln y_2 = o(1/s²),
    (1−Q) ln(1−Q) = −d/s² + o(1/s²).

Therefore,

    A(s+1, Q, τ) / log₂ e = d(d − c − 2 + (c+2) ln[(c+2)/d]) / (2s²) + o(1/s²).

The maximum value

    max_{c>0} max_{c<d<c+2} min{ d(d − c + c ln[c/d]),  d(d − c − 2 + (c+2) ln[(c+2)/d]) }

is at least 1/2, which is attained as c → ∞ and d = c+1. □

4 Simulation for Finite Code Parameters

For finite N and t, we carried out a simulation as follows. The probability distribution vector p (2) is defined by

    p_s = p_{s+1} = 1/2,  p_j = 0, j ∉ {s, s+1},

i.e., it is the distribution at which the maximum in the right-hand side of (4) is attained. This fact is proved at the beginning of the proof of Theorem 3. A code X is generated randomly from the ensemble of constant-weight codes, i.e.,
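The closing maximin claim can be checked numerically: at d = c+1 both expressions approach 1/2 as c grows, consistent with the constant log₂e/4 in (10). A small sketch of ours:

```python
import math

def f1(c, d):
    """First branch of the maximin: d(d - c + c ln[c/d])."""
    return d * (d - c + c * math.log(c / d))

def f2(c, d):
    """Second branch: d(d - c - 2 + (c+2) ln[(c+2)/d])."""
    return d * (d - c - 2 + (c + 2) * math.log((c + 2) / d))

# At d = c+1 both branches tend to 1/2 as c -> infinity.
for c in (10.0, 100.0, 1000.0):
    m = min(f1(c, c + 1), f2(c, c + 1))
    assert abs(m - 0.5) < 1 / c
```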
for some weight parameter w, every codeword of X is chosen independently and equiprobably from the set of all \binom{N}{w} codewords of weight w. For every weight w and every decision rule, we repeat the generation 1000 times and choose the code with the minimal error probability. Note that for the disjunctive decision rule Pr{accept H_0 | H_1} = 0. The results of the simulation are presented in Table 2. The best values of the maximal error probability (3), calculated using the formulas (11) and (14) for fixed parameters s, t and N, are shown by the boldface type.

Table 2: Results of simulation

            Threshold decision rule                          Disjunctive decision rule
     N   Pr{accept H_1|H_0}  Pr{accept H_0|H_1}   w   T      Pr{accept H_1|H_0}   w

    s = 2, t = 15
     5        0.2571              0.2571           2   3          0.9333           2
     8        0.1619              0.1604           3   5          0.7048           2
    10        0                   0.1429           1   2          0.4571           3
    12        0                   0.0857           1   2          0.1810           3
    14        0                   0.0571           1   2          0.0952           3
    15        0                   0.0462           2   4          0.0286           3

    s = 2, t = 20
     5        0.2632              0.2588           2   3          0.9579           2
     8        0.1632              0.1649           3   5          0.8316           2
    11        0.1053              0.1509           4   7          0.5158           3
    12        0.1158              0.1123           4   7          0.4158           3
    14        0                   0.0842           2   4          0.2316           3
    15        0                   0.0693           2   4          0.1526           4
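A compact version of this experiment can be written in a few lines. The sketch below is ours (with a smaller trial count than the paper's 1000, so it will not reproduce Table 2 exactly); it computes the two conditional error probabilities of rule (1) exactly for p_s = p_{s+1} = 1/2 and keeps the best random constant-weight code.

```python
import itertools
import random
from math import comb

def resp_weight(X, S, N):
    """Weight of the response vector x(S)."""
    return sum(any(X[i][j] for j in S) for i in range(N))

def threshold_rule_errors(X, s, T, N, t):
    """Exact conditional error probabilities of rule (1) under the
    distribution p_s = p_{s+1} = 1/2 (equal-size sets equiprobable).
    Returns (Pr{accept H1|H0}, Pr{accept H0|H1})."""
    e10 = sum(resp_weight(X, S, N) > T
              for S in itertools.combinations(range(t), s)) / comb(t, s)
    e01 = sum(resp_weight(X, S, N) <= T
              for S in itertools.combinations(range(t), s + 1)) / comb(t, s + 1)
    return e10, e01

def best_random_code(N, t, w, s, T, trials=200, seed=0):
    """Pick the best of `trials` random constant-weight-w codes."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        # each codeword: a uniformly random weight-w column of length N
        cols = [set(rng.sample(range(N), w)) for _ in range(t)]
        X = [[int(i in cols[j]) for j in range(t)] for i in range(N)]
        e10, e01 = threshold_rule_errors(X, s, T, N, t)
        err = max(e10, e01)
        if best is None or err < best[0]:
            best = (err, e10, e01)
    return best
```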
