ON THE ASYMPTOTIC OF LIKELIHOOD RATIOS FOR SELF-NORMALIZED LARGE DEVIATIONS

By Zhiyi Chi*

Department of Statistics, University of Connecticut

Motivated by multiple statistical hypothesis testing, we obtain the limit of the likelihood ratio of large deviations for self-normalized random variables, specifically, the ratio of $P(\sqrt{n}(\bar X + d/n) \ge x_n V)$ to $P(\sqrt{n}\,\bar X \ge x_n V)$, as $n \to \infty$, where $\bar X$ and $V$ are the sample mean and standard deviation of iid $X_1, \dots, X_n$, respectively, $d > 0$ is a constant and $x_n \to \infty$. We show that the limit can have a simple form $e^{d/z_0}$, where $z_0$ is the unique maximizer of $zf(z)$ with $f$ the density of $X_i$. The result is applied to derive the minimum sample size per test in order to control the error rate of multiple testing at a target level, when real signals differ from noise signals only by a small shift.

* Research partially supported by NSF grant DMS-0706048 and NIH grant DC007206-01.
AMS 2000 subject classifications: Primary 60F10; secondary 62H15.
Keywords and phrases: self-normalization, large deviations, tail probability, multiple hypothesis testing.

1. Introduction.

1.1. Background. Suppose $X_1, X_2, \dots$ are iid random variables with density $f$, such that $P(X_1 > 0) > 0$. For $n \ge 1$, let $S_n = X_1 + \cdots + X_n$. We shall consider the biased t statistic

$$ T_n = \frac{\sqrt{n}\,\bar X}{V}, \quad \text{with } \bar X = \frac{S_n}{n}, \quad V = \Big[\frac{1}{n}\sum_{i=1}^n (X_i - \bar X)^2\Big]^{1/2}. $$

The choice of $T_n$ is only for simplicity of notation. All the results obtained for $T_n$ in the paper hold for the standard t statistic $\sqrt{n-1}\,\bar X/V$ as well.

The aim here is to find the limit of the ratio of tail probabilities for $T_n$, specifically, the limit of

$$ \frac{P\big(\sqrt{n}(\bar X + d/n) \ge x_n V\big)}{P\big(\sqrt{n}\,\bar X \ge x_n V\big)}, \quad \text{as } n \to \infty, $$

where $d > 0$ is a constant and $x_n \to \infty$ at a suitable rate. The problem pertains to large deviations for self-normalized random variables [5, 9]. On the other hand, it is directly related to statistical multiple hypothesis testing, in particular, False Discovery Rate (FDR) control [1], which in recent years has generated intensive research due to its applications in microarray data analysis, medical imagery, etc., where a very large number of signals ("null hypotheses") have to be sorted through in order to identify signals of interest ("false nulls") from the other, noise signals ("true nulls") [6, 7, 8, 10].

A measure of performance for multiple testing is the fraction of falsely identified noise signals ("false discoveries") among the identified ones. Given that at least one signal is identified, the fraction is a well-defined random variable and its conditional expectation is called the positive FDR, or pFDR. For a testing procedure, it is desirable that, given a target control level $\alpha$, the procedure attain pFDR $\le \alpha$. However, whether or not this is possible depends on the properties of the data distributions as well as on how much data is available to assess the hypotheses.

We consider a typical multiple testing problem, where the data distributions are shifted and scaled versions of each other. Suppose the data distributions are $F_i(x) = F(s_i x - u_i)$, where $F$ is a fixed distribution, and $s_i > 0$ and $u_i$ are unknown. In order to identify from the $F_i$ those with $u_i \ne 0$, we test the nulls (hypotheses) $H_i: u_i = 0$ to see which ones can be rejected. To this end, let $n$ iid observations be sampled from $F_i$, which can be written as $Y_{i1} = (X_{i1} + u_i)/s_i, \dots, Y_{in} = (X_{in} + u_i)/s_i$, with $X_{ij} \sim F$.
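For orientation, the ratio of tail probabilities above can be estimated by direct Monte Carlo at moderate $n$. The following is a minimal sketch (ours, not from the paper), assuming $F = N(0,1)$; the function name ratio_Rn and all parameter values are illustrative only. At small $n$ it merely illustrates the object of study; Theorem 1.1 below identifies the $n \to \infty$ limit as $e^{d/z_0}$ when $u = d/n$ and $x_n = a_n\sqrt{n}$.

    import numpy as np

    def ratio_Rn(u, n, x_n, trials=500_000, seed=0):
        # Monte Carlo estimate of P(sqrt(n)(Xbar + u) >= x_n V) / P(sqrt(n) Xbar >= x_n V)
        rng = np.random.default_rng(seed)
        X = rng.standard_normal((trials, n))
        Xbar = X.mean(axis=1)
        V = X.std(axis=1)            # biased standard deviation (divides by n), as in the text
        num = np.mean(np.sqrt(n) * (Xbar + u) >= x_n * V)
        den = np.mean(np.sqrt(n) * Xbar >= x_n * V)
        return num / den

    # e.g. n = 5 observations per test, cut-off x_n = a_n * sqrt(n) with a_n = 1:
    print(ratio_Rn(u=0.3, n=5, x_n=np.sqrt(5.0)))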
Suppose the nulls are tested independently of each other, so that the $X_{ij}$ are iid for $i \ge 1$, $j = 1, \dots, n$. Typically, $H_i$ is rejected if and only if the t statistic of $Y_{i1}, \dots, Y_{in}$ is larger than a cut-off value $x_n$. Suppose that false nulls occur randomly in the population of nulls, such that each $H_i$ can be false with probability $p \in (0,1)$ independently of the others, and $u_i = u > 0$ when $H_i$ is false. By definition, a falsely rejected null is a true null, i.e., one with $u_i = 0$. It is then not hard to see

(1.1) $$ P(H_i \text{ is falsely rejected} \mid H_i \text{ is rejected}) = \frac{1-p}{1-p+pR_n}, $$

where $R_n$ is the ratio of tail probabilities

$$ R_n(u) = \frac{P\big(\sqrt{n}(\bar X + u) \ge x_n V\big)}{P\big(\sqrt{n}\,\bar X \ge x_n V\big)}. $$

It follows that the minimum attainable pFDR is equal to the right hand side of (1.1) as well [4]. Consequently, if real signals are weak in the sense that $u \approx 0$, then $R_n$ can be close to 1, implying that when a nonempty set of nulls is rejected by whatever multiple testing procedure, it is likely that most or almost all of them are falsely rejected.

For the t test, the only way to address the above limitation on the error rate control is to increase $n$, the number of observations per null. From (1.1), in order to attain pFDR $\le \alpha$, $n$ must satisfy

(1.2) $$ R_n(u) \ge (1/p - 1)(1/\alpha - 1). $$

An important question is, as $u \approx 0$, what would be the minimum $n$ in order for (1.2) to hold.

The issue of sample size for pFDR control was previously studied in [3]. However, in that work the t statistic was defined in a different way, with $\bar X$ and $V$ derived from two independent samples instead of from the same sample. Although that definition allows an easier treatment, it is not commonly used in practice. Furthermore, the asymptotic result in [3] is different from the one reported here for the more commonly used t statistic.

1.2. Main results. We need to be more specific about the cut-off value $x_n$. Usually, as $n$ increases, one can afford to look at more extreme tails to get stronger evidence against nulls. This suggests that there should be $x_n \to \infty$ as $n \to \infty$. If $EX > 0$ and $EX^2 < \infty$ for $X \sim F$, then $x_n$ should be at least of the same order as $\sqrt{n}$; otherwise $\inf \text{pFDR} \to 1$, where the infimum is taken over all possible multiple testing procedures that are solely based on the $T_i$. Furthermore, for $F = N(0,1)$, it is known that there should be $x_n/\sqrt{n} \to \infty$ in order to attain $\inf \text{pFDR}$ [3]. Based on these considerations, for the general case, we will impose $x_n = a_n\sqrt{n}$ with $a_n \to \infty$ as the cut-off value.

Theorem 1.1. Suppose the density $f$ satisfies the following conditions.

1) $f$ is bounded and continuous on $\mathbb{R}$ and there is $\gamma > 0$, such that $\lim_{x\to\infty} x^{1+\gamma} f(x) < \infty$.

2) $zf(z)$ has a unique maximizer $z_0 > 0$.

3) $h := \log f$ is three times differentiable on $\mathbb{R}$, such that $\sup |h''| < \infty$ and $\sup |h'''| < \infty$.

Let $a_n \to \infty$, such that $a_n^4 = o(n/\log n)$. Then for any $d_n \to d \in (0, \infty)$,

$$ \frac{P\big(\bar X + d_n/n \ge a_n V\big)}{P\big(\bar X \ge a_n V\big)} \to e^{d/z_0}, \quad \text{as } n \to \infty. $$

Note that for different $n$, $\bar X$ and $V$ are different random variables.

Let $k_* = k_*(u)$ be the minimum $n$ in order for (1.2) to hold. The asymptotic of $k_*$ as $u \to 0$ is a consequence of Theorem 1.1.

Corollary 1.1. Suppose $f$ and $a_n$ satisfy the conditions in Theorem 1.1. Let $p \in (0,1)$ and $\alpha \in (0,1)$ be fixed in (1.2). Then

$$ k_*(u) \sim (z_0/u)\ln[(1/p-1)(1/\alpha-1)], \quad \text{as } u \to 0^+. $$

Many probability densities satisfy conditions 1)-3) of Theorem 1.1, for example, the Gaussian density $f_1(x;\mu,\sigma) = e^{-(x-\mu)^2/2\sigma^2}/\sqrt{2\pi}\sigma$ and the Cauchy density $f_2(x;\mu,\sigma) = \sigma\pi^{-1}[\sigma^2 + (x-\mu)^2]^{-1}$. In particular, when $\mu = 0$ and $\sigma = 1$, both have $z_0 = 1$. Therefore, even though all the moments of $f_1$ are finite whereas all those of $f_2$ are infinite, in terms of the amount of data needed to control the pFDR, these two are asymptotically the same.
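Both examples, and the sample-size asymptotic of Corollary 1.1, are easy to check numerically. A minimal sketch (ours, not from the paper; the values $p = 0.01$, $\alpha = 0.05$, $u = 0.1$ are illustrative only):

    import numpy as np
    from scipy.optimize import minimize_scalar

    densities = {
        "Gaussian": lambda z: np.exp(-z**2 / 2) / np.sqrt(2 * np.pi),
        "Cauchy":   lambda z: 1 / (np.pi * (1 + z**2)),
    }
    for name, f in densities.items():
        # z0 maximizes z * f(z); both standard densities give z0 = 1
        res = minimize_scalar(lambda z: -z * f(z), bounds=(1e-6, 50), method="bounded")
        print(name, "z0 ~", round(res.x, 4))

    # Corollary 1.1: k*(u) ~ (z0/u) ln[(1/p - 1)(1/alpha - 1)]
    p, alpha, u, z0 = 0.01, 0.05, 0.1, 1.0
    print("k*(u) ~", (z0 / u) * np.log((1/p - 1) * (1/alpha - 1)))   # ~ 75.4 observations per test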
On the other hand, Theorem 1.1 is not applicable to densities with zeros on $\mathbb{R}$. Since the conclusion of Theorem 1.1 has nothing to do with the continuity of $h = \log f$ over $\mathbb{R}$, it is desirable to remove condition 3) altogether.

In the rest of the paper, Section 2 proves Theorem 1.1 and Corollary 1.1. Sections 3 and 4 contain proofs of lemmas for the main results.

2. Proof of main results. A key to the proof is the fact that the analysis can be localized at $z_0$, which is revealed by a representation of the event $\{T_n \ge \sqrt{n}\,a_n\}$ given by Shao [9]. It is easily seen that for $t > 0$,

$$ \{T_n \ge t\} = \Big\{ \frac{S_n}{Q_n} \ge \Big(\frac{n}{n+t^2}\Big)^{1/2} t \Big\}, \quad \text{where } Q_n = \sqrt{X_1^2 + \cdots + X_n^2} $$

(cf. [9]). If $t = \sqrt{n}\,a_n$, then, letting $r = 1 - (1 + a_n^{-2})^{-1/2}$ and following [9],

$$ \{T_n \ge \sqrt{n}\,a_n\} = \Big\{ \frac{S_n}{Q_n\sqrt{n}} \ge 1 - r \Big\} = \Big\{ \sup_{b>0} \sum_{i=1}^n \Big[ bX_i - \frac{1-r}{2}(X_i^2 + b^2) \Big] \ge 0 \Big\} $$
$$ = \Big\{ \sup_{b>0} \sum_{i=1}^n \Big[ \frac{b^2 r(2-r)}{2(1-r)} - \frac{1-r}{2}\Big(X_i - \frac{b}{1-r}\Big)^2 \Big] \ge 0 \Big\} = \Big\{ \sup_{b>0} \sum_{i=1}^n \Big[ \frac{b^2 r(2-r)}{(1-r)^2} - \Big(X_i - \frac{b}{1-r}\Big)^2 \Big] \ge 0 \Big\}. $$

Let $z = b/(1-r)$ and $\sigma_n = \sqrt{r(2-r)}$. Then

(2.1) $$ \{T_n \ge \sqrt{n}\,a_n\} = \Big\{ \sigma_n^2 \ge \inf_{z>0} \frac{1}{n}\sum_{i=1}^n \Big(\frac{X_i}{z} - 1\Big)^2 \Big\}. $$

Under the assumptions of Theorem 1.1, $r = 1 - (1+a_n^{-2})^{-1/2} = a_n^{-2}/2 + o(a_n^{-2})$, and hence $\sigma_n^2 \sim 2r = a_n^{-2} + o(a_n^{-2})$, yielding

(2.2) $$ \sigma_n \to 0, \qquad n\sigma_n^4/\log n \sim n/(a_n^4 \log n) \to \infty. $$

Equations (2.1) and (2.2) are the starting point of the proof.
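The representation (2.1) can be verified directly on simulated data: writing $w = 1/z$, the infimum is a quadratic minimization in $w$, equal to $1 - S_n^2/(nQ_n^2)$ when $S_n > 0$ and to 1 otherwise. A minimal sketch (ours, not from the paper; the $N(1.4, 1)$ sample and the constants are arbitrary):

    import numpy as np

    n, a_n = 10, 1.2
    r = 1 - (1 + a_n**-2) ** -0.5
    sigma2 = r * (2 - r)                      # sigma_n^2 in (2.1)
    for seed in range(200):
        X = np.random.default_rng(seed).standard_normal(n) + 1.4
        lhs = np.sqrt(n) * X.mean() / X.std() >= np.sqrt(n) * a_n   # {T_n >= sqrt(n) a_n}
        S, Q2 = X.sum(), (X**2).sum()
        inf_val = 1 - S**2 / (n * Q2) if S > 0 else 1.0             # inf over z > 0 in closed form
        assert lhs == (sigma2 >= inf_val)                           # the two events in (2.1) agree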
Lemma 2.1. Suppose $f$ satisfies conditions 1) and 2) in Theorem 1.1. Let $\sigma_n \to 0$ such that $n\sigma_n^4/\log n \to \infty$. Then, given $r > 0$, there is $\delta = \delta(r) > 0$, such that

$$ \lim_{n\to\infty} \sup_{|d|\le\delta} \frac{ P\Big( \sigma_n^2 \ge \inf_{z>0} \frac{1}{n}\sum_{i=1}^n \big(\frac{X_i+d}{z} - 1\big)^2 \Big) }{ P\Big( \sigma_n^2 \ge \inf_{|z-z_0|\le r} \frac{1}{n}\sum_{i=1}^n \big(\frac{X_i+d}{z} - 1\big)^2 \Big) } = 1. $$

The lemma will be proved later. The following heuristic explains why the analysis can be localized at $z_0$. Let $d = 0$. For $\sigma_n^2 \ll 1$, if the event $E_z = \{\sigma_n^2 \ge (1/n)\sum_{i=1}^n (X_i/z - 1)^2\}$ occurs, then most of the $X_i$ must fall between $(1-\sigma_n)z$ and $(1+\sigma_n)z$, implying

$$ \log P(E_z) \approx n\log P(|X - z| \le \sigma_n z) \approx n\log(2\sigma_n z f(z)). $$

As a result, given that at least one $E_z$ occurs, the most likely value of $z$ should be the maximizer of $zf(z)$, i.e., $z_0$.

The following fact will be used in the proof of Theorem 1.1. If $X_1, \dots, X_n$ are iid with density $f$ and $n \ge 3$, then the joint density of $\bar X$ and $V$ is

(2.3) $$ h(t,s) = (\sqrt{n})^n s^{n-2} \int \prod_{i=1}^n f(t + \sqrt{n}\,s\omega_i)\,\mu_n(d\omega), $$

where $\mu_n$ is the uniform distribution on the $(n-2)$-dimensional unit sphere perpendicular to $(1,1,\dots,1)$ in $\mathbb{R}^n$, i.e.,

$$ U_n := \Big\{ \omega \in \mathbb{R}^n : \sum_{i=1}^n \omega_i^2 = 1, \ \sum_{i=1}^n \omega_i = 0 \Big\}. $$

For completeness, a sketch of the proof of (2.3) is given in the Appendix. Finally, recall that for any $a \in \mathbb{R}$ and random variables $\xi_1, \dots, \xi_n$,

$$ \frac{1}{n}\sum_{i=1}^n (\xi_i - a)^2 = (\bar\xi - a)^2 + V_\xi^2, $$

where $\bar\xi$ is the sample mean of the $\xi_i$, and $V_\xi = \sqrt{n^{-1}\sum_{i=1}^n (\xi_i - \bar\xi)^2}$ is the biased sample standard deviation.

Proof of Theorem 1.1. Fix $d_n \ge 0$ such that $d_n \to d < \infty$. Given $r > 0$, for $n \gg 1$, $|d_n/n| \le \delta$, where $\delta = \delta(r) > 0$ is as in Lemma 2.1. It therefore suffices to consider the limit of

$$ L_n := \frac{ P\Big( \sigma_n^2 \ge \inf_{|z-z_0|\le r} \frac{1}{n}\sum_{i=1}^n \big(\frac{X_{i,n}}{z} - 1\big)^2 \Big) }{ P\Big( \sigma_n^2 \ge \inf_{|z-z_0|\le r} \frac{1}{n}\sum_{i=1}^n \big(\frac{X_i}{z} - 1\big)^2 \Big) }, $$

where $X_{i,n} := X_i + d_n/n$ has density $f(x - d_n/n)$. Let

$$ \Gamma_n = \Big\{ (t,s) \in (-\infty,\infty) \times [0,\infty) : \sigma_n^2 \ge \inf_{|z-z_0|\le r} \frac{(t-z)^2 + s^2}{z^2} \Big\}. $$

Then for any random variables $\xi_1, \dots, \xi_n$,

$$ \Big\{ \sigma_n^2 \ge \inf_{|z-z_0|\le r} \frac{1}{n}\sum_{i=1}^n \Big(\frac{\xi_i}{z} - 1\Big)^2 \Big\} = \big\{ (\bar\xi, V_\xi) \in \Gamma_n \big\}. $$

Apply the above formula to $X_{i,n}$ and $X_i$ respectively. By (2.1) and (2.3),

$$ L_n = \frac{ \displaystyle\int_{(t,s,\omega)\in\Gamma_n\times U_n} s^{n-2} \prod_{i=1}^n f(t - d_n/n + \sqrt{n}\,s\omega_i)\,\mu_n(d\omega)\,dt\,ds }{ \displaystyle\int_{(t,s,\omega)\in\Gamma_n\times U_n} s^{n-2} \prod_{i=1}^n f(t + \sqrt{n}\,s\omega_i)\,\mu_n(d\omega)\,dt\,ds } = \int_{(t,s,\omega)\in\Gamma_n\times U_n} \rho(t,s,\omega)\,\nu(dt,ds,d\omega), $$

where $\nu(dt,ds,d\omega)$ is the probability measure on $\Gamma_n \times U_n$ proportional to $s^{n-2}\prod_{i=1}^n f(t+\sqrt{n}\,s\omega_i)\,\mu_n(d\omega)\,dt\,ds$, and

$$ \rho(t,s,\omega) = \frac{\prod_{i=1}^n f(t - d_n/n + \sqrt{n}\,s\omega_i)}{\prod_{i=1}^n f(t + \sqrt{n}\,s\omega_i)}. $$

For each $(t,s,\omega) \in \Gamma_n \times U_n$, by Taylor expansion,

$$ \rho(t,s,\omega) = \exp\Big\{ \sum_{i=1}^n \big[ h(t + \sqrt{n}\,s\omega_i - d_n/n) - h(t + \sqrt{n}\,s\omega_i) \big] \Big\} = \exp\Big\{ -\frac{d_n}{n}\sum_{i=1}^n h'(t + \sqrt{n}\,s\omega_i) + e_n \Big\}, $$

where $\sup_{(t,s,\omega)} |e_n| = O(d_n^2/n) = O(1/n)$ due to $\sup_x |h''(x)| < \infty$. By Taylor expansion and $\omega_1 + \cdots + \omega_n = 0$,

$$ \frac{1}{n}\sum_{i=1}^n h'(t + \sqrt{n}\,s\omega_i) = h'(t) + \frac{1}{2n}\sum_{i=1}^n h'''(t + \theta_i\sqrt{n}\,s\omega_i)(\sqrt{n}\,s\omega_i)^2 $$

for some $\theta_i \in (0,1)$. Because the $\omega_i^2$ add up to 1 and $(t,s) \in \Gamma_n$,

$$ \Big| \frac{1}{n}\sum_{i=1}^n h'(t + \sqrt{n}\,s\omega_i) - h'(t) \Big| \le \sup_x |h'''(x)|\,s^2 \le A\sigma_n^2, $$

where $A = (z_0+r)^2 \sup_x |h'''(x)| < \infty$. For $(t,s) \in \Gamma_n$, as $|t - z| \le \sigma_n z$ for some $z \in [z_0 - r, z_0 + r]$, $|t - z_0| \le r + \sigma_n(z_0+r) < 2r$ for $n \gg 1$. Combining the above bounds,

$$ e^{-d_n\Delta(2r) - A\sigma_n^2 - \sup|e_n|} \le \rho(t,s,\omega)\,e^{d_n h'(z_0)} \le e^{d_n\Delta(2r) + A\sigma_n^2 + \sup|e_n|}, $$

with $\Delta(c) = \sup_{|z-z_0|\le c} |h'(z) - h'(z_0)|$. Since $r$ is arbitrary and $h'$ is continuous, from the expression of $L_n$ and $d_n \to d$, it is seen that $L_n \sim e^{-dh'(z_0)}$ as $n \to \infty$. Finally, since $z_0$ maximizes $\log z + h(z)$, $h'(z_0) = -1/z_0$. So $L_n \sim e^{d/z_0}$. □

Proof of Corollary 1.1. First, it is necessary to show that as $u \to 0^+$, $k_*(u) \to \infty$. To this end, it suffices to show that, when $n$ and $c > 0$ are fixed,

$$ \ell(u) := \frac{P(\bar X + u \ge cV)}{P(\bar X \ge cV)} \to 1, \quad \text{as } u \to 0^+, $$

where $\bar X$ and $V$ are defined in terms of $X_1, \dots, X_n$. The limit follows from a corollary to Fatou's lemma, which states that if $l_n(x) \le f_n(x) \le u_n(x)$, $l_n(x) \to l(x)$, $f_n(x) \to f(x)$ and $u_n(x) \to u(x)$ pointwise as $n \to \infty$, and $\int l_n \to \int l$ and $\int u_n \to \int u$, then $\int f_n \to \int f$. Specifically, let

$$ A(r) = \Big\{ (x_1,\dots,x_n) : r^2 \ge \inf_{z>0} \frac{1}{n}\sum_{i=1}^n \Big(\frac{x_i}{z} - 1\Big)^2 \Big\}, \quad \text{for } r > 0. $$

Then by (2.1), there is $\sigma \in (0,1)$, such that

$$ P(\bar X + u \ge cV) = \int 1\{x \in A(\sigma)\}\prod_{i=1}^n f(x_i - u)\,dx_1\cdots dx_n, \qquad P(\bar X \ge cV) = \int 1\{x \in A(\sigma)\}\prod_{i=1}^n f(x_i)\,dx_1\cdots dx_n. $$

Apparently, $0 \le 1\{x \in A(\sigma)\}\prod_{i=1}^n f(x_i - u) \le \prod_{i=1}^n f(x_i - u)$, with the right hand side having the same integral as $\prod_{i=1}^n f(x_i)$. Since $f(x-u) \to f(x)$ pointwise as $u \to 0$, the above corollary to Fatou's lemma implies $P(\bar X + u \ge cV) \to P(\bar X \ge cV) > 0$. Then $\ell(u) \to 1$.

Next, we show that $uk_*(u)$ stays bounded as $u \to 0^+$. Suppose instead that there is a sequence $u_i$ such that $u_i k_*(u_i) \to \infty$. Clearly, $n_i := k_*(u_i) \to \infty$. Then, given any $M$, $u_i n_i \ge M$ for $i \gg 1$ and hence by Theorem 1.1,

$$ \frac{P(\bar X + u_i \ge a_{n_i}V)}{P(\bar X \ge a_{n_i}V)} \ge \frac{P(\bar X + M/n_i \ge a_{n_i}V)}{P(\bar X \ge a_{n_i}V)} \to e^{M/z_0} \gg (1/p-1)(1/\alpha-1), $$

which contradicts the definition of $k_*(u_i)$.

It only remains to show that $uk_*(u) \to d_0 := z_0\ln[(1/p-1)(1/\alpha-1)]$ as $u \to 0$. It suffices to show that for any sequence $u_i \to 0$ with convergent $u_i k_*(u_i)$, the limit of $u_i k_*(u_i)$ is $d_0$. Indeed, let the limit be $d$. Then, following the above argument, $e^{d/z_0} = (1/p-1)(1/\alpha-1)$, giving $d = d_0$. □
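The measure $\mu_n$ in (2.3) is straightforward to simulate, which gives an empirical check of the change of variables used in the proof above: for $\omega \in U_n$, the vector $x = t + \sqrt{n}\,s\omega$ has sample mean exactly $t$ and biased sample standard deviation exactly $s$. A minimal sketch (ours, not part of the paper; all values are illustrative):

    import numpy as np

    def sample_Un(n, rng):
        # Uniform draw from U_n = {w : sum w_i = 0, sum w_i^2 = 1}: project a standard
        # Gaussian vector onto the hyperplane {sum w_i = 0}, then normalize; rotational
        # symmetry within that subspace makes the result uniform on the sphere U_n.
        g = rng.standard_normal(n)
        g -= g.mean()
        return g / np.linalg.norm(g)

    rng = np.random.default_rng(0)
    n, t, s = 12, 0.7, 1.3
    w = sample_Un(n, rng)
    x = t + np.sqrt(n) * s * w                        # the change of variables behind (2.3)
    print(np.allclose([x.mean(), x.std()], [t, s]))   # True: sample mean t, biased sd s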
3. Proof of Lemma 2.1.

Lemma 3.1. Let $\sigma \in (0,1)$, $\eta > 0$ and $s > 0$. Then

$$ \Big\{ \inf_{s \le z \le (1+\eta\sigma)s} \frac{1}{n}\sum_{i=1}^n \Big(\frac{X_i}{z} - 1\Big)^2 \le \sigma^2 \Big\} \subset \Big\{ \frac{1}{n}\sum_{i=1}^n \Big(\frac{X_i}{s} - 1\Big)^2 \le (1+\eta\sigma)^2(1+\eta)^2\sigma^2 \Big\}. $$

Proof. Suppose $(1/n)\sum_{i=1}^n (X_i/z - 1)^2 \le \sigma^2$ for some $z \in [s, (1+\eta\sigma)s]$. Then $|\bar X/z - 1| \le \sigma$. By $0 \le 1 - s/z \le \eta\sigma$ and $z^2/s^2 \le (1+\eta\sigma)^2$,

$$ \frac{1}{n}\sum_{i=1}^n \Big(\frac{X_i}{s} - 1\Big)^2 = \frac{1}{ns^2}\sum_{i=1}^n (X_i - \bar X)^2 + \Big(\frac{\bar X}{s} - 1\Big)^2 = \frac{z^2}{s^2}\Big[ \frac{1}{nz^2}\sum_{i=1}^n (X_i - \bar X)^2 + \Big(\frac{\bar X}{z} - \frac{s}{z}\Big)^2 \Big] $$
$$ \le \frac{z^2}{s^2}\Big[ \frac{1}{n}\sum_{i=1}^n \Big(\frac{X_i}{z} - 1\Big)^2 + 2\Big|\frac{\bar X}{z} - 1\Big|\Big(1 - \frac{s}{z}\Big) + \Big(\frac{s}{z} - 1\Big)^2 \Big], $$

with the last expression no greater than $(1+\eta\sigma)^2(1+\eta)^2\sigma^2$. □

In what follows, let $X_1, X_2, \dots$ be iid random variables with density $f$.

Lemma 3.2. Suppose $\lim_{x\to\infty} x^{1+\gamma} f(x) < \infty$ for some $\gamma > 0$. Let $\sigma_n \to 0$ such that $\lim_n n\sigma_n > 0$. Then, given $T > 0$ and $\delta > 0$, there is $a = a(T,\delta) > 0$, such that for $n \gg 1$,

$$ \sup_{|d|\le\delta} \frac{1}{n}\log P\Big( \sigma_n^2 \ge \inf_{z\ge a} \frac{1}{n}\sum_{i=1}^n \Big(\frac{X_i+d}{z} - 1\Big)^2 \Big) \le \log\sigma_n - T. $$

Proof. We first show that there is $a = a(T) > 0$, such that

(3.1) $$ \frac{1}{n}\log P\Big( \sigma_n^2 \ge \inf_{z\ge a} \frac{1}{n}\sum_{i=1}^n \Big(\frac{X_i}{z} - 1\Big)^2 \Big) \le \log\sigma_n - T. $$

Fix $\eta \in (0,1)$ with $\eta > (1+\eta/8)^2(1+\eta/4)^2 - 1$. Let $\alpha_n = 1 + \eta\sigma_n/4$. For $n \gg 1$ with $\sigma_n < 1/2$, $\alpha_n < 1 + \eta/8$, so by Lemma 3.1, for any $a > 0$,

$$ P\Big( \sigma_n^2 \ge \inf_{z\ge a} \frac{1}{n}\sum_{i=1}^n \Big(\frac{X_i}{z} - 1\Big)^2 \Big) \le \sum_{j=0}^\infty P\Big( \sigma_n^2 \ge \inf_{a\alpha_n^j \le z \le a\alpha_n^{j+1}} \frac{1}{n}\sum_{i=1}^n \Big(\frac{X_i}{z} - 1\Big)^2 \Big) $$
(3.2) $$ \le \sum_{j=0}^\infty P\Big( (1+\eta)\sigma_n^2 \ge \frac{1}{n}\sum_{i=1}^n \Big(\frac{X_i}{a\alpha_n^j} - 1\Big)^2 \Big). $$

Let $s = (1+\eta)\sigma_n^2$. By Chernoff's inequality, for $z > 0$ and $t > 0$,

(3.3) $$ P\Big( s \ge \frac{1}{n}\sum_{i=1}^n \Big(\frac{X_i}{z} - 1\Big)^2 \Big) \le \Big[ z e^{ts} \int e^{-tu^2} f(z + zu)\,du \Big]^n. $$

Fix $A > \lim_{x\to\infty} x^{1+\gamma} f(x)$. Let $M(z) = (\gamma/2)\log z$ and $t = M(z)/s$. Then

$$ z e^{ts} \int e^{-tu^2} f(z+zu)\,du \le \frac{Ae^{M(z)}}{z^\gamma(1-\eta)^{1+\gamma}} \int_{-\eta}^{\eta} e^{-M(z)u^2/s}\,du + z e^{M(z)-M(z)\eta^2/s} \int_{|u|\ge\eta} f(z+zu)\,du $$
$$ \le \underbrace{\frac{Ae^{M(z)}}{z^\gamma(1-\eta)^{1+\gamma}} \sqrt{\frac{\pi s}{M(z)}}}_{I_1} + \underbrace{e^{M(z)-M(z)\eta^2/s}}_{I_2}. $$

Since $e^{M(z)} = z^{\gamma/2}$ and $\sqrt{s} = \sqrt{1+\eta}\,\sigma_n$, for $z \gg 1$, $I_1 \le Az^{-\gamma/2}\sigma_n/2$. On the other hand, for $z \gg 1$ and $\sigma_n \ll 1$, since $\eta^2/s = \eta^2/[(1+\eta)\sigma_n^2] \to \infty$,

$$ I_2 = z^{\gamma(1-\eta^2/s)/2} = z^{-\gamma/2} z^{-\gamma(\eta^2/s - 2)/2} \le Az^{-\gamma/2}\sigma_n/2, $$

so $I_1 + I_2 \le Az^{-\gamma/2}\sigma_n$. Then by (3.2) and (3.3),

$$ P\Big( \sigma_n^2 \ge \inf_{z\ge a} \frac{1}{n}\sum_{i=1}^n \Big(\frac{X_i}{z} - 1\Big)^2 \Big) \le \sum_{j=0}^\infty \big[ A(a\alpha_n^j)^{-\gamma/2}\sigma_n \big]^n = \frac{(Aa^{-\gamma/2}\sigma_n)^n}{1 - (1+\eta\sigma_n/4)^{-\gamma n/2}}. $$

Since $\sigma_n \to 0$ and $\lim_n n\sigma_n > 0$, there is $K > 0$ such that for all $n \gg 1$, $1 - (1+\eta\sigma_n/4)^{-\gamma n/2} \ge 1 - e^{-\eta\gamma n\sigma_n/9} > 1/K$. Thus

$$ \frac{1}{n}\log P\Big( \sigma_n^2 \ge \inf_{z\ge a} \frac{1}{n}\sum_{i=1}^n \Big(\frac{X_i}{z} - 1\Big)^2 \Big) \le \log\sigma_n + \log(Aa^{-\gamma/2}) + \frac{\log K}{n}. $$

Since $A$ and $K$ are fixed independently of $a$, by choosing $a = a(T)$ large enough, (3.1) is proved.

Finally, for $d \in [-\delta, \delta]$ and $z \ge a$,

$$ \Big(\frac{X_i+d}{z} - 1\Big)^2 = \frac{(z-d)^2}{z^2}\Big(\frac{X_i}{z-d} - 1\Big)^2 \ge \frac{(a-\delta)^2}{a^2}\Big(\frac{X_i}{z-d} - 1\Big)^2. $$

Therefore,

$$ \sup_{|d|\le\delta} \frac{1}{n}\log P\Big( \sigma_n^2 \ge \inf_{z\ge a} \frac{1}{n}\sum_{i=1}^n \Big(\frac{X_i+d}{z} - 1\Big)^2 \Big) \le \frac{1}{n}\log P\Big( \frac{a^2\sigma_n^2}{(a-\delta)^2} \ge \inf_{z\ge a-\delta} \frac{1}{n}\sum_{i=1}^n \Big(\frac{X_i}{z} - 1\Big)^2 \Big). $$

Then the lemma follows from (3.1). □
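The set inclusion of Lemma 3.1, which drives the chaining over the intervals $[a\alpha_n^j, a\alpha_n^{j+1}]$ in (3.2), can be spot-checked by simulation. A minimal sketch (ours, not from the paper; the $N(1,1)$ sample and all constants are arbitrary, and the grid minimum only over-estimates the infimum, so every flagged sample must land in the right-hand event):

    import numpy as np

    rng = np.random.default_rng(1)
    sigma, eta, s, n = 0.5, 0.5, 1.0, 5
    bound = (1 + eta*sigma)**2 * (1 + eta)**2 * sigma**2
    grid = np.linspace(s, (1 + eta*sigma) * s, 41)     # z-grid over [s, (1 + eta*sigma)s]
    X = rng.standard_normal((20_000, n)) + 1.0
    msq = ((X[:, None, :] / grid[None, :, None] - 1.0)**2).mean(axis=2)
    hit = msq.min(axis=1) <= sigma**2                  # approximate left-hand event of Lemma 3.1
    at_s = ((X[hit] / s - 1.0)**2).mean(axis=1)
    print(hit.sum(), bool((at_s <= bound).all()))      # every hit satisfies the right-hand event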
Lemma 3.3. Suppose $f$ is bounded. Let $\sigma_n \to 0$ such that $\lim_n n\sigma_n > 0$. Then, given $T > 0$, there is $b = b(T) > 0$, such that for $n \gg 1$,

$$ \sup_{d\in\mathbb{R}} \frac{1}{n}\log P\Big( \sigma_n^2 \ge \inf_{0 < z \le b} \frac{1}{n}\sum_{i=1}^n \Big(\frac{X_i+d}{z} - 1\Big)^2 \Big) \le \log\sigma_n - T. $$

Proof. Given $\eta > 0$ such that $\eta > (1+\eta/8)^2(1+\eta/4)^2 - 1$, by the same argument as for (3.2), for $b > 0$, $d \in \mathbb{R}$ and $n \gg 1$ with $\sigma_n < 1/2$, letting $\alpha_n = 1 + \eta\sigma_n/4$,

$$ P\Big( \sigma_n^2 \ge \inf_{z\le b} \frac{1}{n}\sum_{i=1}^n \Big(\frac{X_i+d}{z} - 1\Big)^2 \Big) \le \sum_{j=0}^\infty P\Big( \sigma_n^2 \ge \inf_{b\alpha_n^{-j-1} \le z \le b\alpha_n^{-j}} \frac{1}{n}\sum_{i=1}^n \Big(\frac{X_i+d}{z} - 1\Big)^2 \Big) $$
$$ \le \sum_{j=0}^\infty P\Big( (1+\eta)\sigma_n^2 \ge \frac{1}{n}\sum_{i=1}^n \Big(\frac{\alpha_n^j(X_i+d)}{b} - 1\Big)^2 \Big). $$

Denote $A = \sup f$. For any $s > 0$, $z > 0$ and $d \in \mathbb{R}$,

$$ \int e^{-(x/z-1)^2/s} f(x-d)\,dx \le A\int e^{-(x/z-1)^2/s}\,dx = Az\sqrt{\pi s}. $$