Empirical phi-divergence test-statistics for the equality of means of two populations

N. Balakrishnan(1), N. Martín(2) and L. Pardo(3)
(1) Department of Mathematics and Statistics, McMaster University, Hamilton, Canada
(2) Department of Statistics and O.R. II (Decision Methods), Complutense University of Madrid, 28003 Madrid, Spain
(3) Department of Statistics and O.R., Complutense University of Madrid, 28040 Madrid, Spain

January 15, 2016

Abstract

Empirical phi-divergence test-statistics have been demonstrated to be a useful technique, in the one-population setting, for improving the finite-sample behaviour of the classical likelihood ratio test-statistic under a simple null hypothesis, as well as for handling model misspecification problems. This paper introduces this methodology for two-sample problems. A simulation study illustrates situations in which the new test-statistics become a competitive tool with respect to the classical z-test and the likelihood ratio test-statistic.

AMS 2001 Subject Classification: 62F03, 62F25.

Keywords and phrases: Empirical likelihood, Empirical phi-divergence test statistics, Phi-divergence measures, Power function.

1 Introduction

The method of likelihood introduced by Fisher is certainly one of the most commonly used techniques for parametric models. The likelihood has also been shown to be very useful in non-parametric contexts. More concretely, Owen (1988, 1990, 1991) introduced the empirical likelihood ratio statistics for non-parametric problems. Two-sample problems are frequently encountered in many areas of statistics, and inference for them is generally performed under the assumption of normality. The most commonly used test in this connection is the two-sample t-test for the equality of means, performed under the assumption of equality of variances. If the variances are unknown, we have the so-called Behrens-Fisher problem. It is well known that the two-sample t-test has one major drawback: it is highly sensitive to deviations from the ideal conditions, and may perform miserably under model misspecification and in the presence of outliers. Recently, Basu et al. (2014) presented a new family of test statistics to overcome the problem of non-robustness of the t-statistic.

Empirical likelihood methods for two-sample problems have been studied by different researchers since Owen (1988) introduced the empirical likelihood as a non-parametric likelihood-based alternative approach to inference on the mean of a single population. The monograph of Owen (2001) is an excellent overview of developments on empirical likelihood and considers a multi-sample empirical likelihood theorem, which includes the two-sample problem as a special case. Some important contributions to the two-sample problem are given in Owen (1991), Adimari (1995), Jing (1995), Qin (1994, 1998), Qin and Zhao (2000), Zhang (2000), Liu et al. (2008), Baklizi and Kibria (2009), Wu and Yan (2012) and references therein.

Consider two independent unidimensional random variables, $X$ with unknown mean $\mu_1$ and variance $\sigma_1^2$, and $Y$ with unknown mean $\mu_2$ and variance $\sigma_2^2$. Let $X_1,\ldots,X_m$ be a random sample of size $m$ from the population denoted by $X$, with distribution function $F$, and $Y_1,\ldots,Y_n$ be a random sample of size $n$ from the population denoted by $Y$, with distribution function $G$. We shall assume that $F$ and $G$ are unknown; therefore we are interested in a non-parametric approach, more concretely we shall use empirical likelihood methods. If we denote $\mu_1=\mu$ and $\mu_2=\mu+\delta$, our interest will be in testing
\[
H_0: \delta = \delta_0 \quad \text{vs.} \quad H_1: \delta \neq \delta_0, \tag{1}
\]
being $\delta_0$ a known real number. Since $\delta = \mu_2 - \mu_1$ becomes the parameter of interest, apart from testing (1), we might also be interested in constructing a confidence interval for $\delta$.

In this paper we introduce a new family of empirical test statistics for the two-sample problem given in (1): the empirical phi-divergence test statistics. This family of test statistics is based on phi-divergence measures and contains the empirical log-likelihood ratio test statistic as a particular case. In this sense, the family of empirical phi-divergence test statistics presented and studied in this paper can be regarded as a generalization of the empirical log-likelihood ratio statistic.

Let $N = m+n$, assume that
\[
\frac{m}{N} \underset{m,n\rightarrow\infty}{\longrightarrow} \nu \in (0,1), \tag{2}
\]
and let $x_1,\ldots,x_m$, $y_1,\ldots,y_n$ be a realization of $X_1,\ldots,X_m$, $Y_1,\ldots,Y_n$. We denote
\[
\mathcal{L}(\delta) = \prod_{i=1}^{m}\prod_{j=1}^{n} p_i q_j \quad \text{s.t.} \quad \sum_{i=1}^{m} p_i (x_i-\mu)=0,\ \sum_{i=1}^{m} p_i=1,\ \sum_{j=1}^{n} q_j (y_j-\mu-\delta)=0,\ \sum_{j=1}^{n} q_j=1,
\]
and
\[
\mathcal{L}(p,q) = \prod_{i=1}^{m}\prod_{j=1}^{n} p_i q_j \quad \text{s.t.} \quad \sum_{i=1}^{m} p_i=\sum_{j=1}^{n} q_j=1,\ p_i,q_j\geq 0,
\]
with $p_i=p_i(\mu)=F(x_i)-F(x_i^-)$, $q_j=q_j(\mu,\delta)=G(y_j)-G(y_j^-)$ and $p=(p_1,\ldots,p_m)^T$, $q=(q_1,\ldots,q_n)^T$.

The empirical log-likelihood ratio statistic for testing (1) is given by
\[
\ell(\delta_0) = -2\log\frac{\sup_{p,q}\mathcal{L}(\delta_0)}{\sup_{p,q}\mathcal{L}(p,q)}. \tag{3}
\]
Using the standard Lagrange multiplier method we might obtain $\sup_{p,q}\mathcal{L}(\delta_0)$, as well as $\sup_{p,q}\mathcal{L}(p,q)$. For $\sup_{p,q}\mathcal{L}(\delta_0)$, taking derivatives on
\[
\mathcal{L}_1 = \sum_{i=1}^{m}\log p_i + \sum_{j=1}^{n}\log q_j + s_1\Big(1-\sum_{i=1}^{m}p_i\Big) + s_2\Big(1-\sum_{j=1}^{n}q_j\Big) - \lambda_1 m \sum_{i=1}^{m} p_i(x_i-\mu) - \lambda_2 n \sum_{j=1}^{n} q_j(y_j-\mu-\delta_0),
\]
we obtain
\[
\frac{\partial\mathcal{L}_1}{\partial p_i}=0 \Leftrightarrow p_i=\frac{1}{m}\,\frac{1}{1+\lambda_1(x_i-\mu)},\quad i=1,\ldots,m, \tag{4}
\]
\[
\frac{\partial\mathcal{L}_1}{\partial q_j}=0 \Leftrightarrow q_j=\frac{1}{n}\,\frac{1}{1+\lambda_2(y_j-\mu-\delta_0)},\quad j=1,\ldots,n, \tag{5}
\]
and
\[
\frac{\partial\mathcal{L}_1}{\partial\mu}=0 \Leftrightarrow m\lambda_1+n\lambda_2=0.
\]
Therefore, the empirical maximum likelihood estimates $\widetilde{\lambda}_1$, $\widetilde{\lambda}_2$ and $\widetilde{\mu}$ of $\lambda_1$, $\lambda_2$ and $\mu$, under $H_0$, are obtained as the solution of the equations
\[
\frac{1}{m}\sum_{i=1}^{m}\frac{1}{1+\lambda_1(x_i-\mu)}=1,\qquad \frac{1}{n}\sum_{j=1}^{n}\frac{1}{1+\lambda_2(y_j-\mu-\delta_0)}=1,\qquad m\lambda_1+n\lambda_2=0, \tag{6}
\]
and
\[
\log\sup_{p,q}\mathcal{L}(\delta_0) = -m\log m - \sum_{i=1}^{m}\log\big(1+\widetilde{\lambda}_1(x_i-\widetilde{\mu})\big) - n\log n - \sum_{j=1}^{n}\log\big(1+\widetilde{\lambda}_2(y_j-\widetilde{\mu}-\delta_0)\big). \tag{7}
\]
In relation to $\sup_{p,q}\mathcal{L}(p,q)$, taking derivatives on
\[
\mathcal{L}_2 = \sum_{i=1}^{m}\log p_i + \sum_{j=1}^{n}\log q_j + s_1\Big(1-\sum_{i=1}^{m}p_i\Big) + s_2\Big(1-\sum_{j=1}^{n}q_j\Big), \tag{8}
\]
we have
\[
\frac{\partial\mathcal{L}_2}{\partial p_i}=0 \Leftrightarrow p_i=\frac{1}{m},\ i=1,\ldots,m, \quad\text{and}\quad \frac{\partial\mathcal{L}_2}{\partial q_j}=0 \Leftrightarrow q_j=\frac{1}{n},\ j=1,\ldots,n, \tag{9}
\]
and
\[
\log\sup_{p,q}\mathcal{L}(p,q) = -m\log m - n\log n. \tag{10}
\]
Therefore, the empirical log-likelihood ratio statistic (3), for testing (1), can be written as
\[
\ell(\delta_0) = -2\Big(-m\log m - \sum_{i=1}^{m}\log\big(1+\widetilde{\lambda}_1(X_i-\widetilde{\mu})\big) - n\log n - \sum_{j=1}^{n}\log\big(1+\widetilde{\lambda}_2(Y_j-\widetilde{\mu}-\delta_0)\big) + m\log m + n\log n\Big)
\]
\[
= 2\Big\{\sum_{i=1}^{m}\log\big(1+\widetilde{\lambda}_1(X_i-\widetilde{\mu})\big) + \sum_{j=1}^{n}\log\big(1+\widetilde{\lambda}_2(Y_j-\widetilde{\mu}-\delta_0)\big)\Big\}. \tag{11}
\]
Under some regularity conditions, Jing (1995) established that
\[
\Pr\big(\ell(\delta_0) > \chi^2_{1,\alpha}\big) = \alpha + O(n^{-1}),
\]
where $\chi^2_{1,\alpha}$ is the $100(1-\alpha)$-th percentile of the $\chi^2_1$ distribution.
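To make the estimation step concrete, the following minimal numerical sketch (not the authors' code) solves for $(\widetilde{\lambda}_1,\widetilde{\mu})$, with $\widetilde{\lambda}_2=-m\widetilde{\lambda}_1/n$, and evaluates $\ell(\delta_0)$ as in (11). It assumes NumPy and SciPy are available; all function names are illustrative. Note that it solves the moment constraints defining $\mathcal{L}(\delta_0)$ rather than the first two equations of (6) verbatim, since the two systems are equivalent for $\lambda_1\neq 0$ but the moment form avoids the spurious root $\lambda_1=0$.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import chi2

def solve_h0(delta0, x, y):
    """Solve for (lam1, lam2, mu) under H0; cf. equations (4)-(6)."""
    m, n = len(x), len(y)

    def constraints(par):
        lam1, mu = par
        lam2 = -m * lam1 / n          # from m*lam1 + n*lam2 = 0 in (6)
        # Moment constraints sum p_i (x_i - mu) = 0 and
        # sum q_j (y_j - mu - delta0) = 0, with p_i, q_j as in (4)-(5);
        # for lam1 != 0 this is equivalent to the first two equations of (6).
        return [np.mean((x - mu) / (1 + lam1 * (x - mu))),
                np.mean((y - mu - delta0) / (1 + lam2 * (y - mu - delta0)))]

    mu0 = (x.sum() + y.sum() - n * delta0) / (m + n)   # pooled starting value
    lam1, mu = fsolve(constraints, [0.0, mu0])         # no feasibility safeguards
    return lam1, -m * lam1 / n, mu

def ell(delta0, x, y):
    """Empirical log-likelihood ratio statistic (11)."""
    lam1, lam2, mu = solve_h0(delta0, x, y)
    return 2 * (np.log1p(lam1 * (x - mu)).sum()
                + np.log1p(lam2 * (y - mu - delta0)).sum())

rng = np.random.default_rng(0)
x = rng.normal(1.0, np.sqrt(1.5), 30)   # scenario i) of Section 3
y = rng.normal(1.0, np.sqrt(1.5), 30)
print(ell(0.0, x, y) > chi2.ppf(0.95, df=1))   # reject H0: delta = 0 at 5%?
```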
Our interest in this paper is to study the problem of testing given in (1) and, at the same time, to construct confidence intervals for $\delta$ on the basis of the empirical phi-divergence test statistics. Empirical phi-divergence test statistics in the context of the empirical likelihood have been studied by Baggerly (1998), Broniatowski and Keziou (2012), Balakrishnan et al. (2013), Felipe et al. (2015) and references therein. The family of empirical phi-divergence test statistics considered in this paper contains the classical empirical log-likelihood ratio statistic as a particular case. In Section 2, the empirical phi-divergence test statistics are introduced and the corresponding asymptotic distributions are obtained. A simulation study is carried out in Section 3. Section 4 is devoted to developing a numerical example. In Section 5 the previous results, devoted to univariate populations, are extended to $k$-dimensional populations.

2 Empirical phi-divergence test statistics

For the hypothesis testing problem considered in (1), in this section the family of empirical phi-divergence test statistics is introduced as a natural extension of the empirical log-likelihood ratio statistic given in (3). We consider the $N$-dimensional probability vectors
\[
U = \big(\tfrac{1}{N},\overset{N}{\ldots},\tfrac{1}{N}\big)^T \tag{12}
\]
and
\[
P = \big(p_1\nu,\ldots,p_m\nu,\,q_1(1-\nu),\ldots,q_n(1-\nu)\big)^T, \tag{13}
\]
where $p_i$, $i=1,\ldots,m$, and $q_j$, $j=1,\ldots,n$, were defined in (4) and (5), respectively, and $\nu$ in (2). Let $\widetilde{P}$ be the $N$-dimensional vector obtained from $P$ with $p_i$, $q_j$ replaced by the corresponding empirical maximum likelihood estimators $\widetilde{p}_i$, $\widetilde{q}_j$, and $\nu$ by $\frac{m}{N}$. The Kullback-Leibler divergence between the probability vectors $U$ and $\widetilde{P}$ is given by
\[
D_{\mathrm{Kullback}}(U,\widetilde{P}) = \sum_{i=1}^{m}\frac{1}{N}\log\frac{\frac{1}{N}}{\widetilde{p}_i\frac{m}{N}} + \sum_{j=1}^{n}\frac{1}{N}\log\frac{\frac{1}{N}}{\widetilde{q}_j\big(1-\frac{m}{N}\big)}
= -\frac{1}{N}\Big\{\sum_{i=1}^{m}\log(m\widetilde{p}_i) + \sum_{j=1}^{n}\log(n\widetilde{q}_j)\Big\}
\]
\[
= \frac{1}{N}\Big\{\sum_{i=1}^{m}\log\big(1+\widetilde{\lambda}_1(x_i-\widetilde{\mu})\big) + \sum_{j=1}^{n}\log\big(1+\widetilde{\lambda}_2(y_j-\widetilde{\mu}-\delta_0)\big)\Big\},
\]
where
\[
\widetilde{p}_i = \frac{1}{m}\,\frac{1}{1+\widetilde{\lambda}_1(x_i-\widetilde{\mu})},\quad i=1,\ldots,m, \tag{14}
\]
\[
\widetilde{q}_j = \frac{1}{n}\,\frac{1}{1+\widetilde{\lambda}_2(y_j-\widetilde{\mu}-\delta_0)},\quad j=1,\ldots,n. \tag{15}
\]
Therefore, the relationship between $\ell(\delta_0)$ and $D_{\mathrm{Kullback}}(U,\widetilde{P})$ is
\[
\ell(\delta_0) = 2N\,D_{\mathrm{Kullback}}(U,\widetilde{P}). \tag{16}
\]
Based on (16), in this paper the empirical phi-divergence test statistics for (1) are introduced for the first time. This family of empirical phi-divergence test statistics is obtained by replacing the Kullback-Leibler divergence by a phi-divergence measure in (16), i.e.,
\[
T_\phi(\delta_0) = \frac{2N}{\phi''(1)}\,D_\phi(U,\widetilde{P}), \tag{17}
\]
where
\[
D_\phi(U,\widetilde{P}) = \sum_{i=1}^{m}\widetilde{p}_i\frac{m}{N}\,\phi\bigg(\frac{\frac{1}{N}}{\widetilde{p}_i\frac{m}{N}}\bigg) + \sum_{j=1}^{n}\Big(1-\frac{m}{N}\Big)\widetilde{q}_j\,\phi\bigg(\frac{\frac{1}{N}}{\big(1-\frac{m}{N}\big)\widetilde{q}_j}\bigg),
\]
with $\phi:\mathbb{R}_+\longrightarrow\mathbb{R}$ being any convex function such that at $x=1$, $\phi(1)=0$, $\phi''(1)>0$, and at $x=0$, $0\phi(0/0)=0$ and $0\phi(p/0)=p\lim_{u\rightarrow\infty}\phi(u)/u$. For more details see Cressie and Pardo (2002) and Pardo (2006). Therefore, (17) can be rewritten as
\[
T_\phi(\delta_0) = \frac{2}{\phi''(1)}\Big\{\sum_{i=1}^{m} m\widetilde{p}_i\,\phi\Big(\frac{1}{m\widetilde{p}_i}\Big) + \sum_{j=1}^{n} n\widetilde{q}_j\,\phi\Big(\frac{1}{n\widetilde{q}_j}\Big)\Big\} \tag{18}
\]
\[
= \frac{2}{\phi''(1)}\Bigg\{\sum_{i=1}^{m}\frac{\phi\big(1+\widetilde{\lambda}_1(x_i-\widetilde{\mu})\big)}{1+\widetilde{\lambda}_1(x_i-\widetilde{\mu})} + \sum_{j=1}^{n}\frac{\phi\big(1+\widetilde{\lambda}_2(y_j-\widetilde{\mu}-\delta_0)\big)}{1+\widetilde{\lambda}_2(y_j-\widetilde{\mu}-\delta_0)}\Bigg\}.
\]
If $\phi(x)=x\log x - x + 1$ is chosen in $D_\phi(U,\widetilde{P})$, we get the Kullback-Leibler divergence and $T_\phi(\delta_0)$ coincides with the empirical log-likelihood ratio statistic $\ell(\delta_0)$ given in (16).
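As an illustration of (17)-(18), the sketch below evaluates $T_\phi(\delta_0)$ for a user-supplied convex $\phi$. It reuses the hypothetical `solve_h0` helper from the sketch in Section 1 (and inherits its assumptions); it is not the authors' implementation.

```python
import numpy as np

def t_phi(delta0, x, y, phi, phi_dd1):
    """T_phi(delta0) of (18); phi_dd1 is phi''(1). Reuses solve_h0 above."""
    lam1, lam2, mu = solve_h0(delta0, x, y)
    ux = 1 + lam1 * (x - mu)             # ux_i = 1/(m p_i), by (14)
    uy = 1 + lam2 * (y - mu - delta0)    # uy_j = 1/(n q_j), by (15)
    return (2 / phi_dd1) * (np.sum(phi(ux) / ux) + np.sum(phi(uy) / uy))

kl = lambda t: t * np.log(t) - t + 1     # phi(x) = x log x - x + 1, phi''(1) = 1
# t_phi(delta0, x, y, kl, 1.0) reproduces ell(delta0).
```

The last comment follows because $\sum_i 1/u_i = m\sum_i\widetilde{p}_i = m$ and $\sum_j 1/u_j = n$ at the solution of (6), so for the Kullback-Leibler choice the extra terms cancel and only $2\{\sum_i\log u_i + \sum_j\log u_j\}$ of (11) remains.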
Let $\widehat{\mu}(\sigma_1^2,\sigma_2^2)$ be the optimal estimator of $\mu$ under the assumption of known values of $\sigma_1^2$, $\sigma_2^2$, i.e. the estimator of the form $\pi\bar{X}+(1-\pi)(\bar{Y}-\delta_0)$ with minimum variance. It is well known that
\[
\widehat{\mu}(\sigma_1^2,\sigma_2^2) = \frac{\frac{m}{\sigma_1^2}\bar{X} + \frac{n}{\sigma_2^2}\big(\bar{Y}-\delta_0\big)}{\frac{m}{\sigma_1^2}+\frac{n}{\sigma_2^2}}. \tag{19}
\]
Similarly, an asymptotically optimal estimator of $\mu$ for unknown values of $\sigma_1^2$, $\sigma_2^2$ is given by
\[
\widehat{\mu}(S_1^2,S_2^2) = \frac{\frac{m}{S_1^2}\bar{X} + \frac{n}{S_2^2}\big(\bar{Y}-\delta_0\big)}{\frac{m}{S_1^2}+\frac{n}{S_2^2}},
\]
where $S_1^2=\frac{1}{m-1}\sum_{i=1}^{m}(X_i-\bar{X})^2$ and $S_2^2=\frac{1}{n-1}\sum_{j=1}^{n}(Y_j-\bar{Y})^2$ are consistent estimators of $\sigma_1^2$, $\sigma_2^2$, respectively.

In the following lemma an important relationship is established, which is useful for obtaining the asymptotic distribution of $T_\phi(\delta_0)$.

Lemma 1 Let $\widetilde{\mu}$ be the empirical likelihood estimator of $\mu$. Then, we have
\[
\widetilde{\mu} = \widehat{\mu}(\sigma_1^2,\sigma_2^2) + O_p(1) = \widehat{\mu}(S_1^2,S_2^2) + O_p(1).
\]
Proof. See Appendix 5.3.

Theorem 2 Suppose that $\sigma_1^2<\infty$, $\sigma_2^2<\infty$ and (2) holds. Then,
\[
T_\phi(\delta_0) \underset{n,m\rightarrow\infty}{\overset{L}{\longrightarrow}} \chi^2_1.
\]
Proof. See Appendix 5.4.

Remark 3 A $(1-\alpha)$-level confidence interval on $\delta$ can be constructed as
\[
CI_{1-\alpha}(\delta) = \big\{\delta : T_\phi(\delta)\leq\chi^2_{1,\alpha}\big\}.
\]
The lower and upper bounds of the interval $CI_{1-\alpha}(\delta)$ require a bisection search algorithm. This is a computationally challenging task, because for every selected grid point on $\delta$, one needs to maximize the empirical phi-divergence $T_\phi(\delta)$ over the nuisance parameter $\mu$, and there is no closed-form solution for the maximum point $\widehat{\mu}$ for any given $\delta$. The computational difficulties under the standard two-sample empirical likelihood formulation are due to the fact that the involved Lagrange multipliers, which are determined through the set of equations (6), have to be computed based on two separate samples with an added nuisance parameter $\mu$. Such difficulties can be avoided through an alternative formulation of the empirical likelihood function, for which computation procedures are virtually identical to those for one-sample empirical likelihood problems of size $N=m+n$. Through the transformations
\[
v_i = \Big(1-\omega_1,\ \frac{x_i}{\omega_1}-\delta\Big)^T,\quad i=1,\ldots,m,
\]
\[
w_j = \Big(-\omega_1,\ \frac{y_j}{\omega_2}-\delta\Big)^T,\quad j=1,\ldots,n,
\]
\[
\omega_1 = \omega_2 = \tfrac{1}{2},
\]
(14) and (15) can be alternatively obtained as
\[
\widetilde{p}_i = \frac{1}{m}\,\frac{1}{1+\widetilde{\lambda}_*^T v_i},\quad i=1,\ldots,m, \tag{20}
\]
\[
\widetilde{q}_j = \frac{1}{n}\,\frac{1}{1+\widetilde{\lambda}_*^T w_j},\quad j=1,\ldots,n, \tag{21}
\]
where the estimates of the Lagrange multipliers $\widetilde{\lambda}_* = (\widetilde{\lambda}_{1,*},\widetilde{\lambda}_{2,*})^T$ are the solution in $\lambda_*$ of
\[
\sum_{i=1}^{m}\frac{v_i}{1+\lambda_*^T v_i} + \sum_{j=1}^{n}\frac{w_j}{1+\lambda_*^T w_j} = 0_2.
\]

Remark 4 In the particular case that $m=n$, the two samples might be understood as a random sample of size $n$ from a unique bidimensional population. In this setting the two-sample problem can be considered to be a particular case of Balakrishnan et al. (2015).
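Before moving on, here is a rough sketch of the search described in Remark 3 for the bounds of $CI_{1-\alpha}(\delta)$. It reuses the hypothetical `t_phi` and `kl` helpers sketched after (18); the bracketing strategy and step size are our illustrative choices, not the authors', and the sketch assumes the point estimate of $\delta$ lies inside the interval.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def ci_bounds(x, y, phi, phi_dd1, alpha=0.05):
    """Bounds of CI_{1-alpha}(delta) of Remark 3 via root bracketing."""
    crit = chi2.ppf(1 - alpha, df=1)
    g = lambda d: t_phi(d, x, y, phi, phi_dd1) - crit   # negative inside the CI
    center = y.mean() - x.mean()          # point estimate of delta = mu2 - mu1
    step = np.sqrt(x.var() / len(x) + y.var() / len(y))
    lo, hi = center - step, center + step
    while g(lo) < 0:                      # expand until T exceeds the critical value
        lo -= step
    while g(hi) < 0:
        hi += step
    # One root on each side of the center; a bisection-type root finder suffices.
    return brentq(g, lo, center), brentq(g, center, hi)

# Example usage: delta_L, delta_U = ci_bounds(x, y, kl, 1.0)
```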
Remark 5 Fu et al. (2009), Yan (2010) and Wu and Yan (2012) pointed out that the empirical log-likelihood ratio statistic $\ell(\delta_0)$, given in (11) for testing (1), does not perform well when the distributions associated with the samples are quite skewed, when the sample sizes are not large, or when the sample sizes from the two populations are quite different. To overcome this problem, Fu et al. (2009) considered the weighted empirical log-likelihood function defined by
\[
\ell_w(p,q) = \frac{\omega_1}{m}\sum_{i=1}^{m}\log p_i + \frac{\omega_2}{n}\sum_{j=1}^{n}\log q_j, \tag{22}
\]
with $\omega_1=\omega_2=\frac{1}{2}$, and obtained the weighted empirical likelihood (WEL) estimator as well as the weighted empirical log-likelihood ratio statistic. In order to get the WEL estimator, it is necessary to maximize (22) subject to
\[
\sum_{i=1}^{m}p_i = \sum_{j=1}^{n}q_j = 1, \tag{23}
\]
\[
\sum_{j=1}^{n}q_j y_j - \sum_{i=1}^{m}p_i x_i = \delta_0. \tag{24}
\]
They obtained that the WEL estimates of $p_i$ and $q_j$ are given by
\[
{}^{w}\widetilde{p}_i = \frac{1}{m}\,\frac{1}{1+{}^{w}\widetilde{\lambda}_*^T v_i},\quad i=1,\ldots,m,
\]
\[
{}^{w}\widetilde{q}_j = \frac{1}{n}\,\frac{1}{1+{}^{w}\widetilde{\lambda}_*^T w_j},\quad j=1,\ldots,n,
\]
where $v_i$ and $w_j$ are the same transformations given in Remark 3 with $\delta=\delta_0$, and the estimates of the Lagrange multipliers ${}^{w}\widetilde{\lambda}_* = ({}^{w}\widetilde{\lambda}_{1,*},{}^{w}\widetilde{\lambda}_{2,*})^T$ are the solution in ${}^{w}\lambda_*$ of
\[
\frac{\omega_1}{m}\sum_{i=1}^{m}\frac{v_i}{1+{}^{w}\lambda_*^T v_i} + \frac{\omega_2}{n}\sum_{j=1}^{n}\frac{w_j}{1+{}^{w}\lambda_*^T w_j} = 0_2.
\]
Now, if we define the probability vectors
\[
{}^{w}U = \Big(\frac{\omega_1}{m}(1,\overset{m}{\ldots},1),\ \frac{\omega_2}{n}(1,\overset{n}{\ldots},1)\Big)^T,
\]
\[
{}^{w}\widetilde{P} = \big(\omega_1\widetilde{p}^T,\omega_2\widetilde{q}^T\big)^T = \big(\omega_1(\widetilde{p}_1,\ldots,\widetilde{p}_m),\ \omega_2(\widetilde{q}_1,\ldots,\widetilde{q}_n)\big)^T,
\]
the weighted empirical log-likelihood ratio test $\ell_w(\delta_0)$ presented in Wu and Yan (2012) can be written as
\[
\ell_w(\delta_0) = 2\,D_{\mathrm{Kullback}}({}^{w}U,{}^{w}\widetilde{P}). \tag{25}
\]
The weighted empirical log-likelihood ratio test can be extended by defining the family of weighted empirical phi-divergence test statistics as
\[
S_\phi(\delta_0) = \frac{2\,D_\phi({}^{w}U,{}^{w}\widetilde{P})}{\phi''(1)},
\]
where $D_\phi({}^{w}U,{}^{w}\widetilde{P})$ is the phi-divergence measure between the probability vectors ${}^{w}U$ and ${}^{w}\widetilde{P}$, i.e.,
\[
D_\phi({}^{w}U,{}^{w}\widetilde{P}) = \omega_1\sum_{i=1}^{m}\widetilde{p}_i\,\phi\Big(\frac{1}{m\widetilde{p}_i}\Big) + \omega_2\sum_{j=1}^{n}\widetilde{q}_j\,\phi\Big(\frac{1}{n\widetilde{q}_j}\Big)
= \frac{\omega_1}{m}\sum_{i=1}^{m}\frac{\phi\big(1+{}^{w}\widetilde{\lambda}_*^T v_i\big)}{1+{}^{w}\widetilde{\lambda}_*^T v_i} + \frac{\omega_2}{n}\sum_{j=1}^{n}\frac{\phi\big(1+{}^{w}\widetilde{\lambda}_*^T w_j\big)}{1+{}^{w}\widetilde{\lambda}_*^T w_j}.
\]
Taking into account
\[
D_\phi({}^{w}U,{}^{w}\widetilde{P}) = {}^{w}\widetilde{\lambda}_*^T D\,{}^{w}\widetilde{\lambda}_* + o_p(N),
\]
where
\[
D = \frac{\omega_1}{m}\sum_{i=1}^{m}v_i v_i^T + \frac{\omega_2}{n}\sum_{j=1}^{n}w_j w_j^T,
\]
and based on Theorem 2.2 in Wu and Yan (2012), we have that
\[
\frac{S_\phi(\delta_0)}{c} \underset{n,m\rightarrow\infty}{\overset{L}{\longrightarrow}} \chi^2_1,
\]
where $c$ is the second diagonal element of the matrix $D^{-1}$.

3 Simulation Study

The square of the classical z-test statistic for two-sample problems,
\[
t(\delta_0) = \frac{\big(\bar{X}-\bar{Y}+\delta_0\big)^2}{\frac{1}{m}S_1^2+\frac{1}{n}S_2^2},
\]
has asymptotically a $\chi^2_1$ distribution, the same as the empirical phi-divergence test statistics, according to Theorem 2. In order to compare the finite-sample performance of the confidence interval (CI) of $\delta$ based on $T_\phi(\delta)$ with respect to the ones based on $t(\delta)$ as well as on the empirical log-likelihood ratio test-statistic $\ell(\delta)$ given in (3), we rely on a subfamily of phi-divergence measures, the so-called power divergence measures $\phi_\gamma(x)=\frac{x^{1+\gamma}-x-\gamma(x-1)}{\gamma(1+\gamma)}$, dependent on a tuning parameter $\gamma\in\mathbb{R}$, i.e.,
\[
T_\gamma(\delta) =
\begin{cases}
\dfrac{2}{\gamma(\gamma+1)}\Big[\displaystyle\sum_{i=1}^{m}(m\widetilde{p}_i)^{-\gamma}+\sum_{j=1}^{n}(n\widetilde{q}_j)^{-\gamma}-N\Big], & \gamma\in\mathbb{R}-\{0,-1\},\\[2ex]
-2\Big[m\log m+n\log n+\displaystyle\sum_{i=1}^{m}\log\widetilde{p}_i+\sum_{j=1}^{n}\log\widetilde{q}_j\Big], & \gamma=0,\\[2ex]
2\Big[m\log m+n\log n+m\displaystyle\sum_{i=1}^{m}\widetilde{p}_i\log\widetilde{p}_i+n\sum_{j=1}^{n}\widetilde{q}_j\log\widetilde{q}_j\Big], & \gamma=-1,
\end{cases}
\]
where $\widetilde{p}_i$ and $\widetilde{q}_j$ can be obtained from (20)-(21). We analyzed five new test-statistics, the empirical power-divergence test statistics with $\gamma\in\{-1,-0.5,\frac{2}{3},1,2\}$. The case of $\gamma=0$ is not new, since the empirical log-likelihood ratio test-statistic $\ell(\delta)$ is a member of the empirical power-divergence test statistics, i.e. $\ell(\delta)=T_{\gamma=0}(\delta)$. The CI of $\delta$ based on $t(\delta_0)$ with $100(1-\alpha)\%$ confidence level is essentially the CI of the z-test statistic,
\[
(\delta_L,\delta_U) = \Big(\bar{y}-\bar{x}-z_{\frac{\alpha}{2}}\sqrt{\tfrac{1}{m}s_1^2+\tfrac{1}{n}s_2^2},\ \ \bar{y}-\bar{x}+z_{\frac{\alpha}{2}}\sqrt{\tfrac{1}{m}s_1^2+\tfrac{1}{n}s_2^2}\Big).
\]
For $T\in\{T_\gamma(\delta)\}_{\gamma\in\Lambda}$, $\Lambda=\{-1,-0.5,0,\frac{2}{3},1,2\}$, as mentioned in Remark 3, since there is no explicit expression for $(\widetilde{\delta}_L,\widetilde{\delta}_U)$, the bisection method should be followed. The simulated coverage probabilities of the CI of $\delta$ based on $T\in\{t(\delta)\}\cup\{T_\gamma(\delta)\}_{\gamma\in\Lambda}$ were obtained with $R=15{,}000$ replications by
\[
100\times\frac{1}{R}\sum_{r=1}^{R} I\big(T^{(r)}\leq\chi^2_{1,\alpha}\big),
\]
with $I(\cdot)$ being the indicator function. The simulated expected widths of the CI of $\delta$ based on $T\in\{t(\delta)\}\cup\{T_\gamma(\delta)\}_{\gamma\in\Lambda}$ were obtained with $R=3{,}000$ replications by
\[
100\times\frac{1}{R}\sum_{r=1}^{R}\big(\widetilde{\delta}_U^{(r)}-\widetilde{\delta}_L^{(r)}\big).
\]
The reason why two different values of $R$ were used is twofold. On the one hand, calculating $\widetilde{\delta}_U^{(r)}-\widetilde{\delta}_L^{(r)}$ is much more time consuming than $I(T^{(r)}\leq\chi^2_{1,\alpha})$; on the other hand, for the designed simulation experiment, the number of replications needed to obtain good precision is smaller for the expected width than for the coverage probability.

The simulation experiment is designed in a similar manner as in Wu and Yan (2012). The true distributions, unknown in practice, are generated from:

i) $X\sim\mathcal{N}(\mu,\sigma_1^2)$, $Y\sim\mathcal{N}(\mu+\delta_0,\sigma_2^2)$, with $\mu=1$, $\sigma_1^2=\sigma_2^2=1.5$, $\delta_0=0$;

ii) $X\sim\mathrm{lognormal}(\vartheta_1,\theta_1)$, $Y\sim\mathrm{lognormal}(\vartheta_2,\theta_2)$, with $\vartheta_1=1.1$, $\theta_1=0.4$, $\vartheta_2=1.2$, $\theta_2=0.2$.

Notice that in case ii) $\delta_0=0$, since $E[X]=\exp(\vartheta_1+\tfrac{1}{2}\theta_1)=e^{1.3}=\exp(\vartheta_2+\tfrac{1}{2}\theta_2)=E[Y]$. Depending on the sample sizes, six scenarios were considered, $(m,n)\in\{(15,30),(30,15),(30,30),(30,60),(60,30),(60,60)\}$.
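As a sketch of how the simulated coverage probabilities could be reproduced, the code below implements the power-divergence functions $\phi_\gamma$ (for which $\phi_\gamma''(1)=1$ for every $\gamma$) and the coverage estimator $100\times R^{-1}\sum_r I(T^{(r)}\leq\chi^2_{1,\alpha})$ for case i). It reuses the hypothetical `t_phi` helper sketched in Section 2; the settings shown are illustrative, not a replication of the authors' experiment.

```python
import numpy as np
from scipy.stats import chi2

def phi_gamma(gamma):
    """Power-divergence phi_gamma; the gamma = 0 and gamma = -1 limits are explicit."""
    if gamma == 0:
        return lambda t: t * np.log(t) - t + 1    # Kullback-Leibler: T_0 = ell
    if gamma == -1:
        return lambda t: -np.log(t) + t - 1       # modified Kullback-Leibler
    return lambda t: (t ** (1 + gamma) - t - gamma * (t - 1)) / (gamma * (1 + gamma))

def coverage(gamma, m=30, n=30, reps=15000, alpha=0.05, seed=1):
    """Simulated coverage of the T_gamma-based CI under case i) with delta0 = 0."""
    rng = np.random.default_rng(seed)
    crit, phi, hits = chi2.ppf(1 - alpha, df=1), phi_gamma(gamma), 0
    for _ in range(reps):
        x = rng.normal(1.0, np.sqrt(1.5), m)      # X ~ N(1, 1.5)
        y = rng.normal(1.0, np.sqrt(1.5), n)      # Y ~ N(1 + delta0, 1.5)
        hits += t_phi(0.0, x, y, phi, 1.0) <= crit
    return 100.0 * hits / reps
```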
Table 1 summarizes the results of the described simulation experiment with $\alpha=0.05$. In all the cases and scenarios the narrowest width is obtained with $T_{\gamma=-1}(\delta)$, but the coverage probability closest to 95% depends on the case and scenario. For the case of the lognormal distribution the CI based on the $t(\delta)$ test-statistic has the coverage probability closest to 95%, but for the case of the normal distribution the power-divergence based $T_{\gamma=2/3}(\delta)$ and $T_{\gamma=1}(\delta)$ tend to have the coverage probability closest to 95%.

In order to complement this study, the power functions have been drawn through $R=15{,}000$ replications, taking $\delta$ as abscissa. For case i) the power functions exhibit a symmetric shape with respect to the center and also a parallel shape, in such a way that the test statistics with a better approximation of the size have worse power. For case ii), fixing the values of the two parameters of $X$ and changing the two parameters of $Y$ as
\[
\vartheta_2' = k\vartheta_2,\qquad \theta_2' = k\theta_2,\qquad k=\frac{\log\big\{\delta+\exp\big(\vartheta_1+\tfrac{1}{2}\theta_1\big)\big\}}{\vartheta_1+\tfrac{1}{2}\theta_1},
\]
$\delta$ is displaced from $\delta_0=0$ to the right when $k>1$ and from $\delta_0=0$ to the left when $0<k<1$ (with $\delta>-\exp\{\vartheta_1+\tfrac{1}{2}\theta_1\}$). Unlike case i), the power function of case ii) exhibits a different shape on each side of the center of the abscissa, and the most prominent differences are on the left hand side. Clearly, in case ii), even though the approximated size for $t(\delta_0)$ is the best one, it has the worst approximated power function; in particular, there is a region of the approximated power function on the left hand side of $\delta_0=0$ with smaller values than the approximated size. Hence, in case ii) the power functions of $T\in\{T_\gamma(\delta)\}_{\gamma\in\Lambda}$ are more acceptable than the power function of $t(\delta_0)$. Taking into account the strong and weak points of $t(\delta_0)$ in case ii), $T_{\gamma=2/3}(\delta_0)$ could be a good choice for moderate sample sizes and $\ell(\delta_0)=T_{\gamma=0}(\delta_0)$ for small sample sizes.
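For completeness, a small illustrative sketch of the parameter shift used for the case ii) power curves; it assumes, as in the reading above, that the rescaling by $k$ acts on the parameters of $Y$, and the function name is ours.

```python
import numpy as np

def shifted_y_params(delta, v1=1.1, t1=0.4, v2=1.2, t2=0.2):
    """Lognormal parameters of Y under the alternative, case ii) power curves."""
    a = v1 + 0.5 * t1                   # = 1.3 = log E[X] = log E[Y] under H0
    k = np.log(delta + np.exp(a)) / a   # defined only for delta > -exp(a)
    return k * v2, k * t2               # (vartheta2', theta2') scaled by k

# Then E[Y'] = exp(k * (v2 + t2/2)) = delta + exp(1.3), so E[Y'] - E[X] = delta.
```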
 m   n   CI                case i): normal populations    case ii): lognormal populations
                           coverage       width           coverage       width
 15  30  T_{γ=-1}(δ)         92.1          1.56             90.0          2.60
 15  30  T_{γ=-0.5}(δ)       92.6          1.60             90.6          2.66
 15  30  T_{γ=0}(δ)          93.0          1.64             91.0          2.77
 15  30  T_{γ=2/3}(δ)        93.2          1.65             90.8          2.82
 15  30  T_{γ=1}(δ)          93.2          1.65             90.6          2.87
 15  30  T_{γ=2}(δ)          92.6          1.63             89.2          2.86
 15  30  t(δ)                93.5          1.66             92.3          2.76
 30  15  T_{γ=-1}(δ)         92.8          1.40             92.2          2.35
 30  15  T_{γ=-0.5}(δ)       93.3          1.43             92.5          2.46
 30  15  T_{γ=0}(δ)          93.6          1.45             92.7          2.52
 30  15  T_{γ=2/3}(δ)        93.9          1.46             92.5          2.60
 30  15  T_{γ=1}(δ)          93.9          1.47             92.3          2.62
 30  15  T_{γ=2}(δ)          93.6          1.46             91.2          2.68
 30  15  t(δ)                93.9          1.46             94.6          2.48
 30  30  T_{γ=-1}(δ)         93.8          1.24             92.4          2.10
 30  30  T_{γ=-0.5}(δ)       94.3          1.29             92.7          2.14
 30  30  T_{γ=0}(δ)          94.5          1.28             93.0          2.24
 30  30  T_{γ=2/3}(δ)        94.8          1.30             93.0          2.31
 30  30  T_{γ=1}(δ)          94.9          1.29             92.9          2.33
 30  30  T_{γ=2}(δ)          94.7          1.29             92.1          2.39
 30  30  t(δ)                94.7          1.28             94.1          2.16
 30  60  T_{γ=-1}(δ)         93.4          1.14             92.0          1.92
 30  60  T_{γ=-0.5}(δ)       93.8          1.16             92.5          1.97
 30  60  T_{γ=0}(δ)          94.1          1.18             92.8          2.03
 30  60  T_{γ=2/3}(δ)        94.3          1.20             92.8          2.11
 30  60  T_{γ=1}(δ)          94.3          1.20             92.8          2.12
 30  60  T_{γ=2}(δ)          94.1          1.19             91.9          2.18
 30  60  t(δ)                94.2          1.18             93.5          1.98
 60  30  T_{γ=-1}(δ)         94.4          1.01             92.9          1.71
 60  30  T_{γ=-0.5}(δ)       94.6          1.03             93.3          1.78
 60  30  T_{γ=0}(δ)          94.8          1.04             93.5          1.83
 60  30  T_{γ=2/3}(δ)        95.0          1.04             93.6          1.87
 60  30  T_{γ=1}(δ)          95.0          1.05             93.5          1.91
 60  30  T_{γ=2}(δ)          94.9          1.05             93.0          1.95
 60  30  t(δ)                94.8          1.04             94.5          1.77
 60  60  T_{γ=-1}(δ)         94.3          0.89             93.6          1.51
 60  60  T_{γ=-0.5}(δ)       94.5          0.90             93.9          1.55
 60  60  T_{γ=0}(δ)          94.7          0.91             94.1          1.59
 60  60  T_{γ=2/3}(δ)        94.9          0.92             94.2          1.65
 60  60  T_{γ=1}(δ)          94.9          0.92             94.2          1.66
 60  60  T_{γ=2}(δ)          94.9          0.92             93.6          1.70
 60  60  t(δ)                94.8          0.91             94.6          1.54

Table 1: Simulated coverage probability and expected width of 0.95-level CIs of δ for two populations.