On Testing More IFRA Ordering-II

Muhyiddin Izadi^1 and Baha-Eldin Khaledi
Department of Statistics, Razi University, Kermanshah, Iran

Chin-Diew Lai
Institute of Fundamental Sciences - Statistics and Bioinformatics, Massey University, Palmerston North, New Zealand

^1 Corresponding author

Abstract

Suppose F and G are two life distribution functions. F is said to be more IFRA than G (written $F \leq_{*} G$) if $G^{-1}F(x)$ is starshaped on $(0,\infty)$. In this paper, the problem of testing $H_0: F =_{*} G$ against $H_1: F \leq_{*} G$ and $F \neq_{*} G$ is considered both when G is known and when G is unknown. We propose a new test based on U-statistics and obtain the asymptotic distribution of the test statistics. The new test is compared with some well-known tests in the literature. In addition, we apply our test to a real data set in the context of reliability.

Keywords: Asymptotic normality, star order, increasing failure rate average, Pitman's asymptotic efficiency, U-statistic.

1 Introduction

Let $X$ be the lifetime of an appliance with density function $f$, distribution function $F$ and survival function $\bar F$, and let $F^{-1}$ denote the right continuous inverse function of $F$. $X$ is said to be IFRA (increasing failure rate average) if
$$\tilde r_F(x) = \frac{1}{x}\int_0^x r_F(t)\,dt$$
is nondecreasing in $x \geq 0$, which is equivalent to $-\frac{\log \bar F(x)}{x}$ being nondecreasing in $x \geq 0$, where $r_F(x) = \frac{f(x)}{\bar F(x)}$. It is of considerable interest to producers and users of appliances to evaluate the severity of the average failure risk at a particular point of time and to see whether $\tilde r_F(x)$ is increasing or decreasing in time. That is, it is of practical importance to characterize the aging class of the underlying random lifetimes. In particular, since the IFRA class is one of the most important aging classes, testing that the distribution $F$ has a constant hazard rate against the hypothesis that $F$ is IFRA has been studied extensively in the literature; see, for example, Deshpande (1983), Kochar (1985), Link (1989), Ahmad (2000) and El-Bassiouny (2003), among others. In fact, $F$ is IFRA if and only if $\frac{E_\lambda^{-1}F(x)}{x}$ is nondecreasing in $x \geq 0$, or equivalently $\frac{\tilde r_F(F^{-1}(u))}{\tilde r_{E_\lambda}(E_\lambda^{-1}(u))}$ is nondecreasing in $u \in (0,1)$, where $E_\lambda$ is an exponential distribution with mean $\lambda$. This implies that $F$ ages faster than $E_\lambda$, i.e., $F$ is more IFRA than $E_\lambda$.

In order to evaluate the performance of an appliance, we need to compare its aging behavior with distributions other than the exponential, such as the Weibull, gamma, linear failure rate, or even an unknown distribution $G$. The notion of the star order, which establishes an equivalence class of distributions, is one of the useful tools for this comparison. Let $Y$ be another non-negative random variable with distribution function $G$. We say that $X$ is less than $Y$ with respect to the star order (written $X \leq_{*} Y$ or $F \leq_{*} G$) if $G^{-1}F(x)$ is starshaped on $[0,\infty)$; that is, $\frac{G^{-1}F(x)}{x}$ is nondecreasing in $x \geq 0$. It is known that
$$F \leq_{*} G \iff \frac{\tilde r_F(F^{-1}(u))}{\tilde r_G(G^{-1}(u))} \ \text{is nondecreasing in } u \in (0,1), \qquad (1.1)$$
where $\tilde r_F$ and $\tilde r_G$ are the failure rate average functions of $F$ and $G$, respectively. Using (1.1), the relation $X \leq_{*} Y$ is interpreted as $X$ aging faster than $Y$, and it is said that $X$ is more IFRA than $Y$ (cf. Kochar and Xu, 2011). It is obvious that if $F \leq_{*} G$ and $G \leq_{*} F$, then $F(x) = G(ax)$ for all $x \geq 0$ and some $a > 0$. In this case, we say $F =_{*} G$.
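As a quick numerical illustration of the starshapedness criterion, the sketch below checks whether $G^{-1}F(x)/x$ is nondecreasing on a grid when $F$ is a Weibull distribution with shape parameter 2 and $G$ is a unit exponential; the grid, the two example distributions and the helper names are illustrative choices only.

    import math

    # Illustrative check of the star-order criterion F <=_* G, i.e. that
    # G^{-1}(F(x))/x is nondecreasing in x > 0.  Here F is a Weibull distribution
    # with shape 2 (an IFRA distribution) and G is a unit exponential.

    def F(x):
        """Weibull(shape=2, scale=1) distribution function."""
        return 1.0 - math.exp(-x ** 2)

    def G_inv(u):
        """Quantile function of the unit exponential, G^{-1}(u) = -log(1 - u)."""
        return -math.log(1.0 - u)

    xs = [0.05 * k for k in range(1, 101)]           # grid on (0, 5]
    ratios = [G_inv(F(x)) / x for x in xs]           # here G^{-1}(F(x)) = x^2, so the ratio is x
    nondecreasing = all(b >= a - 1e-12 for a, b in zip(ratios, ratios[1:]))
    print("G^{-1}F(x)/x nondecreasing on the grid:", nondecreasing)   # expected: True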
Izadi and Khaledi (2012) considered the problem of testing the null hypothesis $H_0: F =_{*} G$ against $H_1: F \leq_{*} G$ and $F \neq_{*} G$. They proposed a test based on kernel density estimation. In this paper, we study this testing problem further, in the one-sample as well as the two-sample setting, and propose a new simple test based on a U-statistic. In both cases, we compare the new proposed test with some well-known tests in the literature. It is found that our test is comparable to the others. To establish the new test we need the following lemma.

Lemma 1.1 Let $X_1, X_2$ ($Y_1, Y_2$) be two independent copies of the random variable $X$ ($Y$) with distribution function $F$ ($G$) and let $\mu_F^{(2)} = E[\max\{X_1,X_2\}]$ ($\mu_G^{(2)} = E[\max\{Y_1,Y_2\}]$), where $E[\cdot]$ is the expectation operator. If $F$ is more IFRA than $G$, then
$$\frac{\mu_F^{(2)}}{\mu_F} \leq \frac{\mu_G^{(2)}}{\mu_G},$$
where $\mu_F$ ($\mu_G$) is the expectation of $F$ ($G$).

Proof: We know that the more IFRA order is scale invariant. Thus, $X \leq_{*} Y$ implies $\frac{\mu_Y}{\mu_X}X \leq_{*} Y$. Now, the required result follows from Theorem 7.6 of Barlow and Proschan (1981, page 122).

Remark 1.1 The above lemma has been proved by Xie and Lai (1996) under the condition that $F$ is more IFR than $G$ (for the definition, see Shaked and Shanthikumar, 2007, p. 214), which is stronger than the more IFRA order.

Now, let $\delta_F = \frac{\mu_F^{(2)}}{\mu_F}$, $\delta_G = \frac{\mu_G^{(2)}}{\mu_G}$ and
$$\delta(F,G) = \delta_F - \delta_G. \qquad (1.2)$$
It is obvious that if $F =_{*} G$, then $\delta(F,G) = 0$, and if $F \leq_{*} G$ and $F \neq_{*} G$, then it follows from Lemma 1.1 that $\delta(F,G) < 0$. That is, $\delta(F,G)$ can be considered as a measure of departure from $H_0: F =_{*} G$ in favor of $H_1: F \leq_{*} G$ and $F \neq_{*} G$. So, our test statistic is based on an estimate of $\delta(F,G)$.

The organization of this paper is as follows. In Section 2, we propose the new test for the case when $G$ is known. The case when $G$ is unknown is studied in Section 3. In Section 4, the performance of our test is evaluated and compared.

2 The One-Sample Problem

Let $G_0$ be a known distribution function and $X_1,\ldots,X_n$ be a random sample from an unknown distribution $F$. Using the measure (1.2), the test statistic
$$\hat\delta(F,G_0) = \hat\delta_F - \delta_{G_0}$$
is used for testing
$$H_0: F =_{*} G_0$$
against
$$H_1: F \leq_{*} G_0 \ \text{and} \ F \neq_{*} G_0,$$
where
$$\hat\delta_F = \frac{\sum_{i\neq j}\max\{X_i,X_j\}}{n(n-1)\bar X} \qquad (2.3)$$
and $\bar X$ is the mean of the random sample. In the next theorem, we obtain the asymptotic distribution of $\hat\delta(F,G_0)$ by using the standard theory of U-statistics.

Theorem 2.1 Suppose $E\left[\max\{X_1,X_2\} - \frac{\delta_F}{2}(X_1+X_2)\right]^2 < \infty$. As $n \to \infty$, $n^{1/2}[\hat\delta(F,G_0) - \delta(F,G_0)]$ is asymptotically normal with mean 0 and variance
$$\sigma_F^2 = \frac{4}{\mu_F^2}\times \mathrm{Var}\left(XF(X) + \int_X^{\infty} t\,dF(t) - \frac{\delta_F X}{2}\right). \qquad (2.4)$$
Under $H_0$, $n^{1/2}\hat\delta(F,G_0)$ is asymptotically normal with mean 0 and variance $\sigma_0^2 = \sigma_{G_0}^2$.

Proof: First note that
$$\hat\delta_F - \delta_F = \frac{\sum_{i\neq j}\left[\max\{X_i,X_j\} - \frac{\delta_F}{2}(X_i+X_j)\right]}{n(n-1)\bar X} = \frac{\sum_{i\neq j}\phi(X_i,X_j)}{n(n-1)\bar X},$$
where
$$\phi(X_i,X_j) = \max\{X_i,X_j\} - \frac{\delta_F}{2}(X_i+X_j).$$
Define
$$T_n^{*} = \frac{\sum_{i\neq j}\phi(X_i,X_j)}{n(n-1)}.$$
By the standard theory of U-statistics, if $E[\phi^2(X_1,X_2)] < \infty$, then as $n \to \infty$,
$$\frac{\sqrt{n}\,T_n^{*}}{\sigma_{*}} \stackrel{d}{\to} N(0,1),$$
where $\sigma_{*}^2 = 4 \times \mathrm{Var}(\phi_1(X))$ and $\phi_1(x) = E[\phi(x,X)]$. Now, by the strong law of large numbers, $\bar X \stackrel{a.s.}{\to} \mu_F$, and hence, by Slutsky's theorem, $\sqrt{n}[\hat\delta(F,G_0) - \delta(F,G_0)]$ is asymptotically normal with mean 0 and variance $\sigma_F^2 = \frac{\sigma_{*}^2}{\mu_F^2}$. Under $H_0$, $\delta(F,G_0) = 0$ and $\sigma_0^2 = \sigma_{G_0}^2$. □

Small values of $\hat\delta(F,G_0)$ are significant for testing $H_0$ against $H_1$. Thus, we reject $H_0$ at level $\alpha$ if $n^{1/2}\hat\delta(F,G_0)/\sigma_{G_0} < z_\alpha$, where $z_\alpha$ is the $\alpha$th quantile of the standard normal distribution.
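A minimal sketch of this decision rule is given below; it computes $\hat\delta_F$ directly from (2.3) and standardizes it with user-supplied values of $\delta_{G_0}$ and $\sigma_{G_0}$. The function names and the default level are illustrative choices, not part of the paper.

    import math
    from statistics import NormalDist

    def delta_hat(x):
        """Estimate of delta_F from (2.3): sum of max{X_i, X_j} over ordered pairs i != j."""
        n = len(x)
        xbar = sum(x) / n
        pair_sum = sum(max(xi, xj)
                       for i, xi in enumerate(x)
                       for j, xj in enumerate(x) if i != j)
        return pair_sum / (n * (n - 1) * xbar)

    def one_sample_test(x, delta_g0, sigma_g0, alpha=0.05):
        """Reject H0: F =_* G0 when sqrt(n) * (delta_hat_F - delta_G0) / sigma_G0 < z_alpha."""
        n = len(x)
        stat = math.sqrt(n) * (delta_hat(x) - delta_g0) / sigma_g0
        z_alpha = NormalDist().inv_cdf(alpha)        # lower alpha-quantile of N(0, 1)
        return stat, stat < z_alpha

    # Example: exponential G0, for which delta_G0 = 3/2 and sigma_G0^2 = 1/12 (derived below):
    # stat, reject = one_sample_test(sample, 1.5, math.sqrt(1 / 12), alpha=0.05)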
In the case $G_0(x) = E_\lambda(x) = 1 - \exp(-\lambda x)$, $x \geq 0$, $\lambda > 0$, the problem is that of testing the null hypothesis $H_0$: $F$ is an exponential distribution against the alternative hypothesis $H_1$: $F$ is IFRA and not exponential. It can be shown that $\delta_{E_\lambda} = \frac{3}{2}$ and $\sigma_{E_\lambda}^2 = \frac{1}{12}$. By the above theorem, under $H_0$, $\sqrt{n}(\hat\delta_F - 3/2)$ is asymptotically normal with mean 0 and variance $\frac{1}{12}$. Thus, we reject $H_0$ in favor of $H_1$ if $\sqrt{12n}(\hat\delta_F - 3/2) < z_\alpha$.

In the following, we find the exact distribution of $\hat\delta_F$ under the hypothesis that $F$ is an exponential distribution. First, note that we can rewrite $\hat\delta_F$ as
$$\hat\delta_F = \frac{2\sum_{i=1}^{n}(i-1)X_{(i)}}{n(n-1)\bar X} = \frac{\sum_{i=1}^{n} c_{i:n} D_i}{\sum_{i=1}^{n} D_i}, \qquad (2.5)$$
where $X_{(i)}$ is the $i$th order statistic of the $X_i$'s,
$$D_i = (n-i+1)(X_{(i)} - X_{(i-1)}), \qquad c_{i:n} = \frac{2\sum_{j=i}^{n}(j-1)}{(n-1)(n-i+1)},$$
and $X_{(0)} = 0$. Now, by the same arguments as in Langenberg and Srinivasan (1979), we obtain the following result.

Theorem 2.2 Let $F$ be an exponential distribution. Then
$$P\{\hat\delta_F \leq x\} = 1 - \sum_{i=1}^{n}\prod_{j=1,\, j\neq i}^{n}\frac{c_{i:n} - x}{c_{i:n} - c_{j:n}}\, I(x < c_{i:n}), \qquad (2.6)$$
where $I(\cdot)$ is the usual indicator function.

Using Theorem 2.2, we tabulate the critical points of $\sqrt{12n}(\hat\delta_F - 3/2)$ under exponentiality for small sample sizes ($n \leq 40$) in Table 1. So, for small sample sizes, we reject exponentiality in favor of IFRA-ness if $\sqrt{12n}(\hat\delta_F - 3/2)$ is smaller than the critical point in Table 1 corresponding to the chosen level of significance.

Table 1: Critical values of $\sqrt{12n}(\hat\delta_F - 3/2)$ for small sample sizes

  n      α: Lower Tail                          α: Upper Tail
         0.01        0.05        0.1            0.1         0.05        0.01
  2   -2.400500   -2.204541   -1.959592      1.959592    2.204541    2.400500
  3   -2.575752   -2.051328   -1.658360      1.658360    2.051328    2.575752
  4   -2.560006   -1.918143   -1.516280      1.516280    1.918143    2.560006
  5   -2.517587   -1.846175   -1.458997      1.458997    1.846175    2.517587
  6   -2.482569   -1.807959   -1.424446      1.424446    1.807959    2.482569
  7   -2.458901   -1.781575   -1.400767      1.400767    1.781575    2.458901
  8   -2.441786   -1.762473   -1.383974      1.383974    1.762473    2.441786
  9   -2.428500   -1.748106   -1.371312      1.371312    1.748106    2.428500
 10   -2.417939   -1.736865   -1.361442      1.361442    1.736865    2.417939
 11   -2.409356   -1.727862   -1.353531      1.353531    1.727862    2.409356
 12   -2.402239   -1.720450   -1.347047      1.347047    1.720450    2.402239
 13   -2.396243   -1.714193   -1.341635      1.341635    1.714193    2.396243
 14   -2.391124   -1.708937   -1.337050      1.337050    1.708937    2.391124
 15   -2.386703   -1.704422   -1.333116      1.333116    1.704422    2.386703
 16   -2.382846   -1.700502   -1.329703      1.329703    1.700502    2.382846
 17   -2.379451   -1.697066   -1.326714      1.326714    1.697066    2.379451
 18   -2.376441   -1.694029   -1.324074      1.324074    1.694029    2.376441
 19   -2.373754   -1.691327   -1.321727      1.321727    1.691327    2.373754
 20   -2.371340   -1.688906   -1.319625      1.319625    1.688906    2.371340
 21   -2.369160   -1.686725   -1.317732      1.317732    1.686725    2.369160
 22   -2.367182   -1.684749   -1.316018      1.316018    1.684749    2.367182
 23   -2.365378   -1.682952   -1.314460      1.314460    1.682952    2.365378
 24   -2.363726   -1.681309   -1.313036      1.313036    1.681309    2.363726
 25   -2.362209   -1.679803   -1.311730      1.311730    1.679803    2.362209
 26   -2.360810   -1.678415   -1.310529      1.310529    1.678415    2.360810
 27   -2.359516   -1.677134   -1.309419      1.309419    1.677134    2.359516
 28   -2.358316   -1.675947   -1.308392      1.308392    1.675947    2.358316
 29   -2.357199   -1.674844   -1.307437      1.307437    1.674844    2.357199
 30   -2.356158   -1.673817   -1.306548      1.306548    1.673817    2.356158
 31   -2.355185   -1.672857   -1.305718      1.305718    1.672857    2.355185
 32   -2.354273   -1.671960   -1.304942      1.304942    1.671960    2.354273
 33   -2.353418   -1.671118   -1.304214      1.304214    1.671118    2.353417
 34   -2.352613   -1.670326   -1.303529      1.303529    1.670326    2.352612
 35   -2.351855   -1.669581   -1.302885      1.302885    1.669581    2.351854
 36   -2.351140   -1.668878   -1.302278      1.302278    1.668878    2.351138
 37   -2.350454   -1.668214   -1.301704      1.301704    1.668214    2.350461
 38   -2.349821   -1.667586   -1.301161      1.301161    1.667586    2.349821
 39   -2.349184   -1.666990   -1.300647      1.300647    1.666991    2.349213
 40   -2.348688   -1.666426   -1.300159      1.300159    1.666426    2.348636
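In principle, the entries of Table 1 can be reproduced by numerically inverting the exact distribution (2.6). The sketch below does this by bisection for a given $n$ and $\alpha$; the bisection approach, the tolerance and the function names are illustrative assumptions rather than the authors' computation, and the direct evaluation of (2.6) is only intended for small $n$, where the product is numerically well behaved.

    import math

    def c_coeffs(n):
        """Coefficients c_{i:n} = 2 * sum_{j=i}^{n} (j - 1) / ((n - 1)(n - i + 1)), i = 1..n."""
        return [2 * sum(j - 1 for j in range(i, n + 1)) / ((n - 1) * (n - i + 1))
                for i in range(1, n + 1)]

    def cdf_delta_hat(x, n):
        """P(delta_hat_F <= x) under exponentiality, formula (2.6)."""
        c = c_coeffs(n)
        total = 0.0
        for i in range(n):
            if x < c[i]:                              # indicator I(x < c_{i:n})
                prod = 1.0
                for j in range(n):
                    if j != i:
                        prod *= (c[i] - x) / (c[i] - c[j])
                total += prod
        return 1.0 - total

    def lower_critical_point(n, alpha, tol=1e-10):
        """Solve P(delta_hat_F <= x) = alpha by bisection and return sqrt(12n)(x - 3/2)."""
        lo, hi = min(c_coeffs(n)), max(c_coeffs(n))   # delta_hat_F ranges between these values
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if cdf_delta_hat(mid, n) < alpha:
                lo = mid
            else:
                hi = mid
        return math.sqrt(12 * n) * (0.5 * (lo + hi) - 1.5)

    print(lower_critical_point(10, 0.05))             # compare with the n = 10, alpha = 0.05 entry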
El-Bassiouny (2003) considered the problem of testing exponentiality against IFRA-ness in the alternative and proposed a class of tests. His test is based on the test statistic
$$\hat\Delta_{r+1} = \frac{\sum_{i<j}\left(2\min\{X_i^{r+1},X_j^{r+1}\} - \frac{X_i^{r+1}+X_j^{r+1}}{2}\right)}{n(n-1)\bar X^{r+1}},$$
and large values of $\hat\Delta_{r+1}$ are significant for the considered testing problem. If $r = 0$,
$$\hat\Delta_1 = \frac{\sum_{i<j}\left(2\min\{X_i,X_j\} - \frac{X_i+X_j}{2}\right)}{n(n-1)\bar X} = \frac{2\sum_{i<j}\min\{X_i,X_j\}}{n(n-1)\bar X} - \frac{1}{2}. \qquad (2.7)$$
On the other hand, using the fact that $\delta_{E_\lambda} = \frac{3}{2}$,
$$\hat\delta(F,E_\lambda) = \frac{\sum_{i\neq j}\max\{X_i,X_j\}}{n(n-1)\bar X} - \delta_{E_\lambda}
= \frac{2\sum_{i<j}\left[(X_i+X_j) - \min\{X_i,X_j\}\right]}{n(n-1)\bar X} - \frac{3}{2}
= \frac{2\left[(n-1)\sum_{i=1}^{n}X_i - \sum_{i<j}\min\{X_i,X_j\}\right]}{n(n-1)\bar X} - \frac{3}{2}
= \frac{1}{2} - \frac{2\sum_{i<j}\min\{X_i,X_j\}}{n(n-1)\bar X}
= -\hat\Delta_1.$$
That is, for the case when $r = 0$ and $G_0$ is an exponential distribution, the proposed test is equivalent to that of El-Bassiouny (2003).

It is worth mentioning that our test is consistent; that is, if $\beta_n(F)$ is the power of our test, then under the alternative hypothesis $\lim_{n\to\infty}\beta_n(F) = 1$, which follows from Theorem 2.1 and Problem 2.3.16 in Lehmann (1999).

3 The Two-Sample Problem

In this section, we consider the two-sample problem, where $G$ is unknown. Let $X_1,X_2,\ldots,X_n$ and $Y_1,Y_2,\ldots,Y_m$ be two independent random samples from unknown distribution functions $F$ and $G$, respectively, and let $N = n + m$. Assume that $\hat\delta_F$ is as in (2.3) and $\hat\delta_G$ is defined similarly in terms of $Y_1,\ldots,Y_m$ and $\bar Y$. The test statistic
$$\hat\delta(F,G) = \hat\delta_F - \hat\delta_G,$$
which is the estimate of the measure in (1.2), is used for testing the null hypothesis
$$H_0: F =_{*} G \qquad (3.8)$$
against the alternative hypothesis
$$H_1: F \leq_{*} G \ \text{and} \ F \neq_{*} G. \qquad (3.9)$$
Small values of $\hat\delta(F,G)$ are significant for testing $H_0$ against $H_1$. In the following theorem, we obtain the asymptotic distribution of $\hat\delta(F,G)$.

Theorem 3.1 If $E\left[\max\{X_1,X_2\} - \frac{\delta_F}{2}(X_1+X_2)\right]^2$ and $E\left[\max\{Y_1,Y_2\} - \frac{\delta_G}{2}(Y_1+Y_2)\right]^2$ are finite and $n, m \to \infty$ such that $\frac{n}{N} \to c$, $c \in (0,\frac{1}{2}]$, then $\sqrt{N}(\hat\delta(F,G) - \delta(F,G))$ is asymptotically normal with mean 0 and variance
$$\sigma_{F,G}^2 = \frac{N}{n}\sigma_F^2 + \frac{N}{m}\sigma_G^2,$$
where $\sigma_F^2$ is given in (2.4) and $\sigma_G^2$ is defined similarly in terms of $Y$.

Proof: It is easy to see that
$$\frac{\sqrt{N}(\hat\delta(F,G) - \delta(F,G))}{\sigma_{F,G}}
= \frac{\sqrt{m}\,\sigma_F}{\sqrt{m\sigma_F^2 + n\sigma_G^2}}\left[\sqrt{n}(\hat\delta_F - \delta_F)/\sigma_F\right]
- \frac{\sqrt{n}\,\sigma_G}{\sqrt{m\sigma_F^2 + n\sigma_G^2}}\left[\sqrt{m}(\hat\delta_G - \delta_G)/\sigma_G\right].$$
From the result of Theorem 2.1, as both $n$ and $m \to \infty$, we have
$$\sqrt{n}(\hat\delta_F - \delta_F)/\sigma_F \stackrel{d}{\to} N(0,1) \quad \text{and} \quad \sqrt{m}(\hat\delta_G - \delta_G)/\sigma_G \stackrel{d}{\to} N(0,1).$$
Since $\hat\delta_F$ and $\hat\delta_G$ are independent, the required result follows from the fact that convergence in distribution is closed under the convolution of independent sequences of random variables (cf. Theorem 6.6 of Gut (2009), page 169).

In practice, $\sigma_{F,G}^2$ is unknown, but it can be estimated by the consistent estimator
$$\hat\sigma_{F,G}^2 = \frac{N}{n}\hat\sigma_F^2 + \frac{N}{m}\hat\sigma_G^2, \qquad (3.10)$$
where
$$\hat\sigma_F^2 = 4\times\frac{\sum_{i=1}^{n}\hat\phi^2(X_i)}{n\bar X^2}, \qquad
\hat\phi(X_i) = \frac{1}{n-1}\sum_{j=1,\, j\neq i}^{n}\left[\max\{X_i,X_j\} - \frac{\hat\delta_F}{2}(X_i+X_j)\right].$$
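A minimal sketch of the resulting two-sample procedure is given below; it assumes that $\hat\sigma_G^2$ is obtained from the $Y$ sample in exactly the same way as $\hat\sigma_F^2$ above, and the function names, the default level and the use of the standard normal quantile (as in Section 2) are illustrative.

    import math
    from statistics import NormalDist

    def delta_hat(x):
        """Estimate of delta_F as in (2.3)."""
        n = len(x)
        xbar = sum(x) / n
        return sum(max(a, b)
                   for i, a in enumerate(x)
                   for j, b in enumerate(x) if i != j) / (n * (n - 1) * xbar)

    def sigma2_hat(x):
        """Estimate of sigma_F^2 built from the quantities phi_hat(X_i) defined above."""
        n = len(x)
        xbar = sum(x) / n
        d = delta_hat(x)
        phi = [sum(max(xi, xj) - 0.5 * d * (xi + xj)
                   for j, xj in enumerate(x) if j != i) / (n - 1)
               for i, xi in enumerate(x)]
        return 4.0 * sum(p * p for p in phi) / (n * xbar ** 2)

    def two_sample_test(x, y, alpha=0.05):
        """Reject H0: F =_* G when sqrt(N) * (delta_hat_F - delta_hat_G) / sigma_hat_{F,G} < z_alpha."""
        n, m = len(x), len(y)
        N = n + m
        sigma2_fg = (N / n) * sigma2_hat(x) + (N / m) * sigma2_hat(y)   # estimator (3.10)
        stat = math.sqrt(N) * (delta_hat(x) - delta_hat(y)) / math.sqrt(sigma2_fg)
        return stat, stat < NormalDist().inv_cdf(alpha)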