Testing convexity of a discrete distribution

Fadoua Balabdaoui¹, Cécile Durot²* and François Koladjo³

¹ CEREMADE, Université Paris-Dauphine, 75775, Paris, France
² Modal'x, Université Paris Nanterre, F-92001, Nanterre, France
³ ENSPD, Université Parakou, BP 55 Tchaourou, Bénin

arXiv:1701.04367v1 [math.ST] 16 Jan 2017

* Corresponding author. Email address: [email protected]

Abstract

Based on the convex least-squares estimator, we propose two different procedures for testing convexity of a probability mass function supported on N with an unknown finite support. The procedures are shown to be asymptotically calibrated.

1 The testing problem

Estimating a probability mass function (pmf) under a shape constraint has attracted attention in recent years; see Jankowski and Wellner (2009), Durot et al. (2013), Balabdaoui et al. (2013) and Giguelay (2016). From a more applied point of view, Durot et al. (2015) developed a nonparametric method for estimating the number of species under the assumption that the abundance distribution is convex. As the method applies only if the convexity assumption is fulfilled, it is sensible to test whether the assumption holds before implementing it. This motivates the present paper, where a method for testing convexity of a pmf on N is developed.

We consider i.i.d. observations X_1, ..., X_n from an unknown pmf p_0 on N. Assuming that p_0 has a finite support {0, ..., S} with an unknown integer S > 0, we aim at testing the null hypothesis H_0: "p_0 is convex on N" versus the alternative H_1: "p_0 is not convex." With p_n the empirical pmf (defined by p_n(j) = n^{-1} Σ_{i=1}^{n} 1{X_i = j} for all j ∈ N), a natural procedure rejects H_0 if p_n is too far from C_1, the set of all convex probability mass functions on N. Hence, with ||q||² = Σ_{j∈N} (q(j))² for a sequence q = {q(j), j ∈ N}, we reject H_0 if inf_{p∈C_1} ||p_n − p|| is too large. It is proved in Durot et al. (2013, Sections 2.1 to 2.3) that the minimizer of ||p_n − p||² over p ∈ C_1 exists, is unique, and can be computed with an appropriate algorithm, so our critical region takes the form {T_n > t_{α,n}}, where T_n = √n ||p_n − p̂_n||, p̂_n is the minimizer, and t_{α,n} is an appropriate quantile. The main difficulty now is to find t_{α,n} in such a way that the corresponding test has asymptotic level α.

We consider below two different constructions of t_{α,n}. First, we define t_{α,n} as the (1−α)-quantile of a random variable whose limiting distribution coincides with the limiting distribution of T_n under H_0. Next, we calibrate the test under the least favorable hypothesis. Both methods require knowledge of the limiting distribution of T_n under H_0. To this end, we need notation. For all p = {p(j), j ∈ N} and k ∈ N \ {0} we set Δp(k) = p(k+1) − 2p(k) + p(k−1) (hence p is convex on N iff Δp(k) ≥ 0 for all k), and a given k ∈ N \ {0} is called a knot of p_0 if Δp_0(k) > 0. Furthermore, we denote by g_0 a centered Gaussian vector of dimension S+2 whose dispersion matrix Γ_0 has component (i+1, j+1) equal to 1{i=j} p_0(i) − p_0(i) p_0(j) for all i, j = 0, ..., S+1, and by ĝ_0 the minimizer of Σ_{k=0}^{S+1} (g(k) − g_0(k))² over the set K_0 of all functions g = (g(0), ..., g(S+1)) ∈ R^{S+2} such that Δg(k) ≥ 0 for all k ∈ {1, ..., S}, with possible exceptions at the knots of p_0. Existence, uniqueness and characterization of ĝ_0 are given in Balabdaoui et al. (2014, Theorem 3.1). The asymptotic distribution of T_n under H_0 is given below.

Theorem 1.1. Under H_0, T_n →_d T_0 as n → ∞, where T_0 = (Σ_{k=0}^{S+1} (ĝ_0(k) − g_0(k))²)^{1/2}.

2 Calibrating by estimating the limiting distribution

In order to approximate the distribution of T_n under H_0, we construct a random variable that weakly converges to T_0 (see Theorem 1.1) and which can be approximated via Monte-Carlo simulations. To this end, let S_n = max{X_1, ..., X_n}.
Also, let g_n be a random vector which, conditionally on (X_1, ..., X_n), is distributed as a centered Gaussian vector of dimension S_n + 2 with dispersion matrix Γ_n, the matrix with component (i+1, j+1) equal to 1{i=j} p_n(i) − p_n(i) p_n(j) for all i, j = 0, ..., S_n + 1. Now, let ĝ_n be the minimizer of Σ_{k=0}^{S_n+1} (g(k) − g_n(k))² over a set K_n that approaches K_0 as n → ∞. Below, we give an extended version of Balabdaoui et al. (2014, Theorem 3.3), with the same choice for K_n.

Theorem 2.1. Let (v_n)_{n∈N} be a sequence of positive numbers that satisfies v_n = o(1) and v_n ≫ n^{-1/2}. Define g_n and ĝ_n as above with K_n the set of all functions g = (g(0), ..., g(S_n+1)) ∈ R^{S_n+2} such that Δg(x) ≥ 0 for all x ∈ {1, ..., S_n} that satisfy Δp_n(x) ≤ v_n. Then ĝ_n uniquely exists, both ĝ_n and T̂_n := (Σ_{k=0}^{S_n+1} (ĝ_n(k) − g_n(k))²)^{1/2} are measurable, and conditionally on X_1, ..., X_n we have T̂_n →_d T_0 in probability as n → ∞, with T_0 as in Theorem 1.1.

We now state the main result of the section, again defining T̂_n as in Theorem 2.1.

Theorem 2.2. Let α ∈ (0,1) and t_{α,n} the conditional (1−α)-quantile of T̂_n given X_1, ..., X_n. If p_0 is convex on N and supported on {0, ..., S}, then limsup_{n→∞} P(T_n > t_{α,n}) ≤ α.

In order to implement the test, we need to compute an approximation of t_{α,n}. This can be done using Monte-Carlo simulations as follows. Having observed X_1, ..., X_n, draw independent sequences (Z_i^{(b)})_{0≤i≤S_n+1} for b ∈ {1, ..., B}, where all variables Z_i^{(b)} are i.i.d. standard Gaussian and B > 0 is an integer. Then, for all b, compute g_n^{(b)} = Γ_n^{1/2} (Z_0^{(b)}, ..., Z_{S_n+1}^{(b)})^T and the least-squares projection ĝ_n^{(b)} of g_n^{(b)} onto K_n, using the algorithm described in Balabdaoui et al. (2014). Then, t_{α,n} can be approximated by the (1−α)-quantile of the empirical distribution corresponding to (Σ_{k=0}^{S_n+1} (g_n^{(b)}(k) − ĝ_n^{(b)}(k))²)^{1/2}, with b ∈ {1, ..., B}.
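This Monte-Carlo calibration can be sketched as follows. The sketch is ours, not the authors' implementation: the projection onto K_n is again delegated to a generic solver (SciPy's SLSQP) rather than to the algorithm of Balabdaoui et al. (2014), the sequence v_n = n^{-1/4} is one admissible choice among those permitted by Theorem 2.1, and the function names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def project_onto_Kn(g, active):
    """Least-squares projection of g onto {h : Delta h(k) >= 0 for k in `active`}."""
    cons = [{"type": "ineq",
             "fun": lambda h, k=k: h[k + 1] - 2.0 * h[k] + h[k - 1]}
            for k in active]
    if not cons:
        return g.copy()
    return minimize(lambda h: ((h - g) ** 2).sum(), x0=g,
                    constraints=cons, method="SLSQP").x

def mc_quantile(sample, alpha=0.05, B=200, rng=None):
    """Monte-Carlo approximation of t_{alpha,n} as in Section 2 (a sketch).

    Draws g_n^(b) ~ N(0, Gamma_n) given the data, projects each draw onto K_n
    (convexity enforced at x unless Delta p_n(x) exceeds v_n = n^{-1/4}),
    and returns the (1 - alpha)-quantile of the resulting distances.
    """
    rng = np.random.default_rng(rng)
    sample = np.asarray(sample)
    n, m = len(sample), sample.max() + 2          # vectors of dimension S_n + 2
    p = np.bincount(sample, minlength=m) / n
    Gamma = np.diag(p) - np.outer(p, p)           # (i,j) entry: 1{i=j}p(i) - p(i)p(j)
    v_n = n ** (-0.25)
    # x with Delta p_n(x) > v_n is treated as a knot and left unconstrained
    active = [k for k in range(1, m - 1) if p[k + 1] - 2 * p[k] + p[k - 1] <= v_n]
    # Gamma_n is singular (its rows sum to zero), which the svd-based sampler accepts
    draws = rng.multivariate_normal(np.zeros(m), Gamma, size=B, check_valid="ignore")
    stats = [np.sqrt(((g - project_onto_Kn(g, active)) ** 2).sum()) for g in draws]
    return np.quantile(stats, 1.0 - alpha)
```

The test then rejects H_0 at asymptotic level α when the observed T_n exceeds the returned quantile.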
3 Calibrating under the least favorable hypothesis

We consider below an alternative calibration that is easier to implement than the first one. Consider the set K̃_0 of convex functions g on {0, ..., S+1}; that is, g ∈ K̃_0 if and only if Δg(k) ≥ 0 for all k ∈ {1, ..., S}. Similarly, let K̃_n be the set of convex functions on {0, ..., S_n+1}. Let g̃_0 be the least-squares projection of g_0 onto K̃_0 and g̃_n that of g_n onto K̃_n, with g_0 and g_n as in Section 2. Finally, let t̃_{α,n} be the conditional (1−α)-quantile of T̃_n := (Σ_{k=0}^{S_n+1} (g̃_n(k) − g_n(k))²)^{1/2} given (X_1, ..., X_n). Then, we have the following theorem.

Theorem 3.1. If p_0 is convex on N and supported on {0, ..., S}, then limsup_{n→∞} P(T_n > t̃_{α,n}) ≤ α, with equality if p_0 is the triangular pmf with support {0, ..., S}.

The test is asymptotically calibrated since the Type I error does not exceed α. It reaches precisely α when p_0 is triangular, which can be viewed as the least favorable case for testing convexity. The theorem above does not exclude the existence of other least favorable cases.

4 Simulations

To illustrate the theory, we have considered four pmf's supported on {0, ..., 5}. It follows from Theorem 7 in Durot et al. (2013) that any convex pmf on N can be written as Σ_{k≥1} π_k T_k, where π_k ∈ [0,1], Σ_{k≥1} π_k = 1 and T_k(i) = 2(k−i)_+ [k(k+1)]^{-1} is the triangular pmf supported on {0, ..., k−1}. Under H_0, we considered the triangular pmf p_0^(1) = T_6 and p_0^(2) = Σ_{k=1}^{6} π_k T_k with π_1 = 0, π_2 = π_3 = 1/6, π_4 = 0 and π_5 = π_6 = 1/3, which has knots at 2, 3 and 5. Under H_1, we considered p_1^(1), the pmf of a truncated Poisson on {0, ..., 5} with rate λ = 1.5, and p_1^(2), the pmf equal to p_0^(1) on {2, ..., 5} and such that (p_1^(2)(0), p_1^(2)(1)) = (p_0^(1)(0) + 0.008, p_0^(1)(1) − 0.008). To investigate the asymptotic type I error and power of our tests, we have drawn n ∈ {500, 5000, 50000} rv's from the aforementioned pmf's.
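As a small numerical illustration (the helper name `triangular` is ours), the mixture pmf p_0^(2) above can be assembled from its triangular components and checked for convexity and knots directly:

```python
import numpy as np

def triangular(k, m=7):
    """Triangular pmf T_k(i) = 2 (k - i)_+ / (k (k + 1)), padded with zeros to length m."""
    i = np.arange(m)
    return 2.0 * np.clip(k - i, 0.0, None) / (k * (k + 1))

# p_0^(2) = (1/6) T_2 + (1/6) T_3 + (1/3) T_5 + (1/3) T_6, as in Section 4
weights = {2: 1 / 6, 3: 1 / 6, 5: 1 / 3, 6: 1 / 3}
p = sum(w * triangular(k) for k, w in weights.items())

delta = p[2:] - 2.0 * p[1:-1] + p[:-2]            # Delta p(k) for k = 1, ..., 5
knots = [k for k, d in enumerate(delta, start=1) if d > 1e-12]
print(np.round(p, 4), knots)
```

All second differences are nonnegative and the recovered knots are {2, 3, 5}, matching the description of p_0^(2) in the text.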
Here, α = 5% and t̃_{α,n} was estimated for each drawn sample using B = 1000 i.i.d. copies of g_n. The rejection probability was estimated using N = 500 replications of the whole procedure. For the first convexity test, we considered the sequences v_n ≡ (log(log n))^{1/2} n^{-1/2} and v_n ≡ n^{-1/4}. We also added the sequence v_n ≡ 0 to compare our approach with the naive one where no knot extraction is attempted. The results are reported in Tables 1 and 2.

The first conclusion is that choosing v_n = 0 does not give a valid test, as the type I error can be as large as four times the targeted level! This can be explained by the fact that choosing v_n = 0 makes the set K_n on which g_n is projected the largest possible, and hence ||g_n − ĝ_n|| the smallest possible. This distance is hence stochastically smaller than the actual limit, yielding a large probability of rejection. The second conclusion is that the first test depends on the choice of v_n. Small sequences again make the type I error large when the true convex pmf has only a tiny change of slope at its knots, or has no knots, as is the case for p_0^(1) = T_6. The question remains open as to how to choose such a sequence so that the test has the correct asymptotic level. The second testing approach is, as expected, conservative when the true pmf is not triangular. For the true pmf p_1^(1), which strongly violates the convexity constraint, the power is equal to 1. For p_1^(2), which has only a small flaw at 2 with a change of slope equal to −0.008, the power values obtained with this second approach are comparable to those obtained with the first testing method and v_n ≡ n^{-1/4}. This is somewhat expected, as it is the largest sequence among the ones considered, yielding the largest distance between g_n and its L_2 projection.

                      n = 500                            n = 5000                           n = 50000
PMF        0    (loglog n)^{1/2}/n^{1/2}  n^{-1/4}   0    (loglog n)^{1/2}/n^{1/2}  n^{-1/4}   0    (loglog n)^{1/2}/n^{1/2}  n^{-1/4}
p_0^(1)  0.226        0.106               0.054    0.286        0.092               0.062    0.256        0.086               0.050
p_0^(2)  0.190        0.046               0.020    0.310        0.066               0.018    0.344        0.054               0.016
p_1^(1)  1            1                   1        1            1                   1        1            1                   1
p_1^(2)  0.234        0.102               0.038    0.354        0.166               0.082    0.932        0.816               0.630

Table 1: Values of the asymptotic type I error for the pmfs p_0^(1) and p_0^(2) and power for p_1^(1) and p_1^(2) of the convexity test based on the sequence (v_n)_n. The asymptotic level is 5%.

PMF       n = 500   n = 5000   n = 50000
p_0^(1)   0.048     0.044      0.058
p_0^(2)   0.014     0.032      0.020
p_1^(1)   1         1          1
p_1^(2)   0.042     0.060      0.678

Table 2: Values of the asymptotic type I error for the pmfs p_0^(1) and p_0^(2) and power for p_1^(1) and p_1^(2) of the test based on the least favorable hypothesis. The asymptotic level is 5%.

5 Proofs

In the sequel, for all s > 0 and u = (u(0), ..., u(s+1)) ∈ R^{s+2}, we set ||u||_s = (Σ_{k=0}^{s+1} (u(k))²)^{1/2}.

5.1 Preparatory lemmas

Lemma 5.1. Let s > 0 be an integer, K ⊂ R^{s+2} a non-empty closed convex set, and u = (u(0), ..., u(s+1)) ∈ R^{s+2}. Then, the minimizer of ||g − u||_s over g ∈ K uniquely exists. Moreover, denoting by Φ(u) this minimizer, the application Φ is measurable from R^{s+2} to R^{s+2}, and the application u ↦ ||u − Φ(u)||_s is measurable.

Proof: It follows from standard results on convex optimization that Φ(u) uniquely exists for all u, and ||Φ(u) − Φ(v)||_s ≤ ||u − v||_s for all u and v in R^{s+2}. This means that Φ is a continuous function, whence it is measurable. Now, the function u ↦ (u, Φ(u)) is continuous, whence measurable. By continuity of the norm, this ensures that the application that maps u to ||u − Φ(u)||_s is measurable. □

Lemma 5.2. With g_0 as in Section 1, (Δg_0(1), ..., Δg_0(S)) is a centered Gaussian vector with invertible dispersion matrix.

Proof: For notational convenience, we assume in the sequel that S ≥ 3. The case S ≤ 2 can be handled likewise.
Let B be the S × (S+1) matrix whose j-th line has components j, j+1 and j+2 equal to 1, −2 and 1 respectively, while the other components are zero, for j = 1, ..., S−1, and whose S-th line has all components equal to zero except the penultimate and the last one, which are equal respectively to 1 and −2. We have p_n(S+1) = p_0(S+1) = 0 almost surely, so that in the limit, g_0(S+1) = 0 almost surely and

(Δg_0(1), ..., Δg_0(S))^T = B (g_0(0), ..., g_0(S))^T.    (5.1)

Hence, (Δg_0(1), ..., Δg_0(S)) is a centered Gaussian vector with dispersion matrix

V = B Σ_0 B^T,    (5.2)

where Σ_0 is the dispersion matrix of the vector on the right-hand side of (5.1), i.e. with component (i+1, j+1) equal to 1{i=j} p_0(i) − p_0(i) p_0(j) for all i, j = 0, ..., S. Note that Σ_0 is obtained by deleting a line and a column of zeros in Γ_0, the dispersion matrix of g_0.

It remains to prove that V is invertible. Let √p_0 be the column vector in R^{S+1} with components √(p_0(0)), ..., √(p_0(S)), and let diag(√p_0) be the (S+1) × (S+1) diagonal matrix with diagonal components √(p_0(0)), ..., √(p_0(S)). Denoting by I the identity matrix on R^{S+1}, the matrix (in the canonical basis) associated with the orthogonal projection from R^{S+1} onto the orthogonal complement of the linear space generated by √p_0 is given by I − Π_0 = I − √p_0 √p_0^T. The linear subspace of R^{S+1} generated by √p_0 has dimension 1, so its orthogonal complement in R^{S+1} has dimension S, whence rank(I − Π_0) = S. Now, Σ_0 = diag(√p_0)(I − Π_0) diag(√p_0), where diag(√p_0) is invertible, so we obtain

rank(Σ_0^{1/2}) = rank(Σ_0) = S.    (5.3)

This means that the kernel of Σ_0^{1/2} is a linear subspace of R^{S+1} whose dimension is equal to 1. Let us describe this kernel more precisely. Let λ be the column vector in R^{S+1} whose components are all equal to 1. Using that Σ_{k=0}^{S} p_0(k) = 1, it is easy to see that Σ_0 λ is the null vector in R^{S+1}. Therefore, λ^T Σ_0 λ = 0. This means that ||Σ_0^{1/2} λ||² = 0, with ||·|| the Euclidean norm in R^{S+1}.
Hence, Σ_0^{1/2} λ is the null vector in R^{S+1}. This means that the kernel of Σ_0^{1/2} is the linear subspace of R^{S+1} generated by λ.

Next, let us determine the kernel of V. Let µ ∈ R^S with Vµ = 0. Then, µ^T V µ = 0 which, according to (5.2), implies that ||Σ_0^{1/2} B^T µ||² = µ^T B Σ_0 B^T µ = 0. This means that Σ_0^{1/2} B^T µ = 0. Since the kernel of Σ_0^{1/2} is the linear subspace of R^{S+1} generated by λ, we conclude that B^T µ = aλ for some a ∈ R. Denote by µ_1, ..., µ_S the components of µ. By definition of B and λ, the vector µ satisfies the equations µ_1 = a, µ_2 − 2µ_1 = a, µ_{k−2} − 2µ_{k−1} + µ_k = a for all k ∈ {3, ..., S}, and µ_{S−1} − 2µ_S = a. Arguing by induction, we obtain that this is equivalent to 2µ_k = ak(k+1) for all k ∈ {1, ..., S} and 2µ_S = µ_{S−1} − a. Combining the first equation with k = S, S−1 with the second equation yields

aS(S+1) = −a + a(S−1)S/2.

Therefore,

a(1 − (S−1)S/2 + S(S+1)) = 0.

This reduces to a(2 + S² + 3S)/2 = 0, which implies that a = 0. This means that µ is the null vector in R^S and therefore rank(V) = S. This means that V is invertible. □

Lemma 5.3. Let K_0, g_0 and ĝ_0 be defined as in Section 1. Then, ||ĝ_0 − g_0||_S has a continuous distribution.

Proof: Let F be the cumulative distribution function of ||ĝ_0 − g_0||_S. Note that this quantity is a properly defined random variable by the measurability proved in Lemma 5.1. We aim to prove that F is a continuous function on [0, ∞), using similar arguments as in the proof of Lemma 1.2 in Gaenssler et al. (2007). First, we will prove that

F(t) > 0 for all t ≥ 0.    (5.4)

To this end, note that F(0) = P(g_0 ∈ K_0). This means that

F(0) ≥ P(δ ∈ A)    (5.5)

with δ = (Δg_0(1), ..., Δg_0(S)) and A the set of all vectors (u_1, ..., u_S) ∈ R^S such that u_k ≥ 0 for all k ∈ {1, ..., S}, with possible exceptions at points k that are knots of p_0. From Lemma 5.2, the vector δ is a centered Gaussian vector whose dispersion matrix is invertible.
Therefore, the vector possesses a density with respect to the Lebesgue measure on R^S that is strictly positive on the whole space R^S. This implies that the probability that the vector belongs to a Borel set whose Lebesgue measure is not zero is strictly positive. In particular, P(δ ∈ A) > 0. Thus, it follows from (5.5) that F(0) > 0. Combining this with the monotonicity of the function F completes the proof of (5.4).

Next, we prove that the function log(F) (which is well defined on [0, ∞) thanks to (5.4)) is concave on [0, ∞). For u = (u(0), ..., u(S+1)) ∈ R^{S+2}, let us write û the minimizer of Σ_{k=0}^{S+1} (g(k) − u(k))² over g ∈ K_0. For all t ∈ [0, ∞), we define A_t to be the set of all u = (u(0), ..., u(S+1)) ∈ R^{S+2} such that ||u − û||_S ≤ t. Note that A_t is a Borel set in B(R^{S+2}) for all t since, according to Lemma 5.1, the application u ↦ ||u − û||_S is measurable. Finally, let µ = P ∘ g_0^{-1} be the distribution of g_0 on R^{S+2} endowed with the Borel σ-algebra B(R^{S+2}). This means that

F(t) = µ(A_t).    (5.6)

Fix λ ∈ (0,1), t, t' ∈ [0, ∞), and consider an arbitrary x ∈ λA_t + (1−λ)A_{t'}. Then, x takes the form x = λu + (1−λ)v for some (not necessarily unique) u ∈ A_t and v ∈ A_{t'}. By definition, both û and v̂ belong to the convex set K_0 and therefore, λû + (1−λ)v̂ ∈ K_0. Since x̂ minimizes ||g − x||_S over g ∈ K_0, we conclude that

||x − x̂||_S ≤ ||x − (λû + (1−λ)v̂)||_S ≤ ||λ(u − û) + (1−λ)(v − v̂)||_S,

using that x = λu + (1−λ)v. It then follows from the triangle inequality that

||x − x̂||_S ≤ λ ||u − û||_S + (1−λ) ||v − v̂||_S.

Since u ∈ A_t and v ∈ A_{t'}, we have ||u − û||_S ≤ t and ||v − v̂||_S ≤ t', which implies that ||x − x̂||_S ≤ λt + (1−λ)t'. Hence, x ∈ A_{λt+(1−λ)t'}. This means that

λA_t + (1−λ)A_{t'} ⊂ A_{λt+(1−λ)t'}.    (5.7)

Now, µ = P ∘ g_0^{-1} is a Gaussian probability measure on B(R^{S+2}), so it follows from Lemma 1.1 in Gaenssler et al. (2007) that µ is log-concave in the sense that

µ_⋆(λA + (1−λ)B) ≥ µ(A)^λ µ(B)^{1−λ}

for all λ ∈ (0,1) and A, B ∈ B(R^{S+2}), with µ_⋆ the inner measure pertaining to µ. Applying this with A = A_t and B = A_{t'}, and combining with (5.7), yields

µ_⋆(A_{λt+(1−λ)t'}) ≥ µ(A_t)^λ µ(A_{t'})^{1−λ}

for all t, t' ∈ [0, ∞). The same inequality remains true with µ_⋆ replaced by µ. Using (5.6), and taking the logarithm on both sides of the inequality, we conclude that

log F(λt + (1−λ)t') ≥ λ log F(t) + (1−λ) log F(t')

for all λ ∈ (0,1) and t, t' ∈ [0, ∞). This means that the function log(F) is concave on [0, ∞). Recalling (5.4), we conclude that the function log(F) is continuous on [0, ∞), whence F is continuous on [0, ∞). This completes the proof of Lemma 5.3. □

5.2 Proofs of the main results

Proof of Theorem 1.1: Assume that p_0 is convex on N and supported on {0, ..., S}. It can be proved in the same manner as Theorem 3.2 in Balabdaoui et al. (2014) that

√n (p_n − p_0, p̂_n − p_0) ⇒ (g_0, ĝ_0) as n → ∞,    (5.8)

as a joint weak convergence on {0, ..., S+1}. Now, it follows from Balabdaoui et al. (2014, Proposition 3.5) that with probability one, p̂_n is supported on {0, ..., S+1} for sufficiently large n, and p_n also is supported on that set by definition. Hence, ||p_n − p̂_n|| = ||p_n − p̂_n||_S. Combining this with (5.8) completes the proof of the theorem. □

Proof of Theorem 2.1: Clearly, K_n is a non-empty closed convex subset of R^{S_n+2}. Hence, we can apply Lemma 5.1 in the specific case s = S_n and K = K_n. In the notation of the lemma, we have ĝ_n := Φ(g_n), so that ĝ_n is uniquely defined. Moreover, since both Φ and g_n are measurable, we conclude that ĝ_n = Φ(g_n) is measurable. Likewise, ||g_n − ĝ_n||_{S_n} is measurable. This proves the first two assertions in Theorem 2.1. Next, similar to Balabdaoui et al.
(2014, Theorem 3.3), the following joint weak convergence on {0, ..., S+1} can be proved: conditionally on X_1, ..., X_n, (g_n, ĝ_n) ⇒ (g_0, ĝ_0) in probability as n → ∞. The result follows, since S_n = S with probability that tends to one. □

Proof of Theorem 2.2: Assume that p_0 is convex on N with support {0, ..., S}. By Theorem 1.1, T_n converges in distribution to ||ĝ_0 − g_0||_S as n → ∞. Combining this with Lemma 5.3, together with the fact that convergence in distribution to a continuous distribution implies uniform convergence of the corresponding distribution functions, yields

P(T_n > t_{α,n}) = P(||ĝ_0 − g_0||_S > t_{α,n}) + o(1).

Now, it follows from Theorem 2.1 that

P(||ĝ_0 − g_0||_S > t_{α,n}) = P(||ĝ_n − g_n||_{S_n} > t_{α,n} | X_1, ..., X_n) + o_p(1),

where, by definition of t_{α,n}, the probability on the right-hand side is less than or equal to α for all n. Combining this with the preceding display completes the proof. □

Proof of Theorem 3.1: We have ||g̃_0 − g_0||_S ≥ ||ĝ_0 − g_0||_S since K̃_0 ⊂ K_0, whence

P(T_n > t̃_{α,n}) = P(||ĝ_0 − g_0||_S > t̃_{α,n}) + o(1)
              ≤ P(||g̃_0 − g_0||_S > t̃_{α,n}) + o(1),    (5.9)

using similar arguments as in the proof of Theorem 2.2 for the equality. Using arguments similar to those in Theorem 3.3 of Balabdaoui et al. (2014), we can show that conditionally on X_1, ..., X_n, (g̃_n, g_n) → (g̃_0, g_0) almost surely as n → ∞. It follows that, conditionally on X_1, ..., X_n, ||g̃_n − g_n||_{S_n} converges weakly to ||g̃_0 − g_0||_S almost surely, whence

P(T_n > t̃_{α,n}) ≤ P(||g̃_n − g_n||_{S_n} > t̃_{α,n} | X_1, ..., X_n) + o(1) = α + o(1),

by definition of t̃_{α,n}. The inequality in (5.9) becomes an equality if p_0 is triangular, so the theorem follows. □

References

Balabdaoui, F., Durot, C. and Koladjo, F. (2014). On asymptotics of the discrete convex LSE of a pmf. To appear in Bernoulli.

Balabdaoui, F., Jankowski, H., Rufibach, K. and Pavlides, M. (2013). Asymptotic distribution of the discrete log-concave MLE and some applications. JRSS-B 75 769–790.

Durot, C., Huet, S., Koladjo, F. and Robin, S. (2013). Least-squares estimation of a convex discrete distribution. Comput. Statist. Data Anal. 67 282–298. URL http://dx.doi.org/10.1016/j.csda.2013.04.019

Durot, C., Huet, S., Koladjo, F. and Robin, S. (2015). Nonparametric species richness estimation under convexity constraint. Environmetrics 26 502–513.

Gaenssler, P., Molnár, P. and Rost, D. (2007). On continuity and strict increase of the cdf for the sup-functional of a Gaussian process with applications to statistics. Results in Mathematics 51 51–60.

Giguelay, J. (2016). Estimation of a discrete probability under constraint of k-monotony. arXiv preprint arXiv:1608.06541.

Jankowski, H. K. and Wellner, J. A. (2009). Estimation of a discrete monotone distribution. Electronic Journal of Statistics 3 1567.
