Dependent Mixtures of Geometric Weights Priors

Spyridon J. Hatjispyros(1), Christos Merkatas

Department of Mathematics, University of the Aegean, Karlovassi, Samos, GR-832 00, Greece.

(1) Corresponding author. Tel.: +30 22730 82326. E-mail address: [email protected]

Abstract

A new approach to the joint estimation of partially exchangeable observations is presented by constructing pairwise dependence between $m$ random density functions, each of which is modeled as a mixture of geometric stick breaking processes. This approach is based on a new random central masses version of the Pairwise Dependent Dirichlet Process prior mixture model (PDDP) first introduced in Hatjispyros et al. (2011). The idea is to create pairwise dependence through random measures that are location-preserving expectations of Dirichlet random measures. Our contention is that mixture modeling with Pairwise Dependent Geometric Stick Breaking Process (PDGSBP) priors is sufficient for prediction and estimation purposes; moreover, the associated Gibbs sampler is much faster and easier to implement than its Dirichlet Process based counterpart. In this respect, we provide a-priori-synchronized comparison studies under sparse m-scalable synthetic and real data examples.

Keywords: Bayesian nonparametric inference; Mixture of Dirichlet process; Geometric stick breaking weights; Geometric Stick Breaking Mixtures; Dependent Process.

1. Introduction.

In Bayesian nonparametrics, the use of nonparametric priors such as the Dirichlet process (Ferguson, 1973) is justified by the assumption that the observations are exchangeable. That is, the distribution of $(X_1,\ldots,X_n)$ coincides with the distribution of $(X_{\pi(1)},\ldots,X_{\pi(n)})$ for all $\pi\in S(n)$, where $S(n)$ is the group of permutations of $\{1,\ldots,n\}$. However, in real-life applications data are often partially exchangeable. For example, the data $X$ may consist of independent observations sampled from $m$ populations, or may be sampled from an experiment conducted in $m$ different geographical places. This means that the joint law is invariant under permutations within the $m$ subgroups of observations $(X_{j,1},\ldots,X_{j,n_j})$, $j=1,\ldots,m$; that is, for all $\pi_j\in S(n_j)$,

$$\big((X_{1,1},\ldots,X_{1,n_1}),\ldots,(X_{m,1},\ldots,X_{m,n_m})\big) \;\stackrel{d}{=}\; \big((X_{1,\pi_1(1)},\ldots,X_{1,\pi_1(n_1)}),\ldots,(X_{m,\pi_m(1)},\ldots,X_{m,\pi_m(n_m)})\big). \quad (1)$$

When the exchangeability assumption fails, one needs to use non-exchangeable priors. Since the seminal work of MacEachern (1999), there has been substantial research interest in the construction of dependent stochastic processes and their use as priors in Bayesian nonparametric models. These processes are distributions over a collection of measures indexed by values in some covariate space, such that the marginal distribution is described by a known nonparametric prior. The key idea is to induce dependence between a collection of random probability measures $(P_j)_{1\le j\le m}$, where each $P_j$ comes from a Dirichlet process (DP) with concentration parameter $c>0$ and base measure $P_0$. Such random probability measures are typically used in mixture models to generate random densities $f_j(x)=\int_\Theta K(x|\theta)\,P_j(d\theta)$ (Lo, 1984). There is a variety of ways that a DP can be extended to a dependent DP.
Most of them use the stick-breaking representation (Sethuraman, 1994), that is,

$$P = \sum_{k=1}^{\infty} w_k\, \delta_{\theta_k},$$

where the $(\theta_k)_{k\ge 1}$ are independent and identically distributed from $P_0$ and $(w_k)_{k\ge 1}$ is a stick-breaking process: if the $(v_k)_{k\ge 1}$ are independent and identically distributed from the beta$(1,c)$ distribution, then $w_1=v_1$ and, for $k>1$, $w_k=v_k\prod_{l<k}(1-v_l)$. Dependence is then introduced through the weights and/or the atoms. In this direction, the most common dependent process is the Hierarchical DP (Teh et al., 2005), which induces dependence between a collection of random probability measures by letting the atoms be common. A classical example of the use of dependent DPs is the Bayesian nonparametric regression problem, where a random probability measure $P_z$ is constructed for each covariate $z$ and is of the type

$$P_z = \sum_{k=1}^{\infty} w_k(z)\, \delta_{\theta_k(z)},$$

where $(w_k(z),\theta_k(z))$ are collections of processes indexed by the $z$-space. Extensions of DP models to dependent DP models can be found in De Iorio et al. (2004), Griffin and Steel (2006), and Dunson and Park (2008). A survey of the construction of dependent nonparametric priors can be found in Foti and Williamson (2014).

Recently there has been growing interest in random probability measures simpler than the DP which, although simpler, are nevertheless competent for Bayesian nonparametric density estimation. The geometric stick breaking (GSB) random probability measure (Fuentes-García et al., 2010) has been used for density estimation purposes and has been shown to provide an efficient alternative to DP based mixture models. Some recent papers extend this nonparametric prior to a dependent nonparametric prior. In the direction of covariate-dependent processes, GSB processes have been seen to provide an adequate alternative to the traditional dependent DP. For example, for Bayesian regression, Fuentes-García et al. (2009) propose a covariate-dependent process based on random probability measures drawn from a GSB process. Mena et al. (2011) used GSB random probability measures in order to construct a purely atomic continuous-time measure-valued process useful for the analysis of time series data. In this case the covariate $z_i\ge 0$ denotes the time at which each observation is (discretely) recorded, and conditionally on $z_i$ each observation is assumed to be drawn from a time-dependent nonparametric mixture model based on GSB processes. However, to the best of our knowledge, random probability measures drawn from a GSB process for modelling related density functions, when samples from each density function are available, have not been developed in the literature.

In this work we are going to construct pairwise dependent random probability measures based on GSB processes. That is, we are going to model a finite collection of $m$ random distribution functions $(P_j)_{1\le j\le m}$, where each $P_j$ is a GSB random probability measure, such that there is a unique common component for each pair $(P_j,P_{j'})$ with $j\ne j'$. We are going to use these measures in the context of GSB mixture models, generating a collection of $m$ GSB pairwise dependent random densities $(f_j(x))_{1\le j\le m}$. That is, we are going to construct, for any finite $m>1$,

$$(f_1,\ldots,f_m),$$

where each $f_j$ marginally is a random density function

$$f_j(x) = \int_\Theta K(x|\theta)\, P_j(d\theta),$$

thus generalizing the GSB priors to a multivariate prior setting for partially exchangeable observations.
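To make the two weight constructions concrete, the following sketch is a minimal illustration (ours, not part of the paper): it draws truncated versions of a DP stick-breaking measure and of a GSB measure, assuming a fixed truncation level $K$ and a standard normal base measure $P_0$; all names and parameter values are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_stick_breaking(c, K):
    """Truncated draw from DP(c, P0): w_1 = v_1, w_k = v_k * prod_{l<k}(1 - v_l), v_k iid Beta(1, c)."""
    v = rng.beta(1.0, c, size=K)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    atoms = rng.standard_normal(K)              # theta_k iid from P0 (standard normal assumed here)
    return w, atoms

def gsb_measure(lam, K):
    """Truncated GSB draw: deterministic geometric weights q_k = lam * (1 - lam)^(k-1)."""
    q = lam * (1.0 - lam) ** np.arange(K)
    atoms = rng.standard_normal(K)              # theta_k iid from P0
    return q, atoms

w, theta_dp = dp_stick_breaking(c=1.0, K=50)
q, theta_gsb = gsb_measure(lam=0.5, K=50)
```

Given a single parameter $\lambda$, the geometric weights require no beta draws at all, which is part of what makes posterior simulation for GSB mixtures lighter than for their DP counterparts.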
In the problem considered here, these random density functions $(f_j)_{1\le j\le m}$ are thought to be related, and we aim to share information between groups to improve the estimation of each density, especially for those densities $f_j$ for which the corresponding sample size $n_j$ is small. In this direction, the main references include the work of Müller et al. (2014), Bulla et al. (2009), Kolossiatis et al. (2013) and Griffin et al. (2013). All these models have been proposed for the modeling of an arbitrary but finite number of random distribution functions via a common part and an index-specific idiosyncratic part, so that for $0<p_j<1$ we have

$$P_j = p_j P_0 + (1-p_j) P_j^{*},$$

where $P_0$ is the component common to all the distributions, $\{P_j^{*} : j=1,\ldots,m\}$ are the idiosyncratic parts of each $P_j$, and $P_0, P_j^{*} \stackrel{\mathrm{iid}}{\sim} \mathrm{DP}(c, P_0)$. In Lijoi et al. (2014) a more complex normalized random probability measure based on the $\sigma$-stable process is used for modeling dependent mixtures. Although similar (all models coincide only in the $m=2$ case), these models are different from our model, which is based on pairwise dependence of a sequence of random measures (Hatjispyros et al., 2011, 2016).

We are going to provide evidence, through numerical experiments, that dependent GSB mixture models provide an efficient alternative to pairwise dependent DP (PDDP) priors. First, we will randomize the existing PDDP model of Hatjispyros et al. (2011, 2016) by imposing gamma priors on its concentration masses, and then we will conduct a-priori-synchronized density estimation comparison studies between the randomized PDDP model (rPDDP) and the pairwise dependent GSB process (PDGSBP) model, using synthetic and real data examples.

This paper is organized as follows. In Section 2 we demonstrate the construction of dependent random densities based on a set of general random discrete distributions, using a dependent model suggested by Hatjispyros et al. (2011). We demonstrate how specific choices of latent random variables can recover the model of Hatjispyros et al. and the dependent GSB model introduced in this paper. These latent variables will form the basis of a Gibbs sampler for posterior inference, given in Section 3. In Section 4 we resort to simulation. We provide a comparison study between the dependent DP model of Hatjispyros et al. and our dependent GSB model. For the dependent DP mixture model, we model each $P_{jl}$ as a random probability measure drawn from $\mathrm{DP}(c_{jl}, P_0)$. That is, the $P_{jl}$, $1\le j\le l\le m$, are mutually independent Dirichlet processes.

2. Preliminaries.

We consider an infinite real-valued process $\{X_{ji} : 1\le j\le m,\, i\ge 1\}$, defined over a probability space $(\Omega,\mathcal{F},\mathbb{P})$, that is partially exchangeable as in (1). Let $\mathcal{P}$ denote the set of probability measures over $\mathbb{R}$; de Finetti proved that there exists a probability distribution $\Pi$ over $\mathcal{P}^m$ that satisfies

$$\mathbb{P}\{X_{ji}\in A_{ji} : 1\le j\le m,\, 1\le i\le n_j\} = \int_{\mathcal{P}^m} \mathbb{P}\{X_{ji}\in A_{ji} : 1\le j\le m,\, 1\le i\le n_j \mid Q_1,\ldots,Q_m\}\, \Pi(dQ_1,\ldots,dQ_m)$$
$$= \int_{\mathcal{P}^m} \prod_{j=1}^{m} \mathbb{P}\{X_{ji}\in A_{ji} : 1\le i\le n_j \mid Q_j\}\, \Pi(dQ_1,\ldots,dQ_m) = \int_{\mathcal{P}^m} \prod_{j=1}^{m}\left\{\prod_{i=1}^{n_j} Q_j(A_{ji})\right\} \Pi(dQ_1,\ldots,dQ_m).$$

The de Finetti measure $\Pi$ represents a prior distribution over partially exchangeable observations. We start off by describing the model as it was given in Hatjispyros et al. (2011) and then proceed to the specific details for the case of GSB random measures.
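As a toy illustration of partially exchangeable data (ours, not the paper's; the component densities, mixing weights and sample sizes are arbitrary), consider $m=2$ groups whose observations share a common component but also have group-specific ones:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_group(n, p_common, own_loc):
    """Mix a shared N(0,1) component with a group-specific N(own_loc, 0.5^2) component."""
    shared = rng.standard_normal(n)
    own = rng.normal(loc=own_loc, scale=0.5, size=n)
    return np.where(rng.random(n) < p_common, shared, own)

x1 = sample_group(200, p_common=0.7, own_loc=3.0)     # large group
x2 = sample_group(20,  p_common=0.7, own_loc=-3.0)    # small group: the case that benefits from borrowing strength
```

Permuting observations within x1 or within x2 leaves the joint law unchanged, while swapping observations across the two groups does not; this is exactly the invariance in (1), and the shared component is what information-sharing priors are designed to exploit.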
So, we have the set of random density functions generated via

$$f_j(x) := f_j(x \mid G_{jl},\, 1\le l\le m) = \sum_{l=1}^{m} p_{jl}\, g_{jl}(x \mid G_{jl}), \qquad 1\le j\le m, \quad (2)$$

where $\sum_{l=1}^{m} p_{jl}=1$ a.s., and the random densities satisfy $g_{jl}=g_{lj}$ and are independent mixtures of GSB processes under the slightly altered definition

$$P_{jl} = \sum_{k=1}^{\infty} q_{jlk}\, \delta_{\theta_{jlk}} \quad\text{with}\quad q_{jlk} = \lambda_{jl}(1-\lambda_{jl})^{k-1}, \quad \lambda_{jl}\sim h(\,\cdot\,;\xi_{jl}), \quad \theta_{jlk}\stackrel{\mathrm{iid}}{\sim} P_0, \quad (3)$$

where $h$ is a parametric density supported over the interval $(0,1)$, depending on some parameter $\xi\in\Xi$, and $P_0$ is the central measure for which $\mathrm{E}(G_{jl}(A)) = P_0(A)$ for all Borel sets $A$ of $\mathbb{R}$. Then

$$g_{jl}(x) := g_{jl}(x \mid G_{jl}) = \int_\Theta K(x|\theta)\, G_{jl}(d\theta),$$

for some kernel density $K(\cdot|\cdot)$, and $\{G_{jl} : 1\le j,l\le m\}$ forms a matrix $G$ of random distributions with $G_{jl}=G_{lj}$ for $j>l$, every other element being an independent GSB process. The random densities $(f_j)_{1\le j\le m}$ are dependent mixtures of the dependent random measures

$$Q_j = \sum_{l=1}^{m} p_{jl}\, G_{jl}, \qquad 1\le j\le m. \quad (4)$$

In matrix notation,

$$Q = (p\otimes G)\,\mathbf{1}, \quad (5)$$

where $p=(p_{jl})$ is the matrix of random selection weights and $p\otimes G$ is the Hadamard product of the two matrices, defined as $(p\otimes G)_{jl} = p_{jl} G_{jl}$. Letting $\mathbf{1}$ denote the $m\times 1$ matrix of ones, the $j$th element of the vector $Q$ is given by equation (4).

Following a univariate construction of geometric slice sets (Fuentes-García et al., 2010), we define the stochastic variables $N=(N_{ji})$, for $1\le i\le n_j$ and $1\le j\le m$, where $N_{ji}$ is an almost surely finite random variable with mass $f_N$, possibly depending on parameters, associated with the sequential slice set $S_{ji}=\{1,\ldots,N_{ji}\}$. Following Hatjispyros et al. (2011, 2016), we introduce:

1. The GSB mixture selection variables $\delta=(\delta_{ji})$; for an observation $x_{ji}$ that comes from $f_j$, $\delta_{ji}$ selects the GSB mixture $g_{j\delta_{ji}}(x)$ from which the observation came.

2. The clustering variables $d=(d_{ji})$; for an observation $x_{ji}$ that comes from $f_j$, given $\delta_{ji}$, $d_{ji}$ allocates the component of the GSB mixture $g_{j\delta_{ji}}(x)$ from which $x_{ji}$ came.

Proposition 1. Suppose that the clustering variables $(d_{ji})$, conditionally on the slice variables $(N_{ji})$, have the discrete uniform distribution over the sets $(S_{ji})$, that is $d_{ji}\mid N_{ji}\sim \mathrm{DU}(S_{ji})$. Then

$$f_j(x_{ji}, N_{ji}=r) = \sum_{l=1}^{m} p_{jl}\, f_N(r;\lambda_{jl})\, \frac{1}{r} \sum_{k=1}^{r} K(x_{ji}|\theta_{jlk}), \quad (6)$$

and

$$f_j(x_{ji}, N_{ji}=r, d_{ji}=k \mid \delta_{ji}=l) = f_N(r;\lambda_{jl})\, \frac{1}{r}\, \mathbb{I}(k\le r)\, K(x_{ji}|\theta_{jlk}). \quad (7)$$

Proof. Starting from the $N_{ji}$-augmented random densities we have

$$f_j(x_{ji}, N_{ji}=r) = \sum_{l=1}^{m} f_j(x_{ji}, N_{ji}=r, \delta_{ji}=l) = \sum_{l=1}^{m} p_{jl}\, f_j(x_{ji}, N_{ji}=r \mid \delta_{ji}=l)$$
$$= \sum_{l=1}^{m} p_{jl} \sum_{k=1}^{\infty} f_j(x_{ji}, N_{ji}=r, d_{ji}=k \mid \delta_{ji}=l) = \sum_{l=1}^{m} p_{jl}\, f_j(N_{ji}=r \mid \delta_{ji}=l) \sum_{k=1}^{\infty} f_j(d_{ji}=k \mid N_{ji}=r)\, f_j(x_{ji}\mid d_{ji}=k, \delta_{ji}=l).$$

Because $f_j(N_{ji}=r \mid \delta_{ji}=l) = f_N(r;\lambda_{jl})$ and $f_j(x_{ji}\mid d_{ji}=k, \delta_{ji}=l) = K(x_{ji}|\theta_{jlk})$, the last equation gives

$$f_j(x_{ji}, N_{ji}=r) = \sum_{l=1}^{m} p_{jl}\, f_N(r;\lambda_{jl}) \sum_{k=1}^{\infty} \frac{1}{r}\, \mathbb{I}(k\le r)\, K(x_{ji}|\theta_{jlk}) = \sum_{l=1}^{m} p_{jl}\, f_N(r;\lambda_{jl})\, \frac{1}{r} \sum_{k=1}^{r} K(x_{ji}|\theta_{jlk}).$$

Augmenting further with the variables $d_{ji}$ and $\delta_{ji}$ yields

$$f_j(x_{ji}, N_{ji}=r, d_{ji}=k, \delta_{ji}=l) = p_{jl}\, f_N(r;\lambda_{jl})\, \frac{1}{r}\, \mathbb{I}(k\le r)\, K(x_{ji}|\theta_{jlk}).$$

Because $\mathbb{P}(\delta_{ji}=l) = p_{jl}$, the last equation leads to equation (7), and the proposition follows. □
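The next sketch (illustrative only; it assumes a normal kernel $K(x|\theta)=\mathrm{N}(x|\theta,1)$, a standard normal central measure $P_0$, and a fixed truncation level; the function names are ours) builds the symmetric array $\{G_{jl}\}$ of truncated GSB measures and evaluates the dependent densities $f_j$ of equations (2)-(4).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def gsb_matrix(m, lam, K=50):
    """Symmetric m x m array of truncated GSB measures; G[j][l] is the same object as G[l][j]."""
    G = [[None] * m for _ in range(m)]
    for j in range(m):
        for l in range(j, m):
            q = lam[j][l] * (1.0 - lam[j][l]) ** np.arange(K)   # q_jlk = lam_jl (1 - lam_jl)^(k-1)
            theta = rng.standard_normal(K)                      # theta_jlk iid from P_0
            G[j][l] = G[l][j] = (q, theta)                      # enforce G_jl = G_lj (the shared component)
    return G

def f_j(x, j, p, G):
    """f_j(x) = sum_l p_jl sum_k q_jlk K(x | theta_jlk), with a N(theta, 1) kernel."""
    return sum(p[j][l] * np.sum(q * norm.pdf(x, loc=theta, scale=1.0))
               for l, (q, theta) in enumerate(G[j]))

lam = [[0.5, 0.4], [0.4, 0.6]]
p = [[0.8, 0.2], [0.3, 0.7]]                  # selection weights p_jl, each row summing to 1
G = gsb_matrix(2, lam)
print(f_j(0.0, 0, p, G), f_j(0.0, 1, p, G))   # the shared G_01 is what couples the two random densities
```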
The following proposition gives a multivariate analogue of equation (2) in Fuentes-García et al. (2010):

Proposition 2. Given the random set $S_{ji}$, the random functions in (2) become finite mixtures of a.s. finite, equally weighted mixtures of the $K(\cdot|\cdot)$ probability kernels, that is,

$$f_j(x_{ji}\mid N_{ji}=r) = \sum_{l=1}^{m} W(r;\lambda_{jl})\, \frac{1}{r} \sum_{k=1}^{r} K(x_{ji}|\theta_{jlk}), \quad (8)$$

with

$$W(r;\lambda_{jl}) = \frac{p_{jl}\, f_N(r;\lambda_{jl})}{\sum_{l'=1}^{m} p_{jl'}\, f_N(r;\lambda_{jl'})}.$$

Proof. Marginalizing the joint density of $x_{ji}$ and $N_{ji}$ with respect to $x_{ji}$, we obtain

$$f_j(N_{ji}=r) = \sum_{l=1}^{m} p_{jl}\, f_N(r;\lambda_{jl}).$$

Then, dividing equation (6) by the probability that $N_{ji}$ equals $r$, we obtain equation (8). □

We note that the one-dimensional model introduced in Fuentes-García et al. (2010), under our notation, has the representation

$$f_j(x_{ji}\mid N_{ji}=r, \delta_{ji}=l) = \frac{1}{r} \sum_{k=1}^{r} K(x_{ji}|\theta_{jlk}).$$

3. The Model.

Marginalizing (7) with respect to $(N_{ji}, d_{ji})$ we obtain

$$f_j(x_{ji}\mid \delta_{ji}=l) = \sum_{k=1}^{\infty} \left( \sum_{r=k}^{\infty} f_N(r;\lambda_{jl})\, \frac{1}{r} \right) K(x_{ji}|\theta_{jlk}). \quad (9)$$

The quantity inside the parentheses on the right-hand side of the previous equation is $f_j(d_{ji}=k\mid \delta_{ji}=l)$. Following Fuentes-García et al. (2010), we substitute $f_N(r;\lambda_{jl})$ with the negative binomial distribution $\mathrm{NB}(r;2,\lambda_{jl})$, i.e.

$$f_N(r;\lambda_{jl}) = r\,\lambda_{jl}^2 (1-\lambda_{jl})^{r-1}\, \mathbb{I}(r\ge 1); \quad (10)$$

then equation (9) becomes

$$f_j(x_{ji}\mid \delta_{ji}=l) = \sum_{k=1}^{\infty} q_{jlk}\, K(x_{ji}|\theta_{jlk}) \quad\text{with}\quad q_{jlk} = \lambda_{jl}(1-\lambda_{jl})^{k-1},$$

and the random densities $f_j$ take the form of a finite mixture of GSB mixtures

$$f_j(x_{ji}) = \sum_{l=1}^{m} p_{jl} \sum_{k=1}^{\infty} q_{jlk}\, K(x_{ji}|\theta_{jlk}).$$

We denote the set of observations along the $m$ groups by $x=(x_{ji})$ and by $x_j$ the set of observations in the $j$th group. The three sets of latent variables in the $j$th group will be denoted by $N_j$ for the slice variables, $d_j$ for the clustering variables, and finally $\delta_j$ for the set of GSB mixture allocation variables. From now on, we leave the auxiliary variables unspecified; in particular, for $\delta_{ji}$ we use the notation

$$\delta_{ji} = (\delta_{ji}^1,\ldots,\delta_{ji}^m) \in \{e_1,\ldots,e_m\} \quad\text{with}\quad \mathbb{P}(\delta_{ji}=e_l) = p_{jl},$$

where $e_l$ denotes the usual basis vector having its only nonzero component, equal to 1, at position $l$. Hence, for a sample of size $n_1$ from $f_1$, a sample of size $n_2$ from $f_2$, and so on up to a sample of size $n_m$ from $f_m$, we can write the full likelihood as a multiple product:

$$f(x, N, d \mid \delta) = \prod_{j=1}^{m} f(x_j, N_j, d_j \mid \delta_j) = \prod_{j=1}^{m} \prod_{i=1}^{n_j} \prod_{l=1}^{m} \left\{ \mathbb{I}(d_{ji}\le N_{ji})\, \lambda_{jl}^2 (1-\lambda_{jl})^{N_{ji}-1} K(x_{ji}|\theta_{jl d_{ji}}) \right\}^{\delta_{ji}^l}.$$

In a hierarchical fashion, using the auxiliary variables, we have, for $j=1,\ldots,m$ and $i=1,\ldots,n_j$,

$$x_{ji}, N_{ji} \mid d_{ji}, \delta_{ji}, (\theta_{jr d_{ji}})_{1\le r\le m}, \lambda_{j\delta_{ji}} \;\stackrel{\mathrm{ind}}{\sim}\; \prod_{r=1}^{m} \left\{ \lambda_{jr}^2 (1-\lambda_{jr})^{N_{ji}-1} K(x_{ji}|\theta_{jr d_{ji}}) \right\}^{\delta_{ji}^r} \mathbb{I}(N_{ji}\ge d_{ji}),$$
$$d_{ji}\mid N_{ji} \stackrel{\mathrm{ind}}{\sim} \mathrm{DU}(S_{ji}), \qquad \mathbb{P}(\delta_{ji}=e_l) = p_{jl},$$
$$q_{jlk} = \lambda_{jl}(1-\lambda_{jl})^{k-1}, \qquad \theta_{jlk} \stackrel{\mathrm{iid}}{\sim} P_0, \qquad k\in\mathbb{N}.$$
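As a sanity check on the augmented representation, the following sketch (ours; a normal kernel and a standard normal $P_0$ are assumed, and all parameter values are hypothetical) draws a group-$j$ sample exactly through the auxiliary variables; note that 1 plus a NumPy negative_binomial(2, λ) draw has mass $r\lambda^2(1-\lambda)^{r-1}$ on $r\ge 1$, i.e. exactly (10).

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_group_j(n_j, p_j, lam_j, theta_j):
    """Draw n_j observations from f_j via delta_ji, N_ji, d_ji; theta_j[l] is a long stock of atoms theta_jlk."""
    x = np.empty(n_j)
    for i in range(n_j):
        l = rng.choice(len(p_j), p=p_j)                    # delta_ji: which GSB mixture g_jl the point comes from
        N = 1 + rng.negative_binomial(2, lam_j[l])         # N_ji: f_N(r) = r lam^2 (1 - lam)^(r-1), r >= 1
        d = rng.integers(1, N + 1)                         # d_ji | N_ji ~ DU{1, ..., N_ji}
        x[i] = theta_j[l][d - 1] + rng.standard_normal()   # x_ji ~ K(. | theta_{jl d_ji}) = N(theta, 1)
    return x

theta = [rng.standard_normal(1000) for _ in range(2)]      # atoms theta_jlk iid from P_0; 1000 is ample
x_j = sample_group_j(100, p_j=np.array([0.7, 0.3]), lam_j=[0.5, 0.4], theta_j=theta)
```

Marginally over $(N_{ji}, d_{ji})$ the atom index has the geometric weights $q_{jlk}$, so the draws above come from the finite mixture of GSB mixtures displayed above.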
3.1 The PDGSBP Covariance.

In this section we find the covariance between $f_j(x)$ and $f_i(x)$. First we provide the following lemma.

Lemma 1. Let $g_G(x) = \int_\Theta K(x|\theta)\, G(d\theta)$ be a random density, with $G = \lambda \sum_{j=1}^{\infty} (1-\lambda)^{j-1}\delta_{\theta_j}$ and $\theta_j\stackrel{\mathrm{iid}}{\sim} P_0$. Then

$$\mathrm{E}[g_G(x)^2] = \frac{1}{2-\lambda}\left\{ \lambda \int_\Theta K(x|\theta)^2 P_0(d\theta) + 2(1-\lambda)\left(\int_\Theta K(x|\theta)\, P_0(d\theta)\right)^{2} \right\}.$$

Proof. Because $g_G(x) = \lambda \sum_{j=1}^{\infty} (1-\lambda)^{j-1} K(x|\theta_j)$, we have

$$\mathrm{E}\{g_G(x)^2\} = \lambda^2\, \mathrm{E}\left[\left( \sum_{j=1}^{\infty} (1-\lambda)^{j-1} K(x|\theta_j) \right)^{2}\right]$$
$$= \lambda^2 \left\{ \sum_{j=1}^{\infty} (1-\lambda)^{2j-2}\, \mathrm{E}[K(x|\theta_j)^2] + 2 \sum_{k=2}^{\infty} \sum_{j=1}^{k-1} (1-\lambda)^{j+k-2}\, \mathrm{E}[K(x|\theta_j) K(x|\theta_k)] \right\}$$
$$= \lambda^2 \left\{ \sum_{j=1}^{\infty} (1-\lambda)^{2j-2}\, \mathrm{E}[K(x|\theta)^2] + 2 \sum_{k=2}^{\infty} \sum_{j=1}^{k-1} (1-\lambda)^{j+k-2}\, \mathrm{E}[K(x|\theta)]^2 \right\}$$
$$= \lambda^2 \left\{ \frac{1}{\lambda(2-\lambda)}\, \mathrm{E}[K(x|\theta)^2] + \frac{2(1-\lambda)}{\lambda^2(2-\lambda)}\, \mathrm{E}[K(x|\theta)]^2 \right\},$$

which gives the desired result. □

Proposition 3. It is that

$$\mathrm{Cov}(f_j(x), f_i(x)) = p_{ji}\, p_{ij}\, \mathrm{Var}\left( \int_\Theta K(x|\theta)\, G_{ji}(d\theta) \right),$$

with

$$\mathrm{Var}\left( \int_\Theta K(x|\theta)\, G_{ji}(d\theta) \right) = \frac{\lambda_{ji}}{2-\lambda_{ji}}\, \mathrm{Var}(K(x|\theta)).$$

Proof. The random densities $f_i(x) = \sum_{l=1}^{m} p_{il}\, g_{il}(x)$ and $f_j(x) = \sum_{l=1}^{m} p_{jl}\, g_{jl}(x)$ depend on each other through the random measure $G_{ji}$; therefore

$$\mathrm{E}[f_i(x) f_j(x)] = \mathrm{E}\big[\mathrm{E}(f_i(x) f_j(x) \mid G_{ji})\big] = \mathrm{E}\big\{\mathrm{E}[f_i(x)\mid G_{ji}]\, \mathrm{E}[f_j(x)\mid G_{ji}]\big\}, \quad (11)$$

and

$$\mathrm{E}[f_j(x)\mid G_{ji}] = \sum_{l\ne i} p_{jl}\, \mathrm{E}[g_{jl}(x)] + p_{ji}\, g_{ji}(x) = (1-p_{ji})\, \mathrm{E}[K(x|\theta)] + p_{ji}\, g_{ji}(x),$$
$$\mathrm{E}[f_i(x)\mid G_{ji}] = \sum_{l\ne j} p_{il}\, \mathrm{E}[g_{il}(x)] + p_{ij}\, g_{ij}(x) = (1-p_{ij})\, \mathrm{E}[K(x|\theta)] + p_{ij}\, g_{ji}(x).$$

Substituting back into equation (11), one obtains

$$\mathrm{E}[f_i(x) f_j(x)] = (1 - p_{ij} p_{ji})\, \mathrm{E}[K(x|\theta)]^2 + p_{ij} p_{ji}\, \mathrm{E}[g_{ji}(x)^2].$$

Using Lemma 1, the last equation becomes

$$\mathrm{E}[f_i(x) f_j(x)] = \frac{\lambda_{ji}\, p_{ji}\, p_{ij}}{2-\lambda_{ji}} \left\{ \mathrm{E}[K(x|\theta)^2] - \mathrm{E}[K(x|\theta)]^2 \right\} + \mathrm{E}[K(x|\theta)]^2,$$

or

$$\mathrm{Cov}(f_j(x), f_i(x)) = \frac{\lambda_{ji}\, p_{ji}\, p_{ij}}{2-\lambda_{ji}}\, \mathrm{Var}(K(x|\theta)).$$

The desired result comes from the fact that

$$\mathrm{Var}\left( \int_\Theta K(x|\theta)\, G_{ji}(d\theta) \right) = \left\{ \frac{\lambda_{ji}}{2-\lambda_{ji}}\, \mathrm{E}[K(x|\theta)^2] + \frac{2(1-\lambda_{ji})}{2-\lambda_{ji}}\, \mathrm{E}[K(x|\theta)]^2 \right\} - \mathrm{E}[K(x|\theta)]^2 = \frac{\lambda_{ji}}{2-\lambda_{ji}} \left( \mathrm{E}[K(x|\theta)^2] - \mathrm{E}[K(x|\theta)]^2 \right). \qquad \square$$

Suppose now that $(f_j^D(x))_{1\le j\le m}$ and $(f_j^G(x))_{1\le j\le m}$ are two collections of $m$ DP and GSB pairwise dependent random densities respectively, i.e. $f_j^D(x) = \sum_{l=1}^{m} p_{jl}\, g_{jl}(x|P_{jl})$ and $f_j^G(x) = \sum_{l=1}^{m} p_{jl}\, g_{jl}(x|G_{jl})$. In Hatjispyros et al. (2011) it is shown that

$$\mathrm{Cov}(f_j^D(x), f_i^D(x)) = \frac{p_{ij}\, p_{ji}}{1+c_{ji}}\, \mathrm{Var}(K(x|\theta)).$$

The latter equation, combined with Proposition 3, gives the following:

Proposition 4. For $\lambda_{ji} = (1+c_{ji})^{-1}$, it is that

$$\mathrm{Cov}(f_j^G(x), f_i^G(x)) < \mathrm{Cov}(f_j^D(x), f_i^D(x)).$$

4. The PDGSBP Gibbs Sampler.

In this section we are going to describe the PDGSBP Gibbs sampler for estimating the model. The details of the sampling algorithm for the PDDP model can be found in Hatjispyros et al. (2011, 2016). At each iteration we sample the variables

$$\theta_{jlk},\; 1\le j\le l\le m,\; 1\le k\le N^{*}, \qquad d_{ji}, N_{ji}, \delta_{ji},\; 1\le j\le m,\; 1\le i\le n_j, \qquad p_{jl},\; 1\le j\le m,\; 1\le l\le m,$$

with $N^{*} = \max_{j,i} N_{ji}$ almost surely finite.

1. For the locations of the random measures, for $k=1,\ldots,d^{*}$, where $d^{*} = \max_{j,i} d_{ji}$, it is that

$$f(\theta_{jlk}\mid\cdots) \propto f(\theta_{jlk}) \times \begin{cases} \displaystyle \prod_{i=1}^{n_j} K(x_{ji}|\theta_{jlk})^{\mathbb{I}(\delta_{ji}=e_l,\, d_{ji}=k)} \prod_{i=1}^{n_l} K(x_{li}|\theta_{jlk})^{\mathbb{I}(\delta_{li}=e_j,\, d_{li}=k)}, & l>j,\\[6pt] \displaystyle \prod_{i=1}^{n_j} K(x_{ji}|\theta_{jjk})^{\mathbb{I}(\delta_{ji}=e_j,\, d_{ji}=k)}, & l=j. \end{cases}$$

If $N^{*} > d^{*}$, we sample the additional locations $\theta_{jl,d^{*}+1},\ldots,\theta_{jl,N^{*}}$ independently from the prior.

2. Here we sample the allocation variables $d_{ji}$ and the mixture component indicator variables $\delta_{ji}$ as a block. For $j=1,\ldots,m$ and $i=1,\ldots,n_j$, we have

$$\mathbb{P}(d_{ji}=k, \delta_{ji}=e_l \mid N_{ji}=r, \cdots) \propto p_{jl}\, K(x_{ji}|\theta_{jlk})\, \mathbb{I}(l\le m)\, \mathbb{I}(k\le r).$$

3. The slice variables $N_{ji}$ have full conditional distributions given by

$$\mathbb{P}(N_{ji}=r \mid \delta_{ji}=e_l, d_{ji}=l, \cdots) \propto (1-\lambda_{jl})^{r}\, \mathbb{I}(r\ge l),$$

which are truncated geometric distributions over the set $\{l, l+1, \ldots\}$.

4. The full conditional, for $j=1,\ldots,m$, of the selection probabilities $p_j = (p_{j1},\ldots,p_{jm})$, under a Dirichlet prior $f(p_j\mid a_j) \propto \prod_{l=1}^{m} p_{jl}^{a_{jl}-1}$ with hyperparameter $a_j = (a_{j1},\ldots,a_{jm})$, is Dirichlet:

$$f(p_j\mid\cdots) \propto \prod_{l=1}^{m} p_{jl}^{\,a_{jl} + \sum_{i=1}^{n_j} \mathbb{I}(\delta_{ji}=e_l) - 1}.$$

5. Here we update the geometric probabilities $(\lambda_{jl})$ of the GSB measures. For $1\le j\le l\le m$, it is that

$$f(\lambda_{jl}\mid\cdots) \propto f(\lambda_{jl}) \times \begin{cases} \displaystyle \prod_{i=1}^{n_j} \left\{\lambda_{jl}^2 (1-\lambda_{jl})^{N_{ji}-1}\right\}^{\mathbb{I}(\delta_{ji}=e_l)} \prod_{i=1}^{n_l} \left\{\lambda_{jl}^2 (1-\lambda_{jl})^{N_{li}-1}\right\}^{\mathbb{I}(\delta_{li}=e_j)}, & l>j,\\[6pt] \displaystyle \prod_{i=1}^{n_j} \left\{\lambda_{jj}^2 (1-\lambda_{jj})^{N_{ji}-1}\right\}^{\mathbb{I}(\delta_{ji}=e_j)}, & l=j. \end{cases}$$
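The following is a sketch of steps 2 and 3 for a single observation (ours; a normal kernel is assumed, with theta_j[l][k-1] holding $\theta_{jlk}$): the blocked draw of $(d_{ji},\delta_{ji})$ is a single categorical draw over at most $m\cdot r$ cells, and $N_{ji}$ is a geometric draw shifted so that it is supported on $\{d_{ji}, d_{ji}+1, \ldots\}$.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def update_d_delta(x_ji, r, p_j, theta_j):
    """Step 2: blocked draw of (d_ji, delta_ji) given the slice value N_ji = r."""
    m = len(p_j)
    probs = np.array([[p_j[l] * norm.pdf(x_ji, loc=theta_j[l][k], scale=1.0)
                       for k in range(r)] for l in range(m)])   # unnormalized P(d_ji = k+1, delta_ji = e_{l+1})
    probs /= probs.sum()
    flat = rng.choice(m * r, p=probs.ravel())
    l, k = divmod(flat, r)
    return k + 1, l + 1                                         # 1-based (d_ji, l) with delta_ji = e_l

def update_N(d_ji, lam_jl):
    """Step 3: truncated geometric; a Geometric(lam) draw on {1,2,...} shifted to start at d_ji."""
    return d_ji + rng.geometric(lam_jl) - 1
```

Both draws live on finite or closed-form supports, which is the source of the speed advantage quantified in the complexity section below.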
To complete the model, we assign priors to the geometric probabilities. For a fair comparison between the two models, we apply $\lambda_{jl} = (1+c_{jl})^{-1}$-transformed priors. So, by placing gamma priors $c_{jl}\sim\mathcal{G}(a_{jl}, b_{jl})$ over the concentration masses $c_{jl}$ of the PDDP model, we have

$$f(\lambda_{jl}) = \mathcal{TG}(\lambda_{jl}\mid a_{jl}, b_{jl}) \propto \lambda_{jl}^{-(a_{jl}+1)}\, e^{-b_{jl}/\lambda_{jl}}\, (1-\lambda_{jl})^{a_{jl}-1}\, \mathbb{I}(0<\lambda_{jl}<1).$$

In the appendix, we give the full conditionals for the $\lambda_{jl}$'s, their corresponding embedded Gibbs sampling schemes, and the sampling algorithm for the concentration masses.

7. The Complexity of the rPDDP and PDGSBP Samplers.

The main difference between the two samplers in terms of execution time comes from the blocked sampling of the clustering and mixture indicator variables $d_{ji}$ and $\delta_{ji}$.

The rPDDP model: The state space of the variable $(d_{ji}, \delta_{ji})$, conditionally on the slice variable $u_{ji}$, is

$$(d_{ji}, \delta_{ji})(\Omega) = \bigcup_{l=1}^{m} \big( A_{w_{jl}}(u_{ji}) \times \{e_l\} \big),$$

where $A_{w_{jl}}(u_{ji}) = \{r\in\mathbb{N} : u_{ji} < w_{jlr}\}$ is the a.s. finite slice set corresponding to the observation $x_{ji}$. At each iteration of the Gibbs sampler, we have $m(m+1)/2$ vectors of stick-breaking weights $w_{jl}$, each of length $N_{jl}^{*}$, where $N_{jl}^{*} \sim 1 + \mathrm{Poisson}(-c_{jl}\log u_{jl}^{*})$, with $c_{jl}$ the concentration parameter of the Dirichlet process $P_{jl}$ and $u_{jl}^{*}$ the minimum of the slice variables in densities $f_j$ and $f_l$. Algorithm 1 gives the blocked sampling procedure for the clustering and mixture indicator variables. An illustration of the effect of the slice variable $u_{ji}$ is given in Figure 1(a).
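For a rough feel of the difference, here is a back-of-the-envelope sketch (ours; the values of $c_{jl}$, the minimum slice value and $\lambda$ are hypothetical, not results from the paper): the rPDDP sampler must instantiate about $N_{jl}^{*}\sim 1+\mathrm{Poisson}(-c_{jl}\log u_{jl}^{*})$ stick-breaking weights for each of the $m(m+1)/2$ measures, while the PDGSBP sampler only needs atoms up to $N^{*}=\max_{j,i}N_{ji}$, its geometric weights being available in closed form.

```python
import numpy as np

rng = np.random.default_rng(5)

m, c, u_star = 5, 2.0, 1e-3                     # hypothetical: c_jl = 2 for all pairs, minimum slice value 1e-3
pairs = m * (m + 1) // 2
N_star_dp = 1 + rng.poisson(-c * np.log(u_star), size=pairs)    # stick lengths the rPDDP sampler must realize
print("rPDDP stick-breaking weights to draw:", N_star_dp.sum())

lam, n_obs = 0.3, 500
N_gsb = 1 + rng.negative_binomial(2, lam, size=n_obs)           # PDGSBP slice variables N_ji
print("PDGSBP truncation level N* = max N_ji:", N_gsb.max())    # weights lam*(1-lam)^(k-1) need no sampling
```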
