The Annals of Applied Statistics
2013, Vol. 7, No. 4, 2361–2383
DOI: 10.1214/13-AOAS675
© Institute of Mathematical Statistics, 2013

A SEMIPARAMETRIC APPROACH TO MIXED OUTCOME LATENT VARIABLE MODELS: ESTIMATING THE ASSOCIATION BETWEEN COGNITION AND REGIONAL BRAIN VOLUMES¹

By Jonathan Gruhl∗, Elena A. Erosheva∗ and Paul K. Crane†
University of Washington∗ and Harborview Medical Center†

Multivariate data that combine binary, categorical, count and continuous outcomes are common in the social and health sciences. We propose a semiparametric Bayesian latent variable model for multivariate data of arbitrary type that does not require specification of conditional distributions. Drawing on the extended rank likelihood method by Hoff [Ann. Appl. Stat. 1 (2007) 265–283], we develop a semiparametric approach for latent variable modeling with mixed outcomes and propose associated Markov chain Monte Carlo estimation methods. Motivated by cognitive testing data, we focus on bifactor models, a special case of factor analysis. We employ our semiparametric Bayesian latent variable model to investigate the association between cognitive outcomes and MRI-measured regional brain volumes.

Received March 2012; revised July 2013.
¹Supported in part by Grant R01 AG 029672 from the National Institute on Aging. Data collection was supported by Grant P01 AG12435.
Key words and phrases. Latent variable model, Bayesian hierarchical model, extended rank likelihood, cognitive outcomes.
This is an electronic reprint of the original article published by the Institute of Mathematical Statistics in The Annals of Applied Statistics, 2013, Vol. 7, No. 4, 2361–2383. This reprint differs from the original in pagination and typographic detail.

1. Introduction. Multivariate outcomes are common in medical and social studies. Latent variable models provide means for studying the interdependencies among multiple outcomes perceived as measures of a common concept or concepts. These outcomes in many cases may be of mixed types in the sense that some may be binary, others may be counts, and yet others may be continuous. Most common latent variable models, however, have been developed for outcomes of the same type. For example, standard factor analysis models [Bartholomew, Knott and Moustaki (2011)] assume normally distributed outcomes, item response theory models [van der Linden and Hambleton (1997)] are typically applied to binary responses, and graded response [Samejima (1969)] and generalized partial credit models [Muraki (1992)] have been developed specifically for ordered categorical data.

Existing research on latent variable models for mixed outcomes is largely focused on two parametric approaches. The first approach is to specify a different generalized linear model for each outcome that best suits its type (e.g., binary, count, ordered categorical) and to include shared latent variables as predictors that induce dependence among the outcomes. Sammel, Ryan and Legler (1997) and Moustaki and Knott (2000) developed this approach, referred to as generalized latent trait modeling, employing the EM algorithm for estimation. In a Bayesian framework, Dunson (2003) extended the generalized latent trait models to allow for repeated measurements, serial correlations in the latent variables and individual-specific response behavior.
The second approach to analyzing mixed discrete and continuous outcomes with latent variables is the underlying latent response approach, where observed mixed outcomes are assumed to have underlying latent responses that are continuous and normally distributed. Introduction of the continuous latent responses enables one to proceed with the analysis as one might for any multivariate normal data. To map the underlying latent responses to observed mixed outcomes, one must estimate threshold parameters. In this context, Shi and Lee (1998) employed Bayesian estimation for factor analysis with polytomous and continuous outcomes. However, as noted by Dunson (2003), the underlying latent response approach is limited in that some observed outcome types such as counts may not be easily linked to underlying continuous responses.

Generalized latent trait models can be extended to account for additional types of outcomes [Skrondal and Rabe-Hesketh (2004)], including censored and duration outcomes. However, accommodating the many possible types of outcomes that one may encounter in practice may be time-consuming, susceptible to misspecification and of little interest in its own right.

Our motivating example is a data set from a large multicenter study called the Subcortical Ischemic Vascular Dementia (SIVD) Program Project Grant [Chui et al. (2006)]. The SIVD study collects serial imaging and neuropsychological data from a large group of study participants. One major study goal is to further elucidate relationships between brain structure (as measured by MRI imaging) and function (as measured by performance on neuropsychological tests). In particular, investigators were especially interested in cerebrovascular disease as manifested on MRI. Thus, we focus our analysis on one particular cognitive domain, namely, executive functioning, that is thought to be particularly susceptible to cerebrovascular disease [Hachinski et al. (2006)]. Executive functioning refers to higher order cognitive tasks ("executive" tasks) such as working memory, set shifting, inhibition and other frontal lobe-mediated functions. The SIVD study follows individuals longitudinally until death, collecting results from repeated neuropsychological testing and brain imaging. In this paper, we are concerned with relating individual levels of executive functioning at the initial SIVD study visit to the concurrent MRI-measured amount of white matter hyperintensities (WMH) located in the frontal lobe of the brain. Executive functioning capabilities may be particularly sensitive to white matter hyperintensities in this region [Kuczynski et al. (2010)].

The SIVD neuropsychological battery includes 21 distinct indicators that can be conceptualized as measuring some facet of executive functioning. We refer to the executive functioning-related outcomes as "indicators," as they include some elements that are scales by themselves and other elements that are subsets of scales. Observed responses to the SIVD neuropsychological test items are diverse in their types. In addition to binary and ordered categorical outcomes, the SIVD neuropsychological indicators include count as well as censored count data.

In this paper, we develop a semiparametric approach to mixed outcome latent variable models that avoids specification of outcome conditional distributions given the latent variables.
Following the extended rank likelihood approach of Hoff (2007), we start by assuming the existence of continuous latent responses underlying each observed outcome. We then make use of the fact that the ordering of the underlying latent responses is assumed to be consistent with the ordering of the observed outcomes. This approach is similar to that of Shi and Lee (1998) but does not require estimating unknown thresholds. When the data are continuous, our approach is analogous to the use of a rank likelihood [Pettitt (1982)]. When the data are discrete, our approach relies on the assumption that the ordering of the latent responses is consistent with the partial ordering of the observed outcomes. Hoff (2007) introduced this general approach for estimating parameters of a semiparametric Gaussian copula model with arbitrary marginal distributions and designated the resulting likelihood as the extended rank likelihood. Dobra and Lenkoski (2011) applied the extended rank likelihood methods to the estimation of graphical models for multivariate mixed outcomes.

Motivated by SIVD cognitive testing data, we specify a bifactor latent structure for the semiparametric latent variable model. The bifactor structure assumes existence of a general factor and some secondary factors that account for residual dependency among groups of items [Holzinger and Swineford (1937), Reise, Morizot and Hays (2007)]. The bifactor model is a useful tool for modeling the neuropsychological battery used in the SIVD study, as it retains a single underlying executive functioning factor while accounting for local dependencies among groups of related items. The original idea for this work was presented earlier by Gruhl, Erosheva and Crane (2010, 2011). Murray et al. (2013) recently proposed a closely related factor analytic model for mixed data.

The remainder of this paper is organized as follows. We review the semiparametric Gaussian copula model and introduce the new semiparametric latent variable model in Section 2. We develop Bayesian estimation approaches for the semiparametric latent variable model in Section 3. We extend the model hierarchically to include covariates in Section 4. In Section 5 we briefly demonstrate the performance of the model using simulated data before focusing on the analysis of the SIVD data.

2. Semiparametric latent variable model for mixed outcomes.

2.1. Model formulation. Let $i = 1, \ldots, I$ denote the $i$th participant, and let $j = 1, \ldots, J$ denote the $j$th outcome. Let $y_{ij}$ denote the observed response of participant $i$ on outcome $j$ with marginal distribution $F_j$; then $y_{ij}$ can be represented as $y_{ij} = F_j^{-1}(u_{ij})$, where $u_{ij}$ is a uniform $(0,1)$ random variable. An equivalent representation is $y_{ij} = F_j^{-1}[\Phi(z_{ij})]$, where $\Phi(\cdot)$ denotes the normal CDF and $z_{ij}$ is distributed standard normal. The unobserved variables $z_{ij}$ are latent responses underlying each observed response $y_{ij}$. Assuming that the correlation of $z_{ij}$ with $z_{ij'}$, $1 \le j < j' \le J$, is specified by the $J \times J$ correlation matrix $C$, the Gaussian copula model is

(1)  $z_1, \ldots, z_I \mid C \sim \text{i.i.d. } N(0, C)$,

(2)  $y_{ij} = F_j^{-1}[\Phi(z_{ij})]$.

Here, $z_i$ is the $J$-length vector of latent responses $z_{ij}$ for participant $i$.
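To make the copula representation concrete, the following minimal sketch (ours, not from the paper) draws latent responses from $N(0, C)$ and pushes them through marginals of different types. The $3 \times 3$ correlation matrix, the Poisson, Bernoulli and gamma marginals, and the sample size are all assumptions chosen purely for illustration.

```python
# Minimal sketch (ours, not from the paper): simulate mixed-type outcomes from
# the Gaussian copula representation y_ij = F_j^{-1}(Phi(z_ij)) of equations (1)-(2).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
I = 500                                    # number of participants (illustrative)
C = np.array([[1.0, 0.5, 0.3],             # assumed 3 x 3 correlation matrix
              [0.5, 1.0, 0.4],
              [0.3, 0.4, 1.0]])

Z = rng.multivariate_normal(np.zeros(3), C, size=I)   # latent responses z_i ~ N(0, C)
U = stats.norm.cdf(Z)                                  # u_ij = Phi(z_ij)

# Hypothetical marginals of different types: count, binary, skewed continuous
y_count = stats.poisson(mu=4).ppf(U[:, 0])
y_binary = stats.bernoulli(p=0.6).ppf(U[:, 1])
y_cont = stats.gamma(a=2, scale=1.5).ppf(U[:, 2])
Y = np.column_stack([y_count, y_binary, y_cont])
```

Any marginal with an available quantile function could be substituted for $F_j^{-1}$ here; the dependence among outcomes is carried entirely by $C$.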
In some analyses, the primary focus is on the estimation of the correlation matrix $C$ and not the estimation of the marginal distributions $F_1, \ldots, F_J$. If the latent responses $z_{ij}$ were known, estimation of $C$ could proceed using standard methods. Although the latent responses are unknown, Hoff (2007) noted that we do have rank information about the latent responses through the observed responses because, within an outcome $j$, $y_{i'j} < y_{ij}$ implies $z_{i'j} < z_{ij}$. If we denote the full set of latent responses by $Z = (z_1, \ldots, z_I)^T$ and the full set of observed responses by $Y = (y_1, \ldots, y_I)^T$, then $Z \in D(Y)$, where

(3)  $D(Y) = \bigl\{ Z \in \mathbb{R}^{I \times J} : \max_k\{z_{kj} : y_{kj} < y_{ij}\} < z_{ij} < \min_k\{z_{kj} : y_{ij} < y_{kj}\} \ \forall i, j \bigr\}$.

One can then construct a likelihood for $C$ that does not depend on the specification of the marginal distributions $F_1, \ldots, F_J$ by focusing on the probability of the event $Z \in D(Y)$:

(4)  $\Pr(Z \in D(Y) \mid C, F_1, \ldots, F_J) = \int_{D(Y)} p(Z \mid C)\, dZ = \Pr(Z \in D(Y) \mid C)$.

Equation (3) enables the following decomposition of the density of $Y$:

$p(Y \mid C, F_1, \ldots, F_J) = p(Y, Z \in D(Y) \mid C, F_1, \ldots, F_J)$
$= \Pr(Z \in D(Y) \mid C, F_1, \ldots, F_J) \times p(Y \mid Z \in D(Y), C, F_1, \ldots, F_J)$
$= \Pr(Z \in D(Y) \mid C) \times p(Y \mid Z \in D(Y), C, F_1, \ldots, F_J)$.

This decomposition uses the fact that the probability of the event $Z \in D(Y)$ does not depend on the marginal distributions $F_1, \ldots, F_J$ and that the event $Z \in D(Y)$ occurs whenever $Y$ is observed. By using $\Pr(Z \in D(Y) \mid C)$ as the likelihood function, the dependence structure of $Y$ can be estimated through $C$ without any knowledge or assumptions about the marginal distributions. More details on the semiparametric Gaussian copula model can be found in Hoff (2007, 2009).

In the context of latent variable modeling, the main interest is not just in estimating the correlations among observed variables, $C$, but in characterizing the interdependencies in multivariate observed responses through a latent variable model. Latent variable models, in turn, place constraints on the matrix of correlations among the observed responses and seek a more parsimonious description of the dependence structure. Factor analysis is the most common type of latent variable model with continuous latent variables and continuous outcomes. To develop a semiparametric approach for factor analysis with mixed outcomes, assume $Q$ factors, let $\eta_i$ be a vector of factor scores for individual $i$ and $H = (\eta_1, \ldots, \eta_I)^T$ be the $I \times Q$ factor matrix. Let $\Lambda$ denote the $J \times Q$ matrix of factor loadings. We define our semiparametric latent variable model as

(5)  $\eta_i \sim N(0, I_Q)$,

(6)  $z_i \mid \Lambda, \eta_i \sim N(\Lambda \eta_i, I_J)$,

(7)  $y_{ij} = g_j(z_{ij})$.

Here, we define $g_j(z_{ij}) = F_j^{-1}\bigl(\Phi\bigl[z_{ij} / \sqrt{1 + \lambda_j^T \lambda_j}\,\bigr]\bigr)$, where $\lambda_j$ denotes the $j$th row of $\Lambda$ and the marginal distribution $F_j$ remains unspecified. Note that the functions $g_j(\cdot)$ are nondecreasing and preserve the orderings. The model given by equations (5)–(7) does not rely on the unrestricted correlation matrix $C$ as does the Gaussian copula model. Assuming that a factor analytic model is appropriate for the data, it constrains the dependencies among the elements of $z_i$ to be consistent with the functional form of $I_J + \Lambda\Lambda^T$. As a result, the proposed semiparametric latent variable model is a structured case of the semiparametric Gaussian copula model and can be viewed as a semiparametric form of copula structure analysis [Klüppelberg and Kuhn (2009)].
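As a concrete illustration of equations (5)–(7), the short sketch below (our own, not the authors' code) simulates from the factor structure and maps one outcome to the observed scale through $g_j$; the $4 \times 2$ loading matrix and the Poisson marginal are hypothetical. Note the division by $\sqrt{1 + \lambda_j^T\lambda_j}$, which standardizes the latent response before applying $\Phi$.

```python
# Sketch of equations (5)-(7) with an assumed 4 x 2 loading matrix and a
# hypothetical Poisson marginal for outcome j = 0; not the SIVD data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
I = 300
Lambda = np.array([[0.9, 0.4],
                   [0.8, 0.0],
                   [0.7, 0.5],
                   [0.6, 0.0]])
J, Q = Lambda.shape

eta = rng.normal(size=(I, Q))                    # (5): eta_i ~ N(0, I_Q)
Z = eta @ Lambda.T + rng.normal(size=(I, J))     # (6): z_i ~ N(Lambda eta_i, I_J)

# (7): y_ij = g_j(z_ij) = F_j^{-1}(Phi(z_ij / sqrt(1 + lambda_j' lambda_j)))
j = 0
u = stats.norm.cdf(Z[:, j] / np.sqrt(1.0 + Lambda[j] @ Lambda[j]))
y_j = stats.poisson(mu=3).ppf(u)                 # hypothetical count marginal F_j
```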
The general framework of the semiparametric latent variable model given by equations (5)–(7) can be used for any special case of factor analysis. In this paper, motivated by the substantive background information on the SIVD cognitive testing data, we focus on bifactor models. We define bifactor models as having a specific structure on the loading matrix, $\Lambda$, where each outcome loads on the primary factor and may load on one or more of the secondary factors [Reise, Morizot and Hays (2007)]. Most commonly, bifactor models are applied such that an outcome loads on at most one secondary factor.

2.2. Model identification. The lack of identifiability of factor analysis models that is due to rotational invariance is well known [Jöreskog (1969), Dunn (1973), Jennrich (1978), Anderson (2003)]. If we define new factor loadings and new factor scores by $\tilde{\Lambda} = \Lambda T$ and $\tilde{\eta}_i = T^{-1}\eta_i$, where $T$ is an orthonormal $Q \times Q$ matrix, then the model

(8)  $z_i \mid \Lambda, \eta_i \sim N(\tilde{\Lambda}\tilde{\eta}_i, I_J)$

is indistinguishable from the model in equation (6). In the case where the covariance of $\eta_i$ is not restricted to the identity matrix, any nonsingular $Q \times Q$ matrix $T$ results in the same indeterminacy. In this more general case, we must place $Q^2$ constraints to prevent this rotational invariance. When we restrict the covariance of $\eta_i$ to the identity matrix, this restriction places $\frac{1}{2}Q(Q+1)$ constraints on the model. We are then left with $\frac{1}{2}Q(Q-1)$ additional constraints to place on the model. We may satisfy this requirement by assuming a bifactor structure with $\frac{1}{2}Q(Q-1)$ zeros in the matrix of loadings $\Lambda$ [Anderson (2003)]. While these restrictions may resolve rotational invariance, the issue of reflection invariance typically remains [Dunn (1973), Jennrich (1978)]. Reflection invariance results from the fact that the signs of the loadings in any column of $\Lambda$ may be switched. Thus, if $D$ is a diagonal matrix of 1's and $-1$'s precipitating the sign changes, $H\Lambda^T = HDD\Lambda^T = \tilde{H}\tilde{\Lambda}^T$.

Geweke and Zhou (1996) proposed an approach that addresses identifiability of factor models by constraining all upper diagonal elements in the matrix of factor loadings to zero and requiring all diagonal elements to be positive. This approach has been used successfully in Bayesian exploratory factor analysis [Ghosh and Dunson (2008, 2009), Lopes and West (2004)], but cannot be used with bifactor models because the placement of structural zeros in most cases will be incompatible with fixing all upper diagonal elements of the matrix of loadings to zero. Congdon (2003) and Congdon (2006) suggested the use of a prior that would place additional constraints on the signs of some of the factor loadings to resolve the issue of reflection invariance. However, it has been shown that different choices of parameters for constraint placement could have a serious impact on model fit in complex factor models [Millsap (2001)]. Thus, in our work, we rely on the relabeling algorithm proposed by Erosheva and Curtis (2011) to resolve reflection invariance. This algorithm relies on a decision-theoretic approach and resolves the sign-switching behavior in Bayesian factor analysis in a similar fashion to the relabeling algorithm introduced to address the label-switching problem in mixture models [Stephens (2000)]. It does not require making preferential choices among variables for constraint placement.
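The sketch below (illustrative only; the six-outcome pattern is made up and is not the SIVD battery) shows a bifactor loading mask with one general and two secondary factors, and verifies numerically that flipping the sign of a loading column together with the corresponding factor scores leaves $H\Lambda^T$ unchanged, which is the reflection invariance discussed above.

```python
# Illustrative sketch (not the SIVD battery): a bifactor loading pattern with one
# general and two secondary factors, and a check that flipping the sign of a
# loading column together with its factor scores leaves H Lambda' unchanged.
import numpy as np

rng = np.random.default_rng(2)
mask = np.array([[1, 1, 0],      # rows: 6 hypothetical outcomes
                 [1, 1, 0],      # columns: [general, secondary 1, secondary 2]
                 [1, 1, 0],
                 [1, 0, 1],
                 [1, 0, 1],
                 [1, 0, 1]])
Lambda = mask * rng.normal(0.8, 0.2, size=mask.shape)   # structural zeros stay 0
H = rng.normal(size=(100, 3))                           # factor scores

D = np.diag([1, -1, 1])                                 # reflect the second factor
assert np.allclose(H @ Lambda.T, (H @ D) @ (Lambda @ D).T)
```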
In the semiparametric latent variable model, unlike the standard factor analysis model, specific means and variances are not identifiable. Let

(9)  $\tilde{z}_{ij} = \mu_j + \sigma_j z_{ij}$,

where $\mu_j$ and $\sigma_j$ are the specific mean and variance for item $j$. Moreover, if $\tilde{Z} \in \mathbb{R}^{I \times J}$ denotes the matrix of elements $\tilde{z}_{ij}$ and

(10)  $\tilde{D}(Y) = \bigl\{\tilde{Z} : \max\{\tilde{z}_{kj} : y_{kj} < y_{ij}\} < \tilde{z}_{ij} < \min\{\tilde{z}_{kj} : y_{ij} < y_{kj}\}\bigr\}$,

then

(11)  $\Pr(\tilde{Z} \in \tilde{D}(Y) \mid \Lambda, H, \mu, \Sigma) = \Pr(Z \in D(Y) \mid \Lambda, H)$.

Thus, shifts in location and scale of the latent responses will not alter the probability of belonging to the set of feasible latent response values implied by orderings of the observed responses. As such, we set the specific means at $\mu = 0$ and the specific variances at $\Sigma = I_J$.

3. Estimation. We employ a parameter expansion approach [Liu, Rubin and Wu (1998), Liu and Wu (1999)] for Markov chain Monte Carlo (MCMC) sampling, following the work of Ghosh and Dunson (2009) on efficient computation for Bayesian factor analysis. We found that this method outperforms a Gibbs sampling algorithm with standard semi-conjugate priors for factor analysis [Shi and Lee (1998), Ghosh and Dunson (2009)] in that it reduces autocorrelation among the MCMC draws and results in greater effective sample sizes.

3.1. Parameter expansion approach. The central idea behind the parameter expansion approach, using the terminology of Ghosh and Dunson (2009), is to start with a working model that is an overparameterized version of the initial inferential model. After proceeding through MCMC sampling, a transformation is used to relate the draws from the working model to draws from the inferential model. For our application, the overparameterized model is

(12)  $z_i^* \sim N(\Lambda^* \eta_i^*, \Sigma)$,

(13)  $\eta_i^* \sim N(0, \Psi)$,

where $\Sigma$ and $\Psi$ are diagonal matrices that are no longer restricted to identity matrices. The latent responses $z_i^*$, the latent variables $\eta_i^*$ and the loadings $\Lambda^*$ are unidentified in this working model. The transformations from the working model to the inferential model are then specified as

(14)  $\eta_i = \Psi^{-1/2}\eta_i^*$,  $z_i = \Sigma^{-1/2} z_i^*$,  $\Lambda = \Sigma^{-1/2}\Lambda^*\Psi^{1/2}$.

To sample from the working model, we must specify priors for the diagonal elements of $\Psi$ and $\Sigma$ as well as for $\Lambda^*$. We specify these priors in terms of the precisions $\psi_q^{-2}$ and $\sigma_j^{-2}$. In addition, we denote by $\lambda_j^{*\prime}$ the nonzero elements of the $j$th row of $\Lambda^*$. The prior on $\lambda_j^{\prime}$ is then induced through the priors on $\psi_q^{-2}$, $\sigma_j^{-2}$ and $\lambda_j^{*\prime}$, rather than being specified directly. Our priors are

(15)  $\psi_q^{-2} \sim \text{Gamma}(\phi_\psi, \nu_\psi)$,  $\sigma_j^{-2} \sim \text{Gamma}(\phi_\sigma, \nu_\sigma)$,  $\lambda_j^{*\prime} \sim N(m_{\lambda_j^{*\prime}}, S_{\lambda_j^{*\prime}})$.

The structural zeros in the matrix of loadings $\Lambda$ are specified in accordance with our substantive understanding of the research problem at hand. However, we must have enough zeros so that the model can be identified, since we rely on the placement of these structural zeros to resolve rotational invariance [Jöreskog (1969), Dunn (1973), Jennrich (1978), Loken (2005)]. Formally, we specify the prior for these structural zero elements as

(16)  $\lambda_{jq}^* \sim \delta_0$,

where $\delta_0$ is a distribution with its point mass concentrated at 0. We estimate loadings with no additional constraints on their signs. As discussed in Section 2, we then deal with potential multiple modes of the posterior that are due to reflection invariance by applying the relabeling algorithm proposed by Erosheva and Curtis (2011).
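A minimal sketch of the postprocessing transformation in equation (14) is shown below; it is our own illustration, and the function name and the convention of passing $\Sigma$ and $\Psi$ as their diagonal vectors are assumptions rather than the authors' implementation.

```python
# Minimal sketch of the postprocessing transformation in equation (14); the
# function name and passing Sigma and Psi as their diagonals are our choices.
import numpy as np

def transform_draw(Lambda_star, H_star, Z_star, sigma2, psi2):
    """Map one working-model draw to the inferential parameterization.

    sigma2 (length J) and psi2 (length Q) are the diagonals of Sigma and Psi.
    """
    H = H_star / np.sqrt(psi2)           # eta_i = Psi^{-1/2} eta_i^*
    Z = Z_star / np.sqrt(sigma2)         # z_i = Sigma^{-1/2} z_i^*
    Lambda = (Lambda_star / np.sqrt(sigma2)[:, None]) * np.sqrt(psi2)  # Sigma^{-1/2} Lambda^* Psi^{1/2}
    return Lambda, H, Z
```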
We now develop the parameter-expanded Gibbs algorithm for sampling factors $H$ and loadings $\Lambda$. Because the extended rank likelihood $\Pr(Z^* \in D(Y) \mid \Lambda^*, H^*, \Sigma)$ involves a complicated integral, any expressions involving it will be difficult to compute. We avoid having to compute this integral by drawing from the joint posterior of $(Z^*, H^*, \Lambda^*, \Sigma, \Psi)$ via Gibbs sampling. Given $Z^* = z^*$ and $Z^* \in D(Y)$, the full conditional density of $\Lambda^*$ can be written as

$p(\Lambda^* \mid H^*, Z^* = z^*, Z^* \in D(Y), \Sigma) = p(\Lambda^* \mid H^*, Z^* = z^*, \Sigma)$

because the current draw values $Z^* = z^*$ imply $Z^* \in D(Y)$. A similar simplification may be made with the full conditional density of $H^*$. Given $\Lambda^*$, $H^*$, $Z^* \in D(Y)$, $\Sigma$ and $Z^*_{(-i)(-j)}$, the full conditional density of $z_{ij}$ is proportional to a normal density with mean $(\lambda_j^*)^T \eta_i^*$ and variance $\sigma_j^2$ that is restricted to the region specified by $D(Y)$. Our Gibbs sampling procedure for the working model proceeds according to the following steps:

1. Draw latent responses $Z^*$. For each $i$ and $j$, sample $z_{ij}^*$ from a truncated normal distribution according to

(17)  $z_{ij}^* \sim \mathrm{TN}_{(z_l^*, z_u^*)}\bigl((\lambda_j^*)^T \eta_i^*, \sigma_j^2\bigr)$,

where TN denotes truncated normal and $z_l^*$, $z_u^*$ define the lower and upper truncation points:

(18)  $z_l^* = \max_k\{z_{kj}^* : y_{kj} < y_{ij}\}$,

(19)  $z_u^* = \min_k\{z_{kj}^* : y_{kj} > y_{ij}\}$.

2. Draw latent variables $H^*$. For each $i$, draw directly from the full conditional distribution for $\eta_i^*$ as follows:

(20)  $\eta_i^* \sim N\bigl((\Psi^{-1} + (\Lambda^*)^T\Sigma^{-1}\Lambda^*)^{-1}(\Lambda^*)^T\Sigma^{-1}z_i^*,\ (\Psi^{-1} + (\Lambda^*)^T\Sigma^{-1}\Lambda^*)^{-1}\bigr)$.

3. Draw loadings $\Lambda^*$. For each $j$, draw from the full conditional distribution for the nonzero loadings $\lambda_j^{*\prime}$:

(21)  $\lambda_j^{*\prime} \sim N\bigl((S_{\lambda_j^\prime}^{-1} + \sigma_j^{-2}(H_j^{*\prime})^T H_j^{*\prime})^{-1}(S_{\lambda_j^\prime}^{-1}m_{\lambda_j^\prime} + \sigma_j^{-2}(H_j^{*\prime})^T z_j^*),\ (S_{\lambda_j^\prime}^{-1} + \sigma_j^{-2}(H_j^{*\prime})^T H_j^{*\prime})^{-1}\bigr)$,

where $H_j^{*\prime}$ is a matrix comprised of the columns of $H^*$ for which there are nonzero loadings in $\lambda_j$.

4. Draw inverse covariance matrix $\Psi^{-1}$. For each $q$, draw the diagonal element $\psi_q^{-2}$ of $\Psi^{-1}$ from the full conditional distribution:

(22)  $\psi_q^{-2} \sim \text{Gamma}\bigl(\phi_\psi + I/2,\ \nu_\psi + \tfrac{1}{2}\textstyle\sum_i \eta_{iq}^2\bigr)$,

where $I$ is the number of participants.

5. Draw inverse covariance matrix $\Sigma^{-1}$. For each $j$, draw the diagonal element $\sigma_j^{-2}$ of $\Sigma^{-1}$ from the full conditional distribution:

(23)  $\sigma_j^{-2} \sim \text{Gamma}\bigl(\phi_\sigma + I/2,\ \nu_\sigma + \tfrac{1}{2}(z_j - H\lambda_j)^T(z_j - H\lambda_j)\bigr)$.

After discarding some number of initial draws as burn-in, we transform the remaining draws using equations (14) as part of a postprocessing step to obtain posterior draws from our inferential model. The only remaining steps are to apply the relabeling algorithm of Erosheva and Curtis (2011), assess convergence and calculate posterior summaries for the parameters in the inferential model.
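The five steps above can be sketched compactly as a single Gibbs sweep over the working model. The code below is our own illustration, not the authors' implementation: for brevity it treats all loadings as free (no structural zeros), assumes a common $N(m_0, S_0)$ prior for each loading, and uses scipy's truncnorm for the constrained draws in equations (17)–(19); the argument and hyperparameter names are ours.

```python
# Compact sketch of one Gibbs sweep (steps 1-5) for the working model, assuming
# all loadings are free (no structural zeros); an illustration, not the paper's code.
import numpy as np
from scipy.stats import truncnorm

def gibbs_sweep(Y, Z, H, Lam, sigma2, psi2, rng,
                m0=0.0, S0=1.0, phi_psi=2.0, nu_psi=0.5, phi_sig=2.0, nu_sig=1.0):
    I, J = Y.shape
    Q = Lam.shape[1]

    # 1. Draw latent responses Z* from truncated normals (equations 17-19).
    for j in range(J):
        mean_j = H @ Lam[j]                      # (lambda_j^*)' eta_i^*
        sd_j = np.sqrt(sigma2[j])
        for i in range(I):
            below = Z[Y[:, j] < Y[i, j], j]
            above = Z[Y[:, j] > Y[i, j], j]
            lo = below.max() if below.size else -np.inf
            hi = above.min() if above.size else np.inf
            a, b = (lo - mean_j[i]) / sd_j, (hi - mean_j[i]) / sd_j
            Z[i, j] = truncnorm.rvs(a, b, loc=mean_j[i], scale=sd_j, random_state=rng)

    # 2. Draw latent variables H* (equation 20).
    Prec = np.diag(1.0 / psi2) + Lam.T @ np.diag(1.0 / sigma2) @ Lam
    Cov = np.linalg.inv(Prec)
    Mean = Z @ np.diag(1.0 / sigma2) @ Lam @ Cov
    H[:] = Mean + rng.multivariate_normal(np.zeros(Q), Cov, size=I)

    # 3. Draw loadings Lambda* row by row (equation 21).
    for j in range(J):
        V = np.linalg.inv(np.eye(Q) / S0 + (H.T @ H) / sigma2[j])
        m = V @ (np.full(Q, m0) / S0 + H.T @ Z[:, j] / sigma2[j])
        Lam[j] = rng.multivariate_normal(m, V)

    # 4.-5. Draw the precisions psi_q^{-2} and sigma_j^{-2} (equations 22-23).
    for q in range(Q):
        psi2[q] = 1.0 / rng.gamma(phi_psi + I / 2, 1.0 / (nu_psi + 0.5 * np.sum(H[:, q] ** 2)))
    for j in range(J):
        resid = Z[:, j] - H @ Lam[j]
        sigma2[j] = 1.0 / rng.gamma(phi_sig + I / 2, 1.0 / (nu_sig + 0.5 * resid @ resid))
    return Z, H, Lam, sigma2, psi2
```

In a full implementation, the sweep would be iterated, the structural zeros of the bifactor pattern would be held at zero in step 3, and the retained draws would be passed through the equation (14) transformation and the relabeling step described above.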
Our application of parameter expansion to factor analysis models induces prior distributions that are different from standard semi-conjugate priors in factor analysis. If the prior covariance matrix on $\lambda_j^\prime$ is diagonal, the prior induced on $\lambda_{jq}^\prime$ by the parameter expansion is the product of the normal distribution prior on $\lambda_{jq}^{*\prime}$ and the square root of a ratio of gamma distribution priors on $\sigma_j^{-2}$ and $\psi_q^{-2}$. The ratio of gamma distributed random variables has a compound gamma distribution, which is a form of the generalized beta prime distribution with the shape parameter fixed to 1. If we integrate out this ratio, the prior for $\lambda_{jq}$ is a scale mixture of normals [West (1987)] with a compound gamma mixing density.

The induced prior on the matrix $\Lambda$ results in correlations among elements of the same column and elements of the same row. As discussed in Ghosh and Dunson (2009), prior dependence in the factor loadings for the $q$th factor will result from the shared parameter $\psi_q^2$ in their respective prior distributions in the parameter expanded formulation. Similarly, the shared parameter $\sigma_j^2$ in the prior distributions for the factor loadings related to the $j$th outcome in the parameter expanded approach will induce prior dependence across rows of the factor loadings matrix.

In both the simulation and applied settings considered below, sensitivity analyses demonstrated that posterior estimates did not change meaningfully for various hyperparameter values for $\sigma_j^{-2}$ and $\psi_q^{-2}$. For values of $\nu_\sigma = 1$, $\phi_\sigma = 2$, $\nu_\psi = 1/2$, $\phi_\psi = 2$, the induced prior on the nonzero elements of $\Lambda$ will have mean, variance and 2.5% and 97.5% quantiles close to those of a standard normal distribution. In their comparable model, Murray et al. (2013) use a shrinkage prior, explore its properties and develop a parameter-expanded approach with optimality properties.

3.2. Generating replicated data for posterior predictive model checks. Following Hoff (2007), we obtain posterior predictive distributions that incorporate uncertainty in estimation of the univariate marginal distributions. Let the superscript $(m)$ denote the $m$th replicate from the $m$th posterior draw of the parameter. We generate a new vector of latent responses, $z_{I+1}^{(m)}$, in addition to the $I$ sets drawn as part of the Gibbs sampling algorithm, according to

(24)  $z_{I+1}^{(m)} \sim N\bigl(0,\ I_J + \Lambda^{(m)}(\Lambda^{(m)})^T\bigr)$.

If $z_{(I+1)j}^{(m)}$ falls between two latent responses, $z_{ij}^{(m)}$ and $z_{i'j}^{(m)}$, that share the same value on the original data scale (i.e., $y_{ij} = y_{i'j}$), then $y_{(I+1)j}$ must also take that same value.
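As a final small sketch (ours), equation (24) amounts to drawing a new latent response vector from a multivariate normal with covariance $I_J + \Lambda^{(m)}(\Lambda^{(m)})^T$ for each retained posterior draw; the mapping back to the observed scale uses the rank information described above and is not reproduced here.

```python
# Sketch of equation (24): draw a new latent response vector z_{I+1}^{(m)} from
# one retained posterior draw Lambda_m (J x Q); mapping back to the observed
# scale uses the rank information described in the text.
import numpy as np

def draw_new_latent(Lambda_m, rng):
    J = Lambda_m.shape[0]
    cov = np.eye(J) + Lambda_m @ Lambda_m.T
    return rng.multivariate_normal(np.zeros(J), cov)
```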
