Solution Manual for Selected Problems
Monte Carlo Statistical Methods, 2nd Edition
Christian P. Robert and George Casella
© 2007 Springer Science+Business Media

This manual has been compiled by Roberto Casarin, Université Dauphine and Università di Brescia, partly from his notes and partly from contributions from Cyrille Joutard, CREST, and Arafat Tayeb, Université Dauphine, under the supervision of the authors. Later additions were made by Christian Robert. Second Version, June 27, 2007.

Chapter 1

Problem 1.2

Let $X \sim \mathcal{N}(\theta,\sigma^2)$ and $Y \sim \mathcal{N}(\mu,\rho^2)$. The event $\{Z > z\}$ is a.s. equivalent to $\{X > z\}$ and $\{Y > z\}$. From the independence between $X$ and $Y$, it follows that
$$P(Z > z) = P(X > z)\,P(Y > z).$$
Let $G$ be the c.d.f. of $Z$; then
$$1 - G(z) = [1 - P(X < z)][1 - P(Y < z)] = \left[1 - \Phi\!\left(\frac{z-\theta}{\sigma}\right)\right]\left[1 - \Phi\!\left(\frac{z-\mu}{\rho}\right)\right].$$
By taking the derivative and rearranging we obtain
$$g(z) = \left[1 - \Phi\!\left(\frac{z-\theta}{\sigma}\right)\right]\rho^{-1}\varphi\!\left(\frac{z-\mu}{\rho}\right) + \left[1 - \Phi\!\left(\frac{z-\mu}{\rho}\right)\right]\sigma^{-1}\varphi\!\left(\frac{z-\theta}{\sigma}\right).$$
Let $X \sim \mathcal{W}(\alpha,\beta)$ and $Z = X \wedge \omega$; then
$$P(X > \omega) = \int_\omega^\infty \alpha\beta x^{\alpha-1} e^{-\beta x^\alpha}\,dx$$
and
$$P(Z = \omega) = P(X > \omega) = \int_\omega^\infty \alpha\beta x^{\alpha-1} e^{-\beta x^\alpha}\,dx.$$
We conclude that the p.d.f. of $Z$ is
$$f(z) = \alpha\beta z^{\alpha-1} e^{-\beta z^\alpha}\,\mathbb{I}_{z \le \omega} + \left(\int_\omega^\infty \alpha\beta x^{\alpha-1} e^{-\beta x^\alpha}\,dx\right)\delta_\omega(z).$$

Problem 1.4

In order to find an explicit form of the integral
$$\int_\omega^\infty \alpha\beta x^{\alpha-1} e^{-\beta x^\alpha}\,dx,$$
we use the change of variable $y = x^\alpha$. We have $dy = \alpha x^{\alpha-1}\,dx$ and the integral becomes
$$\int_\omega^\infty \alpha\beta x^{\alpha-1} e^{-\beta x^\alpha}\,dx = \int_{\omega^\alpha}^\infty \beta e^{-\beta y}\,dy = e^{-\beta\omega^\alpha}.$$

Problem 1.6

Let $X_1,\ldots,X_n$ be an iid sample from the mixture distribution
$$f(x) = p_1 f_1(x) + \cdots + p_k f_k(x).$$
Suppose that the moments up to the order $k$ of every $f_j$, $j = 1,\ldots,k$, are finite and let
$$m_{i,j} = \mathbb{E}(X^i) = \int x^i f_j(x)\,dx,$$
where $X \sim f_j$. A usual approximation of the moments of $f$ is
$$\mu_i = \frac{1}{n}\sum_{j=1}^n X_j^i.$$
Thus, we have the approximation
$$\mu_i = \sum_{j=1}^k p_j m_{i,j},$$
for $i = 1,\ldots,k$. This is a linear system that gives $(p_1,\ldots,p_k)$ if the matrix $M = [m_{i,j}]$ is invertible.
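A minimal numerical illustration of this moment-matching scheme, written as a Python sketch rather than part of the original solution: the two-component normal mixture, its parameters, and the sample size are arbitrary choices, and the weights are recovered by solving the linear system $\mu = Mp$ with empirical moments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-component normal mixture f = p1*N(0,1) + p2*N(3,1)
p_true = np.array([0.3, 0.7])
means, sds = np.array([0.0, 3.0]), np.array([1.0, 1.0])

# iid sample from the mixture
n = 100_000
comp = rng.choice(2, size=n, p=p_true)
x = rng.normal(means[comp], sds[comp])

# m_{i,j} = E(X^i) under component j, for i = 1, 2 (k = 2 components)
# For N(m, s^2): E(X) = m and E(X^2) = m^2 + s^2
M = np.array([[means[0], means[1]],
              [means[0] ** 2 + sds[0] ** 2, means[1] ** 2 + sds[1] ** 2]])

# empirical moments mu_i = (1/n) sum_j X_j^i
mu = np.array([x.mean(), (x ** 2).mean()])

# solve mu = M p for the weights (M must be invertible)
p_hat = np.linalg.solve(M, mu)
print(p_hat)  # close to (0.3, 0.7) for large n
```

Note that nothing forces the solution of the linear system to be a probability vector, so with small samples the result may need renormalisation or a constrained fit.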
Problem 1.7

The density $f$ of the vector $y_n = (y_1,\ldots,y_n)$ is
$$f(y_n \mid \mu,\sigma) = \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^{\!n} \exp\!\left(-\frac{1}{2}\sum_{i=1}^n \left(\frac{y_i-\mu}{\sigma}\right)^{\!2}\right), \qquad \forall\, y_n \in \mathbb{R}^n,\ \forall\, (\mu,\sigma^2) \in \mathbb{R}\times\mathbb{R}_+^*.$$
This function is strictly positive and its first and second order partial derivatives with respect to $\mu$ and $\sigma$ exist. The same hypotheses are satisfied for the log-likelihood function
$$\log L(\mu,\sigma \mid y_n) = -n\log\sqrt{2\pi} - n\log\sigma - \frac{1}{2}\sum_{i=1}^n \left(\frac{y_i-\mu}{\sigma}\right)^{\!2},$$
thus we can find the ML estimators of $\mu$ and $\sigma^2$. The gradient of the log-likelihood is
$$\nabla \log L = \begin{pmatrix} \dfrac{\partial \log L(\mu,\sigma \mid y_n)}{\partial\mu} \\[2ex] \dfrac{\partial \log L(\mu,\sigma \mid y_n)}{\partial\sigma} \end{pmatrix} = \begin{pmatrix} \dfrac{1}{\sigma^2}\displaystyle\sum_{i=1}^n (y_i-\mu) \\[2ex] -\dfrac{n}{\sigma} + \dfrac{1}{\sigma^3}\displaystyle\sum_{i=1}^n (y_i-\mu)^2 \end{pmatrix}.$$
If we equate the gradient to the null vector, $\nabla \log L = 0$, and solve the resulting system in $\mu$ and $\sigma$, we find
$$\hat\mu = \frac{1}{n}\sum_{i=1}^n y_i = \bar y, \qquad \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (y_i-\bar y)^2 = s^2.$$

Problem 1.8

Let $X$ be a r.v. following a mixture of the two exponential distributions $\mathcal{E}xp(1)$ and $\mathcal{E}xp(2)$. The density is
$$f(x) = \pi e^{-x} + 2(1-\pi)e^{-2x}.$$
The $s$-th non-central moment of the mixture is
$$\begin{aligned}
\mathbb{E}(X^s) &= \int_0^{+\infty} \pi x^s e^{-x}\,dx + \int_0^{+\infty} 2x^s(1-\pi)e^{-2x}\,dx \\
&= \pi\int_0^{+\infty} s x^{s-1}e^{-x}\,dx - \pi\Big[x^s e^{-x}\Big]_{x=0}^{x=+\infty} + (1-\pi)\int_0^{+\infty} s x^{s-1}e^{-2x}\,dx - (1-\pi)\Big[x^s e^{-2x}\Big]_{x=0}^{x=+\infty} \\
&= \pi\int_0^{+\infty} s(s-1)x^{s-2}e^{-x}\,dx + (1-\pi)\frac{1}{2}\int_0^{+\infty} s(s-1)x^{s-2}e^{-2x}\,dx \\
&= \cdots \\
&= \pi s! + \frac{1-\pi}{2^s}\,s! = \pi\Gamma(s+1) + \frac{1-\pi}{2^s}\Gamma(s+1).
\end{aligned}$$
For $s = 1$,
$$\mathbb{E}(X) = \frac{\pi+1}{2}$$
gives the unbiased estimator
$$\pi_1^* = 2\bar X - 1$$
for $\pi$. Replacing the $s$-th moment by its empirical counterpart, for all $s \ge 0$, we obtain
$$\frac{1}{n\Gamma(s+1)}\sum_{i=1}^n X_i^s = \pi + \frac{1-\pi}{2^s}.$$
Let us define
$$\alpha_s = \frac{1}{n\Gamma(s+1)}\sum_{i=1}^n X_i^s.$$
We find the following family of unbiased estimators for $\pi$:
$$\pi_s^* = \frac{2^s\alpha_s - 1}{2^s - 1}.$$
Let us find the value $s^*$ that minimises the variance of $\pi_s^*$:
$$\mathbb{V}(\pi_s^*) = \frac{2^{2s}}{(2^s-1)^2}\,\mathbb{V}(\alpha_s),$$
with
$$\begin{aligned}
\mathbb{V}(\alpha_s) &= \frac{1}{n\Gamma^2(s+1)}\mathbb{V}(X^s) = \frac{1}{n\Gamma^2(s+1)}\left[\mathbb{E}(X^{2s}) - \mathbb{E}^2(X^s)\right] \\
&= \frac{1}{n}\left[\frac{\pi\Gamma(1+2s) + (1-\pi)\Gamma(1+2s)2^{-2s}}{\Gamma^2(1+s)} - \pi^2 - (1-\pi)^2 2^{-2s} - 2\pi(1-\pi)2^{-s}\right].
\end{aligned}$$
The optimal value $s^*$ depends on $\pi$. In the following table, the value of $s^*$ is given for different values of $\pi$:

    $\pi$     0.1     0.3     0.5     0.7     0.9
    $s^*$     1.45    0.9     0.66    0.5     0.36

Problem 1.11

(a) The random variable $x$ has the Weibull distribution with density
$$\frac{c}{\alpha^c}\,x^{c-1}e^{-x^c/\alpha^c}.$$
If we use the change of variables $x = y^{1/c}$ and insert the Jacobian $\frac{1}{c}y^{1/c-1}$, we get the density
$$\frac{c}{\alpha^c}\,y^{\frac{c-1}{c}}\,e^{-y/\alpha^c}\,\frac{1}{c}\,y^{\frac{1}{c}-1} = \frac{1}{\alpha^c}\,e^{-y/\alpha^c},$$
which is the density of an $\mathcal{E}xp(1/\alpha^c)$ distribution.

(b) The log-likelihood is based on
$$\log f(x_i \mid \alpha, c) = \log(c) - c\log(\alpha) + (c-1)\log(x_i) - \frac{x_i^c}{\alpha^c}.$$
Summing over $n$ observations and differentiating with respect to $\alpha$ yields
$$\sum_{i=1}^n \frac{\partial}{\partial\alpha}\log f(x_i \mid \alpha, c) = \frac{-nc}{\alpha} + \frac{c}{\alpha^{c+1}}\sum_{i=1}^n x_i^c = 0
\;\Longleftrightarrow\; nc = \frac{c}{\alpha^c}\sum_{i=1}^n x_i^c
\;\Longleftrightarrow\; \alpha^c = \frac{1}{n}\sum_{i=1}^n x_i^c
\;\Longleftrightarrow\; \alpha = \left(\frac{1}{n}\sum_{i=1}^n x_i^c\right)^{\!1/c},$$
while summing over $n$ observations and differentiating with respect to $c$ yields
$$\sum_{i=1}^n \frac{\partial}{\partial c}\log f(x_i \mid \alpha, c) = \frac{n}{c} - n\log(\alpha) + \sum_{i=1}^n \log(x_i) - \frac{1}{\alpha^c}\sum_{i=1}^n x_i^c\log\!\left(\frac{x_i}{\alpha}\right) = 0$$
(using the formula $\frac{\partial(x^\alpha)}{\partial\alpha} = x^\alpha\log(x)$), which gives
$$\frac{n}{c} - n\log(\alpha) + \sum_{i=1}^n \log(x_i) = \frac{1}{\alpha^c}\sum_{i=1}^n x_i^c\log\!\left(\frac{x_i}{\alpha}\right).$$
Therefore, there is no explicit solution for $\alpha$ and $c$.

(c) Now, let the sample contain censored observations at level $y_0$. If $x_i \le y_0$, we observe $x_i$, but if $x_i > y_0$, we get no observation, simply the information that it is above $y_0$. This occurs with probability $P(x_i > y_0) = 1 - F(y_0)$. Therefore the new log-likelihood function is (with $\delta_i$ denoting the indicator that observation $i$ is uncensored)
$$\log\!\left(\prod_{i=1}^n f(x_i \mid \alpha, c)^{\delta_i}\,(1 - F(y_0))^{1-\delta_i}\right)
= \sum_{i=1}^n \left(\log f(x_i \mid \alpha, c)^{\delta_i} + \log(1 - F(y_0))^{1-\delta_i}\right)$$
$$= \sum_{i=1}^n \left[\delta_i\!\left(\log(c) - c\log(\alpha) + (c-1)\log(x_i) - \frac{x_i^c}{\alpha^c}\right) + (1-\delta_i)\!\left(-\frac{y_0^c}{\alpha^c}\right)\right].$$
As in part (b), there is no explicit maximum likelihood estimator for $c$. For $\alpha$,
$$\sum_{i=1}^n \left[\delta_i\!\left(-\frac{c}{\alpha} + \frac{c}{\alpha^{c+1}}x_i^c\right) + (1-\delta_i)\frac{c}{\alpha^{c+1}}y_0^c\right] = 0
\;\Longleftrightarrow\; -\sum_{i=1}^n \delta_i + \frac{1}{\alpha^c}\sum_{i=1}^n \delta_i x_i^c + \frac{y_0^c}{\alpha^c}\sum_{i=1}^n (1-\delta_i) = 0$$
$$\Longleftrightarrow\; \frac{1}{\alpha^c}\left(\sum_{i=1}^n \delta_i x_i^c + y_0^c\sum_{i=1}^n (1-\delta_i)\right) = \sum_{i=1}^n \delta_i
\;\Longleftrightarrow\; \alpha = \left[\frac{\sum_{i=1}^n \delta_i x_i^c + y_0^c\sum_{i=1}^n (1-\delta_i)}{\sum_{i=1}^n \delta_i}\right]^{1/c},$$
while, for $c$,
$$\sum_{i=1}^n \left[\delta_i\!\left(\frac{1}{c} - \log(\alpha) + \log(x_i) - \frac{x_i^c}{\alpha^c}\log\!\left(\frac{x_i}{\alpha}\right)\right) + (1-\delta_i)\!\left(-\frac{y_0^c}{\alpha^c}\log\!\left(\frac{y_0}{\alpha}\right)\right)\right] = 0,$$
that is,
$$\frac{1}{c}\sum_{i=1}^n \delta_i - \log(\alpha)\sum_{i=1}^n \delta_i + \sum_{i=1}^n \delta_i\log(x_i) - \frac{1}{\alpha^c}\sum_{i=1}^n \delta_i x_i^c\log\!\left(\frac{x_i}{\alpha}\right) - \frac{y_0^c}{\alpha^c}\log\!\left(\frac{y_0}{\alpha}\right)\sum_{i=1}^n (1-\delta_i) = 0.$$
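Since neither parameter has a closed-form estimator, the MLE can be obtained numerically. The following Python sketch treats the uncensored case of part (b): it profiles $\alpha$ out through the closed-form expression above and solves the score equation in $c$ by root finding. The simulated data, the true parameter values, and the bracketing interval are illustrative assumptions, not part of the manual.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)

# simulated data from the Weibull density (c/alpha^c) x^{c-1} exp(-x^c/alpha^c);
# the true values and the sample size are arbitrary
alpha_true, c_true, n = 2.0, 1.5, 500
x = alpha_true * rng.weibull(c_true, size=n)

def alpha_hat(c):
    # closed-form ML estimate of alpha for a fixed c (see part (b))
    return np.mean(x ** c) ** (1.0 / c)

def score_c(c):
    # derivative of the log-likelihood in c, with alpha profiled out
    a = alpha_hat(c)
    return (n / c - n * np.log(a) + np.sum(np.log(x))
            - np.sum(x ** c * np.log(x / a)) / a ** c)

# the MLE of c is a root of the profiled score; the bracket is a guess
c_mle = brentq(score_c, 0.1, 10.0)
print(c_mle, alpha_hat(c_mle))
```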
Problem 1.13

The function to be maximized in the most general case is
$$\max_{c,\gamma,\alpha}\ \prod_{i=1}^{17}\frac{c}{\alpha^c}(x_i-\gamma)^{c-1}e^{-(x_i-\gamma)^c/\alpha^c}\left(1 - e^{-(216-\gamma)^c/\alpha^c}\right)\left(1 - e^{-(244-\gamma)^c/\alpha^c}\right),$$
where the first 17 $x_i$'s are the uncensored observations. Estimates are about $0.0078$ for $c$ and $3.5363$ for $\alpha$ when $\gamma$ is equal to $100$.

Problem 1.14

(a) We have
$$\frac{df}{d\theta}(x_i \mid \theta,\sigma) = \frac{-2(\theta - x_i)}{\sigma^3\pi\left(1 + \frac{(x_i-\theta)^2}{\sigma^2}\right)^{\!2}}.$$
Therefore
$$\frac{dL}{d\theta}(\theta,\sigma \mid x) = \sum_{i=1}^n \frac{-2(\theta - x_i)}{\sigma^3\pi\left(1 + \frac{(x_i-\theta)^2}{\sigma^2}\right)^{\!2}}\ \prod_{j=1,\,j\ne i}^n f(x_j \mid \theta,\sigma).$$
Reducing to the same denominator implies that a solution of the likelihood equation is a root of a polynomial of degree $2n-1$.

Problem 1.16

(a) When $Y \sim \mathcal{B}([1+e^\theta]^{-1})$, the likelihood is
$$L(\theta \mid y) = \left(\frac{1}{1+e^\theta}\right)^{\!y}\left(\frac{e^\theta}{1+e^\theta}\right)^{\!1-y}.$$
If $y = 0$ then $L(\theta \mid 0) = \frac{e^\theta}{1+e^\theta}$. Since $L(\theta \mid 0) \le 1$ for all $\theta$ and $\lim_{\theta\to\infty} L(\theta \mid 0) = 1$, the maximum likelihood estimator is $\infty$.

(b) When $Y_1, Y_2 \sim \mathcal{B}([1+e^\theta]^{-1})$, the likelihood is
$$L(\theta \mid y_1,y_2) = \left(\frac{1}{1+e^\theta}\right)^{\!y_1+y_2}\left(\frac{e^\theta}{1+e^\theta}\right)^{\!2-y_1-y_2}.$$
As in the previous question, the maximum likelihood estimator is $\infty$ when $y_1 = y_2 = 0$ and $-\infty$ when $y_1 = y_2 = 1$. If $y_1 = 1$ and $y_2 = 0$ (or conversely) then
$$L(\theta \mid 1,0) = \frac{e^\theta}{(1+e^\theta)^2}.$$
The log-likelihood is
$$\log L(\theta \mid 1,0) = \theta - 2\log(1+e^\theta).$$
It is a strictly concave function whose derivative is equal to $0$ at $\theta = 0$. Therefore, the maximum likelihood estimator is $0$.

Problem 1.20

(a) If $X \sim \mathcal{N}_p(\theta, I_p)$, the likelihood is
$$L(\theta \mid x) = \frac{1}{(2\pi)^{p/2}}\exp\!\left(-\frac{\|x-\theta\|^2}{2}\right).$$
The maximum likelihood estimator of $\theta$ solves the equations
$$\frac{dL}{d\theta_i}(\theta \mid x) = \frac{x_i - \theta_i}{(2\pi)^{p/2}}\exp\!\left(-\frac{\|x-\theta\|^2}{2}\right) = 0,$$
for $i = 1,\ldots,p$. Therefore, the maximum likelihood estimator of $\lambda = \|\theta\|^2$ is $\hat\lambda(x) = \|x\|^2$. This estimator is biased, as
$$\mathbb{E}\{\hat\lambda(X)\} = \mathbb{E}\{\|X\|^2\} = \mathbb{E}\{\chi^2_p(\lambda)\} = \lambda + p$$
implies a constant bias equal to $p$.

(b) The variable $Y = \|X\|^2$ is distributed as a noncentral chi squared random variable $\chi^2_p(\lambda)$. Here, the likelihood is
$$L(\lambda \mid y) = \frac{1}{2}\left(\frac{y}{\lambda}\right)^{\!\frac{p-2}{4}} I_{\frac{p-2}{2}}(\sqrt{\lambda y})\,e^{-\frac{\lambda+y}{2}}.$$
The derivative in $\lambda$ is
$$\frac{dL}{d\lambda}(\lambda \mid y) = -\frac{1}{4}\left(\frac{y}{\lambda}\right)^{\!\frac{p-2}{4}} e^{-\frac{\lambda+y}{2}}\left[\left(1 + \frac{p-2}{2\lambda}\right)I_{\frac{p-2}{2}}(\sqrt{\lambda y}) - \sqrt{\frac{y}{\lambda}}\,I'_{\frac{p-2}{2}}(\sqrt{\lambda y})\right].$$
The MLE of $\lambda$ is the solution of the equation $dL(\lambda \mid y)/d\lambda = 0$. Now, we try to simplify this equation using the facts that
$$I'_\nu(z) = I_{\nu-1}(z) - \frac{\nu}{z}I_\nu(z) \qquad\text{and}\qquad I_{\nu+1}(z) = I_{\nu-1}(z) - \frac{2\nu}{z}I_\nu(z).$$
Let $\nu = \frac{p-2}{2}$ and $z = \sqrt{\lambda y}$. The MLE of $\lambda$ is the solution of $\left(1 + \frac{2\nu}{\lambda}\right)I_\nu(z) - \sqrt{\frac{y}{\lambda}}\,I_{\nu-1}(z) = 0$, that is, of $I_\nu(z) - \sqrt{\frac{y}{\lambda}}\,I_{\nu+1}(z) = 0$. Therefore the MLE of $\lambda$ is the solution of the implicit equation
$$\sqrt{\lambda}\,I_{\frac{p-2}{2}}(\sqrt{\lambda y}) = \sqrt{y}\,I_{\frac{p}{2}}(\sqrt{\lambda y}).$$
When $y < p$, the MLE of $\lambda$ is $0$.

(c) We use the prior $\pi(\theta) = \|\theta\|^{-(p-1)}$ on $\theta$. According to Bayes' formula, the posterior distribution $\pi(\theta \mid x)$ is given by
$$\pi(\theta \mid x) = \frac{f(x \mid \theta)\,\pi(\theta)}{\int f(x \mid \theta)\,\pi(\theta)\,d\theta} \propto \frac{e^{-\frac{\|x-\theta\|^2}{2}}}{\|\theta\|^{p-1}}.$$
The Bayes estimator of $\lambda$ is therefore
$$\delta^\pi(x) = \mathbb{E}^\pi\{\|\theta\|^2 \mid x\} = \frac{\int_{\mathbb{R}^p}\|\theta\|^{3-p}\,e^{-\frac{\|x-\theta\|^2}{2}}\,d\theta}{\int_{\mathbb{R}^p}\|\theta\|^{1-p}\,e^{-\frac{\|x-\theta\|^2}{2}}\,d\theta}.$$
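The implicit equation of part (b) is straightforward to solve numerically. The Python sketch below uses scipy's modified Bessel function $I_\nu$; the inputs $y$ and $p$ and the bracketing interval are illustrative assumptions, and the boundary case $y \le p$ returns $0$ as noted above.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import iv  # modified Bessel function I_nu

def lambda_mle(y, p):
    # MLE of the noncentrality lambda from one draw y of a chi^2_p(lambda)
    if y <= p:
        # boundary case: take lambda = 0 (see the remark above for y < p)
        return 0.0
    def implicit(lam):
        z = np.sqrt(lam * y)
        return np.sqrt(lam) * iv((p - 2) / 2, z) - np.sqrt(y) * iv(p / 2, z)
    # assume a root is bracketed between (almost) 0 and y
    return brentq(implicit, 1e-8, y)

print(lambda_mle(y=15.0, p=5))  # hypothetical inputs, for illustration only
```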
Problem 1.22

(a) Since $L(\delta,h(\theta)) \ge 0$, by using Fubini's theorem we get
$$\begin{aligned}
r(\pi,\delta) &= \int_\Theta\int_\mathcal{X} L(\delta,h(\theta))\,f(x \mid \theta)\,\pi(\theta)\,dx\,d\theta \\
&= \int_\mathcal{X}\int_\Theta L(\delta,h(\theta))\,f(x \mid \theta)\,\pi(\theta)\,d\theta\,dx \\
&= \int_\mathcal{X}\int_\Theta L(\delta,h(\theta))\,m(x)\,\pi(\theta \mid x)\,d\theta\,dx \\
&= \int_\mathcal{X}\varphi(\pi,\delta \mid x)\,m(x)\,dx,
\end{aligned}$$
where $m$ is the marginal distribution of $X$ and $\varphi(\pi,\delta \mid x)$ is the posterior average cost. The estimator that minimizes the integrated risk $r$ is therefore, for each $x$, the one that minimizes the posterior average cost, and it is given by
$$\delta^\pi(x) = \arg\min_\delta \varphi(\pi,\delta \mid x).$$

(b) The average posterior loss is given by
$$\varphi(\pi,\delta \mid x) = \mathbb{E}^\pi[L(\delta,\theta) \mid x] = \mathbb{E}^\pi\!\left[\|h(\theta) - \delta\|^2 \mid x\right] = \mathbb{E}^\pi\!\left[\|h(\theta)\|^2 \mid x\right] + \|\delta\|^2 - 2\langle\delta,\ \mathbb{E}^\pi[h(\theta) \mid x]\rangle.$$
A simple derivation shows that the minimum is attained for
$$\delta^\pi(x) = \mathbb{E}^\pi[h(\theta) \mid x].$$

(c) Take $m$ to be the posterior median and consider the auxiliary function of $\theta$, $s(\theta)$, defined as
$$s(\theta) = \begin{cases} -1 & \text{if } h(\theta) < m, \\ +1 & \text{if } h(\theta) > m. \end{cases}$$
Then $s$ satisfies the property
$$\mathbb{E}^\pi[s(\theta) \mid x] = -\int_{-\infty}^m \pi(\theta \mid x)\,d\theta + \int_m^{+\infty}\pi(\theta \mid x)\,d\theta = -P(h(\theta) < m \mid x) + P(h(\theta) > m \mid x) = 0.$$
For $\delta < m$, we have $L(\delta,\theta) - L(m,\theta) = |h(\theta) - \delta| - |h(\theta) - m|$, from which it follows that
$$L(\delta,\theta) - L(m,\theta) = \begin{cases}
\delta - m = (m-\delta)\,s(\theta) & \text{if } h(\theta) < \delta, \\
m - \delta = (m-\delta)\,s(\theta) & \text{if } h(\theta) > m, \\
2h(\theta) - (\delta + m) > (m-\delta)\,s(\theta) & \text{if } \delta < h(\theta) < m.
\end{cases}$$
It turns out that $L(\delta,\theta) - L(m,\theta) \ge (m-\delta)\,s(\theta)$, which implies that
$$\mathbb{E}^\pi[L(\delta,\theta) - L(m,\theta) \mid x] \ge (m-\delta)\,\mathbb{E}^\pi[s(\theta) \mid x] = 0.$$
This still holds, using a similar argument, when $\delta > m$, so the minimum of $\mathbb{E}^\pi[L(\delta,\theta) \mid x]$ is reached at $\delta = m$.

Problem 1.23

(a) When $X \mid \sigma \sim \mathcal{N}(0,\sigma^2)$ and $1/\sigma^2 \sim \mathcal{G}a(1, 2)$, the posterior distribution is
$$\pi(\sigma^{-2} \mid x) \propto f(x \mid \sigma)\,\pi(\sigma^{-2}) \propto \frac{1}{\sigma}\,e^{-(x^2/2 + 2)/\sigma^2} = (\sigma^{-2})^{\frac{3}{2}-1}\,e^{-(x^2/2 + 2)/\sigma^2},$$
which means that $1/\sigma^2 \sim \mathcal{G}a\!\left(\frac{3}{2},\ 2 + \frac{x^2}{2}\right)$. The marginal distribution is
$$m(x) = \int f(x \mid \sigma)\,\pi(\sigma^{-2})\,d(\sigma^{-2}) \propto \left(\frac{x^2}{2} + 2\right)^{\!-\frac{3}{2}},$$
that is, $X \sim \mathcal{T}(2, 0, 2)$.

(b) When $X \mid \lambda \sim \mathcal{P}(\lambda)$ and $\lambda \sim \mathcal{G}a(2, 1)$, the posterior distribution is
$$\pi(\lambda \mid x) \propto f(x \mid \lambda)\,\pi(\lambda) \propto \lambda^{x+1}e^{-3\lambda},$$
which means that $\lambda \sim \mathcal{G}a(x+2,\ 3)$. The marginal distribution is
$$m(x) = \int f(x \mid \lambda)\,\pi(\lambda)\,d\lambda \propto \frac{\Gamma(x+2)}{\sqrt{\pi}\,3^{x+2}\,x!} = \frac{x+1}{\sqrt{\pi}\,3^{x+2}}.$$

(c) When $X \mid p \sim \mathcal{N}eg(10, p)$ and $p \sim \mathcal{B}e\!\left(\frac12, \frac12\right)$, the posterior distribution is
$$\pi(p \mid x) \propto f(x \mid p)\,\pi(p) \propto p^{10-\frac12}(1-p)^{x-\frac12},$$
which means that $p \sim \mathcal{B}e\!\left(10 + \frac12,\ x + \frac12\right)$. The marginal distribution is
$$m(x) = \int f(x \mid p)\,\pi(p)\,dp \propto \binom{11+x}{x}\,B\!\left(10 + \frac12,\ x + \frac12\right),$$
the so-called Beta-Binomial distribution.
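A quick Monte Carlo check of part (a), in Python: simulate the hierarchical model and compare the histogram of $X$ with the density of a Student $t$ with 2 degrees of freedom, location $0$ and squared scale $2$ (the $\mathcal{T}(2,0,2)$ marginal above). This is only a verification device; the sample size and the grid are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# hierarchical model of part (a): 1/sigma^2 ~ Ga(1, 2) (shape 1, rate 2),
# then X | sigma ~ N(0, sigma^2)
n = 200_000
precision = rng.gamma(shape=1.0, scale=0.5, size=n)  # rate 2 -> numpy scale 1/2
x = rng.normal(0.0, 1.0 / np.sqrt(precision))

# claimed marginal: Student t with 2 df, location 0, squared scale 2
marginal = stats.t(df=2, loc=0.0, scale=np.sqrt(2.0))

hist, edges = np.histogram(x, bins=np.linspace(-6, 6, 13), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.c_[centers, hist, marginal.pdf(centers)])  # last two columns should agree
```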
