arXiv:1101.4887v1 [math.PR] 25 Jan 2011

A multivariate Gnedenko law of large numbers

Daniel Fresen

In memory of Nigel J. Kalton

Abstract. We show that the convex hull of a large i.i.d. sample from a non-vanishing log-concave distribution approximates a pre-determined body in the logarithmic Hausdorff distance and in the Banach-Mazur distance. For $p$-log-concave distributions with $1 < p < \infty$ (such as the normal distribution, where $p = 2$) we also have approximation in the Hausdorff distance. These are multivariate versions of the Gnedenko law of large numbers, which guarantees concentration of the maximum and minimum in the one dimensional case. We give three different deterministic bodies that serve as approximants to the random body. The first is the floating body that serves as a multivariate quantile, the second body is given as a contour of the density function, and the third body is given in terms of the Radon transform. We end the paper by constructing a probability measure with an interesting universality property.

Many thanks to Imre Bárány, John Fresen, Jill Fresen, Nigel Kalton, Alexander Koldobsky, Mathieu Meyer and Mark Rudelson for their comments, advice and support. In particular, I am grateful to Nigel Kalton for his friendship and all that he taught me.

1. Introduction

The Gnedenko law of large numbers [7] states that if $F$ is the cumulative distribution of a probability measure $\mu$ on $\mathbb{R}$ such that for all $\varepsilon > 0$

(1.1)    $\lim_{t\to\infty} \frac{F(t+\varepsilon) - F(t)}{1 - F(t+\varepsilon)} = \infty$

then there are functions $\delta$, $T$ and $P$ defined on $\mathbb{N}$ with

(1.2)    $\lim_{n\to\infty} \delta_n = 0$

(1.3)    $\lim_{n\to\infty} P_n = 1$

such that for any $n \in \mathbb{N}$ and any i.i.d. sample $(\gamma_i)_1^n$ from $\mu$, with probability $P_n$ we have

$|\max\{\gamma_i\}_1^n - T_n| < \delta_n$

The condition (1.1) implies super-exponential decay of the tail probabilities $1 - F(t)$, i.e. for all $c > 0$,

$\lim_{t\to\infty} e^{ct}(1 - F(t)) = 0$

The converse is almost true and can be achieved if we impose some sort of regularity on $F$. One such regularity condition is log-concavity (see the preliminaries). Of course all of this can be re-worded in multiplicative form. Provided that $1 - F(t)$ is regular enough and decays super-polynomially, i.e. for any $m \in \mathbb{N}$, $\lim_{t\to\infty} t^m(1 - F(t)) = 0$, then (1.2) and (1.3) hold, and with probability $P_n$,

$\left| \frac{\max\{\gamma_i\}_1^n}{T_n} - 1 \right| \le \delta_n$

Note that rapid decay of the left hand tail provides concentration of $\min\{\gamma_i\}_1^n$, and that $[\min\{\gamma_i\}_1^n, \max\{\gamma_i\}_1^n] = \mathrm{conv}\{\gamma_i\}_1^n$.
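For a concrete instance of the multiplicative form, take $\mu$ to be the standard Gaussian distribution, where one may take $T_n = \Phi^{-1}(1-1/n) \approx \sqrt{2\log n}$. The following minimal Python sketch (an added illustration, assuming NumPy and SciPy are available) estimates the deviation $|\max\{\gamma_i\}_1^n / T_n - 1|$ by simulation:

```python
# Sketch: concentration of the maximum of n i.i.d. N(0,1) variables around
# T_n = Phi^{-1}(1 - 1/n) ~ sqrt(2 log n), per the multiplicative Gnedenko law.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
for n in (10**2, 10**4, 10**6):
    T_n = norm.ppf(1 - 1 / n)  # the (1 - 1/n)-quantile of N(0,1)
    maxima = np.array([rng.standard_normal(n).max() for _ in range(100)])
    dev = np.abs(maxima / T_n - 1)  # multiplicative deviation from T_n
    print(f"n={n:>8}  T_n={T_n:.3f}  mean dev={dev.mean():.4f}  max dev={dev.max():.4f}")
```

The deviations shrink as $n$ grows, in agreement with (1.2) and (1.3).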
In this paper we extend the Gnedenko law of large numbers to higher dimensions. We consider a collection of i.i.d. random vectors $\{x_i\}_1^n$ in $\mathbb{R}^d$ that follow a log-concave distribution with non-vanishing density and study their convex hull $P_n = \mathrm{conv}\{x_i\}_1^n$. We show that with high probability, $P_n$ approximates a deterministic body.

2. Main Results

Let $d \ge 1$, $n \ge d+1$ and let $\mu$ be a log-concave probability measure on $\mathbb{R}^d$ with non-vanishing density function $f$. This means that $f$ is of the form $f(x) = \exp(-g(x))$ where $g$ is convex. Let $(x_i)_1^n$ denote a sequence of i.i.d. random vectors in $\mathbb{R}^d$ with distribution $\mu$. The convex body $P_n = \mathrm{conv}\{x_i\}_1^n$ is a random polytope. For any $x \in \mathbb{R}^d$, define

$\tilde{f}(x) = \inf_H \mu(H)$

where $H$ runs through the collection of all half-spaces that contain $x$. For any $\delta > 0$, we define the floating body

(2.1)    $F_\delta = \{x \in \mathbb{R}^d : \tilde{f}(x) \ge \delta\}$

Note that $F_\delta$ is convex and non-empty provided that $\delta < e^{-1}$ (see Lemma 5.12 in [11] or Lemma 3.3 in [5]). We define the logarithmic Hausdorff distance between convex bodies $K, L \subset \mathbb{R}^d$ as

$d_L(K,L) = \inf\{\lambda \ge 1 : \exists x \in \mathrm{int}(K \cap L),\ \lambda^{-1}(L-x)+x \subset K \subset \lambda(L-x)+x\}$

The main result of this paper is that for large $n$ the random body $P_n$ approximates the deterministic body $F_{1/n}$ in the logarithmic Hausdorff distance. In particular we prove:

Theorem 1. For all $q > 0$, all $d \in \mathbb{N}$ and any probability measure $\mu$ on $\mathbb{R}^d$ with a non-vanishing log-concave density function, there exist $c, \tilde{c} > 0$ such that for all $n \in \mathbb{N}$ with $n \ge d+2$, if $(x_i)_1^n$ is an i.i.d. sample from $\mu$, $P_n = \mathrm{conv}\{x_i\}_1^n$ and $F_{1/n}$ is the floating body as in (2.1), then with probability at least $1 - \tilde{c}(\log n)^{-q}$ we have

(2.2)    $d_L(P_n, F_{1/n}) \le 1 + c\,\frac{\log\log n}{\log n}$

The strategy of the proof is to use quantitative bounds in the one dimensional case to analyze the Minkowski functional of $P_n$ in different directions. The idea is simple; however, there are some subtle complications. The lack of symmetry is a complicating factor, and the fact that the half-spaces of mass $1/n$ do not necessarily touch $F_{1/n}$ adds to the intricacy of the proof.

We define $f$ to be $p$-log-concave if it is of the form $f(x) = c\exp(-g(x)^p)$ where $g$ is non-negative and convex. For a general log-concave distribution we do not have such an approximation in the Hausdorff distance $d_H$; however, we have:

Theorem 2. For all $q > 0$, $p > 1$ and $d \in \mathbb{N}$, and any probability measure $\mu$ on $\mathbb{R}^d$ with a non-vanishing $p$-log-concave density function, there exist $c, \tilde{c} > 0$ such that for all $n \in \mathbb{N}$ with $n \ge d+2$, if $(x_i)_1^n$ is an i.i.d. sample from $\mu$, $P_n = \mathrm{conv}\{x_i\}_1^n$ and $F_{1/n}$ is the floating body as in (2.1), then with probability at least $1 - \tilde{c}(\log n)^{-q}$ we have

(2.3)    $d_H(P_n, F_{1/n}) \le c\,\frac{\log\log n}{(\log n)^{1-1/p}}$

Theorem 2 can easily be extended to a much larger class of log-concave distributions. Using Theorem 1, any bound on the growth rate of $\mathrm{diam}(F_{1/n})$ automatically transfers to a bound on $d_H(P_n, F_{1/n})$. Inequality (2.2) is optimal, while inequality (2.3) is optimal for $p = 2$ (see (6.2) and (6.3)).

We also study two other deterministic bodies that serve as approximants to the random body. Define

$f^\sharp(x) = \inf_H \int_H f(y)\, d_H(y)$

where $H$ runs through the collection of all hyperplanes that contain $x$, and $d_H$ stands for Lebesgue measure on $H$. For any $\delta > 0$, define the bodies

$D_\delta = \{x \in \mathbb{R}^d : f(x) \ge \delta\}$
$R_\delta = \{x \in \mathbb{R}^d : f^\sharp(x) \ge \delta\}$

By log-concavity of $f$, both $D_\delta$ and $R_\delta$ are convex.

Theorem 3. Let $d \in \mathbb{N}$ and let $\mu$ be a probability measure on $\mathbb{R}^d$ with a non-vanishing log-concave density function. Then we have

(2.4)    $\lim_{\delta\to 0} d_L(F_\delta, D_\delta) = 1$

(2.5)    $\lim_{\delta\to 0} d_L(F_\delta, R_\delta) = 1$

Similar results hold in the Hausdorff distance for log-concave distributions that decay super-exponentially; however, we do not discuss this here.

Our prototype example is the class of distributions introduced by Schechtman and Zinn [16] of the form $f(x) = c_p^d \exp(-\|x\|_p^p)$, where $1 \le p < \infty$ and $c_p = p/\Gamma(p^{-1})$. In this case $D_\delta = (\log(c_p^d/\delta))^{1/p} B_p^d$ and $P_n \approx (\log(c_p^d n))^{1/p} B_p^d$. In fact for these distributions we have a quantitative version of Theorem 3 (see Remark 1 near the end of Section 7). Of particular interest is the Gaussian distribution, where $p = 2$.

It is worth noting that for the standard Gaussian distribution a similar approximation was obtained by Bárány and Vu [4] (see Remark 9.6 in their paper), who showed that there exist two radii $R$ and $r$, both functions of $n$ and $d$, such that for all $d \ge 2$ both $r, R = (2\log n)^{1/2}(1+o(1))$ and with 'high probability' $rB_2^d \subset P_n \subset RB_2^d$. Their sandwiching result served as a key step in their proof of the central limit theorem for Gaussian polytopes (asymptotic normality of various functionals such as the volume and the number of faces).
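For the standard Gaussian measure the floating body is explicit: by rotation invariance $\tilde{f}(x) = 1 - \Phi(\|x\|_2)$, so $F_{1/n} = r_n B_2^d$ with $r_n = \Phi^{-1}(1-1/n)$. The following Python sketch (an added illustration for $d = 2$, assuming NumPy and SciPy) estimates $d_L(P_n, F_{1/n}, 0)$ through the dual formula $\log d_L(P_n, F_{1/n}, 0) = \sup_\theta |\log\|\theta\|_{P_n^\circ} - \log r_n|$ over sampled directions:

```python
# Sketch: for the standard Gaussian in R^d, F_{1/n} = r_n B_2^d with
# r_n = Phi^{-1}(1 - 1/n); estimate d_L(P_n, F_{1/n}, 0) via support functions.
import numpy as np
from scipy.stats import norm
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
d, n = 2, 10**5
X = rng.standard_normal((n, d))              # i.i.d. sample; P_n = conv{x_i}
V = X[ConvexHull(X).vertices]                # vertices of the random polytope P_n
r_n = norm.ppf(1 - 1 / n)                    # radius of the floating body F_{1/n}

theta = rng.standard_normal((5000, d))
theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # directions on S^{d-1}
h = (theta @ V.T).max(axis=1)                # support function, i.e. the dual norm of theta
dL_est = np.exp(np.abs(np.log(h / r_n)).max())
print(dL_est, 1 + np.log(np.log(n)) / np.log(n))  # estimate vs 1 + loglog(n)/log(n)
```

The two printed quantities should be of comparable size, consistent with (2.2).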
The final result of the paper is the following. Let $\mathcal{K}_d$ denote the collection of all convex bodies in $\mathbb{R}^d$.

Theorem 4. For all $d \in \mathbb{N}$, there exists a probability measure $\mu$ on $\mathbb{R}^d$ with the following universality property. Let $(x_i)_1^\infty$ be an i.i.d. sample from $\mu$, and for each $n \in \mathbb{N}$ with $n \ge d+1$ let $P_n = \mathrm{conv}\{x_i\}_1^n$. Then with probability 1, the sequence $(P_n)_{d+1}^\infty$ is dense in $\mathcal{K}_d$ with respect to the Banach-Mazur distance.

Throughout the paper we will make use of variables $c$, $\tilde{c}$, $c_1$, $c_2$, $n_0$, $m$ etc. that may depend on other parameters (including dimension) but not on $n$. Their values may change from one appearance to the next.

3. Preliminaries

Most of the material in this section is discussed in [1], [2], [12] and [13]. We denote the standard Euclidean norm on $\mathbb{R}^d$ by $\|\cdot\|_2$. For any $\varepsilon > 0$, an $\varepsilon$-net in $S^{d-1}$ is a subset $N$ such that for any distinct $\omega_1, \omega_2 \in N$, $\|\omega_1 - \omega_2\|_2 > \varepsilon$, and for all $\theta \in S^{d-1}$ there exists $\omega \in N$ such that $\|\theta - \omega\|_2 \le \varepsilon$. Such a subset can easily be constructed using induction. By a standard volumetric argument we have

(3.1)    $|N| \le \left(\frac{3}{\varepsilon}\right)^d$

By induction, any $\theta \in S^{d-1}$ can be expressed as a series

(3.2)    $\theta = \omega_0 + \sum_{i=1}^\infty \varepsilon_i \omega_i$

where each $\omega_i \in N$ and $0 \le \varepsilon_i \le \varepsilon^i$. To see this, express $\theta = \omega_0 + r_0$, where $\omega_0 \in N$ and $\|r_0\|_2 \le \varepsilon$. Then express $\|r_0\|^{-1} r_0 \in S^{d-1}$ in a similar fashion and iterate this procedure. Define the functional

$\|x\|_N = \max\{\langle x, \omega\rangle : \omega \in N\}$

As an easy consequence of the Cauchy-Schwarz inequality, provided $\varepsilon \in (0,1)$ we have

(3.3)    $(1-\varepsilon)\|x\|_2 \le \|x\|_N \le \|x\|_2$

The geometric meaning of (3.3) is that if we consider the polytope defined by the hyperplanes tangent to $B_2^d$ at the points of $N$, this body is slightly bigger than $B_2^d$, but not by more than a factor of $(1-\varepsilon)^{-1}$.
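The bound (3.3) is easy to check numerically. The following Python sketch (an added illustration, assuming NumPy) extracts a greedy $\varepsilon$-separated subset from a dense random sample of $S^{d-1}$, which is an $\varepsilon$-net of the sphere up to a small sampling error, and verifies $(1-\varepsilon)\|x\|_2 \le \|x\|_N \le \|x\|_2$ on random test points:

```python
# Sketch: greedy epsilon-net on S^{d-1} built from a dense random sample
# (an eps-net up to sampling error), then check (3.3):
# (1 - eps) ||x||_2 <= ||x||_N <= ||x||_2, where ||x||_N = max_{w in N} <x, w>.
import numpy as np

rng = np.random.default_rng(2)
d, eps = 3, 0.3

pts = rng.standard_normal((20000, d))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # dense sample of the sphere
net = []
for p in pts:  # keep points that are eps-separated from all kept points
    if all(np.linalg.norm(p - w) > eps for w in net):
        net.append(p)
net = np.array(net)
print(len(net), (3 / eps) ** d)  # |N| versus the volumetric bound (3.1)

x = rng.standard_normal((1000, d))
norm2 = np.linalg.norm(x, axis=1)
normN = (x @ net.T).max(axis=1)  # ||x||_N
print(((1 - eps) * norm2 <= normN).all() and (normN <= norm2 + 1e-12).all())
```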
For θ,t δ > 0, define the convex floating body [17] as K = ∩{H : vol (K ∩H ) ≥ δ θ,t d θ,t (1−δ)vol (K)}. Despite the appearance of an inner product in our indexing of d half-spaces, the operation K 7→ K is independent of Euclidean structure. Note δ that K is a special case of the body F defined by (2.1) for the case when µ is δ δ the uniform probability measure on K. It is known that in this case, the random 6 DANIELFRESEN polytopeP isinsomesensesimilartoK . As anexample,B´ar´anyandLarman n 1/n [3] proved that c′vol (K\K )≤Evol (K\P )≤c′′vol (K\K ) d 1/n d n d 1/n Of course in this context it is trivial that both in the Hausdorff distance and in the logarithmicHausdorffdistance we have the approximationP ≈K , as both n 1/n bodies approximate K. In [6] it is shown that provided λ<8−d, we have (3.6) dL(K,Kλ,x)≤1+8λ1/d where x is the centroid of K. The cone measure on ∂K is defined as µ (E)=vol ({rθ :θ ∈E, r ∈[0,1]}) K d for all measurable E ⊂ ∂K. The significance of the cone measure is that it leads to a natural polar integration formula (see [14]); for all f ∈L (Rd), 1 ∞ (3.7) f(x)dx=d rd−1f(rθ)dµ (θ)dr K ZRd Z0 Z∂K A function f :Rd →[0,∞) is called log-concave (see [9]) if f(λx+(1−λ)y)≥f(x)λf(y)1−λ for all x,y ∈ Rd and all λ ∈ (0,1). As a consequence of the Pr´ekopa-Leindler inequality [1], if x is a random vector with log-concave density and y is any fixed vector, then hx,yi has a log-concave density in R. In this paper we will consider probability measures µ with non-vanishing log-concave densities. Clearly, such density functions can be written in the form f(x)=e−g(x) where g :Rd →R is convex and lim g(x)=∞. Any such g lies above a cone, x→∞ i.e. (3.8) g(x)≥m||x|| −c 2 withm,c>0. Hencealog-concavefunctionsuchasf mustdecayexponentiallyto zero(withuniform exponentialdecayrateinalldirections). Log-concavefunctions are very rigid. One such example of this rigidity (see lemma 5.12 in [11]) is the fact that if H is any half-space containing the centroid of µ, then µ(H)≥e−1. Let 1≤p<∞. If g :Rd →[0,∞) is convex and lim g(x)=∞, then the x→∞ probability distribution with density given by f(x)=ce−g(x)p will be called p-log-concave. This is a natural generalization of the normal dis- tribution. The 1-log-concave distributions are precisely the non-vanishing log- concavedistributions, andif f is p-log-concave,thenit is alsop′-log-concavefor all 1≤p′ ≤p. Let H denote the collection of all d−1 dimensional affine subspaces (hyper- d planes) of Rd. The Radon transform of a log-concave function f : Rd → [0,∞) is the function Rf :H →[0,∞) defined by d Rf(H)= f(y)d (y) H ZH RANDOM POLYTOPES 7 where d is Lebesgue measure on H. The Radon transform is closely related H to the Fourier transform. See [10] for a discussion of these operators and their connections to convex geometry 4. The one dimensional case Let f be a non-vanishing log-concaveprobability density function on R associ- ated to a probability measure µ. In particular, f(t) = e−g(t) where g : R → R is convex. For t∈R, define t J(t) = f(s)ds Z−∞ u(t) = −log(1−J(t)) The cumulative distribution function J is a strictly increasing bijection between R and (0,1). The following lemma is a standard result (see e.g. theorem 5.1 in [11] for the statement, and the references given there). However we include a short proof here for completeness. Lemma 1. u is convex Proof. Assume momentarily that g ∈C2(R). For t∈(0,1) define ψ(t)=f(J−1(1−t)) Note that −g′′(J−1(1−t)) ψ′′(t)= ≤0 ψ(t) Hence ψ is concave. In addition, lim ψ(t) = lim ψ(t) = 0. 
The following lemma is a quantitative version of the Gnedenko law of large numbers for log-concave probability measures on $\mathbb{R}$.

Lemma 2. For all $q > 0$ there exist $c, \tilde{c} > 0$ such that for all $n \in \mathbb{N}$ with $n \ge 3$, if $\mu$ is a probability measure on $\mathbb{R}$ with a non-vanishing log-concave density function and cumulative distribution function $J$, and $(\gamma_i)_1^n$ is an i.i.d. sample from $\mu$, then with probability at least $1 - \tilde{c}(\log n)^{-q}$ we have

(4.1)    $\left| \frac{\gamma_{(n)} - J^{-1}(1-1/n)}{J^{-1}(1-1/n) - \mathbb{E}\mu} \right| \le c\,\frac{\log\log n}{\log n}$

where $\gamma_{(n)} = \max\{\gamma_i\}_1^n$ and $\mathbb{E}\mu$ denotes the mean of $\mu$.

Proof. Let $a = (\log n)^{-q}$ and $b = q\log n$. By choosing an appropriate value of $\tilde{c}$, the probability bound becomes trivial for all $n \le n_0$; we may therefore assume that $n > n_0$. As mentioned in the preliminaries (see also Lemma 3.3 in [5]), $1 - J(\mathbb{E}\mu) \ge e^{-1}$, hence $u(\mathbb{E}\mu) \le 1$. Note that by convexity of $u$ we have the inequality

$(s - \mathbb{E}\mu)^{-1}(u(s) - u(\mathbb{E}\mu)) \le (t - s)^{-1}(u(t) - u(s))$

which is valid provided that $\mathbb{E}\mu < s < t$. By setting $s = J^{-1}(1-b/n)$ and $t = J^{-1}(1-a/n)$ this inequality becomes

(4.2)    $\frac{J^{-1}(1-a/n) - J^{-1}(1-b/n)}{J^{-1}(1-b/n) - \mathbb{E}\mu} \le \frac{\log b - \log a}{\log n - \log b - 1}$

By independence,

$\mathbb{P}\{J^{-1}(1-b/n) \le \gamma_{(n)} \le J^{-1}(1-a/n)\} = \left(1 - \frac{a}{n}\right)^n - \left(1 - \frac{b}{n}\right)^n \ge 1 - a - e^{-b}$

and if the event $\{J^{-1}(1-b/n) \le \gamma_{(n)} \le J^{-1}(1-a/n)\}$ occurs, then the conclusion of the theorem holds. □
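The probability computation at the end of the proof can be sanity-checked directly: the following Python sketch (an added check, assuming NumPy) compares $(1-a/n)^n - (1-b/n)^n$ with the lower bound $1 - a - e^{-b}$ for $a = (\log n)^{-q}$ and $b = q\log n$:

```python
# Sketch: verify (1 - a/n)^n - (1 - b/n)^n >= 1 - a - exp(-b) for the
# choices a = (log n)^{-q}, b = q log n used in the proof of Lemma 2.
import numpy as np

for q in (1.0, 2.0):
    for n in (10**2, 10**4, 10**8):
        a, b = np.log(n) ** (-q), q * np.log(n)
        exact = (1 - a / n) ** n - (1 - b / n) ** n
        lower = 1 - a - np.exp(-b)
        print(f"q={q}  n={n:>9}  exact={exact:.6f}  lower={lower:.6f}  ok={exact >= lower}")
```

(The two ingredients are the elementary bounds $(1-a/n)^n \ge 1-a$ and $(1-b/n)^n \le e^{-b}$.)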
5. The multi-dimensional case

Lemma 3. Let $d \in \mathbb{N}$, $d \ge 1$ and let $K$ and $L$ be convex bodies in $\mathbb{R}^d$ such that $rB_2^d \subset K \subset RB_2^d$ for some $r, R > 0$. Let $0 < \rho < 1/2$ and $0 < \varepsilon < (16R/r)^{-1}$, and let $N$ be an $\varepsilon$-net in $S^{d-1}$. Suppose that for each $\omega \in N$,

(5.1)    $(1-\rho)\|\omega\|_L \le \|\omega\|_K \le (1+\rho)\|\omega\|_L$

Then for all $x \in \mathbb{R}^d$ we have

(5.2)    $(1 + 2\rho + 28Rr^{-1}\varepsilon)^{-1}\|x\|_L \le \|x\|_K \le (1 + 2\rho + 28Rr^{-1}\varepsilon)\|x\|_L$

In particular,

(5.3)    $d_L(K,L) \le d_L(K,L,0) \le 1 + 2\rho + 28Rr^{-1}\varepsilon$

Proof. Note that $1+\rho \le (1-\rho)^{-1} \le 1+2\rho$ and $1-\rho \le (1+\rho)^{-1} \le 1-\rho/2$, and the same inequalities hold for $\varepsilon$. Since $rB_2^d \subset K \subset RB_2^d$, we have that

$R^{-1}\|x\|_2 \le \|x\|_K \le r^{-1}\|x\|_2$

for all $x \in \mathbb{R}^d$. Combining this with (5.1) gives

$R^{-1}(1+\rho)^{-1} \le \|\omega\|_L \le r^{-1}(1-\rho)^{-1}$

for all $\omega \in N$. Consider any $\theta \in S^{d-1}$. By the series representation (3.2) and the triangle inequality,

$\|\theta\|_L \le r^{-1}(1-\rho)^{-1}(1-\varepsilon)^{-1}$

Hence $\|x\|_L \le r^{-1}(1-\rho)^{-1}(1-\varepsilon)^{-1}\|x\|_2$ for all $x \in \mathbb{R}^d$. Using the triangle inequality in a slightly different way,

$\|\theta\|_L \ge \|\omega_0\|_L - \sum_{i=1}^\infty \varepsilon_i\|\omega_i\|_L$
$\ge R^{-1}(1+\rho)^{-1} - r^{-1}\varepsilon(1-\varepsilon)^{-1}(1-\rho)^{-1}$
$\ge R^{-1}/2 - 4r^{-1}\varepsilon = R^{-1}(1 - 8Rr^{-1}\varepsilon)/2 \ge (4R)^{-1}$

which holds since $8Rr^{-1}\varepsilon \le 1/2$. Thus,

$\|\theta\|_L \le \|\omega_0\|_L + \|\theta - \omega_0\|_L$
$\le (1-\rho)^{-1}\|\omega_0\|_K + r^{-1}(1-\rho)^{-1}(1-\varepsilon)^{-1}\varepsilon$
$\le (1-\rho)^{-1}(\|\theta\|_K + \|\omega_0 - \theta\|_K) + r^{-1}(1-\rho)^{-1}(1-\varepsilon)^{-1}\varepsilon$
$\le (1-\rho)^{-1}\|\theta\|_K + r^{-1}(1-\rho)^{-1}\varepsilon(1 + (1-\varepsilon)^{-1})$
$\le (1-\rho)^{-1}\|\theta\|_K + Rr^{-1}(1-\rho)^{-1}\varepsilon(1 + (1-\varepsilon)^{-1})\|\theta\|_K$
$\le (1+2\rho)(1 + 3Rr^{-1}\varepsilon)\|\theta\|_K$
$\le (1 + 2\rho + 6Rr^{-1}\varepsilon)\|\theta\|_K$

where $\omega_0$ is the element of $N$ that minimizes $\|\theta - \omega_0\|_2$. On the other hand,

$\|\theta\|_K \le \|\omega_0\|_K + \|\theta - \omega_0\|_K$
$\le (1+\rho)\|\omega_0\|_L + r^{-1}\varepsilon$
$\le (1+\rho)(\|\theta\|_L + \|\omega_0 - \theta\|_L) + r^{-1}\varepsilon$
$\le (1+\rho)\|\theta\|_L + r^{-1}(1+\rho)(1-\rho)^{-1}(1-\varepsilon)^{-1}\varepsilon + r^{-1}\varepsilon$
$\le (1+\rho)\|\theta\|_L + 7r^{-1}\varepsilon \cdot 4R\|\theta\|_L$
$\le (1 + \rho + 28Rr^{-1}\varepsilon)\|\theta\|_L$

The result follows by positive homogeneity. □

Proof of Theorem 1. By choosing $\tilde{c}$ large enough, the probability bound becomes trivial for $n < n_0$. Let $n \ge n_0$ and $\varepsilon = (\log n)^{-1}$. By applying a suitable affine transformation, we may assume that the John ellipsoid of $D_{1/n}$ is $\eta B_2^d$ for some $\eta > 0$. We need to use an affine transformation of the form $Tx = Mx + x_0$ where $|\det M| = 1$. Such a transformation preserves Lebesgue measure and therefore preserves the relationship between $\mu$ and $D_{1/n}$. Thus $\eta B_2^d \subset D_{1/n} \subset d\eta B_2^d$. If $H_{\theta,t} = \{x \in \mathbb{R}^d : \langle x,\theta\rangle \ge t\}$ is a half-space with $\mu(H_{\theta,t}) = 1/n$, then $\eta/2 \le t \le 2d\eta$ (this is nothing but a very coarse version of the results of Section 7). Hence $(\eta/2)B_2^d \subset F_{1/n} \subset 2d\eta B_2^d$, and $\eta B_2^d$ acts almost like a John ellipsoid for $F_{1/n}$.

For each $\theta \in S^{d-1}$, the function $f_\theta(t) = -\frac{d}{dt}\mu(H_{\theta,t})$ is the density of a log-concave probability measure $\mu_\theta$ on $\mathbb{R}$ with cumulative distribution function $J_\theta(t) = 1 - \mu(H_{\theta,t})$. Furthermore, the sequence $(\langle\theta, x_i\rangle)_{i=1}^n$ is an i.i.d. sample from this distribution. Recalling the definition of the dual Minkowski functional, for any $y \in \mathbb{R}^d$

$\|y\|_{P_n^\circ} = \sup\{\langle x,y\rangle : x \in P_n\} = \max_{i=1,\dots,n}\langle x_i, y\rangle$

We use this notation even when $0 \notin P_n$. Let $N$ denote a generic $\varepsilon$-net in $S^{d-1}$ and consider the function

$\tilde{f}_N(x) = \inf\{\mu(H_{\omega,t}) : \omega \in N,\ t = \langle\omega, x\rangle\}$

For all $\delta > 0$, define the 'floating polytope'

$F_\delta^N = \{x \in \mathbb{R}^d : \tilde{f}_N(x) \ge \delta\}$

Note that $\tilde{f}(x) = \inf_N \tilde{f}_N(x)$ and $F_\delta = \bigcap_N F_\delta^N$, where $N$ runs through the collection of all $\varepsilon$-nets in $S^{d-1}$. By the geometric interpretation of (3.3) we have $(\eta/2)B_2^d \subset F_{1/n}^N \subset 3d\eta B_2^d$ and therefore $(3d\eta)^{-1}B_2^d \subset (F_{1/n}^N)^\circ \subset 2\eta^{-1}B_2^d$. For each $\theta \in S^{d-1}$, we have

$\mathbb{E}\mu_\theta \ge J_\theta^{-1}(e^{-1}) \ge J_\theta^{-1}(1/n)$

Combining this and (4.1), with probability at least $1 - \tilde{c}(\log n)^{-d-q}$ we have that

$\left| \frac{\|\theta\|_{P_n^\circ} - J_\theta^{-1}(1-1/n)}{J_\theta^{-1}(1-1/n) - J_\theta^{-1}(1/n)} \right| \le c\,\frac{\log\log n}{\log n}$

Since both $-J_\theta^{-1}(1/n)$ and $J_\theta^{-1}(1-1/n)$ lie in the interval $[\eta/2, 2d\eta]$, both have roughly the same order of magnitude and we have

$(1-\rho)J_\theta^{-1}(1-1/n) \le \|\theta\|_{P_n^\circ} \le (1+\rho)J_\theta^{-1}(1-1/n)$

where

$\rho = c\,\frac{\log\log n}{\log n}$

With probability at least $1 - \tilde{c}\varepsilon^{-d}(\log n)^{-d-q} = 1 - \tilde{c}(\log n)^{-q}$, this happens for all $\omega \in N$. Hence

$(1-\rho)P_n \subset F_{1/n}^N$

which implies that $(1-\rho)\|\theta\|_{P_n^\circ} \le \|\theta\|_{(F_{1/n}^N)^\circ}$ for all $\theta \in S^{d-1}$. On the other hand, for all $\omega \in N$ we have

$\|\omega\|_{P_n^\circ} \ge (1-\rho)J_\omega^{-1}(1-1/n) \ge (1-\rho)\|\omega\|_{(F_{1/n}^N)^\circ}$

By (5.2),

(5.4)    $(1 + 2\rho + 168d\varepsilon)^{-1}\|x\|_{P_n^\circ} \le \|x\|_{(F_{1/n}^N)^\circ} \le (1 + 2\rho + 168d\varepsilon)\|x\|_{P_n^\circ}$

for all $x \in \mathbb{R}^d$. Let $M$ be any other $\varepsilon$-net in $S^{d-1}$. By the calculations above, with probability at least $1 - \tilde{c}(\log n)^{-q}$,

(5.5)    $(1 + 2\rho + 168d\varepsilon)^{-1}\|x\|_{P_n^\circ} \le \|x\|_{(F_{1/n}^M)^\circ} \le (1 + 2\rho + 168d\varepsilon)\|x\|_{P_n^\circ}$

for all $x \in \mathbb{R}^d$. By the union bound, with probability at least $1 - \tilde{c}(\log n)^{-q} > 0$, both (5.4) and (5.5) hold. Since both $F_{1/n}^N$ and $F_{1/n}^M$ are deterministic bodies, the only way that this can be true is if

$(1 + 2\rho + 168d\varepsilon)^{-2}F_{1/n}^N \subset F_{1/n}^M \subset (1 + 2\rho + 168d\varepsilon)^2 F_{1/n}^N$

Since $F_{1/n} = \bigcap_M F_{1/n}^M$, where the intersection is taken over all $\varepsilon$-nets in $S^{d-1}$, we have

$(1 + 2\rho + 168d\varepsilon)^{-2}F_{1/n}^N \subset F_{1/n} \subset (1 + 2\rho + 168d\varepsilon)^2 F_{1/n}^N$

Combining this with the polar of (5.4) gives that with probability at least $1 - \tilde{c}(\log n)^{-q}$ we have

$(1 + 2\rho + 168d\varepsilon)^{-3}P_n \subset F_{1/n} \subset (1 + 2\rho + 168d\varepsilon)^3 P_n$

from which the result follows. □
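The 'floating polytope' $F_{1/n}^N$ is concrete enough to compute in examples. For the standard Gaussian, $\mu(H_{\omega,t}) = 1 - \Phi(t)$, so $F_{1/n}^N = \bigcap_{\omega\in N}\{x : \langle\omega,x\rangle \le r_n\}$ with $r_n = \Phi^{-1}(1-1/n)$. The following Python sketch (an added illustration for $d = 2$, assuming NumPy and SciPy, and assuming $0 \in \mathrm{int}\,P_n$, which holds with overwhelming probability) builds $F_{1/n}^N$ as a half-space intersection and estimates the sandwiching constant $\max\{\sup_{x\in P_n}\|x\|_{F_{1/n}^N},\ \sup_{y\in F_{1/n}^N}\|y\|_{P_n}\}$:

```python
# Sketch: floating polytope F^N_{1/n} for the standard Gaussian in R^2, namely
# the intersection of half-spaces {<w, x> <= r_n} over an eps-net N of S^1,
# compared with the random polytope P_n via Minkowski functionals about 0.
import numpy as np
from scipy.stats import norm
from scipy.spatial import ConvexHull, HalfspaceIntersection

rng = np.random.default_rng(3)
n, eps = 10**4, 0.1
r_n = norm.ppf(1 - 1 / n)

ang = np.arange(0, 2 * np.pi, eps)                  # eps-net of S^1
net = np.c_[np.cos(ang), np.sin(ang)]
halfspaces = np.c_[net, -r_n * np.ones(len(net))]   # rows [w, -r_n]: <w,x> - r_n <= 0
FN = HalfspaceIntersection(halfspaces, np.zeros(2)) # floating polytope F^N_{1/n}

X = rng.standard_normal((n, 2))
hull = ConvexHull(X)                                # P_n = conv{x_i}; assumes 0 in int(P_n)
V = X[hull.vertices]

lam1 = ((V @ net.T) / r_n).max()                    # sup over P_n of the gauge of F^N
A, b = hull.equations[:, :2], hull.equations[:, 2]  # facets of P_n: A x + b <= 0 inside
lam2 = ((FN.intersections @ A.T) / (-b)).max()      # sup over F^N vertices of ||y||_{P_n}
print(max(lam1, lam2, 1.0))                         # estimate of d_L(P_n, F^N_{1/n}, 0)
```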
