Asymptotic shape of the convex hull of isotropic log-concave random vectors

Apostolos Giannopoulos, Labrini Hioni and Antonis Tsolomitis

Abstract

Let $x_1,\ldots,x_N$ be independent random points distributed according to an isotropic log-concave measure $\mu$ on $\mathbb{R}^n$, and consider the random polytope $K_N := \mathrm{conv}\{\pm x_1,\ldots,\pm x_N\}$. We provide sharp estimates for the quermaßintegrals and other geometric parameters of $K_N$ in the range $cn \le N \le \exp(n)$; these complement previous results from [13] and [14] that were given for the range $cn \le N \le \exp(\sqrt{n})$. One of the basic new ingredients in our work is a recent result of E. Milman that determines the mean width of the centroid body $Z_q(\mu)$ of $\mu$ for all $1 \le q \le n$.

1 Introduction

The purpose of this work is to add new information on the asymptotic shape of random polytopes whose vertices have a log-concave distribution. Without loss of generality we shall assume that this distribution is also isotropic. Recall that a convex body $K$ in $\mathbb{R}^n$ is called isotropic if it has volume 1, it is centered, i.e. its center of mass is at the origin, and its inertia matrix is a multiple of the identity: there exists a constant $L_K > 0$ such that

(1.1)    $\int_K \langle x,\theta\rangle^2\,dx = L_K^2$

for every $\theta$ in the Euclidean unit sphere $S^{n-1}$. More generally, a log-concave probability measure $\mu$ on $\mathbb{R}^n$ is called isotropic if its center of mass is at the origin and its inertia matrix is the identity; in this case, the isotropic constant of $\mu$ is defined as

(1.2)    $L_\mu := \big(\sup_{x\in\mathbb{R}^n} f_\mu(x)\big)^{1/n}$,

where $f_\mu$ is the density of $\mu$ with respect to the Lebesgue measure. Note that a centered convex body $K$ of volume 1 in $\mathbb{R}^n$ is isotropic if and only if the log-concave probability measure $\mu_K$ with density $x \mapsto L_K^n \mathbf{1}_{K/L_K}(x)$ is isotropic.

A very well-known open question in the theory of isotropic measures is the hyperplane conjecture, which asks if there exists an absolute constant $C > 0$ such that

(1.3)    $L_n := \sup\{L_\mu : \mu \text{ is an isotropic log-concave measure on } \mathbb{R}^n\} \le C$

for all $n \ge 1$. Bourgain proved in [9] that $L_n \le c\sqrt[4]{n}\log n$ (more precisely, he showed that $L_K \le c\sqrt[4]{n}\log n$ for every isotropic symmetric convex body $K$ in $\mathbb{R}^n$), while Klartag [18] obtained the bound $L_n \le c\sqrt[4]{n}$. A second proof of Klartag's estimate appears in [20].

The study of the asymptotic shape of random polytopes whose vertices have a log-concave distribution was initiated in [13] and [14]. Given an isotropic log-concave measure $\mu$ on $\mathbb{R}^n$, for every $N \ge n$ we consider $N$ independent random points $x_1,\ldots,x_N$ distributed according to $\mu$ and define the random polytope $K_N := \mathrm{conv}\{\pm x_1,\ldots,\pm x_N\}$. The main idea in these works was to compare $K_N$ with the $L_q$-centroid body of $\mu$ for a suitable value of $q$; roughly speaking, $K_N$ is close to the body $Z_{\log(2N/n)}(\mu)$ with high probability. Recall that the $L_q$-centroid bodies $Z_q(\mu)$, $q \ge 1$, are defined through their support function $h_{Z_q(\mu)}$, which is given by

(1.4)    $h_{Z_q(\mu)}(y) := \|\langle\cdot,y\rangle\|_{L_q(\mu)} = \Big(\int_{\mathbb{R}^n} |\langle x,y\rangle|^q\,d\mu(x)\Big)^{1/q}$.

These bodies incorporate information about the distribution of linear functionals with respect to $\mu$. The $L_q$-centroid bodies were introduced, under a different normalization, by Lutwak and Zhang in [23], while in [29] for the first time, and in [30] later on, Paouris used geometric properties of them to acquire detailed information about the distribution of the Euclidean norm with respect to $\mu$.
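As a concrete illustration of definition (1.4) (this numerical sketch is ours, not part of the paper), one can estimate $h_{Z_q(\mu)}$ by Monte Carlo when $\mu$ is the standard Gaussian measure $N(0, I_n)$, which is isotropic and log-concave; the dimension, sample size and helper name below are arbitrary choices made for the illustration.

```python
# Monte Carlo sketch (ours) of the support function (1.4) of the L_q-centroid body
# Z_q(mu), taking mu = N(0, I_n): an isotropic log-concave measure.
import numpy as np

rng = np.random.default_rng(0)

def h_Zq(q, theta, samples):
    """Estimate h_{Z_q(mu)}(theta) = (E_mu |<x, theta>|^q)^{1/q} from i.i.d. samples of mu."""
    return np.mean(np.abs(samples @ theta) ** q) ** (1.0 / q)

n, m = 50, 200_000
x = rng.standard_normal((m, n))          # samples from mu = N(0, I_n)
theta = np.ones(n) / np.sqrt(n)          # a unit direction; any other works by rotation invariance

for q in (2, 4, int(np.sqrt(n))):
    print(f"q = {q:2d}:  h_Zq(theta) ~ {h_Zq(q, theta, x):.3f}   (sqrt(q) = {np.sqrt(q):.3f})")
# For q = 2 the estimate is ~1, reflecting Z_2(mu) = B_2^n for isotropic mu; for larger q it
# grows like sqrt(q) up to an absolute constant, matching the estimates for w(Z_q(mu)) below.
```

The same estimator works for any measure one can sample from; only the line producing the sample changes.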
It was proved in [13] that, given any isotropic log-concave measure $\mu$ on $\mathbb{R}^n$ and any $cn \le N \le e^n$, the random polytope $K_N$ defined by $N$ independent random points $x_1,\ldots,x_N$ which are distributed according to $\mu$ satisfies, with high probability, the inclusion

(1.5)    $K_N \supseteq c_1 Z_{\log(N/n)}(\mu)$

(for the precise statement see Fact 3.2). Then, using the fact that the volume of the $L_q$-centroid bodies satisfies the lower bounds $|Z_q(\mu)|^{1/n} \ge c_2\sqrt{q/n}$ if $q \le \sqrt{n}$ and $|Z_q(\mu)|^{1/n} \ge c_3 L_\mu^{-1}\sqrt{q/n}$ if $\sqrt{n} \le q \le n$ (see Section 2), we see that for $n \le N \le e^{\sqrt{n}}$ we have

(1.6)    $|K_N|^{1/n} \ge c_4\,\dfrac{\sqrt{\log(2N/n)}}{\sqrt{n}}$,

while in the range $e^{\sqrt{n}} \le N \le e^n$ we have

(1.7)    $|K_N|^{1/n} \ge c_5 L_\mu^{-1}\,\dfrac{\sqrt{\log(2N/n)}}{\sqrt{n}}$

with probability exponentially close to 1. On the other hand, one can check that for every $\alpha \ge 1$ and $q \ge 1$,

(1.8)    $\mathbb{E}\big[\sigma_n\big(\{\theta : h_{K_N}(\theta) \ge \alpha h_{Z_q(\mu)}(\theta)\}\big)\big] \le N\alpha^{-q}$,

where $\sigma_n$ is the rotationally invariant probability measure on the Euclidean unit sphere $S^{n-1}$. This estimate is sufficient for some sharp upper bounds. First, for all $n \le N \le \exp(n)$ one has

(1.9)    $\mathbb{E}\big[w(K_N)\big] \le c_6\, w(Z_{\log N}(\mu))$,

where the mean width $w(C)$ of a convex body $C$ in $\mathbb{R}^n$ containing the origin is defined as the average of its support function on $S^{n-1}$:

$w(C) = \int_{S^{n-1}} h_C(\theta)\,d\sigma_n(\theta)$.

Second, one has

(1.10)    $|K_N|^{1/n} \le c_7\,\dfrac{\sqrt{\log(2N/n)}}{\sqrt{n}}$

with probability greater than $1 - \frac{1}{N}$, where $c_7 > 0$ is an absolute constant.

In [14] these results were extended to the full family of quermaßintegrals $W_{n-k}(K_N)$ of $K_N$. These are defined through Steiner's formula

(1.11)    $|K + tB_2^n| = \sum_{k=0}^{n} \binom{n}{k} W_{n-k}(K)\, t^{n-k}$,

where $W_{n-k}(K)$ is the mixed volume $V(K, k; B_2^n, n-k)$. It is more convenient to express the estimates using a normalized variant of $W_{n-k}(K)$: for every $1 \le k \le n$ we set

(1.12)    $Q_k(K) = \Big(\dfrac{W_{n-k}(K)}{\omega_n}\Big)^{1/k} = \Big(\dfrac{1}{\omega_k}\int_{G_{n,k}} |P_F(K)|\,d\nu_{n,k}(F)\Big)^{1/k}$,

where the last equality follows from Kubota's integral formula (see Section 2 for background information on mixed volumes). Then, one has the following results on the expectation of $Q_k(K_N)$ for all values of $k$:

Theorem 1.1 (Dafnis, Giannopoulos and Tsolomitis, [14]). If $n^2 \le N \le \exp(cn)$ then for every $1 \le k \le n$ we have

(1.13)    $L_\mu^{-1}\sqrt{\log N} \lesssim \mathbb{E}\big[Q_k(K_N)\big] \lesssim w(Z_{\log N}(\mu))$.

In the range $n^2 \le N \le \exp(\sqrt{n})$ one has an asymptotic formula: for every $1 \le k \le n$,

(1.14)    $\mathbb{E}\big[Q_k(K_N)\big] \simeq \sqrt{\log N}$.

All these estimates remain valid for $n^{1+\delta} \le N \le n^2$, where $\delta \in (0,1)$ is fixed, if we allow the constants to depend on $\delta$. Working in the range $N \simeq n$ is possible, but requires some additional attention (see e.g. [5] for the case of the mean width).

A more careful analysis (which can be found in [14, Theorem 1.2]) shows that if $n^2 \le N \le \exp(\sqrt{n})$ then, for any $s > 1$, a random $K_N$ satisfies, with probability greater than $1 - N^{-s}$,

(1.15)    $Q_k(K_N) \le c_1(s)\sqrt{\log N}$

for all $1 \le k \le n$ and, with probability greater than $1 - \exp(-\sqrt{n})$,

(1.16)    $Q_k(K_N) \ge c_8\sqrt{\log N}$

for all $1 \le k \le n$, where $c_1(s) > 0$ depends only on $s$, and $c_8 > 0$ is an absolute constant.
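Since $Q_1(K_N) = w(K_N)$ and the support function of $K_N = \mathrm{conv}\{\pm x_1,\ldots,\pm x_N\}$ is simply $h_{K_N}(\theta) = \max_i |\langle x_i,\theta\rangle|$, the asymptotic formula (1.14) is easy to probe numerically. The following sketch (ours, with parameter choices placed in the range $n^2 \le N \le \exp(\sqrt{n})$) estimates $w(K_N)$ for Gaussian vertices by averaging over random directions.

```python
# Numerical probe (ours) of Q_1(K_N) = w(K_N) ~ sqrt(log N) for the Gaussian random
# polytope K_N = conv{±x_1, ..., ±x_N}; cf. (1.9) and (1.14).
import numpy as np

rng = np.random.default_rng(1)
n, N, n_dirs = 100, 15_000, 500        # chosen so that n^2 <= N <= exp(sqrt(n))

x = rng.standard_normal((N, n))        # vertices x_i ~ N(0, I_n), an isotropic log-concave sample
theta = rng.standard_normal((n_dirs, n))
theta /= np.linalg.norm(theta, axis=1, keepdims=True)   # roughly uniform directions on S^{n-1}

h = np.abs(theta @ x.T).max(axis=1)    # h_{K_N}(theta) = max_i |<x_i, theta>| on each direction
print(f"empirical w(K_N) ~ {h.mean():.2f},  sqrt(log N) = {np.sqrt(np.log(N)):.2f}")
# The two numbers agree up to an absolute constant, as (1.14) predicts.
```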
A natural question that arises is whether these results can be extended to the full range $cn \le N \le \exp(n)$ of values of $N$. If one decides to follow the approach of [13] and [14] then there are two main obstacles. The first one is that the lower bound $|Z_q(\mu)|^{1/n} \ge c\sqrt{q/n}$ is currently known only in the range $q \le \sqrt{n}$. In fact, proving the same for larger values of $q$ would lead to improved estimates on $L_n$ (for example, see the computation after Lemma 2.2 in [20]). The second one was that, until recently, a sharp estimate on the mean width of $Z_q(\mu)$ was known only for $q \le \sqrt{n}$; G. Paouris proved in [29] that for every isotropic log-concave measure $\mu$ on $\mathbb{R}^n$ and any $q \le \sqrt{n}$ one has

(1.17)    $w(Z_q(\mu)) \le c_9\sqrt{q}$.

Recently, E. Milman [25] obtained the same upper bound (modulo logarithmic terms) for $q$ beyond $\sqrt{n}$.

Theorem 1.2 (E. Milman, [25]). For every isotropic log-concave measure $\mu$ on $\mathbb{R}^n$ and for all $q \in [\sqrt{n}, n]$ we have

(1.18)    $w(Z_q(\mu)) \le c_{10}\sqrt{q}\,\log^2(1+q)$.

An immediate consequence of this result is that it provides a new bound for the mean width of an origin-symmetric isotropic convex body $K$ in $\mathbb{R}^n$. In this case it is known that $Z_n(K) \supseteq cK$, and we conclude that

(1.19)    $w(K) \le C_1\sqrt{n}\,\log^2(1+n)\,L_K$,

improving the earlier known bound $w(K) \le C_2 n^{3/4} L_K$ of Hartzoulaki, from her PhD thesis [17]. We note here that not all of the logarithmic terms in (1.19) can be removed, as the example of $B_1^n/|B_1^n|^{1/n}$ shows.

Using E. Milman's theorem we can show the following.

Theorem 1.3. Let $x_1,\ldots,x_N$ be independent random points distributed according to an isotropic log-concave measure $\mu$ on $\mathbb{R}^n$, and consider the random polytope $K_N := \mathrm{conv}\{\pm x_1,\ldots,\pm x_N\}$. If $\exp(\sqrt{n}) \le N \le \exp(cn)$ then for every $1 \le k \le n$ we have

(1.20)    $L_\mu^{-1}\sqrt{\log N} \lesssim \mathbb{E}\big[Q_k(K_N)\big] \lesssim \sqrt{\log N}\,(\log\log N)^2$.

Next we provide estimates for $Q_k(K_N)$ for "most" $K_N$:

Theorem 1.4. Let $x_1,\ldots,x_N$ be independent random points distributed according to an isotropic log-concave measure $\mu$ on $\mathbb{R}^n$, and consider the random polytope $K_N := \mathrm{conv}\{\pm x_1,\ldots,\pm x_N\}$. For all $\exp(\sqrt{n}) \le N \le \exp(n)$ and $s \ge 1$ we have

(1.21)    $Q_k(K_N) \le c_2(s)\sqrt{\log N}\,(\log\log N)^2$

for all $1 \le k < n$, with probability greater than $1 - N^{-s}$.

We also provide estimates on the volume radius of a random projection $P_F(K_N)$ of $K_N$ onto $F \in G_{n,k}$ (in terms of $n$, $k$ and $N$) in the range $e^{\sqrt{n}} \le N \le e^n$; these extend the sharp estimate $\mathrm{v.rad}(P_F(K_N)) \simeq \sqrt{\log N}$ that was obtained in [14] for the case $N \le e^{\sqrt{n}}$.

Theorem 1.5. If $\exp(\sqrt{n}) \le N \le e^{cn}$ and $s \ge 1$, then a random $K_N$ satisfies with probability greater than $1 - \max\{N^{-s}, e^{-c_{11}\sqrt{N}}\}$ the following: for every $1 \le k \le n$ there exists a subset $M_{n,k}$ of $G_{n,k}$ with $\nu_{n,k}(M_{n,k}) \ge 1 - e^{-c_{12}k}$ such that

(1.22)    $c_{13} L_\mu^{-1}\sqrt{\log N} \le \mathrm{v.rad}(P_F(K_N)) := \Big(\dfrac{|P_F(K_N)|}{\omega_k}\Big)^{1/k} \le c_3(s)\sqrt{\log N}\,(\log\log N)^2$

for all $F \in M_{n,k}$.

In Section 4 we provide an alternative proof of an estimate of Alonso-Gutiérrez, Dafnis, Hernández-Cifre and Prochno from [3] on the $k$-th mean outer radius

(1.23)    $\tilde{R}_k(K_N) = \int_{G_{n,k}} R(P_F(K_N))\,d\nu_{n,k}(F)$

of a random $K_N$, as a function of $N$, $n$ and $k$.

Theorem 1.6. Let $x_1,\ldots,x_N$ be independent random points distributed according to an isotropic log-concave measure $\mu$ on $\mathbb{R}^n$, and consider the random polytope $K_N := \mathrm{conv}\{\pm x_1,\ldots,\pm x_N\}$. If $n \le N \le \exp(\sqrt{n})$ then, for all $1 \le k \le n$ and $s > 0$ one has

(1.24)    $c_4(s)\max\big\{\sqrt{k}, \sqrt{\log(N/n)}\big\} \le \tilde{R}_k(K_N) \le c_5(s)\max\big\{\sqrt{k}, \sqrt{\log N}\big\}$

with probability greater than $1 - N^{-s}$, where $c_4(s), c_5(s)$ are positive constants depending only on $s$.
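Both sides of (1.24) are easy to probe numerically in the Gaussian case, since $R(P_F(K_N)) = \max_i \|P_F x_i\|_2$ and a Haar-random $F \in G_{n,k}$ can be generated by orthonormalizing a Gaussian matrix. The sketch below (ours; the parameters and helper name are arbitrary) averages over random subspaces to approximate $\tilde{R}_k(K_N)$.

```python
# Monte Carlo sketch (ours) of the k-th mean outer radius (1.23) for the Gaussian random
# polytope, compared with the two-sided bound max{sqrt(k), sqrt(log N)} of Theorem 1.6.
import numpy as np

rng = np.random.default_rng(2)
n, N = 100, 15_000                                   # n <= N <= exp(sqrt(n))
x = rng.standard_normal((N, n))                      # vertices of K_N = conv{±x_i}

def projection_radius(k):
    """R(P_F(K_N)) = max_i ||P_F x_i||_2 for a Haar-random F in G_{n,k}."""
    q, _ = np.linalg.qr(rng.standard_normal((n, k))) # orthonormal basis of a random k-dim subspace
    return np.linalg.norm(x @ q, axis=1).max()

for k in (1, 10, 50, 100):
    r_k = np.mean([projection_radius(k) for _ in range(100)])   # average over F ~ nu_{n,k}
    bound = max(np.sqrt(k), np.sqrt(np.log(N)))
    print(f"k = {k:3d}:  R_k(K_N) ~ {r_k:6.2f}   max(sqrt(k), sqrt(log N)) = {bound:5.2f}")
# The estimates track the bound up to an absolute constant, as in (1.24).
```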
We provide a formula for $\tilde{R}_k(K_N)$ which is valid for all $cn \le N \le \exp(n)$. This allows us to recover (and explain) the sharp estimate of Theorem 1.6 for "small" values of $N$ and to obtain its analogue for "large" values of $N$; see Theorem 4.5.

In Section 5 we obtain estimates on the regularity of the covering numbers and the dual covering numbers of a random $K_N$. In a certain range of values of $N$, these allow us to conclude that a random $K_N$ is in $\alpha$-regular $M$-position with $\alpha \sim 1$ (see Section 5 for definitions and terminology).

Theorem 1.7. Let $\mu$ be an isotropic log-concave measure on $\mathbb{R}^n$. Then, assuming that $n^2 \le N \le \exp\big((n\log n)^{2/5}\big)$, we have that a random $K_N$ satisfies with probability greater than $1 - N^{-1}$ the entropy estimates

$\max\big\{\log N(K_N, t r_N B_2^n),\ \log N(r_N B_2^n, t K_N)\big\} \le c_{14}\,\dfrac{n(\log n)^2\log(1+t)}{t}$

for every $t \ge 1$, where $r_N = \sqrt{\log N}$ and $c_{14} > 0$ is an absolute constant.

As an application we estimate the average diameter of $k$-dimensional sections of a random $K_N$, defined by

(1.25)    $\tilde{D}_k(K_N) = \int_{G_{n,k}} R(K_N \cap F)\,d\nu_{n,k}(F)$.

The discussion shows that the behavior of $\tilde{D}_k(K_N)$ is not always the same as that of $\tilde{R}_k(K_N)$. In order to give an idea of the results, let us mention here the following simplified version.

Theorem 1.8. Let $\mu$ be an isotropic log-concave measure on $\mathbb{R}^n$ and $a, b \in (0,1)$.

(i) If $k \le bn$ then a random $K_N$ satisfies with probability $1 - N^{-1}$

$\tilde{D}_k(K_N) \le c_b\sqrt{\log N}$ if $n^2 \le N \le \exp(\sqrt{n})$

and

$\tilde{D}_k(K_N) \le c_b\sqrt{\log N}\,(\log\log N)^2$ if $\exp(\sqrt{n}) \le N \le \exp(n)$.

(ii) If $k \ge an$ and $N \le \exp\big((n\log n)^{2/5}\big)$ then a random $K_N$ satisfies with probability $1 - \exp(-\sqrt{n})$

$c_a\,\dfrac{\sqrt{\log N}}{\log^3 n} \le \tilde{D}_k(K_N)$,

where $c_a, c_b$ are positive constants that depend only on $a$ and $b$ respectively.

We conclude this paper with a brief discussion of the interesting (open) question whether the isotropic constant of a random $K_N$ is bounded by a constant independent from $n$ and $N$. The first class of random polytopes $K_N$ in $\mathbb{R}^n$ for which uniform bounds were established was the class of Gaussian random polytopes. Klartag and Kozma proved in [19] that if $N > n$ and if $G_1,\ldots,G_N$ are independent standard Gaussian random vectors in $\mathbb{R}^n$, then the isotropic constant of the random polytope $K_N = \mathrm{conv}\{\pm G_1,\ldots,\pm G_N\}$ is bounded by an absolute constant $C > 0$ with probability greater than $1 - Ce^{-cn}$. The same idea works in the case where the vertices $x_j$ of $K_N$ are distributed according to an isotropic $\psi_2$-measure $\mu$; the bound then depends only on the $\psi_2$-constant of $\mu$. Alonso-Gutiérrez [2] and Dafnis, Giannopoulos and Guédon [12] have applied more or less the same method to obtain a positive answer in the case where the vertices of $K_N$ are chosen from the unit sphere or from an unconditional isotropic convex body, respectively. We show that, in the general isotropic log-concave case, the method of Klartag and Kozma gives the bound $O\big(\sqrt{\log(2N/n)}\big)$ if $N \le \exp(\sqrt{n})$ (a proof along the same lines and an extension to random perturbations of random polytopes appear in [4]).

2 Notation and background material

We work in $\mathbb{R}^n$, which is equipped with a Euclidean structure $\langle\cdot,\cdot\rangle$. We denote by $\|\cdot\|_2$ the corresponding Euclidean norm, and write $B_2^n$ for the Euclidean unit ball and $S^{n-1}$ for the unit sphere. Volume is denoted by $|\cdot|$. We write $\omega_n$ for the volume of $B_2^n$ and $\sigma_n$ for the rotationally invariant probability measure on $S^{n-1}$. The Grassmann manifold $G_{n,k}$ of $k$-dimensional subspaces of $\mathbb{R}^n$ is equipped with the Haar probability measure $\nu_{n,k}$. Let $1 \le k \le n$ and $F \in G_{n,k}$. We will denote the orthogonal projection from $\mathbb{R}^n$ onto $F$ by $P_F$. We also define $B_F = B_2^n \cap F$ and $S_F = S^{n-1} \cap F$.
The letters $c, c', c_1, c_2$ etc. denote absolute positive constants whose value may change from line to line. Whenever we write $a \simeq b$, we mean that there exist absolute constants $c_1, c_2 > 0$ such that $c_1 a \le b \le c_2 a$. Similarly, if $K, L \subseteq \mathbb{R}^n$ we will write $K \simeq L$ if there exist absolute constants $c_1, c_2 > 0$ such that $c_1 K \subseteq L \subseteq c_2 K$. We also write $\overline{A}$ for the homothetic image of volume 1 of a convex body $A \subseteq \mathbb{R}^n$, i.e. $\overline{A} := \frac{A}{|A|^{1/n}}$.

A convex body is a compact convex subset $C$ of $\mathbb{R}^n$ with non-empty interior. We say that $C$ is symmetric if $-x \in C$ whenever $x \in C$. We say that $C$ is centered if it has center of mass at the origin, i.e. $\int_C \langle x,\theta\rangle\,dx = 0$ for every $\theta \in S^{n-1}$. The support function $h_C : \mathbb{R}^n \to \mathbb{R}$ of $C$ is defined by $h_C(x) = \max\{\langle x,y\rangle : y \in C\}$. For each $-\infty < p < \infty$, $p \neq 0$, we define the $p$-mean width of $C$ by

(2.1)    $w_p(C) := \Big(\int_{S^{n-1}} h_C^p(\theta)\,d\sigma_n(\theta)\Big)^{1/p}$.

The mean width of $C$ is the quantity $w(C) = w_1(C)$. The radius of $C$ is defined as $R(C) = \max\{\|x\|_2 : x \in C\}$ and, if the origin is an interior point of $C$, the polar body $C^\circ$ of $C$ is

(2.2)    $C^\circ := \{y \in \mathbb{R}^n : \langle x,y\rangle \le 1 \text{ for all } x \in C\}$.

Finally, if $C$ is a symmetric convex body in $\mathbb{R}^n$ and $\|\cdot\|_C$ is the norm induced on $\mathbb{R}^n$ by $C$, we set

$M(C) = \int_{S^{n-1}} \|x\|_C\,d\sigma_n(x)$

and write $b(C)$ for the smallest positive constant $b$ with the property $\|x\|_C \le b\|x\|_2$ for all $x \in \mathbb{R}^n$. From V. Milman's proof of Dvoretzky's theorem (see [6, Chapter 5]) we know that if $k \le cn(M(C)/b(C))^2$ then for most $F \in G_{n,k}$ we have $C \cap F \simeq \frac{1}{M(C)} B_F$.

2.1 Quermaßintegrals

Let $\mathcal{K}_n$ denote the class of non-empty compact convex subsets of $\mathbb{R}^n$. The relation between volume and the operations of addition and multiplication of compact convex sets by nonnegative reals is described by Minkowski's fundamental theorem: If $K_1,\ldots,K_m \in \mathcal{K}_n$, $m \in \mathbb{N}$, then the volume of $t_1K_1 + \cdots + t_mK_m$ is a homogeneous polynomial of degree $n$ in $t_i \ge 0$:

(2.3)    $|t_1K_1 + \cdots + t_mK_m| = \sum_{1\le i_1,\ldots,i_n\le m} V(K_{i_1},\ldots,K_{i_n})\, t_{i_1}\cdots t_{i_n}$,

where the coefficients $V(K_{i_1},\ldots,K_{i_n})$ can be chosen to be invariant under permutations of their arguments. The coefficient $V(K_{i_1},\ldots,K_{i_n})$ is called the mixed volume of the $n$-tuple $(K_{i_1},\ldots,K_{i_n})$.

Steiner's formula is a special case of Minkowski's theorem; if $K$ is a convex body in $\mathbb{R}^n$ then the volume of $K + tB_2^n$, $t > 0$, can be expanded as a polynomial in $t$:

(2.4)    $|K + tB_2^n| = \sum_{k=0}^{n} \binom{n}{k} W_{n-k}(K)\, t^{n-k}$,

where $W_{n-k}(K) := V(K, k; B_2^n, n-k)$ is the $(n-k)$-th quermaßintegral of $K$. It will be convenient for us to work with a normalized variant of $W_{n-k}(K)$: for every $1 \le k \le n$ we set

(2.5)    $Q_k(K) = \Big(\dfrac{1}{\omega_k}\int_{G_{n,k}} |P_F(K)|\,d\nu_{n,k}(F)\Big)^{1/k}$.

Note that $Q_1(K) = w(K)$. Kubota's integral formula

(2.6)    $W_{n-k}(K) = \dfrac{\omega_n}{\omega_k}\int_{G_{n,k}} |P_F(K)|\,d\nu_{n,k}(F)$

shows that

(2.7)    $Q_k(K) = \Big(\dfrac{W_{n-k}(K)}{\omega_n}\Big)^{1/k}$.

The Aleksandrov-Fenchel inequality states that if $K, L, K_3,\ldots,K_n \in \mathcal{K}_n$, then

(2.8)    $V(K,L,K_3,\ldots,K_n)^2 \ge V(K,K,K_3,\ldots,K_n)\, V(L,L,K_3,\ldots,K_n)$.

This implies that the sequence $(W_0(K),\ldots,W_n(K))$ is log-concave: we have

(2.9)    $W_j^{k-i} \ge W_i^{k-j} W_k^{j-i}$

if $0 \le i < j < k \le n$. Taking into account (2.7) we conclude that $Q_k(K)$ is a decreasing function of $k$. For the theory of mixed volumes we refer to [33].
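As a quick check of the normalization in (2.4)-(2.7) (this worked example is ours, not part of the paper), take $K = B_2^n$. Steiner's formula gives

$|B_2^n + tB_2^n| = (1+t)^n\omega_n = \sum_{k=0}^{n}\binom{n}{k}\,\omega_n\, t^{n-k}$,

so $W_{n-k}(B_2^n) = \omega_n$ for every $k$, and therefore $Q_k(B_2^n) = \big(W_{n-k}(B_2^n)/\omega_n\big)^{1/k} = 1$ for all $1 \le k \le n$. Kubota's formula (2.6) gives the same value, since $|P_F(B_2^n)| = |B_F| = \omega_k$ for every $F \in G_{n,k}$; in particular $Q_1(B_2^n) = w(B_2^n) = 1$, consistent with the convention $w(C) = \int_{S^{n-1}} h_C\,d\sigma_n$ used above.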
2.2 $L_q$-centroid bodies of isotropic log-concave measures

We denote by $\mathcal{P}_n$ the class of all Borel probability measures on $\mathbb{R}^n$ which are absolutely continuous with respect to the Lebesgue measure. The density of $\mu \in \mathcal{P}_n$ is denoted by $f_\mu$. We say that $\mu \in \mathcal{P}_n$ is centered if, for all $\theta \in S^{n-1}$,

(2.10)    $\int_{\mathbb{R}^n}\langle x,\theta\rangle\,d\mu(x) = \int_{\mathbb{R}^n}\langle x,\theta\rangle f_\mu(x)\,dx = 0$.

A measure $\mu$ on $\mathbb{R}^n$ is called log-concave if $\mu(\lambda A + (1-\lambda)B) \ge \mu(A)^\lambda\mu(B)^{1-\lambda}$ for all compact subsets $A$ and $B$ of $\mathbb{R}^n$ and all $\lambda \in (0,1)$. A function $f : \mathbb{R}^n \to [0,\infty)$ is called log-concave if its support $\{f > 0\}$ is a convex set and the restriction of $\log f$ to it is concave. Borell has proved in [8] that if a probability measure $\mu$ is log-concave and $\mu(H) < 1$ for every hyperplane $H$, then $\mu \in \mathcal{P}_n$ and its density $f_\mu$ is log-concave. Note that if $K$ is a convex body of volume 1 in $\mathbb{R}^n$ then the Brunn-Minkowski inequality implies that $\mathbf{1}_K$ is the density of a log-concave measure.

If $\mu$ is a log-concave measure on $\mathbb{R}^n$ with density $f_\mu$, we define the isotropic constant of $\mu$ by

(2.11)    $L_\mu := \Big(\dfrac{\sup_{x\in\mathbb{R}^n} f_\mu(x)}{\int_{\mathbb{R}^n} f_\mu(x)\,dx}\Big)^{1/n}\,[\det\mathrm{Cov}(\mu)]^{\frac{1}{2n}}$,

where $\mathrm{Cov}(\mu)$ is the covariance matrix of $\mu$ with entries

(2.12)    $\mathrm{Cov}(\mu)_{ij} := \dfrac{\int_{\mathbb{R}^n} x_ix_j f_\mu(x)\,dx}{\int_{\mathbb{R}^n} f_\mu(x)\,dx} - \dfrac{\int_{\mathbb{R}^n} x_i f_\mu(x)\,dx}{\int_{\mathbb{R}^n} f_\mu(x)\,dx}\cdot\dfrac{\int_{\mathbb{R}^n} x_j f_\mu(x)\,dx}{\int_{\mathbb{R}^n} f_\mu(x)\,dx}$.

Note that $L_\mu$ is an affine invariant of $\mu$ and does not depend on the choice of the Euclidean structure. We say that a log-concave probability measure $\mu$ on $\mathbb{R}^n$ is isotropic if it is centered and $\mathrm{Cov}(\mu)$ is the identity matrix.

Recall that if $\mu$ is a log-concave probability measure on $\mathbb{R}^n$ and if $q \ge 1$ then the $L_q$-centroid body $Z_q(\mu)$ of $\mu$ is the symmetric convex body with support function

(2.13)    $h_{Z_q(\mu)}(y) := \Big(\int_{\mathbb{R}^n}|\langle x,y\rangle|^q\,d\mu(x)\Big)^{1/q}$.

Observe that $\mu$ is isotropic if and only if it is centered and $Z_2(\mu) = B_2^n$. From Hölder's inequality it follows that $Z_1(\mu) \subseteq Z_p(\mu) \subseteq Z_q(\mu)$ for all $1 \le p \le q < \infty$. Conversely, using Borell's lemma (see [28, Appendix III]), one can check that

(2.14)    $Z_q(\mu) \subseteq c_1\dfrac{q}{p}\, Z_p(\mu)$

for all $1 \le p < q$. In particular, if $\mu$ is isotropic, then $R(Z_q(\mu)) \le c_2 q$.

For any $\alpha \ge 1$ and any $\theta \in S^{n-1}$ we define the $\psi_\alpha$-norm of $x \mapsto \langle x,\theta\rangle$ as follows:

(2.15)    $\|\langle\cdot,\theta\rangle\|_{\psi_\alpha} := \inf\Big\{t > 0 : \int_{\mathbb{R}^n}\exp\Big(\Big(\dfrac{|\langle x,\theta\rangle|}{t}\Big)^\alpha\Big)\,d\mu(x) \le 2\Big\}$,

provided that the set on the right hand side is non-empty. We say that $\mu$ satisfies a $\psi_\alpha$-estimate with constant $b_\alpha = b_\alpha(\theta)$ in the direction of $\theta$ if we have

$\|\langle\cdot,\theta\rangle\|_{\psi_\alpha} \le b_\alpha\|\langle\cdot,\theta\rangle\|_2$.

We say that $\mu$ is a $\psi_\alpha$-measure with constant $B_\alpha > 0$ if

$\sup_{\theta\in S^{n-1}}\dfrac{\|\langle\cdot,\theta\rangle\|_{\psi_\alpha}}{\|\langle\cdot,\theta\rangle\|_2} \le B_\alpha$.

From Borell's lemma it follows that every log-concave measure is a $\psi_1$-measure with constant $C$, where $C$ is an absolute positive constant.

From [29] and [30] one knows that the "$q$-moments"

(2.16)    $I_q(\mu) := \Big(\int_{\mathbb{R}^n}\|x\|_2^q\,d\mu(x)\Big)^{1/q}, \qquad q \in (-n,+\infty)\setminus\{0\}$,

of the Euclidean norm with respect to an isotropic log-concave probability measure $\mu$ on $\mathbb{R}^n$ are equivalent to $I_2(\mu) = \sqrt{n}$ as long as $|q| \le \sqrt{n}$. Two main consequences of this fact are: (i) Paouris' deviation inequality

(2.17)    $\mu(\{x \in \mathbb{R}^n : \|x\|_2 \ge c_3 t\sqrt{n}\}) \le \exp(-t\sqrt{n})$

for every $t \ge 1$, where $c_3 > 0$ is an absolute constant, and (ii) Paouris' small ball probability estimate: for any $0 < \varepsilon < \varepsilon_0$, one has

(2.18)    $\mu(\{x \in \mathbb{R}^n : \|x\|_2 < \varepsilon\sqrt{n}\}) \le \varepsilon^{c_4\sqrt{n}}$,

where $\varepsilon_0, c_4 > 0$ are absolute constants.

The next theorem summarizes our knowledge on the mean width of $Z_q(\mu)$. The first statement was proved by Paouris in [29], while the second one is E. Milman's Theorem 1.2.

Theorem 2.1. Let $\mu$ be an isotropic log-concave measure on $\mathbb{R}^n$. If $1 \le q \le \sqrt{n}$, then

(2.19)    $w(Z_q(\mu)) \simeq \sqrt{q}$.

Moreover, for all $q \in [\sqrt{n}, n]$ we have

(2.20)    $w(Z_q(\mu)) \le c_5\sqrt{q}\,\log^2(1+q)$.

The next theorem summarizes our knowledge on the volume radius of $Z_q(\mu)$. The first statement follows from the results of [29] and [20], while the left hand-side in the second one was obtained in [24] and the right hand-side in [29].

Theorem 2.2. Let $\mu$ be an isotropic log-concave measure on $\mathbb{R}^n$. If $1 \le q \le \sqrt{n}$ then

(2.21)    $|Z_q(\mu)|^{1/n} \simeq \sqrt{q/n}$,

while if $\sqrt{n} \le q \le n$ then

(2.22)    $c_6 L_\mu^{-1}\sqrt{q/n} \le |Z_q(\mu)|^{1/n} \le c_7\sqrt{q/n}$.

The reader may find a detailed exposition of the theory of isotropic log-concave measures in the book [11].
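The moment equivalence behind (2.16)-(2.17) is easy to see numerically in the Gaussian case (the sketch below is ours; for $\mu = N(0, I_n)$ the claims reduce to standard facts about the norm of a Gaussian vector, so this is only a sanity check of the statements, not a test of the general log-concave case).

```python
# Monte Carlo sanity check (ours) of I_q(mu) ~ I_2(mu) = sqrt(n) for |q| <= sqrt(n) and of
# the deviation inequality (2.17), taking mu = N(0, I_n), an isotropic log-concave measure.
import numpy as np

rng = np.random.default_rng(3)
n, m = 100, 100_000
norms = np.linalg.norm(rng.standard_normal((m, n)), axis=1)    # ||x||_2 for x ~ mu

def I_q(q):
    """I_q(mu) = (E ||x||_2^q)^{1/q}, estimated from the sample."""
    return np.mean(norms ** q) ** (1.0 / q)

print(f"sqrt(n) = {np.sqrt(n):.2f}")
for q in (-5, 2, 5, 10):                                       # all with |q| <= sqrt(n) = 10
    print(f"I_{q}(mu) ~ {I_q(q):.2f}")

for t in (1.2, 1.5):                                           # mass far from the sphere of radius sqrt(n)
    print(f"P(||x||_2 >= {t}*sqrt(n)) ~ {np.mean(norms >= t * np.sqrt(n)):.4f}")
# The moments stay close to sqrt(n), and the tail probabilities drop off rapidly in t,
# in the spirit of (2.17).
```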
3 Estimates for the Quermaßintegrals

We start with the proof of Theorem 1.3. Recall that the equivalence $\mathbb{E}\big[Q_k(K_N)\big] \simeq \sqrt{\log N}$ in the range $n^2 \le N \le \exp(\sqrt{n})$ was proved in [14] (see Theorem 1.1). What is new is the right hand-side estimate in (1.20). However, in [14] it was proved that $\mathbb{E}\big[Q_k(K_N)\big] \le w(Z_{\log N}(\mu))$ for the full range of $N$. So the result follows immediately by applying Theorem 2.1.

To prove Theorem 1.4 we will need Lemma 4.2 from [14], which holds true in the more general setting of isotropic log-concave random vectors.

Lemma 3.1. Let $\mu$ be an isotropic log-concave measure on $\mathbb{R}^n$. For every $n^2 \le N \le \exp(cn)$ and for every $q \ge \log N$ and $r \ge 1$, we have

(3.1)    $\int_{S^{n-1}}\dfrac{h_{K_N}^q(\theta)}{h_{Z_q(\mu)}^q(\theta)}\,d\sigma_n(\theta) \le (c_1 r)^q$

with probability greater than $1 - r^{-q}$, where $c_1 > 0$ is an absolute constant.

Proof of Theorem 1.4. Let $\exp(\sqrt{n}) \le N \le \exp(n)$. Applying Hölder's inequality we get

$w(K_N) = \int_{S^{n-1}} h_{K_N}(\theta)\,d\sigma_n(\theta) \le \Big(\int_{S^{n-1}} h_{Z_q(\mu)}(\theta)^p\,d\sigma_n(\theta)\Big)^{1/p}\Big(\int_{S^{n-1}}\Big(\dfrac{h_{K_N}(\theta)}{h_{Z_q(\mu)}(\theta)}\Big)^q d\sigma_n(\theta)\Big)^{1/q} = w_p\big(Z_q(\mu)\big)\Big(\int_{S^{n-1}}\Big(\dfrac{h_{K_N}(\theta)}{h_{Z_q(\mu)}(\theta)}\Big)^q d\sigma_n(\theta)\Big)^{1/q}$,

where $p$ is the conjugate exponent of $q$. If we now choose $q = \log N \ge \sqrt{n}$ and use Lemma 3.1 we arrive at

$w(K_N) \le c_1 r\, w_p\big(Z_q(\mu)\big)$

with probability greater than $1 - r^{-q}$. Since $q = \log N$ it follows that $p < 2$ and thus $w_p(Z_q(\mu))$ is equivalent to $w(Z_q(\mu))$ (see [6, Chapter 5]). Using this and applying Theorem 1.2 we conclude that

$w(K_N) \le c_2 r\sqrt{\log N}\,(\log\log N)^2$

with probability greater than $1 - r^{-\log N}$. Choosing $r = e$, and recalling that $Q_k(K_N) \le Q_1(K_N) = w(K_N)$ for every $k$ since $Q_k$ is decreasing in $k$, we complete the proof of (1.21). ✷

We can also give estimates on the volume radius of a random projection $P_F(K_N)$ of $K_N$ onto $F \in G_{n,k}$ in terms of $n$, $k$ and $N$. In [14] it was shown that if $n^2 \le N \le \exp(\sqrt{n})$ then a random $K_N$ satisfies with probability greater than $1 - N^{-s}$ the following: for every $1 \le k \le n$,

(3.2)    $c_3\sqrt{\log N} \le \mathrm{v.rad}(P_F(K_N)) \le c_4(s)\sqrt{\log N}$

with probability greater than $1 - e^{-c_5 k}$ with respect to the Haar measure $\nu_{n,k}$ on $G_{n,k}$. We extend this result to the case $\exp(\sqrt{n}) \le N \le \exp(n)$.

For the proof we will use Theorem 1.1 from [13], which was already mentioned in the introduction. We formulate it in the more general setting of isotropic log-concave random vectors (the probability estimate in the statement makes use of [1, Theorem 3.13]: if $\gamma \ge 1$ and $\Gamma : \ell_2^n \to \ell_2^N$ is the random operator $\Gamma(y) = (\langle x_1,y\rangle,\ldots,\langle x_N,y\rangle)$ defined by the vertices $x_1,\ldots,x_N$ of $K_N$, then $\mathbb{P}\big(\|\Gamma : \ell_2^n \to \ell_2^N\| \ge \gamma\sqrt{N}\big) \le \exp(-c_0\gamma\sqrt{N})$ for all $N \ge c\gamma n$; see [13] for the details).

Fact 3.2. Let $\mu$ be an isotropic log-concave measure on $\mathbb{R}^n$ and let $x_1,\ldots,x_N$ be independent random vectors distributed according to $\mu$, with $N \ge c_1 n$ where $c_1 > 1$ is an absolute constant. Then, for all $q \le c_2\log(N/n)$ we have that

(3.3)    $K_N \supseteq c_3 Z_q(\mu)$

with probability greater than $1 - \exp(-c_4\sqrt{N})$.
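In the Gaussian case the inclusion (3.3) can be probed numerically: $Z_q(\mu)$ is then the Euclidean ball of radius $\gamma_q = (\mathbb{E}|g|^q)^{1/q}$, $g \sim N(0,1)$, so (3.3) asks that $h_{K_N}(\theta) \ge c_3\gamma_q$ for every direction $\theta$. The sketch below is ours; checking a sample of directions only tests a necessary condition, since the sampled minimum upper-bounds the true inradius of $K_N$.

```python
# Direction-sampled check (ours, evidence rather than a verification) of the inclusion (3.3)
# for mu = N(0, I_n), where Z_q(mu) = gamma_q * B_2^n with gamma_q = (E|g|^q)^{1/q}.
import numpy as np
from math import lgamma, exp, log, sqrt

rng = np.random.default_rng(4)
n, N = 100, 15_000
q = log(N / n)                                        # q of the order log(N/n), as in Fact 3.2
gamma_q = sqrt(2.0) * exp((lgamma((q + 1) / 2) - lgamma(0.5)) / q)   # exact Gaussian moment

x = rng.standard_normal((N, n))                       # vertices of K_N = conv{±x_i}
theta = rng.standard_normal((1000, n))
theta /= np.linalg.norm(theta, axis=1, keepdims=True)

h = np.abs(theta @ x.T).max(axis=1)                   # h_{K_N} on the sampled directions
print(f"q = {q:.2f}, gamma_q = {gamma_q:.2f}, min sampled h_KN = {h.min():.2f}")
# Here min(h)/gamma_q comes out well above 1, consistent with K_N containing a fixed
# multiple of Z_q(mu); a genuine proof of (3.3) of course needs all directions at once.
```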
Proof of Theorem 1.5. For the upper bound we use (1.21) and Kubota's formula to get

$\Big(\dfrac{1}{\omega_k}\int_{G_{n,k}}|P_F(K_N)|\,d\nu_{n,k}(F)\Big)^{1/k} \le c_6(s)\sqrt{\log N}\,(\log\log N)^2$.

Applying now Markov's inequality we get that, with probability greater than $1 - t^{-k}$ with respect to the Haar measure $\nu_{n,k}$ on $G_{n,k}$, we have

$\Big(\dfrac{|P_F(K_N)|}{\omega_k}\Big)^{1/k} \le c_6(s)\,t\,\sqrt{\log N}\,(\log\log N)^2$.

Choosing $t = e$ proves the result.

For the lower bound, integrating in polar coordinates and using Hölder's inequality we have

(3.4)    $\int_{G_{n,k}}\dfrac{|(P_F(K_N))^\circ|}{\omega_k}\,d\nu_{n,k}(F) = \int_{G_{n,k}}\int_{S_F}\dfrac{1}{h_{P_F(K_N)}^k(\theta)}\,d\sigma_F(\theta)\,d\nu_{n,k}(F) = \int_{G_{n,k}}\int_{S_F}\dfrac{1}{h_{K_N}^k(\theta)}\,d\sigma_F(\theta)\,d\nu_{n,k}(F) \le \Big(\int_{G_{n,k}}\int_{S_F}\dfrac{1}{h_{K_N}^n(\theta)}\,d\sigma_F(\theta)\,d\nu_{n,k}(F)\Big)^{k/n} = \Big(\int_{S^{n-1}}\dfrac{1}{h_{K_N}^n(\theta)}\,d\sigma_n(\theta)\Big)^{k/n} = \Big(\dfrac{|K_N^\circ|}{\omega_n}\Big)^{k/n}$.

Apply now the Blaschke-Santaló inequality and the fact that $K_N \supseteq c_7 Z_{\log N}(\mu)$ (with probability greater than $1 - \exp(-c\sqrt{N})$; notice that $\log N \simeq \log(N/n)$ for the range of $N$ we use) to get

(3.5)    $\Big(\dfrac{|K_N^\circ|}{\omega_n}\Big)^{k/n} \le \Big(\dfrac{\omega_n}{|K_N|}\Big)^{k/n} \le \Big(\dfrac{\omega_n}{|c_7 Z_{\log N}(\mu)|}\Big)^{k/n}$.

Since $\log N$ is greater than $\sqrt{n}$ we can apply the inequality $|Z_{\log N}(\mu)|^{1/n} \ge cL_\mu^{-1}\sqrt{(\log N)/n}$ to arrive at

(3.6)    $\int_{G_{n,k}}\dfrac{|(P_F(K_N))^\circ|}{\omega_k}\,d\nu_{n,k}(F) \le \Big(\dfrac{c_8 L_\mu}{\sqrt{\log N}}\Big)^k$.

Finally, we apply Markov's inequality and the reverse Santaló inequality of Bourgain and V. Milman [10] to complete the proof. ✷
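To see the quantity controlled by (1.22) in a concrete instance, the following sketch (ours; it assumes scipy is available and picks arbitrary parameters with $\exp(\sqrt{n}) \le N$) computes $\mathrm{v.rad}(P_F(K_N))$ for one Haar-random three-dimensional projection of a Gaussian random polytope, since in low dimensions the projected volume is just the volume of the convex hull of the projected vertices.

```python
# Illustration (ours) of the volume radius of a random k-dimensional projection of the
# Gaussian random polytope, to be compared with sqrt(log N) as in (1.22).
import numpy as np
from math import gamma, pi, log, sqrt
from scipy.spatial import ConvexHull

rng = np.random.default_rng(5)
n, N, k = 40, 4000, 3                                # exp(sqrt(40)) ~ 557 <= N
x = rng.standard_normal((N, n))                      # vertices of K_N = conv{±x_i}

q_basis, _ = np.linalg.qr(rng.standard_normal((n, k)))   # Haar-random F in G_{n,k}
proj = np.vstack([x @ q_basis, -x @ q_basis])        # coordinates of P_F(±x_i) in F

omega_k = pi ** (k / 2) / gamma(k / 2 + 1)           # volume of the k-dimensional Euclidean ball
vrad = (ConvexHull(proj).volume / omega_k) ** (1.0 / k)
print(f"v.rad(P_F(K_N)) ~ {vrad:.2f},  sqrt(log N) = {sqrt(log(N)):.2f}")
# The two values agree up to an absolute constant; for the Gaussian measure L_mu is an
# absolute constant, so both sides of (1.22) are of order sqrt(log N) here.
```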
