EDGEWORTH-TYPE EXPANSION IN THE ENTROPIC FREE CLT

G. P. CHISTYAKOV 1),2) AND F. GÖTZE 1),2)

arXiv:1701.04352v1 [math.PR] 16 Jan 2017

Abstract. We prove an expansion for densities in the free CLT and apply this result to an expansion in the entropic free central limit theorem, assuming a moment condition of order four for the free summands.

1. Introduction

Free convolutions were introduced by D. Voiculescu [32], [33] and have been studied intensively in the context of non-commutative probability. The key concept here is the notion of freeness, which can be interpreted as a kind of independence for non-commutative random variables. As in classical probability theory, where the concept of independence gives rise to the classical convolution, the concept of freeness leads to a binary operation on the probability measures, the free convolution. Many classical results in the theory of addition of independent random variables have their counterparts in Free Probability, such as the Law of Large Numbers, the Central Limit Theorem, the Lévy–Khintchine formula and others. We refer to Voiculescu, Dykema and Nica [34], Hiai and Petz [20], and Nica and Speicher [26] for an introduction to these topics.

In this paper we obtain an analogue of Esseen's expansion for the density of normalized sums of free identically distributed random variables under a fourth moment assumption on the free summands. Using this expansion we establish the rate of convergence of the free entropy of normalized sums of free identically distributed random variables.

The paper is organized as follows. In Section 2 we formulate and discuss the main results of the paper. In Sections 3 and 4 we state auxiliary results. In Section 5 we discuss the passage to probability measures with bounded supports. In Section 6 we obtain a local asymptotic expansion for a density in the CLT for free identically distributed random variables.
In Section 7 we study the behaviour of subordination functions in the free CLT for truncated free summands. In Section 8 we discuss the closeness of subordination functions in the free CLT for bounded and unbounded free random variables. In Section 9 we investigate the rate of convergence for densities in the free CLT in $L^1(-\infty,+\infty)$, and Section 10 is devoted to the study of the rate of convergence of the free entropy of normalized sums of free identically distributed random variables. In Section 11 we derive rates of convergence for the free Fisher information of normalized sums of free identically distributed random variables.

Date: January, 2017.
1991 Mathematics Subject Classification. Primary 46L50, 60E07; secondary 60E10.
Key words and phrases. Free random variables, Cauchy's transform, free entropy, free central limit theorem.
1) Faculty of Mathematics, University of Bielefeld, Germany.
2) Research supported by SFB 701.

2. Results

Denote by $\mathcal M$ the family of all Borel probability measures defined on the real line $\mathbb R$. Let $\mu \boxplus \nu$ be the free (additive) convolution of $\mu$ and $\nu$ introduced by Voiculescu [32] for compactly supported measures. Free convolution was extended by Maassen [24] to measures with finite variance and by Bercovici and Voiculescu [8] to the class $\mathcal M$. Thus, $\mu \boxplus \nu = \mathcal L(X+Y)$, where $X$ and $Y$ are free random variables such that $\mu = \mathcal L(X)$ and $\nu = \mathcal L(Y)$.

Henceforth $X, X_1, X_2, \dots$ stands for a sequence of identically distributed random variables with distribution $\mu = \mathcal L(X)$. Define $m_k(\mu) := \int_{\mathbb R} u^k \, \mu(du)$, where $k = 0, 1, \dots$. The classical CLT says that if $X_1, X_2, \dots$ are independent and identically distributed random variables with a probability distribution $\mu$ such that $m_1(\mu) = 0$ and $m_2(\mu) = 1$, then the distribution function $F_n(x)$ of
$$Y_n := \frac{X_1 + X_2 + \cdots + X_n}{\sqrt n} \qquad (2.1)$$
tends to the standard Gaussian law $\Phi(x)$ as $n \to \infty$ uniformly in $x$.
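As a quick numerical illustration of the normalization in (2.1) (this sketch is not part of the paper, and the function name `fourth_moment` is ours): for symmetric Bernoulli summands $X_i = \pm 1$ the fourth moment of $Y_n$ can be computed exactly by enumeration, and it approaches the Gaussian value $m_4(\Phi) = 3$ at rate $1/n$.

```python
from itertools import product

# Exact E[Y_n^4] for Y_n = (X_1 + ... + X_n)/sqrt(n), X_i = +-1 with equal
# probability: enumerate all 2^n sign patterns (feasible for small n).
def fourth_moment(n):
    total = sum((sum(signs) / n ** 0.5) ** 4
                for signs in product((-1, 1), repeat=n))
    return total / 2 ** n

# Known closed form for this case: E[Y_n^4] = 3 - 2/n -> 3 as n -> infinity.
print(fourth_moment(10))  # 2.8 (up to floating-point rounding)
```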
A free analogue of this classical result was proved by Voiculescu [31] for bounded free random variables and later generalized by Maassen [24] to unbounded random variables. Other generalizations can be found in [9], [10], [16], [21]–[23], [27], [37], [38].

For $t > 0$, the centered semicircle distribution of variance $t$ is the probability measure with density $p_{w_t}(x) := \frac{1}{2\pi t}\sqrt{(4t - x^2)_+}$, $x \in \mathbb R$, where $a_+ := \max\{a, 0\}$ for $a \in \mathbb R$. Denote by $\mu_{w_t}$ the probability measure with the distribution function $w_t(x)$. In the sequel we use the notation $w_1(x) = w(x)$.

When the assumption of independence is replaced by the freeness of the non-commutative random variables $X_1, X_2, \dots, X_n$, the limit distribution function of (2.1) is the semicircle law $w(x)$. We denote as well by $\mu_n$ the probability measure with the distribution function $F_n(x)$.

It was proved in [5] that if the distribution $\mu$ of $X$ is not a Dirac measure, then in the free case $F_n(x)$ is Lebesgue absolutely continuous when $n \ge n_1 = n_1(\mu)$ is sufficiently large. Denote by $p_n(x)$ the density of $F_n(x)$.

In the sequel we denote by $c(\mu), c_1(\mu), c_2(\mu), \dots$ positive constants depending on $\mu$ only. By $c(\mu)$ we denote generic constants in different (or even in the same) formulas. The symbols $c_1(\mu), c_2(\mu), \dots$ will denote explicit constants. By $\{\varepsilon_{nk}\}$ denote positive numbers such that $\varepsilon_{nk} \to 0$ as $n \to \infty$.

Wang [38] proved that under the condition $m_2(\mu) < \infty$ the density $p_n(x)$ of $F_n(x)$ is continuous for sufficiently large $n$ and
$$p_n(x) \le c(\mu), \quad x \in \mathbb R. \qquad (2.2)$$

Assume that $m_4(\mu) < \infty$, $m_1(\mu) = 0$, $m_2(\mu) = 1$, and denote
$$a_n := \frac{m_3(\mu)}{\sqrt n}, \quad b_n := \frac{m_4(\mu) - m_3^2(\mu) - 1}{n}, \quad d_n := \frac{m_4(\mu) - m_3^2(\mu)}{n}, \quad n \in \mathbb N. \qquad (2.3)$$
Furthermore, let $e_n := (1 - b_n)/\sqrt{1 - d_n}$, and let $I_n$ and $I_n^*$ denote intervals of the form
$$I_n := \Big\{x \in \mathbb R : |x - a_n| \le \frac{2}{e_n} - \varepsilon_{n1}\Big\}, \quad I_n^* := \Big\{x \in \mathbb R : |x - a_n| \le \frac{2}{e_n} - \sqrt{\varepsilon_{n1}}\Big\}. \qquad (2.4)$$
In the sequel we denote by $\theta$ a real-valued quantity such that $|\theta| \le 1$.
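The semicircle density $p_{w_t}$ defined above can be sanity-checked numerically (a sketch that is not part of the paper; the helper names are ours): by midpoint quadrature its total mass is $1$ and its second moment is the prescribed variance $t$.

```python
import math

# Density of the centered semicircle law of variance t (as in the text):
# p_{w_t}(x) = sqrt((4t - x^2)_+) / (2*pi*t), supported on [-2*sqrt(t), 2*sqrt(t)].
def semicircle_density(x, t=1.0):
    return math.sqrt(max(4 * t - x * x, 0.0)) / (2 * math.pi * t)

# Midpoint-rule check of total mass and variance (t = 0.7 is an arbitrary choice).
def moments(t=1.0, n=20000):
    a = 2 * math.sqrt(t)
    h = 2 * a / n
    xs = [-a + (i + 0.5) * h for i in range(n)]
    mass = sum(semicircle_density(x, t) for x in xs) * h
    var = sum(x * x * semicircle_density(x, t) for x in xs) * h
    return mass, var
```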
We have derived an asymptotic expansion of $p_n(x)$ for bounded free random variables $X_1, X_2, \dots$ in the paper [17]. Improving the methods of this paper and [18], we obtain an asymptotic expansion of $p_n(x)$ for the case $m_4(\mu) < \infty$. Denote by $v_n(x)$ the function
$$v_n(x) = \Big(1 + \frac12\big(d_n - a_n^2 - a_n x - (b_n - a_n^2)x^2\big)\Big)\, p_w(e_n x), \quad x \in \mathbb R. \qquad (2.5)$$

Theorem 2.1. Let $m_4(\mu) < \infty$ and $m_1(\mu) = 0$, $m_2(\mu) = 1$. Then there exist sequences $\{\varepsilon_{n1}\}$ and $\{\varepsilon_{n2}\}$ such that
$$p_n(x + a_n) = v_n(x) + \rho_{n1}(x) + \rho_{n2}(x), \quad x \in I_n^* - a_n, \qquad (2.6)$$
where, for $x \in I_n^* - a_n$,
$$|\rho_{n1}(x)| \le \frac{\varepsilon_{n1}\, c(\mu)}{n\,(4 - (e_n x)^2)^{3/2}}, \qquad (2.7)$$
and, for $x \in (I_n - a_n) \setminus (I_n^* - a_n)$,
$$|\rho_{n1}(x)| \le \sqrt{\frac{\varepsilon_{n1}}{n}}\, \frac{c(\mu)}{(4 - (e_n x)^2)^{1/2}}. \qquad (2.8)$$
In (2.6), $\rho_{n2}(x)$ is a continuous function such that
$$0 \le \rho_{n2}(x) \le c(\mu) \quad\text{and}\quad \int_{I_n - a_n} \rho_{n2}(x)\,dx = o(1/n^2). \qquad (2.9)$$
Moreover,
$$\int_{\mathbb R \setminus I_n} p_n(x)\,dx \le \frac{\varepsilon_{n2}}{n}. \qquad (2.10)$$

Corollary 2.2. Let $m_4(\mu) < \infty$ and $m_1(\mu) = 0$, $m_2(\mu) = 1$; then
$$\int_{\mathbb R} |p_n(x) - p_w(x)|\,dx = \frac{2|m_3(\mu)|}{\pi\sqrt n} + c(\mu)\theta\Big(\Big(\frac{\varepsilon_{n1}}{n}\Big)^{3/4} + \frac1n\Big). \qquad (2.11)$$

In [17] we proved analogous results for bounded free random variables, and in [18] assuming a finite moment of order eight.

Recall that, if the random variable $X$ has density $f$, then the classical entropy of the distribution of $X$ is defined as $h(X) = -\int_{\mathbb R} f(x)\log f(x)\,dx$, provided the positive part of the integral is finite. Thus we have $h(X) \in [-\infty, \infty)$. A much stronger statement than the classical CLT, the entropic central limit theorem, indicates that, if for some $n_0$, or equivalently, for all $n \ge n_0$, the $Y_n$ from (2.1) have absolutely continuous distributions with finite entropies $h(Y_n)$, then there is convergence of the entropies, $h(Y_n) \to h(Y)$ as $n \to \infty$, where $Y$ is a standard Gaussian random variable. This theorem is due to Barron [3].
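The classical entropy $h$ just defined takes the well-known value $\frac12\log(2\pi e) \approx 1.4189$ at the standard Gaussian, which is easy to confirm numerically (a sketch that is not from the paper; `gaussian_entropy` is our name).

```python
import math

# h(Z) = -int f log f for the standard normal density f; the closed form
# is (1/2) * log(2*pi*e).  Midpoint rule on [-cutoff, cutoff]; the tails
# beyond |x| = 12 contribute a negligible amount.
def gaussian_entropy(n=20000, cutoff=12.0):
    h = 2 * cutoff / n
    total = 0.0
    for i in range(n):
        x = -cutoff + (i + 0.5) * h
        f = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
        total -= f * math.log(f) * h
    return total

print(gaussian_entropy())                     # close to 1.4189...
print(0.5 * math.log(2 * math.pi * math.e))   # closed form
```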
Artstein, Ball, Barthe, and Naor [2] have solved an old question raised by Shannon about the monotonicity of entropy under convolution. The relative entropy
$$D(X) = D(X\,\|\,Z) = h(Z) - h(X),$$
where the normal random variable $Z$ has the same mean and the same variance as $X$, is nonnegative and serves as a kind of distance to the class of normal laws. Thus, the entropic central limit theorem may be reformulated as $D(Y_n) \downarrow 0$, as long as $D(Y_{n_0}) < +\infty$ for some $n_0$.

Recently Bobkov, Chistyakov and Götze [12] found the rate of convergence to zero of $D(Y_n)$ and, for random variables $X$ with $\mathbb E|X|^k < \infty$, $k \ge 4$, obtained an Edgeworth-type expansion of $D(Y_n)$ as $n \to \infty$.

Let $\nu$ be a probability measure on $\mathbb R$. We assume below that $m_1(\nu) = 0$ and $m_2(\nu) = 1$. The quantity
$$\chi(\nu) = \int_{\mathbb R}\int_{\mathbb R} \log|x - y|\,\nu(dx)\,\nu(dy) + \frac34 + \frac12\log 2\pi,$$
called free entropy, was introduced by Voiculescu in [35]. Free entropy $\chi$ behaves like the classical entropy $h$. In particular, the free entropy is maximized by the standard semicircular law $w$, with the value $\chi(w) = \frac12\log 2\pi e$, among all probability measures with variance one [20], [36]. Shlyakhtenko [29] has proved that $\chi(\mu_n)$ increases monotonically, i.e., the Shannon hypothesis holds in the free case as well.

Wang [38] has proved a free analogue of Barron's result: the free entropy $\chi(\mu_n)$ converges to the semicircular entropy. As in the classical case, the relative free entropy
$$D(\nu\,\|\,\mu_w) = \chi(\mu_w) - \chi(\nu)$$
is nonnegative and serves as a kind of distance to the class of semicircular laws.

We derive an optimal rate of convergence in the free CLT for free random variables with a finite moment of order four. In previous results [17] we showed an analogous result for bounded free random variables, and in [18] for free random variables with a finite moment of order eight.

Corollary 2.3. Let $m_4(\mu) < \infty$ and $m_1(\mu) = 0$, $m_2(\mu) = 1$.
Then, for every fixed $1 < q \le 1.01$,
$$D(\mu_n\,\|\,\mu_w) = \frac{m_3^2(\mu)}{6n} + \theta\Big(c(\mu, q)\Big(\frac{\varepsilon_{n1}}{n}\Big)^{1 + \frac{1}{2q}} + c(\mu)\frac{\varepsilon_{n2}}{n}\Big), \qquad (2.12)$$
where $c(\mu, q) > 0$ is a constant depending on $\mu$ and $q$ only.

Hence the remainder term in (2.12) is of order $o(n^{-1})$ provided that $m_4(\mu) < \infty$. In Sections 5 and 8 we explicitly describe the sequences $\{\varepsilon_{n1}\}$ and $\{\varepsilon_{n2}\}$. If we assume that $m_6(\mu) < \infty$, then it follows from Remarks 8.13 and 8.14 (see the end of Section 8) that the remainder term in (2.12) is of order $O(n^{-3/2})$.

Given a random variable $X$ with an absolutely continuous density $f$, the Fisher information of $X$ is defined by
$$I(X) = \int_{-\infty}^{+\infty} \frac{f'(x)^2}{f(x)}\,dx,$$
where $f'$ denotes the Radon–Nikodym derivative of $f$. In all other cases, let $I(X) = +\infty$. With the first two moments of $X$ being fixed, $I(X)$ is minimized for the normal random variable $Z$ with the same mean and the same variance as $X$, i.e. $I(X) \ge I(Z)$ (which is a variant of the Cramér–Rao inequality). Barron and Johnson have proved in [4] that $I(Y_n) \to I(Z)$, as $n \to \infty$, if and only if $I(Y_{n_0}) < \infty$ for some $n_0$. In classical probability and statistics the relative Fisher information
$$I(X\,\|\,Z) = I(X) - I(Z)$$
is used as a strong measure of the probability distribution of $X$ being near to the Gaussian distribution. The result of Barron and Johnson is equivalent to the fact that $I(Y_n\,\|\,Z) \to 0$ as $n \to \infty$, if and only if $I(Y_{n_0}\,\|\,Z) < \infty$.

Bobkov, Chistyakov and Götze [13] found the rate of convergence to zero of $I(Y_n\,\|\,Z)$ and, for random variables $X$ with $\mathbb E|X|^k < \infty$, $k \ge 4$, obtained an Edgeworth-type expansion of $I(Y_n\,\|\,Z)$ as $n \to \infty$.

Suppose that the measure $\nu$ has a density $p$ in $L^3(\mathbb R)$. Then, following Voiculescu [36], the free Fisher information is
$$\Phi(\nu) = \frac{4\pi^2}{3}\int_{\mathbb R} p(x)^3\,dx.$$
It is well known that $\Phi(\mu_w) = 1$. The free Fisher information has many properties analogous to those of classical Fisher information. These include the free analogue of the Cramér–Rao inequality.
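Both semicircular extremal values quoted above, $\chi(w) = \frac12\log 2\pi e$ and $\Phi(\mu_w) = 1$, can be confirmed numerically from the definitions (a sketch of ours, not from the paper; tolerances and grid sizes are ad hoc).

```python
import math

def p_w(x):  # standard semicircle density on [-2, 2]
    return math.sqrt(max(4 - x * x, 0.0)) / (2 * math.pi)

# Free Fisher information: Phi(mu_w) = (4*pi^2/3) * int p_w(x)^3 dx = 1.
def free_fisher(n=20000):
    h = 4.0 / n
    s = sum(p_w(-2 + (i + 0.5) * h) ** 3 for i in range(n)) * h
    return 4 * math.pi ** 2 / 3 * s

# Free entropy: chi(w) = int int log|x - y| w(dx) w(dy) + 3/4 + (1/2) log(2*pi).
# Substituting x = 2*cos(theta) turns w(dx) into (2/pi)*sin(theta)^2 dtheta;
# the two midpoint grids are offset so log|x - y| is never evaluated at x = y.
def free_entropy(n=500):
    h = math.pi / n
    th = [(i + 0.25) * h for i in range(n)]
    ph = [(j + 0.75) * h for j in range(n)]
    wt = [(2 / math.pi) * math.sin(t) ** 2 * h for t in th]
    vt = [(2 / math.pi) * math.sin(t) ** 2 * h for t in ph]
    energy = 0.0
    for i, t in enumerate(th):
        xt = 2 * math.cos(t)
        energy += wt[i] * sum(vt[j] * math.log(abs(xt - 2 * math.cos(p)))
                              for j, p in enumerate(ph))
    return energy + 0.75 + 0.5 * math.log(2 * math.pi)
```

The double integral (the logarithmic energy of $w$) equals $-1/4$, which together with the constants gives $\frac12\log 2\pi e$.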
Assume now that $m_1(\nu) = 0$ and $m_2(\nu) = 1$. Consider the free relative Fisher information
$$\Phi(\nu\,\|\,\mu_w) = \Phi(\nu) - \Phi(\mu_w) \ge 0$$
as a strong measure of closeness of $\nu$ to Wigner's semicircle law. Here we obtain an Edgeworth-type expansion for free random variables with a finite moment of order four.

Corollary 2.4. Let $m_4(\mu) < \infty$ and $m_1(\mu) = 0$, $m_2(\mu) = 1$. Then
$$\Phi(\mu_n\,\|\,\mu_w) = \frac{4\pi^2}{3}\int_{\mathbb R} p_n(x)^3\,dx - \Phi(\mu_w) = \frac{m_3^2(\mu)}{n} + c(\mu)\theta\,\frac{\varepsilon_{n1} + \varepsilon_{n2}}{n}. \qquad (2.13)$$

As in formula (2.12), the remainder term here is of order $o(n^{-1})$ if $m_4(\mu) < \infty$ and of order $O(n^{-3/2})$ provided that $m_6(\mu) < \infty$.

In contrast to the classical case (see [12] and [13]), we expect that the asymptotic expansions in (2.12) and (2.13) hold with an error of order $n^{-1}$ only.

3. Auxiliary results

We need results about some classes of analytic functions (see [1], Section 3).

The class $\mathcal N$ (Nevanlinna, R.) is the class of analytic functions $f(z) : \mathbb C^+ \to \{z : \Im z \ge 0\}$. For such functions there is an integral representation
$$f(z) = a + bz + \int_{\mathbb R} \frac{1 + uz}{u - z}\,\tau(du) = a + bz + \int_{\mathbb R} \Big(\frac{1}{u - z} - \frac{u}{1 + u^2}\Big)(1 + u^2)\,\tau(du), \quad z \in \mathbb C^+, \qquad (3.1)$$
where $b \ge 0$, $a \in \mathbb R$, and $\tau$ is a non-negative finite measure. Moreover, $a = \Re f(i)$ and $\tau(\mathbb R) = \Im f(i) - b$. From this formula it follows that $f(z) = (b + o(1))z$ for $z \in \mathbb C^+$ such that $|\Re z|/\Im z$ stays bounded as $|z|$ tends to infinity (in other words, $z \to \infty$ non-tangentially to $\mathbb R$). Hence, if $b \ne 0$, then $f$ has a right inverse $f^{(-1)}$ defined on the region $\Gamma_{\alpha,\beta} := \{z \in \mathbb C^+ : |\Re z| < \alpha \Im z,\ \Im z > \beta\}$ for any $\alpha > 0$ and some positive $\beta = \beta(f, \alpha)$.

A function $f \in \mathcal N$ admits the representation
$$f(z) = \int_{\mathbb R} \frac{\sigma(du)}{u - z}, \quad z \in \mathbb C^+, \qquad (3.2)$$
where $\sigma$ is a finite non-negative measure, if and only if $\sup_{y \ge 1} |y f(iy)| < \infty$. Moreover, $\sigma(\mathbb R) = -\lim_{y \to +\infty} iy f(iy)$.

For $\mu \in \mathcal M$, consider its Cauchy transform $G_\mu(z)$:
$$G_\mu(z) = \int_{\mathbb R} \frac{\mu(du)}{z - u}, \quad z \in \mathbb C^+. \qquad (3.3)$$
The measure $\mu$ can be recovered from $G_\mu(z)$ as the weak limit of the measures
$$\mu_y(dx) = -\frac1\pi \Im G_\mu(x + iy)\,dx, \quad x \in \mathbb R,\ y > 0,$$
as $y \downarrow 0$.
If the function $\Im G_\mu(z)$ is continuous at $x \in \mathbb R$, then the probability distribution function $D_\mu(t) = \mu((-\infty, t))$ is differentiable at $x$ and its derivative is given by
$$D'_\mu(x) = -\Im G_\mu(x)/\pi. \qquad (3.4)$$
This inversion formula allows one to extract the density function of the measure $\mu$ from its Cauchy transform.

Following Maassen [24] and Bercovici and Voiculescu [8], we shall consider in the following the reciprocal Cauchy transform
$$F_\mu(z) = \frac{1}{G_\mu(z)}. \qquad (3.5)$$
The corresponding class of reciprocal Cauchy transforms of all $\mu \in \mathcal M$ will be denoted by $\mathcal F$. This class coincides with the subclass of Nevanlinna functions $f$ for which $f(z)/z \to 1$ as $z \to \infty$ non-tangentially to $\mathbb R$.

The following lemma is well known; see [1], Th. 3.2.1, p. 95.

Lemma 3.1. Let $\mu$ be a probability measure such that
$$m_k = m_k(\mu) := \int_{\mathbb R} u^k\,\mu(du) < \infty, \quad k = 0, 1, \dots, 2n,\ n \ge 1. \qquad (3.6)$$
Then the following relation holds:
$$\lim_{z \to \infty} z^{2n+1}\Big(G_\mu(z) - \frac1z - \frac{m_1}{z^2} - \cdots - \frac{m_{2n-1}}{z^{2n}}\Big) = m_{2n} \qquad (3.7)$$
uniformly in the angle $\delta \le \arg z \le \pi - \delta$, where $0 < \delta < \pi/2$.

Conversely, if for some function $G(z) \in \mathcal N$ the relation (3.7) holds with real numbers $m_k$ for $z = iy$, $y \to \infty$, then $G(z)$ admits the representation (3.3), where $\mu$ is a probability measure with moments (3.6).

As shown before, $F_\mu(z)$ admits the representation (3.1) with $b = 1$. From Lemma 3.1 the following proposition is immediate.

Proposition 3.2. In order that a probability measure $\mu$ satisfies the assumption (3.6), where $m_1(\mu) = 0$, it is necessary and sufficient that
$$F_\mu(z) = z + \int_{\mathbb R} \frac{\tau(du)}{u - z}, \quad z \in \mathbb C^+, \qquad (3.8)$$
where $\tau$ is a nonnegative measure such that $m_{2n-2}(\tau) < \infty$. Moreover,
$$m_k(\mu) = \sum_{l=1}^{[k/2]} \sum_{s_1 + \cdots + s_l = k - 2l,\ s_j \ge 0} m_{s_1}(\tau) \dots m_{s_l}(\tau), \quad k = 2, \dots, 2n. \qquad (3.9)$$

Voiculescu [35] showed for compactly supported probability measures that there exist unique functions $Z_1, Z_2 \in \mathcal F$ such that $G_{\mu_1 \boxplus \mu_2}(z) = G_{\mu_1}(Z_1(z)) = G_{\mu_2}(Z_2(z))$ for all $z \in \mathbb C^+$. Using Speicher's combinatorial approach [30] to freeness, Biane [11] proved this result in the general case.
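The inversion formula (3.4) can be illustrated numerically (a sketch of ours, not from the paper): compute $G_\mu$ by direct quadrature of (3.3) for the semicircle law and recover the density as $-\Im G_\mu(x + i\varepsilon)/\pi$ for small $\varepsilon > 0$.

```python
import math

# Cauchy transform G_mu(z) = int mu(du)/(z - u) of the standard semicircle
# law, computed by midpoint quadrature over the support [-2, 2].
def cauchy_transform(x, y, n=20000):
    h = 4.0 / n
    re = im = 0.0
    for i in range(n):
        u = -2 + (i + 0.5) * h
        w = math.sqrt(max(4 - u * u, 0.0)) / (2 * math.pi)  # semicircle density
        dx, dy = x - u, y
        d2 = dx * dx + dy * dy
        re += w * dx / d2 * h      # Re 1/(z - u) = (x - u)/|z - u|^2
        im -= w * dy / d2 * h      # Im 1/(z - u) = -y/|z - u|^2
    return re, im

# Stieltjes inversion (3.4): density(x) ~ -Im G(x + i*eps)/pi for small eps.
def density_via_inversion(x, eps=5e-3):
    _, im = cauchy_transform(x, eps)
    return -im / math.pi

print(density_via_inversion(0.0))  # close to p_w(0) = 1/pi ~ 0.318
```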
Bercovici and Belinschi [6], Belinschi [7], and Chistyakov and Götze [15] proved, using complex analytic methods, that there exist unique functions $Z_1(z)$ and $Z_2(z)$ in the class $\mathcal F$ such that, for $z \in \mathbb C^+$,
$$z = Z_1(z) + Z_2(z) - F_{\mu_1}(Z_1(z)) \quad\text{and}\quad F_{\mu_1}(Z_1(z)) = F_{\mu_2}(Z_2(z)). \qquad (3.10)$$
The function $F_{\mu_1}(Z_1(z))$ belongs again to the class $\mathcal F$ and there exists $\mu \in \mathcal M$ such that $F_{\mu_1}(Z_1(z)) = F_\mu(z)$, where $F_\mu(z) = 1/G_\mu(z)$ and $G_\mu(z)$ is the Cauchy transform as in (3.3). The measure $\mu$ depends on $\mu_1$ and $\mu_2$ only, and $\mu = \mu_1 \boxplus \mu_2$.

Specializing to $\mu_1 = \mu_2 = \cdots = \mu_n = \mu$, write $\mu_1 \boxplus \cdots \boxplus \mu_n = \mu^{n\boxplus}$. The relation (3.10) admits the following consequence (see for example [15], Section 2, Corollary 2.3).

Proposition 3.3. Let $\mu \in \mathcal M$. There exists a unique function $Z \in \mathcal F$ such that
$$z = nZ(z) - (n - 1)F_\mu(Z(z)), \quad z \in \mathbb C^+, \qquad (3.11)$$
and $F_{\mu^{n\boxplus}}(z) = F_\mu(Z(z))$.

Using the representation (3.1) for $F_\mu(z)$ we obtain
$$F_\mu(z) = z + \Re F_\mu(i) + \int_{\mathbb R} \frac{(1 + uz)\,\tau(du)}{u - z}, \quad z \in \mathbb C^+, \qquad (3.12)$$
where $\tau$ is a nonnegative measure such that $\tau(\mathbb R) = \Im F_\mu(i) - 1$. Denote $z = x + iy$, where $x, y \in \mathbb R$. We see that, for $\Im z > 0$,
$$\Im\big(nz - (n - 1)F_\mu(z)\big) = y\big(1 - (n - 1)I_\mu(x, y)\big), \quad\text{where}\quad I_\mu(x, y) := \int_{\mathbb R} \frac{(1 + u^2)\,\tau(du)}{(u - x)^2 + y^2}.$$
For every fixed real $x$, consider the equation
$$y\big(1 - (n - 1)I_\mu(x, y)\big) = 0, \quad y > 0. \qquad (3.13)$$
Since $y \mapsto I_\mu(x, y)$, $y > 0$, is positive and monotone, and decreases to $0$ as $y \to \infty$, it is clear that equation (3.13) has at most one positive solution. If such a solution exists, denote it by $y_n(x)$. Note that (3.13) does not have a solution $y > 0$ for a given $x \in \mathbb R$ if and only if $I_\mu(x, 0) \le 1/(n - 1)$. Consider the set $S := \{x \in \mathbb R : I_\mu(x, 0) \le 1/(n - 1)\}$. We put $y_n(x) = 0$ for $x \in S$. We proved in [17], Section 3, p. 13, that the curve $\gamma_n$ given by the equation $z = x + iy_n(x)$, $x \in \mathbb R$, is continuous and simple.

Consider the open domain $\tilde D_n := \{z = x + iy,\ x, y \in \mathbb R : y > y_n(x)\}$.

Lemma 3.4.
Let $Z \in \mathcal F$ be the solution of the equation (3.11). The function $Z(z)$ maps $\mathbb C^+$ conformally onto $\tilde D_n$. Moreover, the function $Z(z)$, $z \in \mathbb C^+$, is continuous up to the real axis and it establishes a homeomorphism between the real axis and the curve $\gamma_n$.

This lemma was proved in [17] (see Lemma 3.4). The following lemma was proved as well in [17] (see Lemma 3.5).

Lemma 3.5. Let $\mu$ be a probability measure such that $m_1(\mu) = 0$, $m_2(\mu) = 1$. Assume that $\int_{|u| > \sqrt{(n-1)/8}} u^2\,\mu(du) \le 1/10$ for some positive integer $n \ge 10^3$. Then the following inequality holds:
$$|Z(z)| \ge \sqrt{(n - 1)/8}, \quad z \in \mathbb C^+, \qquad (3.14)$$
where $Z \in \mathcal F$ is the solution of the equation (3.11).

The next lemma was proved in [24] and [38].

Lemma 3.6. There exists a unique probability measure $\nu$ such that $F_\mu(z) = z - G_\nu(z)$, $z \in \mathbb C^+$, and, for every $n \ge 1$, $F_{\mu_n}(z) = z - G_{\nu_{n-1} \boxplus w_t}(z)$, $z \in \mathbb C^+ \cup \mathbb R$, where the measure $\nu_{n-1}$ is given by $d\nu_{n-1}(x) = d\nu(\sqrt n x)$ and $t = t(n) = (n - 1)/n$.

Biane [11] gave the following bound.

Lemma 3.7. Fix $t > 0$ and the probability measure $\nu$. Then $|G_{\nu \boxplus w_t}(z)| \le t^{-1/2}$, $z \in \mathbb C^+ \cup \mathbb R$.

4. Free Meixner measures

Consider the three-parameter family of probability measures $\{\mu_{a,b,d} : a \in \mathbb R,\ b < 1,\ d < 1\}$ with the reciprocal Cauchy transform
$$\frac{1}{G_{\mu_{a,b,d}}(z)} = a + \frac12\Big((1 + b)(z - a) + \sqrt{(1 - b)^2(z - a)^2 - 4(1 - d)}\Big), \quad z \in \mathbb C, \qquad (4.1)$$
which we will call the free centered (i.e. with mean zero) Meixner measures. In this formula we choose the branch of the square root determined by the condition that $\Im z > 0$ implies $\Im(1/G_{\mu_{a,b,d}}(z)) \ge 0$. These measures are counterparts of the classical measures discovered by Meixner [25]. The free Meixner type measures have occurred in many places in the literature; see for example [14], [28].
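Formula (4.1), combined with Stieltjes inversion, lends itself to a numerical sanity check (a sketch of ours, not from the paper; the parameter values are arbitrary illustrations): for $a = 0$, $0 \le b < 1$, $d < 1$ the measure has no atoms, so the recovered density should integrate to $1$.

```python
import cmath
import math

# Reciprocal Cauchy transform (4.1) of the free Meixner measure mu_{a,b,d};
# the square-root branch is chosen so that Im(1/G) >= 0 on the upper half-plane.
def meixner_F(z, a, b, d):
    s = cmath.sqrt((1 - b) ** 2 * (z - a) ** 2 - 4 * (1 - d))
    if s.imag < 0:
        s = -s
    return a + 0.5 * ((1 + b) * (z - a) + s)

# Density by Stieltjes inversion, then total mass by midpoint quadrature.
# Illustrative parameters a = 0, b = 0.3, d = 0.2 (the measure is then purely
# absolutely continuous, so the mass should come out as 1).
def meixner_mass(a=0.0, b=0.3, d=0.2, n=4000, eps=1e-6):
    c = 2 * math.sqrt(1 - d) / (1 - b)   # edge of the support (for a = 0)
    h = 2 * c / n
    mass = 0.0
    for i in range(n):
        x = -c + (i + 0.5) * h
        G = 1 / meixner_F(complex(x, eps), a, b, d)
        mass += (-G.imag / math.pi) * h
    return mass
```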
Saitoh and Yoshida [28] have proved that the absolutely continuous part of the free Meixner measure $\mu_{a,b,d}$, $a \in \mathbb R$, $b < 1$, $d < 1$, is given by
$$\frac{\sqrt{4(1 - d) - (1 - b)^2(x - a)^2}}{2\pi f(x)} \qquad (4.2)$$
when $a - 2\sqrt{1 - d}/(1 - b) \le x \le a + 2\sqrt{1 - d}/(1 - b)$, where
$$f(x) := bx^2 + a(1 - b)x + 1 - d;$$
Saitoh and Yoshida proved as well that for $0 \le b < 1$ the (centered) free Meixner measure $\mu_{a,b,d}$ is $\boxplus$-infinitely divisible.

As we have shown in [17], Section 4, it follows from Saitoh and Yoshida's results that the probability measure $\mu_{a_n,b_n,d_n}$ with the parameters $a_n, b_n, d_n$ from (2.3) is $\boxplus$-infinitely divisible and absolutely continuous with a density of the form (4.2), where $a = a_n$, $b = b_n$, $d = d_n$, for sufficiently large $n \ge n_1(\mu)$.

5. Passage to measures with bounded supports

Let us assume that $\mu \in \mathcal M$ and $m_4(\mu) < \infty$. In addition let $m_1(\mu) = 0$ and $m_2(\mu) = 1$. By Proposition 3.3, there exists $Z(z) \in \mathcal F$ such that (3.11) holds, and $F_{\mu^{n\boxplus}}(z) = F_\mu(Z(z))$. Hence $F_{\mu_n}(z) = F_\mu(\sqrt n S_n(z))/\sqrt n$, $z \in \mathbb C^+$, where $S_n(z) := Z(\sqrt n z)/\sqrt n$. Since $m_1(\mu) = 0$, $m_2(\mu) = 1$ and $m_4(\mu) < \infty$, by Proposition 3.2, we have the representation
$$F_\mu(z) = z + \int_{\mathbb R} \frac{\tau(du)}{u - z}, \quad z \in \mathbb C^+, \qquad (5.1)$$
where $\tau$ is a nonnegative measure such that $\tau(\mathbb R) = 1$ and $m_2(\tau) < \infty$.

Denote, for $n \in \mathbb N$,
$$\eta(n; \tau) := \inf_{0 < \varepsilon \le 10^{-1/2}} g_n(\varepsilon; \tau), \quad\text{where}\quad g_n(\varepsilon; \tau) = \varepsilon + \frac{1}{m_2(\tau)\varepsilon^2}\int_{|u| > \varepsilon\sqrt{n-1}} u^2\,\tau(du).$$
It is easy to see that $0 < \eta(n; \tau) \le 11$ and $\eta(n; \tau) \to 0$ monotonically as $n \to \infty$. Let $\delta_n \in (0, 10^{-1/2}]$ be a point at which the infimum of the function $g_n(\varepsilon; \tau)$ is attained. This
Hence it follows that the support of µ is contained in the interval 1 ∗ ≥ [ 1√n 1, 1√n 1]. By Proposition 3.2, we see as well that m (µ ) = 0 and −3 − 3 − 1 ∗ m (µ) m (µ ) = τ(R [ δ √n 1,δ √n 1]) 2 2 ∗ n n − \ − − − 1 η(n;τ) u2τ(du) c(µ) . (5.4) ≤ δ2(n 1) Z ≤ n 1 n − |u|>δn√n−1 − Moreover m (µ) m (µ ) = m (τ)m (τ) m (τ )m (τ ) 3 3 ∗ 1 0 1 ∗ 0 ∗ | − | | − | m (τ) m (τ ) + m (µ) m (µ ) m (τ ) 1 1 ∗ 2 2 ∗ 1 ∗ ≤ | − | | − || | η(n;τ) η(n;τ) u τ(du)+c(µ) m (τ ) c(µ) ; 1 ∗ ≤ Z | | | | n 1 ≤ √n 1 |u|>δn√n−1 − − (5.5) In the same way m (µ) m (µ ) m (τ) m (τ ) + m (µ) m (µ ) m (τ ) 4 4 ∗ 2 2 ∗ 2 2 ∗ 2 ∗ | − | ≤ | − | | − || | + m (τ) m (τ ) m (τ)+m (τ ) c(µ)η(n;τ). (5.6) 1 1 ∗ 1 1 ∗ | − || | ≤ Here m (τ ), k = 0,1,2, denote moments of the measure τ . k ∗ ∗ Let X ,X ,X ,... be free identically distributed random variables such that (X ) = ∗ 1∗ 2∗ L ∗ µ . Denote µ := ((X + + X )/√n). As before, by Proposition 3.3, there exists ∗ ∗n L 1∗ ··· n∗ W(z) ∈ F such that (3.11) holds with Z = W and µ = µ∗, and F(µ∗)n⊞(z) = Fµ∗(W(z)). Hence Fµ∗(z) = Fµ∗(√nTn(z))/√n, z C+, where Tn(z) := W(√nz)/√n. In the sequel n ∈ we shall need more detailed information about the behaviour of the functions T (z) and n S (z). By Lemma 3.4, these functions are continuous up to the real axis for n n (µ). n 1 Their values for z = x R we denote by T (x) and S (x), respectively. In ≥order to n n ∈ formulate the following results for T (z) we introduce some notations. Denote by M (z) n n the reciprocal Cauchy transform ofthe free Meixner measure µ with theparameters an,bn,dn a ,b and d from (2.3), i.e., n n n 1 M (z) := a + 1+b (z a )+ 1 b 2(z a )2 4 1 d , z C+. n n n n n n n 2(cid:16) − q − − − − (cid:17) ∈ (cid:0) (cid:1) (cid:0) (cid:1) (cid:0) (cid:1)
