On spectral measures of random Jacobi matrices*

Trinh Khanh Duy

Abstract. The paper studies the limiting behaviour of spectral measures of random Jacobi matrices of Gaussian, Wishart and MANOVA beta ensembles. We show that the spectral measures converge weakly to a limit distribution, which is the semicircle distribution, a Marchenko-Pastur distribution or a Kesten-McKay distribution, respectively. The Gaussian fluctuation around the limit is then investigated.

Keywords: spectral measure; random Jacobi matrix; Gaussian beta ensemble; Wishart beta ensemble; MANOVA beta ensemble.
AMS MSC 2010: Primary 60B20; Secondary 60F05, 47B36, 47B80.

1 Introduction

Three classical random matrix ensembles on the real line, Gaussian beta ensembles, Wishart beta ensembles and MANOVA beta ensembles, are now realized as the eigenvalues of certain random Jacobi matrices. For instance, the random Jacobi matrices

$$
H_{n,\beta} = \begin{pmatrix} a_1 & b_1 & & \\ b_1 & a_2 & b_2 & \\ & \ddots & \ddots & \ddots \\ & & b_{n-1} & a_n \end{pmatrix}
\sim \frac{1}{\sqrt{n\beta}} \begin{pmatrix} \mathcal N(0,2) & \chi_{(n-1)\beta} & & \\ \chi_{(n-1)\beta} & \mathcal N(0,2) & \chi_{(n-2)\beta} & \\ & \ddots & \ddots & \ddots \\ & & \chi_\beta & \mathcal N(0,2) \end{pmatrix},
$$

whose components are independent (up to symmetry) and distributed as indicated, are matrix models of (scaled) Gaussian beta ensembles for any $\beta > 0$. Here $\mathcal N(\mu,\sigma^2)$ denotes the normal (or Gaussian) distribution with mean $\mu$ and variance $\sigma^2$, and $\chi_k$ denotes the chi distribution with $k$ degrees of freedom. Namely, the eigenvalues of $H_{n,\beta}$ are distributed as Gaussian beta ensembles,

$$(\lambda_1,\dots,\lambda_n) \propto |\Delta(\lambda)|^\beta \exp\Bigl(-\frac{n\beta}{4}\sum_{i=1}^n \lambda_i^2\Bigr),$$

where $\Delta(\lambda) = \prod_{i<j}(\lambda_j - \lambda_i)$ denotes the Vandermonde determinant.

Three special values of beta, $\beta = 1, 2$ and $4$, correspond to the Gaussian orthogonal, unitary and symplectic ensembles (GOE, GUE and GSE), in which the above formula describes the joint distribution of eigenvalues of random matrices with real, complex and quaternion entries, respectively. As a generalization, Gaussian beta ensembles were originally defined as ensembles of points on the real line whose joint density function is given as above. They can also be viewed as the equilibrium measure of a one-dimensional Coulomb log-gas at inverse temperature $\beta$. Using the idea of tridiagonalizing a GOE matrix, Dumitriu and Edelman [6] introduced the model $H_{n,\beta}$ for Gaussian beta ensembles. In the same paper, they also gave a matrix model for Wishart beta ensembles. A model for MANOVA beta ensembles was discovered later by Killip and Nenciu [11].

One of the main objects in random matrix theory is to study the limiting behaviour of the empirical distribution of the eigenvalues,

$$L_n = \frac1n \sum_{i=1}^n \delta_{\lambda_i},$$

where $\delta$ denotes the Dirac measure. For Gaussian beta ensembles, as $n$ tends to infinity, the empirical distributions converge weakly, almost surely, to the semicircle distribution, which is well known as Wigner's semicircle law. The convergence means that for any bounded continuous function $f$ on $\mathbb R$,

$$\langle L_n, f\rangle = \frac1n \sum_{i=1}^n f(\lambda_i) \to \langle \mathrm{sc}, f\rangle \quad \text{almost surely as } n \to \infty,$$

with $\mathrm{sc}$ denoting the semicircle distribution, a probability measure supported on $[-2,2]$ with density $\mathrm{sc}(x) = (2\pi)^{-1}\sqrt{4 - x^2}$. The fluctuation around the limit was also investigated. To be more precise, it was shown that for a 'nice' test function $f$,

$$n(\langle L_n, f\rangle - \langle \mu_\infty, f\rangle) = \sum_{i=1}^n (f(\lambda_i) - \langle \mu_\infty, f\rangle) \xrightarrow{d} \mathcal N(0, a_f^2),$$

where $\mu_\infty$ denotes the limiting (here, semicircle) distribution and $a_f^2$ can be written as a quadratic functional of $f$. There are several ways to prove those results. See Johansson [10] for an approach based on the joint density function, and Dumitriu and Edelman [7] and Dumitriu and Paquette [8] for a combinatorial approach based on the random Jacobi matrix models. Since GOE and GUE have their original matrix models, more approaches can be found in the books [1, 14].

*This work is supported by JSPS Grant-in-Aid for Young Scientists (B) no. 16K17616.
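As a concrete companion to the tridiagonal model above, here is a minimal numpy sketch (the function name and defaults are ours, not the paper's) that samples $H_{n,\beta}$ and its eigenvalues; for large $n$, a histogram of the eigenvalues fills out the semicircle law on $[-2,2]$.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def sample_gaussian_beta_jacobi(n, beta, rng=None):
    """Diagonal a and off-diagonal b of the scaled Dumitriu-Edelman model:
    a_i ~ N(0,2)/sqrt(n*beta), b_i ~ chi_{(n-i)beta}/sqrt(n*beta), independent."""
    rng = rng or np.random.default_rng()
    a = rng.normal(0.0, np.sqrt(2.0), size=n) / np.sqrt(n * beta)
    dof = beta * np.arange(n - 1, 0, -1)                 # (n-1)beta, ..., beta
    b = np.sqrt(rng.chisquare(dof)) / np.sqrt(n * beta)  # chi_k = sqrt(chi^2_k)
    return a, b

a, b = sample_gaussian_beta_jacobi(n=2000, beta=2.0, rng=np.random.default_rng(0))
lam = eigh_tridiagonal(a, b, eigvals_only=True)          # one G beta E sample
```

The $O(n)$ storage and the tridiagonal eigensolve are what make beta ensembles cheap to simulate for any $\beta > 0$, compared with the dense matrix models available only at $\beta = 1, 2, 4$.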
The spectral measures of random Jacobi matrices associated with those beta ensembles have been investigated recently. The weak convergence to a limit distribution, a central limit theorem for moments and large deviations have been established [5, 9, 12]. The spectral measure of a finite Jacobi matrix, a symmetric tridiagonal matrix of the form

$$J = \begin{pmatrix} a_1 & b_1 & & \\ b_1 & a_2 & b_2 & \\ & \ddots & \ddots & \ddots \\ & & b_{n-1} & a_n \end{pmatrix}, \quad (a_i \in \mathbb R,\ b_i > 0),$$

is defined to be the unique probability measure $\mu$ on $\mathbb R$ satisfying

$$\langle \mu, x^k\rangle = \langle J^k e_1, e_1\rangle = J^k(1,1), \quad k = 0,1,\dots,$$

where $e_1 = (1,0,\dots,0)^t \in \mathbb R^n$. Let $\{\lambda_1,\dots,\lambda_n\}$ be the eigenvalues of $J$ and $\{v_1,\dots,v_n\}$ be the corresponding eigenvectors, chosen to form an orthonormal basis of $\mathbb R^n$. Then the spectral measure $\mu$ can be written as

$$\mu = \sum_{i=1}^n q_i^2 \delta_{\lambda_i}, \quad q_i = |v_i(1)|.$$

Note that the eigenvalues $\{\lambda_i\}$ are distinct and the weights $\{q_i^2\}$ are all positive. Moreover, a finite Jacobi matrix of size $n$ is in one-to-one correspondence with a probability measure supported on $n$ real points.

Let $\mu_n$ be the spectral measure of $H_{n,\beta}$,

$$\mu_n = \sum_{i=1}^n q_i^2 \delta_{\lambda_i}.$$

In this case, and for all three beta ensembles in the paper, the weights $\{q_i^2\}$ are independent of the eigenvalues and have Dirichlet distribution with parameter $\beta/2$. The distribution of $\{q_i^2\}$ is the same as that of the vector

$$\Bigl(\frac{\chi^2_{\beta,1}}{\sum_{i=1}^n \chi^2_{\beta,i}}, \dots, \frac{\chi^2_{\beta,n}}{\sum_{i=1}^n \chi^2_{\beta,i}}\Bigr),$$

where $\{\chi^2_{\beta,i}\}_{i=1}^n$ is an i.i.d. sequence of random variables having chi-squared distributions with $\beta$ degrees of freedom. In connection with empirical distributions, Nagel [13] showed that as $n$ tends to infinity, the Kolmogorov distance between $L_n$ and $\mu_n$ converges almost surely to $0$. Thus the spectral measures and the empirical distributions converge to the same limit. Moreover, the limiting behaviour of spectral measures of Jacobi matrices can be read off from the convergence of their entries. For Gaussian beta ensembles, it is clear that

$$H_{n,\beta} \to \begin{pmatrix} 0 & 1 & & \\ 1 & 0 & 1 & \\ & \ddots & \ddots & \ddots \end{pmatrix} =: J_{\mathrm{free}}, \quad \text{almost surely as } n \to \infty.$$

Here the convergence means the entrywise convergence. The non-random Jacobi matrix $J_{\mathrm{free}}$ is called the free Jacobi matrix; its spectral measure is nothing but the semicircle distribution [15, Section 1.10]. Consequently the spectral measures $\mu_n$, and hence the empirical distributions $L_n$, converge to the semicircle distribution. Thus studying spectral measures may provide a new look at classical results.
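Numerically, the weights $q_i^2$ are just the squared first components of the orthonormal eigenvectors, so the spectral measure of a finite Jacobi matrix is a few lines of numpy. The sketch below (our code, not the paper's) also checks the defining moment identity $\langle \mu, x^k\rangle = J^k(1,1)$ on a small example.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def spectral_measure(a, b):
    """Eigenvalues lam and weights q2 of the spectral measure of (J, e_1),
    where q_i^2 = v_i(1)^2 for the orthonormal eigenvectors v_i."""
    lam, V = eigh_tridiagonal(a, b)
    q2 = V[0, :] ** 2                 # first components squared; they sum to 1
    return lam, q2

a, b = np.array([0.3, -0.1, 0.2, 0.0]), np.array([1.0, 0.8, 1.2])
lam, q2 = spectral_measure(a, b)
J = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
for k in range(6):                    # <mu, x^k> = (J^k)(1,1)
    assert np.isclose(np.sum(q2 * lam**k), np.linalg.matrix_power(J, k)[0, 0])
```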
A natural problem now is to study the fluctuation of the spectral measures around the limit, that is, a central limit theorem for $\langle \mu_n, f\rangle$ with a 'nice' function $f$. In a work not so related to random matrix theory, Dette and Nagel [5] derived a central limit theorem for the moments $\{\langle \mu_n, x^k\rangle\}$ of spectral measures. The result covers Gaussian, Wishart and MANOVA beta ensembles with fixed parameters. The aim of this paper is to reconsider the central limit theorem. We propose a universal approach which can be easily applied to all three beta ensembles. The idea is that for a polynomial test function $p$, when $n$ is large enough, $\langle \mu_n, p\rangle = p(H_{n,\beta})(1,1)$ is a polynomial of finitely many variables. The central limit theorem then follows from the limiting behaviour of the entries of the Jacobi matrices. Furthermore, by a relation between spectral measures and empirical measures, we obtain an explicit formula for the limit variance and can extend the central limit theorem to a large class of test functions. Our main result for Gaussian beta ensembles can be stated as follows.

Theorem 1.1. (i) The spectral measures $\mu_n$ converge weakly, almost surely, to the semicircle distribution as $n \to \infty$, that is, for any bounded continuous function $f$,

$$\langle \mu_n, f\rangle = \sum_{i=1}^n q_i^2 f(\lambda_i) \to \langle \mathrm{sc}, f\rangle \quad \text{a.s. as } n \to \infty.$$

(ii) For a function $f$ with continuous derivative of polynomial growth,

$$\frac{\sqrt{n\beta}}{\sqrt 2}\bigl(\langle \mu_n, f\rangle - \mathbb E[\langle \mu_n, f\rangle]\bigr) \xrightarrow{d} \mathcal N(0, \sigma^2(f)) \quad \text{as } n \to \infty,$$

where $\sigma^2(f) = \langle \mathrm{sc}, f^2\rangle - \langle \mathrm{sc}, f\rangle^2 = \mathrm{Var}_{\mathrm{sc}}[f]$. Here '$\xrightarrow{d}$' denotes convergence in distribution, or weak convergence, of random variables.

The results for Wishart and MANOVA beta ensembles are analogous, with the semicircle distribution replaced by Marchenko-Pastur distributions and Kesten-McKay distributions, respectively.

The paper is organized as follows. In the next section, we consider general random Jacobi matrices and derive the weak convergence of spectral measures as well as the central limit theorem for polynomial test functions. Applications to Gaussian, Wishart and MANOVA beta ensembles are then investigated in turn. The last section is devoted to extending the central limit theorem to a larger class of test functions.

2 Limiting behaviour of spectral measures of random Jacobi matrices

Let us begin by introducing some spectral properties of (non-random) Jacobi matrices. A semi-infinite Jacobi matrix is a symmetric tridiagonal matrix of the form

$$J = \begin{pmatrix} a_1 & b_1 & & \\ b_1 & a_2 & b_2 & \\ & \ddots & \ddots & \ddots \end{pmatrix}, \quad \text{where } a_i \in \mathbb R,\ b_i > 0.$$

To a Jacobi matrix $J$, there exists a probability measure $\mu$ such that

$$\langle \mu, x^k\rangle = \int_{\mathbb R} x^k \, d\mu = \langle J^k e_1, e_1\rangle, \quad k = 0,1,\dots,$$

where $e_1 = (1,0,\dots)^t \in \ell^2$. The measure $\mu$ is unique, or equivalently $\mu$ is determined by its moments, if and only if $J$ is an essentially self-adjoint operator on $\ell^2$. If the parameters $\{a_i\}$ and $\{b_i\}$ are bounded, or more generally, if $\sum b_i^{-1} = \infty$, then $J$ is essentially self-adjoint [15, Corollary 3.8.9]. In case of uniqueness, we call $\mu$ the spectral measure of $J$, or of $(J, e_1)$. See [4, Chapter 2] or [15, Section 3.8] for more details on Jacobi matrices.

When $J$ is a finite Jacobi matrix of order $n$, the spectral measure $\mu$ is supported on the eigenvalues $\{\lambda_i\}$ of $J$ with weights $\{q_i^2 = v_i(1)^2\}$,

$$\mu = \sum_{i=1}^n q_i^2 \delta_{\lambda_i}.$$

Here $\{v_1,\dots,v_n\}$ are the corresponding eigenvectors, chosen to form an orthonormal basis of $\mathbb R^n$. The eigenvalues $\{\lambda_i\}_{i=1}^n$ are distinct and the weights $\{q_i^2\}_{i=1}^n$ are all positive [4, Proposition 2.40].

We are now in a position to study the convergence of spectral measures of Jacobi matrices. Recall that spectral measures are defined by their moments. Does the convergence of moments imply the weak convergence of probability measures? The following lemma, a classical result which can be found in textbooks on probability theory, gives the answer.

Lemma 2.1. Assume that $\{\mu_n\}_{n=1}^\infty$ and $\mu$ are probability measures on $\mathbb R$ such that for all $k = 0,1,\dots$,

$$\langle \mu_n, x^k\rangle \to \langle \mu, x^k\rangle \quad \text{as } n \to \infty.$$

Assume further that the measure $\mu$ is determined by its moments. Then $\mu_n$ converges weakly to $\mu$ as $n \to \infty$. Moreover, if $f$ is a continuous function of polynomial growth, that is, there is a polynomial $P$ such that $|f(x)| \le P(x)$ for all $x \in \mathbb R$, then we also have

$$\langle \mu_n, f\rangle \to \langle \mu, f\rangle \quad \text{as } n \to \infty.$$
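Before the proof, a quick numerical companion to Lemma 2.1 and Theorem 1.1(i): the even semicircle moments are the Catalan numbers, $\langle \mathrm{sc}, x^{2j}\rangle = \frac{1}{j+1}\binom{2j}{j}$, and the odd moments vanish, so one can watch the sampled moments $\langle \mu_n, x^k\rangle$ approach them. This hedged sketch reuses `sample_gaussian_beta_jacobi` from the sketch in Section 1.

```python
import numpy as np
from math import comb
from scipy.linalg import eigh_tridiagonal

rng = np.random.default_rng(1)
a, b = sample_gaussian_beta_jacobi(n=5000, beta=2.0, rng=rng)
lam, V = eigh_tridiagonal(a, b)
q2 = V[0, :] ** 2
for k in range(1, 9):
    emp = np.sum(q2 * lam**k)                                    # <mu_n, x^k>
    sc_k = comb(k, k // 2) // (k // 2 + 1) if k % 2 == 0 else 0  # Catalan(k/2)
    print(f"k={k}: sample moment {emp:+.3f} vs semicircle moment {sc_k}")
```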
Proof. The first part of this lemma is a well-known moment problem result [2, Theorem 30.2]. The second part follows by a truncation argument; for the sake of completeness, we give a proof here. Assume that the sequence $\{\mu_n\}$ converges weakly to $\mu$ and that $\langle \mu_n, P\rangle$ converges to $\langle \mu, P\rangle$ for all polynomials $P$. Let $f$ be a continuous function which is dominated by a polynomial $P$, $|f(x)| \le P(x)$ for all $x \in \mathbb R$. For $M > 0$, write $f_M$ for the truncated function

$$f_M(x) = \begin{cases} -M, & \text{if } f(x) \le -M, \\ f(x), & \text{if } |f(x)| \le M, \\ M, & \text{if } f(x) \ge M. \end{cases}$$

Then it is clear that $|f - f_M| \le P - P_M$, where $P_M$ is the truncated function of $P$. Thus by the triangle inequality,

$$\begin{aligned} |\langle \mu_n, f\rangle - \langle \mu, f\rangle| &\le |\langle \mu_n, f - f_M\rangle| + |\langle \mu_n, f_M\rangle - \langle \mu, f_M\rangle| + |\langle \mu, f_M - f\rangle| \\ &\le \langle \mu_n, P - P_M\rangle + |\langle \mu_n, f_M\rangle - \langle \mu, f_M\rangle| + \langle \mu, P - P_M\rangle \\ &= \langle \mu_n, P\rangle - \langle \mu_n, P_M\rangle + |\langle \mu_n, f_M\rangle - \langle \mu, f_M\rangle| + \langle \mu, P - P_M\rangle. \end{aligned}$$

As $n \to \infty$, the first term converges to $\langle \mu, P\rangle$ by the assumption, the second term converges to $\langle \mu, P_M\rangle$ and the third term converges to $0$, because $P_M$ and $f_M$ are bounded continuous functions. Therefore

$$\limsup_{n\to\infty} |\langle \mu_n, f\rangle - \langle \mu, f\rangle| \le 2\langle \mu, P - P_M\rangle.$$

Finally, by letting $M \to \infty$, $\langle \mu, P - P_M\rangle \to 0$ by the monotone convergence theorem. The lemma is proved.

For random probability measures, we deal with two types of convergence: almost sure convergence and convergence in probability. A result for almost sure convergence is a direct consequence of the above deterministic result; however, this is not the case for convergence in probability. When the limit measure has compact support, the following result on convergence in probability of random measures can be derived by a method of polynomial approximation; see subsection 2.1.2 in [1], for instance.

Lemma 2.2. Let $\{\mu_n\}_{n=1}^\infty$ be a sequence of random probability measures and $\mu$ be a non-random probability measure which is determined by its moments. Assume that every moment of $\mu_n$ converges almost surely to that of $\mu$, that is, for any $k = 0,1,\dots$,

$$\langle \mu_n, x^k\rangle \to \langle \mu, x^k\rangle \quad \text{a.s. as } n \to \infty.$$

Then as $n \to \infty$, the sequence of measures $\{\mu_n\}$ converges weakly, almost surely, to $\mu$, namely, for any bounded continuous function $f$,

$$\langle \mu_n, f\rangle \to \langle \mu, f\rangle \quad \text{a.s. as } n \to \infty.$$

The convergence still holds for a continuous function $f$ of polynomial growth. An analogous result holds for convergence in probability.

Proof. The case of almost sure convergence is a direct consequence of Lemma 2.1. Indeed, for $k \ge 1$, let

$$A_k = \{\omega : \langle \mu_n(\omega), x^k\rangle \to \langle \mu, x^k\rangle \text{ as } n \to \infty\}.$$

Then $\mathbb P(A_k) = 1$ by the assumption. Therefore $\mathbb P(A := \bigcap_{k=1}^\infty A_k) = 1$. Applying Lemma 2.1 to the sequence of probability measures $\{\mu_n(\omega)\}$, for $\omega \in A$, yields the desired result.

Next we consider the case of convergence in probability. The idea here is to use the following criterion for convergence in probability [2, Theorem 20.5]: a sequence $\{X_n\}$ converges to $X$ in probability if and only if for every subsequence $\{X_{n(m)}\}$, there is a further subsequence $\{X_{n(m_k)}\}$ that converges almost surely to $X$. Let $f$ be a continuous function which is dominated by some polynomial. Given a subsequence $\{n(m)\}$, the aim now is to find a further subsequence $\{n(m_k)\}$ such that $\{\langle \mu_{n(m_k)}, f\rangle\}$ converges almost surely to $\langle \mu, f\rangle$. Let $\{n(0,m)\} = \{n(m)\}$. For $k \ge 1$, using the necessity part of the criterion, we can find a subsequence $\{n(k,m)\}$ of $\{n(k-1,m)\}$ such that

$$\langle \mu_{n(k,m)}, x^k\rangle \to \langle \mu, x^k\rangle \quad \text{a.s. as } m \to \infty.$$

By selecting the diagonal, we get a subsequence $n(m_k) = n(k,k)$ for which all moments of $\mu_{n(m_k)}$ converge almost surely to the corresponding moments of $\mu$. Consequently, the sequence $\{\mu_{n(m_k)}\}$ converges weakly, almost surely, to $\mu$ by the first part of this lemma, which implies that $\langle \mu_{n(m_k)}, f\rangle \to \langle \mu, f\rangle$ almost surely. The proof is complete.
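A small illustration of Lemma 2.2 in the setting of Theorem 1.1, with a bounded continuous but non-polynomial test function (our sketch, again reusing `sample_gaussian_beta_jacobi`); the limit value $\langle \mathrm{sc}, f\rangle$ is evaluated by quadrature.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal
from scipy.integrate import quad

f = lambda x: 1.0 / (1.0 + (x - 1.0) ** 2)       # bounded continuous test function
sc_f = quad(lambda x: f(x) * np.sqrt(4 - x**2) / (2 * np.pi), -2, 2)[0]

rng = np.random.default_rng(2)
for n in (100, 1000, 10000):
    a, b = sample_gaussian_beta_jacobi(n, beta=1.0, rng=rng)
    lam, V = eigh_tridiagonal(a, b)
    print(n, abs(np.sum(V[0, :] ** 2 * f(lam)) - sc_f))   # error shrinks with n
```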
Let us now explain the main idea of this paper. Consider the sequence of random Jacobi matrices

$$J_n = \begin{pmatrix} a_1^{(n)} & b_1^{(n)} & & \\ b_1^{(n)} & a_2^{(n)} & b_2^{(n)} & \\ & \ddots & \ddots & \ddots \\ & & b_{n-1}^{(n)} & a_n^{(n)} \end{pmatrix},$$

and let $\mu_n$ be the spectral measure of $(J_n, e_1)$. Assume that each entry of $J_n$ converges almost surely to a non-random limit as $n \to \infty$, that is, for any fixed $i$, as $n \to \infty$,

$$a_i^{(n)} \to \bar a_i; \quad b_i^{(n)} \to \bar b_i \quad \text{a.s.} \tag{1}$$

Here we require that $\bar a_i$ and $\bar b_i$ are non-random and $\bar b_i > 0$. Assume further that the spectral measure of $(J_\infty, e_1)$, denoted by $\mu_\infty$, is unique, where $J_\infty$ is the infinite Jacobi matrix built from $\{\bar a_i\}$ and $\{\bar b_i\}$,

$$J_\infty = \begin{pmatrix} \bar a_1 & \bar b_1 & & \\ \bar b_1 & \bar a_2 & \bar b_2 & \\ & \ddots & \ddots & \ddots \end{pmatrix}.$$

Then the measure $\mu_\infty$ is determined by its moments, and hence we get the following result.

Theorem 2.3. The spectral measures $\mu_n$ converge weakly, almost surely, to the limit measure $\mu_\infty$ as $n \to \infty$.

Remark 2.4. If in assumption (1) convergence in probability is assumed instead of almost sure convergence, then the spectral measures $\mu_n$ converge weakly, in probability, to $\mu_\infty$ as $n \to \infty$.

Proof. Let $p$ be a polynomial of degree $m$. When $n$ is large enough, $\langle \mu_n, p\rangle = p(J_n)(1,1)$ is a polynomial of $\{a_i^{(n)}, b_i^{(n)}\}_{i=1,\dots,\lceil m/2\rceil}$. Therefore, as $n \to \infty$,

$$\langle \mu_n, p\rangle \to \langle \mu_\infty, p\rangle \quad \text{a.s.},$$

which implies the weak convergence of $\mu_n$ by Lemma 2.2.

Next, we consider the second order of the convergence of spectral measures, that is, a type of central limit theorem. It turns out that the central limit theorem for polynomial test functions is a direct consequence of a joint central limit theorem for the entries of the Jacobi matrices. Indeed, assume that there are random variables $\{\eta_i\}$ and $\{\zeta_i\}$ defined on the same probability space such that for some fixed $r > 0$, for any $i$, as $n \to \infty$,

$$\tilde a_i^{(n)} = n^r(a_i^{(n)} - \bar a_i) \xrightarrow{d} \eta_i, \quad \tilde b_i^{(n)} = n^r(b_i^{(n)} - \bar b_i) \xrightarrow{d} \zeta_i. \tag{2}$$

Moreover, we assume that the joint weak convergence holds. This means that any finite linear combination of $\tilde a_i^{(n)}$ and $\tilde b_i^{(n)}$ converges weakly to the corresponding linear combination of $\eta_i$ and $\zeta_i$ as $n \to \infty$, namely, for any real numbers $c_i$ and $d_i$,

$$\sum_{\text{finite}} (c_i \tilde a_i^{(n)} + d_i \tilde b_i^{(n)}) \xrightarrow{d} \sum_{\text{finite}} (c_i \eta_i + d_i \zeta_i).$$

From now on, both conditions (1) and (2) will be written in the compact form

$$J_n \approx \begin{pmatrix} \bar a_1 & \bar b_1 & & \\ \bar b_1 & \bar a_2 & \bar b_2 & \\ & \ddots & \ddots & \ddots \end{pmatrix} + \frac{1}{n^r} \begin{pmatrix} \eta_1 & \zeta_1 & & \\ \zeta_1 & \eta_2 & \zeta_2 & \\ & \ddots & \ddots & \ddots \end{pmatrix}.$$

Let $f$ be a polynomial of $2k$ variables $(a_1,\dots,a_k,b_1,\dots,b_k)$. For simplicity, we write $f(a_i,b_i)$ instead of $f(a_1,\dots,a_k,b_1,\dots,b_k)$.

Lemma 2.5. (i) As $n \to \infty$,

$$n^r\Bigl(f(a_i^{(n)}, b_i^{(n)}) - f(\bar a_i, \bar b_i)\Bigr) - \sum_{i=1}^k \Bigl(\frac{\partial f}{\partial a_i}(\bar a_i, \bar b_i)\,\tilde a_i^{(n)} + \frac{\partial f}{\partial b_i}(\bar a_i, \bar b_i)\,\tilde b_i^{(n)}\Bigr) \xrightarrow{P} 0.$$

Here '$\xrightarrow{P}$' denotes convergence in probability.

(ii) As $n \to \infty$,

$$n^r\Bigl(f(a_i^{(n)}, b_i^{(n)}) - f(\bar a_i, \bar b_i)\Bigr) \xrightarrow{d} \sum_{i=1}^k \Bigl(\frac{\partial f}{\partial a_i}(\bar a_i, \bar b_i)\,\eta_i + \frac{\partial f}{\partial b_i}(\bar a_i, \bar b_i)\,\zeta_i\Bigr).$$
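Before the proof, note the finiteness fact used in the proof of Theorem 2.3, namely that $p(J_n)(1,1)$ involves only the top-left corner of $J_n$: a walk of length at most $m$ starting and ending at site $1$ never leaves the first $\lfloor m/2\rfloor + 1$ sites. This is easy to verify numerically (our sketch, with an arbitrary degree-6 polynomial).

```python
import numpy as np

def p_J_11(a, b, coeffs):
    """(p(J))(1,1) for p(x) = sum_k coeffs[k] x^k, J the Jacobi matrix of (a, b)."""
    J = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    P = sum(c * np.linalg.matrix_power(J, k) for k, c in enumerate(coeffs))
    return P[0, 0]

rng = np.random.default_rng(3)
n, m = 50, 6
a, b = rng.normal(size=n), rng.uniform(0.5, 1.5, size=n - 1)
coeffs = [0.0, 1.0, 0.0, -2.0, 0.0, 0.0, 1.0]   # p(x) = x - 2x^3 + x^6
s = m // 2 + 1                                  # walks of length <= m stay here
assert np.isclose(p_J_11(a, b, coeffs), p_J_11(a[:s], b[:s - 1], coeffs))
```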
Proof. Write

$$a_i^{(n)} = \bar a_i + \frac{1}{n^r}\tilde a_i^{(n)}; \quad b_i^{(n)} = \bar b_i + \frac{1}{n^r}\tilde b_i^{(n)},$$

and use the Taylor expansion of $f(a_i^{(n)}, b_i^{(n)})$ at $(\bar a_i, \bar b_i)$, noting that the Taylor expansion of a polynomial consists of finitely many terms,

$$f(a_i^{(n)}, b_i^{(n)}) = f(\bar a_i, \bar b_i) + \frac{1}{n^r}\sum_{i=1}^k \Bigl(\frac{\partial f}{\partial a_i}(\bar a_i,\bar b_i)\,\tilde a_i^{(n)} + \frac{\partial f}{\partial b_i}(\bar a_i,\bar b_i)\,\tilde b_i^{(n)}\Bigr) + \sum_* .$$

Each term in the finite sum $\sum_*$ has the form

$$c(\alpha,\beta) \prod_{i=1}^k (a_i^{(n)} - \bar a_i)^{\alpha_i}(b_i^{(n)} - \bar b_i)^{\beta_i},$$

where the $\{\alpha_i\}$ and $\{\beta_i\}$ are non-negative integers with $\sum_{i=1}^k(\alpha_i + \beta_i) \ge 2$. Therefore, when such a term is multiplied by $n^r$, it converges to $0$ in distribution, and hence in probability, by Slutsky's theorem. Using Slutsky's theorem again, we see that (ii) is a consequence of (i). The proof is complete.

Let $p$ be a polynomial of degree $m > 0$. Then there is a polynomial $f$ of $2\lceil m/2\rceil$ variables such that for $n > m/2$,

$$\langle \mu_n, p\rangle = p(J_n)(1,1) = f(a_1^{(n)},\dots,a_{\lceil m/2\rceil}^{(n)}, b_1^{(n)},\dots,b_{\lceil m/2\rceil}^{(n)}).$$

Therefore, by Lemma 2.5, we obtain the central limit theorem for polynomial test functions.

Theorem 2.6. For any polynomial $p$, $n^r(\langle \mu_n, p\rangle - \langle \mu_\infty, p\rangle)$ converges weakly to a limit as $n \to \infty$.

Since we do not assume that all moments of $\{a_i^{(n)}\}$ and $\{b_i^{(n)}\}$ are finite, even the expectation $\mathbb E[\langle \mu_n, p\rangle]$ may not exist. Thus we need further assumptions to ensure the convergence of the mean and the variance in the central limit theorem above. Our assumptions are based on the following basic result in probability theory (the corollary following Theorem 25.12 in [2]).

Lemma 2.7. Assume that the sequence $\{X_n\}$ converges weakly to a random variable $X$. If for some $\delta > 0$,

$$\sup_n \mathbb E[|X_n|^{2+\delta}] < \infty,$$

then $\mathbb E[X_n] \to \mathbb E[X]$ and $\mathrm{Var}[X_n] \to \mathrm{Var}[X]$ as $n \to \infty$. In general, if $X_n$ converges to $X$ in probability and for $q \ge 1$ and $\delta > 0$,

$$\sup_n \mathbb E[|X_n|^{q+\delta}] < \infty,$$

then $X_n$ converges to $X$ in $L_q$.

We make the following assumptions:

(i) all moments of $\{a_i^{(n)}\}$ and $\{b_i^{(n)}\}$ are finite, and in addition, the convergences in (1) also hold in $L_q$ for all $q < \infty$, which is equivalent to the conditions

$$\sup_n \mathbb E[|a_i^{(n)}|^k] < \infty, \quad \sup_n \mathbb E[|b_i^{(n)}|^k] < \infty, \quad \text{for all } k = 1,2,\dots; \tag{3}$$

(ii) $\mathbb E[\eta_i] = 0$, $\mathbb E[\zeta_i] = 0$, and for some $\delta > 0$,

$$\sup_n \mathbb E[|\tilde a_i^{(n)}|^{2+\delta}] < \infty, \quad \sup_n \mathbb E[|\tilde b_i^{(n)}|^{2+\delta}] < \infty. \tag{4}$$

Lemma 2.8. As $n \to \infty$,

$$n^{2r}\,\mathbb E\Bigl[\bigl(f(a_i^{(n)}, b_i^{(n)}) - f(\bar a_i, \bar b_i)\bigr)^2\Bigr] \to \mathrm{Var}\Bigl[\sum_{i=1}^k \Bigl(\frac{\partial f}{\partial a_i}(\bar a_i,\bar b_i)\,\eta_i + \frac{\partial f}{\partial b_i}(\bar a_i,\bar b_i)\,\zeta_i\Bigr)\Bigr], \tag{5}$$

$$n^r\Bigl(\mathbb E[f(a_i^{(n)}, b_i^{(n)})] - f(\bar a_i, \bar b_i)\Bigr) \to 0, \tag{6}$$

$$n^{2r}\,\mathrm{Var}\bigl[f(a_i^{(n)}, b_i^{(n)})\bigr] \to \mathrm{Var}\Bigl[\sum_{i=1}^k \Bigl(\frac{\partial f}{\partial a_i}(\bar a_i,\bar b_i)\,\eta_i + \frac{\partial f}{\partial b_i}(\bar a_i,\bar b_i)\,\zeta_i\Bigr)\Bigr]. \tag{7}$$

Proof. This is a direct consequence of Lemma 2.7.

We now state a slightly different form of the central limit theorem for polynomial test functions.

Theorem 2.9. Under assumptions (1)–(4), for any polynomial $p$, as $n \to \infty$,

$$\langle \mu_n, p\rangle \to \langle \mu_\infty, p\rangle \quad \text{a.s. and in } L_q \text{ for all } q \ge 1;$$

$$n^r(\langle \mu_n, p\rangle - \mathbb E[\langle \mu_n, p\rangle]) \xrightarrow{d} \xi_\infty(p).$$

Here $\xi_\infty(p)$ denotes the limit distribution. Moreover, $\mathbb E[\xi_\infty(p)] = 0$ and

$$n^{2r}\,\mathrm{Var}[\langle \mu_n, p\rangle] \to \mathrm{Var}[\xi_\infty(p)] \quad \text{as } n \to \infty.$$

3 Gaussian beta ensembles or β-Hermite ensembles

Let $H_{n,\beta}$ be a random Jacobi matrix whose elements are independent (up to the symmetry constraint) and distributed as

$$H_{n,\beta} = \frac{1}{\sqrt{n\beta}} \begin{pmatrix} \mathcal N(0,2) & \chi_{(n-1)\beta} & & \\ \chi_{(n-1)\beta} & \mathcal N(0,2) & \chi_{(n-2)\beta} & \\ & \ddots & \ddots & \ddots \\ & & \chi_\beta & \mathcal N(0,2) \end{pmatrix}.$$

Then the eigenvalues $\{\lambda_i\}$ of $H_{n,\beta}$ form a Gaussian beta ensemble [6], that is,

$$(\lambda_1, \lambda_2, \dots, \lambda_n) \propto |\Delta(\lambda)|^\beta \exp\Bigl(-\frac{n\beta}{4}\sum_{j=1}^n \lambda_j^2\Bigr).$$

The weights $\{w_i = q_i^2\}$ are independent of $\{\lambda_i\}$ and have Dirichlet distribution with parameters $(\beta/2,\dots,\beta/2)$, that is,

$$(w_1,\dots,w_{n-1}) \propto \prod_{i=1}^n w_i^{\frac\beta2 - 1}\,\mathbb 1_{\{w_1+\dots+w_{n-1}<1,\ w_i>0\}}, \quad w_n = 1 - (w_1 + \dots + w_{n-1}).$$

Recall that the distribution of $\{w_i\}$ is the same as that of the vector

$$\Bigl(\frac{\chi^2_{\beta,1}}{\sum_{i=1}^n \chi^2_{\beta,i}}, \dots, \frac{\chi^2_{\beta,n}}{\sum_{i=1}^n \chi^2_{\beta,i}}\Bigr),$$

with $\{\chi^2_{\beta,i}\}_{i=1}^n$ being an i.i.d. sequence of random variables having chi-squared distributions with $\beta$ degrees of freedom.
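The chi-squared representation of the Dirichlet weights translates directly into code; a brief sketch (ours), with numpy's built-in Dirichlet sampler as a cross-check.

```python
import numpy as np

def dirichlet_weights(n, beta, rng):
    """(w_1,...,w_n) ~ Dirichlet(beta/2,...,beta/2), realized as
    normalized i.i.d. chi-squared(beta) variables, as in the text."""
    x = rng.chisquare(beta, size=n)
    return x / x.sum()

rng = np.random.default_rng(4)
w = dirichlet_weights(10, beta=2.0, rng=rng)
w_alt = rng.dirichlet(np.full(10, 2.0 / 2.0))   # same law: alpha_i = beta/2
```

The two samplers agree in law because a chi-squared variable with $\beta$ degrees of freedom is a Gamma($\beta/2$) variable up to scale, and the scale cancels in the normalization.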
Lemma 3.1. (i) As $k \to \infty$,

$$\frac{\chi_k}{\sqrt k} \to 1 \quad \text{in probability and in } L_q \text{ for all } q < \infty.$$

A sequence $\{\chi_{k_n}/\sqrt{k_n}\}$ converges almost surely to $1$ if $\sum_{n=1}^\infty e^{-\varepsilon k_n} < \infty$ for any $\varepsilon > 0$.

(ii) As $k \to \infty$,

$$\sqrt k\Bigl(\frac{\chi_k}{\sqrt k} - 1\Bigr) = \chi_k - \sqrt k \xrightarrow{d} \mathcal N\Bigl(0, \frac12\Bigr).$$

Proof. The convergence in probability and the central limit theorem are standard results. Let us prove the almost sure convergence part. For almost sure convergence, there is usually some relation between the random variables in the sequence. If there is no such relation, the following criterion, a direct consequence of the Borel-Cantelli lemma, becomes useful: the sequence $\{X_n\}$ converges almost surely to $x$ as $n \to \infty$ if for any $\varepsilon > 0$,

$$\sum_{n=1}^\infty \mathbb P(|X_n - x| > \varepsilon) < \infty.$$

For the proof here, and later for beta distributions, we use the following bounds (see Lemma 4.1 in [13]),

$$\mathbb P\Bigl(\Bigl|\frac{\chi_k^2}{k} - 1\Bigr| > \varepsilon\Bigr) \le 2e^{-\frac{k\varepsilon^2}{8}},$$

$$\mathbb P\bigl(|\mathrm{Beta}(a,b) - \mathbb E[\mathrm{Beta}(a,b)]| > \varepsilon\bigr) \le 4e^{-c(a,b)\,\varepsilon^2},$$

where $\mathrm{Beta}(a,b)$ denotes the beta distribution with parameters $a$ and $b$, and $c(a,b) > 0$ is the explicit constant of [13, Lemma 4.1]. The proof then follows immediately from the criterion above.

Since the Jacobi parameters $\{a_i, b_i\}$ of $H_{n,\beta}$ are independent, the joint convergence in distribution holds, and thus we can write

$$H_{n,\beta} \approx \begin{pmatrix} 0 & 1 & & \\ 1 & 0 & 1 & \\ & \ddots & \ddots & \ddots \end{pmatrix} + \frac{1}{\sqrt{\beta n}} \begin{pmatrix} \mathcal N(0,2) & \mathcal N(0,\frac12) & & \\ \mathcal N(0,\frac12) & \mathcal N(0,2) & \mathcal N(0,\frac12) & \\ & \ddots & \ddots & \ddots \end{pmatrix}.$$

The non-random Jacobi matrix in the above expression is the free Jacobi matrix, denoted by $J_{\mathrm{free}}$, whose spectral measure is the semicircle distribution

$$\mathrm{sc}(x) = \frac{1}{2\pi}\sqrt{4 - x^2}, \quad (-2 \le x \le 2).$$

There are several ways to derive this fact. For example, in the theory of orthogonal polynomials on the real line, it follows from the relation with Chebyshev polynomials of the second kind [15, Section 1.10]. One can also calculate the moments and show that they match the moments of the semicircle distribution. Here, however, we introduce a method to find the spectral measure by calculating its Stieltjes transform, which is also applicable to the next two cases. Let $J$ be a Jacobi matrix with bounded coefficients,

$$J = \begin{pmatrix} a_1 & b_1 & & \\ b_1 & a_2 & b_2 & \\ & \ddots & \ddots & \ddots \end{pmatrix}.$$
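The excerpt breaks off as the Stieltjes transform computation begins. As a numerical companion to the method just introduced: for a finite Jacobi matrix, $m(z) = \langle (J - z)^{-1}e_1, e_1\rangle$ unrolls into a finite continued fraction, and for truncations of $J_{\mathrm{free}}$ it converges to the semicircle transform $(-z + \sqrt{z^2 - 4})/2$ (principal branch, $z$ in the upper half-plane). The sketch below is ours, under these conventions.

```python
import numpy as np

def stieltjes_11(a, b, z):
    """m(z) = <(J - zI)^{-1} e_1, e_1> via the bottom-up continued fraction
    m_k = 1 / (a_k - z - b_k^2 * m_{k+1}), returning m = m_1."""
    m = 0.0 + 0.0j
    b_pad = np.append(b, 0.0)                # deepest level has no off-diagonal
    for k in range(len(a) - 1, -1, -1):
        m = 1.0 / (a[k] - z - b_pad[k] ** 2 * m)
    return m

z = 1.0 + 1.0j                               # any z off the real axis
n = 500
m_free = stieltjes_11(np.zeros(n), np.ones(n - 1), z)
m_sc = (-z + np.sqrt(z * z - 4)) / 2         # Stieltjes transform of sc
assert abs(m_free - m_sc) < 1e-6
```

Letting the depth tend to infinity gives the fixed-point equation $m = 1/(-z - m)$, that is, $m^2 + zm + 1 = 0$, whose root with $\operatorname{Im} m > 0$ is exactly the semicircle transform; this is the computation the text is heading towards.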
