On the real spectrum of a product of Gaussian random matrices

Nick Simm*

Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK

Abstract

Let $X_m = G_1 \cdots G_m$ denote the product of $m$ independent random matrices of size $N \times N$, with each matrix in the product consisting of independent standard Gaussian variables. Denoting by $N_R^{(m)}$ the total number of real eigenvalues of $X_m$, we show that for $m$ fixed

$$\mathbb{E}\big(N_R^{(m)}\big) = \sqrt{\frac{2Nm}{\pi}} + O(\log(N)), \qquad N \to \infty.$$

This generalizes a well-known result of Edelman et al. [8] to all $m > 1$. Furthermore, we show that the normalized global density of real eigenvalues converges weakly in expectation to the density of the random variable $|U|^m B$, where $U$ is uniform on $[-1,1]$ and $B$ is Bernoulli on $\{-1,1\}$. This proves a conjecture of Forrester and Ipsen [10]. The results are obtained by the asymptotic analysis of a certain Meijer $G$-function.

1 Introduction

The subject of non-Hermitian random matrix theory can be said to originate in Ginibre's 1965 paper [13], which introduced three basic random matrix ensembles of interest. These ensembles consist of $N \times N$ matrices of Gaussian variables over the real, complex or quaternion number systems respectively. In the present paper we are exclusively interested in the real Ginibre ensemble, which is defined by the probability density function

$$P(G) = \frac{1}{(2\pi)^{N^2/2}}\exp\left(-\frac{1}{2}\mathrm{Tr}(GG^{T})\right) \tag{1.1}$$

acting on the set of all $N \times N$ real matrices¹ with respect to the Lebesgue measure. An intriguing feature of the real case is that eigenvalues can be purely real with non-zero probability. Such real eigenvalues were studied by Edelman, Kostlan and Shub [8], who calculated their expected number and asymptotic density in the limit $N \to \infty$. Since then it was realized that the real eigenvalues form an interesting point process in their own right, with connections to random dynamical systems [12], annihilating and coalescing Brownian motions [19, 20], and intermediate spectral statistics in quantum chaos [2].
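The fact that real eigenvalues occur with non-zero probability is easy to see numerically. For instance, a $2\times 2$ real Ginibre matrix has an all-real spectrum precisely when the discriminant of its characteristic polynomial is non-negative, which happens with probability $2^{-1/2}\approx 0.707$ (a classical result of Edelman, not needed in the sequel). The sketch below is our illustration, not part of the paper; the sample size and seed are arbitrary choices.

```python
# Monte Carlo illustration that a real Ginibre matrix has real eigenvalues with
# non-zero probability: for N = 2, the probability that *all* eigenvalues are
# real is known to be 2^{-1/2} (Edelman). Sample size and seed are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
n_samples = 200_000
a, b, c, d = rng.standard_normal((4, n_samples))
# a 2x2 matrix [[a, b], [c, d]] has real eigenvalues iff the discriminant
# (a - d)^2 + 4*b*c of its characteristic polynomial is non-negative
frac = np.mean((a - d)**2 + 4*b*c >= 0)
print(frac)
```

With $2\times 10^5$ samples the Monte Carlo error is of order $10^{-3}$, so the empirical fraction should sit very close to $2^{-1/2}$.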
The purpose of the present paper is to study the real eigenvalues of the random matrix product $G_1 \cdots G_m$, where for each $i = 1,\dots,m$ the $G_i$ are independent and distributed according to (1.1).

*[email protected]
¹For ease of presentation we will assume throughout that $N$ is even; the case $N$ odd can be dealt with similarly and does not affect the final results.

Products of random matrices are currently a rapidly evolving field, with many new developments occurring in the last 3 or 4 years; see [1] for an overview. Our first result answers one of the most basic questions regarding a random product matrix: how many of the eigenvalues are real?

Theorem 1.1. Let $m > 0$ be a fixed positive integer. Let $N_R^{(m)}$ denote the number of real eigenvalues of the random product matrix $G_1 \cdots G_m$. Then we have the estimate

$$\mathbb{E}\big(N_R^{(m)}\big) = \sqrt{\frac{2Nm}{\pi}} + O(\log(N)), \qquad N \to \infty. \tag{1.2}$$

An obvious yet intriguing feature of (1.2) is that for $m > 1$ more eigenvalues congregate onto the real axis. This phenomenon was also discussed recently in the context of the probability that all eigenvalues are real, which was observed to increase monotonically to 1 as $m \to \infty$ with fixed $N$ [9, 10, 17]. These works were motivated by a recent application of the real eigenvalues of products to quantum entanglement [15].

The expected value (1.2) has been studied numerically in [15, 14, 7], but the value of the pre-factor $\sqrt{2m/\pi}$ is not discussed there and, despite its simplicity, appears to be absent from the literature. Such a concise result is quite surprising given that the spectrum of a product typically depends in a complicated way on the spectra of each individual factor. When $m$ is fixed, the asymptotic order of magnitude in (1.2) is $\sqrt{N}$, as is the case for $m = 1$ [8]. Still for $m = 1$, Tao and Vu [18] have shown that this persists for independent-entry matrices with more general distributions. The same $\sqrt{N}$ order of magnitude also holds for the expected number of real roots of certain random polynomials [3].
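Theorem 1.1 lends itself to a quick numerical experiment. The following sketch is our illustration, not part of the paper; the matrix size, seed and trial count are arbitrary choices, and the exact-zero test for imaginary parts relies on standard LAPACK behaviour for real matrices.

```python
# Monte Carlo illustration of Theorem 1.1: the mean number of real eigenvalues
# of X_m = G_1 ... G_m should be close to sqrt(2*N*m/pi) up to an O(log N) term.
import numpy as np

def mean_real_eigenvalues(N, m, trials, rng):
    """Average number of real eigenvalues of X_m over many samples."""
    counts = []
    for _ in range(trials):
        X = np.eye(N)
        for _ in range(m):
            X = X @ rng.standard_normal((N, N))
        ev = np.linalg.eigvals(X)
        # LAPACK (dgeev) returns an exactly zero imaginary part for the real
        # eigenvalues of a real matrix, so this count is reliable
        counts.append(int(np.sum(ev.imag == 0.0)))
    return float(np.mean(counts))

rng = np.random.default_rng(0)
N, m = 50, 2
empirical = mean_real_eigenvalues(N, m, trials=40, rng=rng)
predicted = float(np.sqrt(2 * N * m / np.pi))
print(empirical, predicted)
```

For $N = 50$, $m = 2$ the prediction is $\sqrt{200/\pi} \approx 7.98$; the empirical mean agrees up to the $O(\log N)$ correction and sampling noise.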
The universality of this so-called '$\sqrt{N}$-law' and physical applications are discussed in [2].

Next we give a more precise result regarding the global spectral distribution of the real eigenvalues of the product matrix $G_1 \cdots G_m$. Denoting these eigenvalues by $\lambda_1,\dots,\lambda_{N_R^{(m)}}$, the averaged empirical spectral density is defined by the quantity

$$\rho^{R}_{N,m}(x) = \mathbb{E}\left(\sum_{j=1}^{N_R^{(m)}}\delta_{N^{-m/2}\lambda_j}(x)\right) \tag{1.3}$$

and its appropriately normalized version is $h_{N,m}(x) = \frac{\rho^{R}_{N,m}(x)}{\mathbb{E}(N_R^{(m)})}$. The scaling $N^{-m/2}$ ensures that the eigenvalues remain inside the interval $[-1,1]$ with high probability. Note that $h_{N,m}(x)$ is the probability density function of the random variable $N^{-m/2}\lambda_{U_N}$, where $U_N$ is independent and uniformly distributed on the set $\{1,\dots,N_R^{(m)}\}$.

Theorem 1.2. Choose $\lambda_{N,m}$ uniformly at random from the real eigenvalues of the product matrix $G_1 G_2 \cdots G_m$. Then $N^{-m/2}\lambda_{N,m} \xrightarrow{d} |U|^m B$ as $N \to \infty$, where $U$ is uniform on $[-1,1]$ and $B$ is Bernoulli on $\{-1,1\}$. In other words, for every bounded and continuous function $f$, we have

$$\int_{\mathbb{R}} f(x)\,h_{N,m}(x)\,dx \to \frac{1}{2m}\int_{-1}^{1} f(x)\,|x|^{\frac{1}{m}-1}\,dx, \qquad N \to \infty. \tag{1.4}$$

This result proves a conjecture of Forrester and Ipsen in [10]. It states that the distribution of a typical real eigenvalue of the product is asymptotically, for large $N$, the same as that of a typical real eigenvalue of $G_1^m$, appropriately symmetrized. For the distribution of all real and complex eigenvalues this type of relation between product and power matrices is known due to techniques coming from free probability [5, 6], and was generalized to non-Gaussian matrices in [16]. However, it is apparently unclear how such techniques can apply to the real spectrum itself or to obtain (1.4).

Our proofs of theorems 1.1 and 1.2 follow a unified approach based on the method of moments. Namely, the strategy is to compute the moments

$$M_{k,N}(m) = \int_{\mathbb{R}} x^k\,\rho^{R}_{N,m}(x)\,dx, \qquad k = 0,1,2,3,\dots \tag{1.5}$$

to leading order in $N$ as $N \to \infty$. In the particular case $k = 0$ we obtain $M_{0,N}(m) = \mathbb{E}(N_R^{(m)})$ and consequently (1.2).
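The moments of the limiting density on the right-hand side of (1.4) can be computed in closed form. As a quick independent check (our sketch, with arbitrary sample values of $m$ and $k$), the $2k$th moment of $|x|^{1/m-1}/(2m)$ on $[-1,1]$ equals $1/(2mk+1)$:

```python
# Symbolic check that the 2k-th moment of the limiting density |x|^{1/m-1}/(2m)
# on [-1,1] equals 1/(2mk+1); the values of m and k below are arbitrary samples.
import sympy as sp

x = sp.symbols('x', positive=True)
results = []
for m in (1, 2, 3):
    for k in (0, 1, 2):
        # the integrand is even, so integrate twice over [0, 1]
        moment = 2 * sp.integrate(x**(2*k) * x**(sp.Rational(1, m) - 1) / (2*m),
                                  (x, 0, 1))
        results.append(sp.simplify(moment - sp.Rational(1, 2*m*k + 1)))
print(results)
```

Every residual is identically zero, in agreement with the odd moments vanishing by symmetry.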
Then it will be proved that

$$\lim_{N\to\infty}\frac{M_{k,N}(m)}{M_{0,N}(m)} = \begin{cases} (mk+1)^{-1}, & k \text{ even}, \\ 0, & k \text{ odd}, \end{cases} \tag{1.6}$$

which are the moments of the density on the right-hand side of (1.4). Since this density is uniquely determined by its moments, we establish the weak convergence (1.4). This is a popular strategy in random matrix theory and goes back to Wigner, who used it to derive the famous semi-circle law for Hermitian random matrices with independent entries. A subtlety here is that one normally obtains the moments by computing the expected traces of powers. Here the problem is that such traces necessarily involve contributions from the complex eigenvalues and thus are not obviously related to (1.5). For the same reason, techniques coming from free probability do not seem to be of help. Instead our strategy is based on a recent computation of Forrester and Ipsen [10], showing that (1.3) can be written exactly in terms of Meijer $G$-functions. Then, after obtaining a suitable integral representation for (1.5), an asymptotic analysis of this exact formula leads to our main results, theorems 1.1 and 1.2.

Acknowledgments

I would like to thank Jesper Ipsen and Peter Forrester for helpful discussions and encouragement. I acknowledge the support of a Leverhulme Trust Early Career Fellowship ECF-2014-309.

2 Moments and Meijer G-functions

In this section we review the results for the finite-$N$ density (1.3) as in [10] and use them to give an exact formula for the moments (1.5). As this necessarily involves working with Meijer $G$-functions, we begin with a definition.

Definition 2.1. For a set of real parameters $a_1,\dots,a_p,b_1,\dots,b_q$ and $z \in \mathbb{C}$, the Meijer $G$-function is defined by the following contour integral:

$$G^{m,n}_{p,q}\left(\begin{matrix} a_1,\dots,a_p \\ b_1,\dots,b_q \end{matrix}\,\Big|\,z\right) = \frac{1}{2\pi i}\int_{\gamma}\frac{\prod_{j=1}^{m}\Gamma(b_j-s)\,\prod_{j=1}^{n}\Gamma(1-a_j+s)}{\prod_{j=m+1}^{q}\Gamma(1-b_j+s)\,\prod_{j=n+1}^{p}\Gamma(a_j-s)}\,z^s\,ds. \tag{2.1}$$

The contour $\gamma$ goes from $-i\infty$ to $i\infty$, with all poles of $\Gamma(b_j-s)$ lying to the right of $\gamma$ for all $j = 1,\dots,m$ and all poles of $\Gamma(1-a_k+s)$ lying to the left of $\gamma$ for all $k = 1,\dots,n$.
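Definition 2.1 is implemented in standard libraries; for instance mpmath's `meijerg` takes the parameter groups $[[a_1,\dots,a_n],[a_{n+1},\dots,a_p]]$ and $[[b_1,\dots,b_m],[b_{m+1},\dots,b_q]]$. A minimal sanity check (our illustration, not from the paper) uses the classical identities $G^{1,0}_{0,1}(z\,|\,-;0) = e^{-z}$ and $G^{2,0}_{0,2}(z\,|\,-;0,0) = 2K_0(2\sqrt{z})$, the latter being the $m = 2$ case of the scalar-product density discussed below in (2.2):

```python
# Evaluating Meijer G-functions from Definition 2.1 with mpmath, checking the
# classical identities G^{1,0}_{0,1}(-; 0 | z) = exp(-z) and
# G^{2,0}_{0,2}(-; 0,0 | z) = 2*K_0(2*sqrt(z)).
from mpmath import mp, meijerg, exp, besselk, sqrt

mp.dps = 30
z = mp.mpf('1.5')                                    # arbitrary test point
err1 = abs(meijerg([[], []], [[0], []], z) - exp(-z))
err2 = abs(meijerg([[], []], [[0, 0], []], z) - 2*besselk(0, 2*sqrt(z)))
print(err1, err2)
```

Both errors are at the level of the working precision, confirming the parameter conventions used above.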
Functions of Meijer-$G$ type appear very frequently in the study of random matrix products. This might be expected from results for the scalar case: the density function for a product of $m$ independent standard Gaussian variables is proportional to

$$w_m(x) := \int_{\mathbb{R}^m}\prod_{j=1}^{m}dx_j\,e^{-x_j^2/2}\,\delta(x-x_1x_2\dots x_m) = G^{m,0}_{0,m}\left(\begin{matrix} - \\ 0,\dots,0 \end{matrix}\,\Big|\,\frac{x^2}{2^m}\right). \tag{2.2}$$

A simple derivation of (2.2) involves verifying that both sides of the equation have the same Mellin transform. For the matrix products considered here, the function (2.2) plays the same fundamental role that $w_1(x) = e^{-x^2/2}$ plays in the analysis of a single Ginibre matrix. In this case it is known that the real eigenvalues form a Pfaffian point process, meaning that all $p$-point correlation functions of real eigenvalues can be written as $p \times p$ Pfaffians involving an explicit $2 \times 2$ matrix kernel (see e.g. [11, 4]). Recently this has been shown to extend to products of random matrices.

Theorem 2.2 (Forrester and Ipsen [10]). Let $N$ be even. The real eigenvalues of the matrix product $G_1 \cdots G_m$ form a Pfaffian point process with correlation kernel given by

$$K(x,y) = \begin{pmatrix} D(x,y) & S(x,y) \\ -S(y,x) & I(x,y) \end{pmatrix} \tag{2.3}$$

where

$$S(x,y) = \sum_{j=0}^{N-2}\frac{w_m(x)\,x^j}{(2\sqrt{2\pi}\,j!)^m}\,\big(xA_j(y)-A_{j+1}(y)\big), \tag{2.4}$$

$$A_j(y) = \int_{\mathbb{R}}w_m(v)\,\mathrm{sgn}(y-v)\,v^j\,dv, \tag{2.5}$$

$$D(x,y) = -\frac{\partial}{\partial y}S(x,y), \tag{2.6}$$

$$I(x,y) = \int_x^y S(t,y)\,dt + \frac{1}{2}\,\mathrm{sgn}(x-y). \tag{2.7}$$

In particular, the $p$-point correlation function of the real eigenvalues satisfies

$$\rho_p(x_1,\dots,x_p) = \mathrm{Pf}\big[K(x_i,x_l)\big]_{i,l=1,\dots,p}. \tag{2.8}$$

The results in [10] also include correlations involving purely complex eigenvalues, but we will not need them here. From theorem 2.2 we can extract the moments (1.5) explicitly.

Lemma 2.3. Let $N$ be even.
The moments (1.5) satisfy $M_{k,N}(m) = 0$ if $k$ is odd, while for $k$ even they are given by $M_{2k,N}(m) = M^{(1)}_{2k,N}(m) - M^{(2)}_{2k,N}(m)$, where

$$\begin{aligned} M^{(1)}_{2k,N}(m) &= N^{-mk}\sum_{j=0}^{N/2-1}\frac{2^{(2j+k)m}}{(\sqrt{\pi}\,(2j)!)^m}\,\big(a_{j+1,j+k+1}+a_{j+k+1,j+1}\big), \\ M^{(2)}_{2k,N}(m) &= N^{-mk}\sum_{j=0}^{N/2-2}\frac{2^{(2j+1+k)m}}{(\sqrt{\pi}\,(2j+1)!)^m}\,\big(a_{j+k+2,j+1}+a_{j+2,j+k+1}\big). \end{aligned} \tag{2.9}$$

Here $a_{j,k}$ is a particular case of the Meijer $G$-function:

$$a_{j,k} = G^{m+1,m}_{m+1,m+1}\left(\begin{matrix} 3/2-j,\dots,3/2-j,1 \\ 0,k,\dots,k \end{matrix}\,\Big|\,1\right) = \frac{1}{2\pi i}\int_{\gamma}\frac{\Gamma(k-s)^m\,\Gamma(-1/2+j+s)^m}{-s}\,ds, \tag{2.10}$$

where we may take $\gamma = \{-1/4+i\eta : \eta \in \mathbb{R}\}$.

Proof. By theorem 2.2, the unscaled density of real eigenvalues is (2.8) with $p = 1$; noting that $I(x,x) = 0$, we obtain

$$\rho_1(x) = \sum_{j=0}^{N-2}\frac{w_m(x)\,x^j}{(2\sqrt{2\pi}\,j!)^m}\,\big(xA_j(x)-A_{j+1}(x)\big) \tag{2.11}$$

where

$$A_j(x) = \int_{-\infty}^{\infty}w_m(y)\,\mathrm{sgn}(x-y)\,y^j\,dy. \tag{2.12}$$

Since $w_m(x)$ in (2.2) is an even function of $x$, so too is $\rho_1(x)$. Therefore all odd moments vanish identically. To compute the even moments, consider the coefficients

$$\alpha_{j,k} := \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}w_m(x)\,w_m(y)\,x^{j-1}y^{k-1}\,\mathrm{sgn}(y-x)\,dx\,dy. \tag{2.13}$$

We multiply both sides of (2.11) by $x^{2k}$ and integrate over $\mathbb{R}$, leading to

$$M_{2k,N}(m) = N^{-mk}\sum_{j=0}^{N-2}\frac{1}{(2\sqrt{2\pi}\,j!)^m}\,\big(\alpha_{j+1,j+2k+2}(m)+\alpha_{j+2k+1,j+2}(m)\big). \tag{2.14}$$

In obtaining (2.14) we have used that the $2k$th moments of the unscaled density $\rho_1(x)$ are $N^{mk}$ times those of $\rho^{R}_{N,m}(x)$ in (1.3). The quantity $\alpha_{j,k}(m)$ is skew-symmetric and depends on the parity of $j$ and $k$. As shown in [9, Proposition 3], we have

$$\alpha_{2j-1,2k}(m) = 2^{(j+k-1/2)m}\,G^{m+1,m}_{m+1,m+1}\left(\begin{matrix} 3/2-j,\dots,3/2-j,1 \\ 0,k,\dots,k \end{matrix}\,\Big|\,1\right). \tag{2.15}$$

Splitting the sum into even and odd values of $j$ and using the skew-symmetry property gives (2.9).

We now deduce a useful integral representation for $a_{j,k}$. This appears without proof in [10].

Lemma 2.4. For $j,k \geq 1$, the coefficients $a_{j,k}$ in (2.10) admit the integral representation

$$a_{j,k} = \Gamma(j+k-1/2)^m\int_1^{\infty}\frac{dx_m}{x_m}\prod_{l=1}^{m-1}\left[\int_0^{\infty}\frac{dx_l}{x_l}\,\frac{(x_l/x_{l+1})^{j-1/2}}{(1+x_l/x_{l+1})^{j+k-1/2}}\right]\frac{x_1^k}{(1+x_1)^{j+k-1/2}}. \tag{2.16}$$

Proof.
This follows from writing the product of Gamma functions in (2.10) as an Euler integral,

$$\left(\frac{\Gamma(k-s)\,\Gamma(-1/2+j+s)}{\Gamma(j+k-1/2)}\right)^m = \int_{\mathbb{R}^m_+}\prod_{l=1}^{m}dt_l\,\frac{t_l^{k-s-1}}{(1+t_l)^{k+j-1/2}}, \tag{2.17}$$

and interchanging $\prod_{l=1}^m dt_l$ with $ds$. Then one exploits Perron's formula

$$\frac{1}{2\pi i}\int_{\gamma}\frac{u^{-s}}{-s}\,ds = \begin{cases} 0, & 0<u<1, \\ 1, & u>1, \end{cases} \tag{2.18}$$

after which (2.16) follows from a suitable change of variables. Since the integration over $\gamma$ is neither compact nor absolutely integrable, the interchange requires some justification. We restrict $\gamma$ to the bounded region $\gamma_R = \{-1/4+i\eta : |\eta| \leq R\}$, where the interchange is justified by Fubini's theorem, and obtain

$$a_{j,k} = \Gamma(j+k-1/2)^m\lim_{R\to\infty}\int_{\mathbb{R}^m_+}\prod_{l=1}^{m}\frac{t_l^{k-1}}{(1+t_l)^{k+j-1/2}}\int_{\gamma_R}\frac{u^{-s}}{-s}\,ds\,dt_1\dots dt_m \tag{2.19}$$

where $u := \prod_{l=1}^{m}t_l$. Now if $u > 1$ we close the $\gamma_R$ contour to the right and pick up the pole at $s = 0$. The error is given by the integration over the large semi-circular contour $C_R := \{-1/4-Re^{i\phi} : 0<\phi<\pi\}$. We have

$$\left|\int_{C_R}\frac{u^{-s}}{-s}\,ds\right| \leq 2u^{1/4}\int_0^{\pi/2}e^{-R\log(u)\sin(\phi)}\,d\phi \leq Cu^{1/4}\,\frac{1-u^{-R/2}}{R\log(u)}. \tag{2.20}$$

If $u > 1+\epsilon$, the right-hand side of (2.20) is uniformly bounded by $u^{1/4}/R$, and inserting into (2.19) gives zero in the limit $R \to \infty$. If $1 < u < 1+\epsilon$, we change variables $t_m = u/(t_1\dots t_{m-1})$ and use the bound $(1+u/t)^{-k-j+1/2} \leq (t/u)^{k+j-1/2}$, so that the contribution to (2.19) is bounded by

$$\int_{\mathbb{R}^{m-1}_+}\prod_{l=1}^{m-1}\frac{dt_l}{(1+t_l)^{k+j-1/2}}\int_1^{1+\epsilon}u^{-1/4+k-1}\,\frac{1-u^{-R/2}}{R\log(u)}\,du = O(\log(R)/R). \tag{2.21}$$

Now if $u < 1$, we close the contour to the left, where the integrand is analytic. Then the only contribution comes from the integral over $\tilde{C}_R := \{-1/4+Re^{i\phi} : 0<\phi<\pi\}$, which tends to zero by an identical argument. We have

$$a_{j,k} = \Gamma(j+k-1/2)^m\int_{\mathbb{R}^m_+}\prod_{l=1}^{m}dt_l\,\frac{t_l^{k-1}}{(1+t_l)^{k+j-1/2}}\,\mathbf{1}_{t_1\dots t_m>1} \tag{2.22}$$

and the change of variables $t_1 = x_1$, $t_i = x_i/x_{i-1}$ for $i = 2,\dots,m$ leads straightforwardly to the representation (2.16).

3 Asymptotic analysis

In this section we prove our main results, theorems 1.1 and 1.2.
We begin by showing that both theorems are consequences of lemma 2.3 and the following $j \to \infty$ asymptotics for the coefficients in the sums (2.9).

Proposition 3.1. Let $I_{j,k}(m)$ denote the $m$-fold integral in (2.16), that is, $I_{j,k}(m) = a_{j,k}\,\Gamma(j+k-1/2)^{-m}$. Then as $j \to \infty$,

$$I_{j+l_1,j+l_2}(m) = j^{-m/2}\,4^{-mj}\,2^{-m(l_1+l_2-2)}\left(a_0(m)+a_{1,l_1,l_2}(m)\,j^{-1/2}+O(1/j)\right) \tag{3.1}$$

where

$$\begin{aligned} a_0(m) &= \pi^{m/2}\,2^{-m/2-1}\,m^{-1/2}, \\ a_{1,l_1,l_2}(m) &= \pi^{(m-1)/2}\,2^{-m/2-1}\,(1/2-l_1+l_2)\,m^{1/2}. \end{aligned} \tag{3.2}$$

Before giving the proof, we show how its conclusion quickly implies the main results of the paper.

Proof of theorems 1.1 and 1.2. First note that we may consider only the contribution to the sums (2.9) from sufficiently large $j > j_0(m,k)$ where the asymptotics (3.1) hold, since the sum from $j = 0$ to $j_0$ is $O(N^{-mk})$. We have

$$M^{(1)}_{2k,N} = 2N^{-mk}\sum_{j=j_0}^{N/2-1}\frac{2^{(2j+k)m}\,\Gamma(2j+k+3/2)^m}{(\sqrt{\pi}\,(2j)!)^m}\,j^{-m/2}\,4^{-mj}\,2^{-mk}\left(a_0(m)+a_{1,0,0}(m)\,j^{-1/2}+c(j_0)/j\right). \tag{3.3}$$

Stirling's formula implies

$$M^{(1)}_{2k,N} = N^{-mk}\sum_{j=j_0}^{N/2-1}j^{mk}\,2^{mk}\left(m^{-1/2}+j^{-1/2}\,\pi^{-1/2}\,m^{1/2}+O(1/j)\right) = \frac{N}{2}\,\frac{1}{mk+1}\,m^{-1/2}+\frac{1}{2}\,\frac{1}{2mk+1}\,\sqrt{2Nm/\pi}+O(\log(N)). \tag{3.4}$$

A similar computation yields

$$M^{(2)}_{2k,N} = \frac{N}{2}\,\frac{1}{mk+1}\,m^{-1/2}-\frac{1}{2}\,\frac{1}{2mk+1}\,\sqrt{2Nm/\pi}+O(\log(N)). \tag{3.5}$$

Taking the difference of these two asymptotic estimates gives the moments to leading order in $N$:

$$M_{2k,N} = \frac{\sqrt{2Nm/\pi}}{2mk+1}+O(\log(N)). \tag{3.6}$$

Setting $k = 0$ gives the expected number of real eigenvalues

$$\mathbb{E}\big(N_R^{(m)}\big) = \sqrt{\frac{2Nm}{\pi}}+O(\log(N)) \tag{3.7}$$

and proves theorem 1.1. Normalizing (3.6) by $\mathbb{E}(N_R^{(m)}) = M_{0,N}$ shows that the $2k$th moment of $h_{N,m}(x)$ converges to $\frac{1}{2mk+1}$, which is the $2k$th moment of $|x|^{1/m-1}/(2m)$ on $x \in [-1,1]$. This completes the proof of theorem 1.2.

Remark 3.2. A disadvantage of truncating the sum at $j = j_0$ is that the constant $O(1)$ term in the asymptotics (1.2) is left undetermined. Although we also do not compute the $\log(N)$ correction, it seems reasonable that it will cancel out of the final results, as we know happens for $m = 1$ [8]. However, proving this requires looking at the next order term in (3.1), which is left for future investigation.
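Proposition 3.1 can be probed numerically before its proof, most easily in the case $m = 1$, $l_1 = l_2 = 1$, where $I_{j+1,j+1}(1) = \int_1^\infty x^j(1+x)^{-(2j+3/2)}dx$ and the prediction is $j^{-1/2}4^{-j}\big(a_0(1)+O(j^{-1/2})\big)$ with $a_0(1) = \sqrt{\pi}\,2^{-3/2}$. A sketch of such a probe (ours; $j$ is an arbitrary large value):

```python
# Numerical probe of Proposition 3.1 for m = 1, l1 = l2 = 1: the integral
# I_{j+1,j+1}(1) should behave like j^{-1/2} 4^{-j} (a_0(1) + O(j^{-1/2})).
from mpmath import mp, quad, sqrt, pi, power, inf

mp.dps = 40
j = 200  # arbitrary large value

I = quad(lambda x: power(x, j) * power(1 + x, -(2*j + mp.mpf('1.5'))), [1, inf])
a0 = sqrt(pi) * power(2, mp.mpf('-1.5'))    # a_0(1) from (3.2)
leading = power(j, mp.mpf('-0.5')) * power(4, -j)
ratio = I / (leading * a0)
print(ratio)  # expected to be 1 + O(j^{-1/2})
```

At $j = 200$ the subleading correction $a_{1,1,1}(1)/(a_0(1)\sqrt{j})$ is of order a few percent, so the ratio sits close to 1.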
The only remaining task is to deduce the asymptotics (3.1) of proposition 3.1. This turns out to be a standard application of the saddle point method for multi-dimensional integrals. However, in this case the analysis is complicated by the fact that the saddle point lies on the boundary of the $m$-dimensional domain of integration. This requires slightly different analysis compared with the typical case of interior points and will lead to an expansion in half-integer powers of $j$ instead of the integer powers one might normally expect. Hence we give a self-contained exposition for our particular application.

Proof of proposition 3.1. By definition we have

$$\begin{aligned} I_{j+l_1,j+l_2}(m) &= \int_1^{\infty}\frac{dx_m}{x_m}\prod_{l=1}^{m-1}\left[\int_0^{\infty}\frac{dx_l}{x_l}\,\frac{(x_l/x_{l+1})^{j+l_1-1/2}}{(1+x_l/x_{l+1})^{2j+l_1+l_2-1/2}}\right]\frac{x_1^{j+l_1}}{(1+x_1)^{2j+l_1+l_2-1/2}} \\ &= \int_1^{\infty}\int_{\mathbb{R}^{m-1}_+}e^{j\Phi(\vec{x})}\,F(\vec{x})\,dx_1\dots dx_m \end{aligned} \tag{3.8}$$

where

$$\Phi(\vec{x}) = 2\log(x_1)-2\log(1+x_1)-\log(x_m)-2\sum_{l=1}^{m-1}\log(1+x_l/x_{l+1}) \tag{3.9}$$

and

$$F(\vec{x}) = \frac{x_1^{l_1+l_2-3/2}}{x_m^{l_1-1/2}\,(1+x_1)^{l_1+l_2-1/2}}\prod_{l=1}^{m-1}(1+x_l/x_{l+1})^{-l_1-l_2+1/2}\,x_{l+1}^{-1}. \tag{3.10}$$

The saddle point equations for $\Phi$ are $x_1^2 = x_2$, $x_{l-1}x_{l+1} = x_l^2$ for $l = 2,\dots,m-1$, and $x_{m-1} = x_m$. It is easily seen that the only solution of this set of equations is $\vec{x} = \vec{p}$, where

$$\vec{p} := (1,1,\dots,1,1). \tag{3.11}$$

The Hessian matrix of $\Phi$ at $\vec{p}$ is

$$H_m = \begin{pmatrix} -1 & 1/2 & 0 & 0 & \dots & 0 & 0 \\ 1/2 & -1 & 1/2 & 0 & 0 & \dots & 0 \\ 0 & 1/2 & -1 & 1/2 & 0 & \dots & 0 \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ 0 & \dots & 0 & 1/2 & -1 & 1/2 & 0 \\ 0 & \dots & 0 & 0 & 1/2 & -1 & 1/2 \\ 0 & 0 & \dots & 0 & 0 & 1/2 & -1/2 \end{pmatrix} \tag{3.12}$$

which is clearly negative definite, so that $\vec{p}$ is the unique maximum. It will be useful in what follows to have the formulae

$$\det(-H_m) = \frac{m+1}{2^m}, \qquad (H_m^{-1})_{i,j} = \frac{2i(j-m-1)}{m+1}, \quad i \leq j, \tag{3.13}$$

which follow from known facts regarding the determinant and inverse of a symmetric tridiagonal matrix.

The analysis begins by localizing the integral near the saddle point, so that

$$I_{j+l_1,j+l_2} \sim \int_1^{1+\epsilon}\int_{[1-\epsilon,1+\epsilon]^{m-1}}e^{j\Phi(\vec{x})}\,F(\vec{x})\,dx_1\dots dx_m \tag{3.14}$$

up to exponentially small errors.
This is justified because away from the unique maximum we have

$$a := \sup_{\vec{v}\,\in\,[1+\epsilon,\infty)\times([1-\epsilon,1+\epsilon]^{m-1})^c}\Phi(\vec{v}) < \Phi(\vec{p}), \tag{3.15}$$

so that the integration over the complement of the region in (3.14) is dominated by

$$e^{aj}\int_1^{\infty}\int_{\mathbb{R}^{m-1}_+}F(\vec{x})\,dx_1\dots dx_m = e^{aj}\,I_{l_1,l_2}(m) = O(e^{aj}), \tag{3.16}$$

which gives rise to an exponentially small contribution for any $\epsilon > 0$. By Taylor's theorem, for sufficiently small $\epsilon > 0$ we may expand $\Phi(\vec{x})$ near the saddle point. Writing $\vec{x}_0 = \vec{x}-\vec{p}$, we have

$$\begin{aligned} \Phi(\vec{x}) &= \Phi(\vec{p})+\frac{1}{2}\vec{x}_0^{\,T}H_m\vec{x}_0+T_3(\vec{x}_0,\vec{p})+O(|\vec{x}_0|^4), \\ F(\vec{x}) &= F(\vec{p})+\vec{x}_0^{\,T}dF(\vec{p})+O(|\vec{x}_0|^2), \end{aligned} \tag{3.17}$$

where $T_3(\vec{x}_0,\vec{p})$ is the third order term in the Taylor expansion of $\Phi(\vec{x})$ near $\vec{x} = \vec{p}$. Combined with a simple bound on the exponential function, we obtain from (3.17) the estimate

$$e^{j\Phi(\vec{x})}F(\vec{x}) = \exp\left(j\Big(\Phi(\vec{p})+\frac{1}{2}\vec{x}_0^{\,T}H_m\vec{x}_0\Big)\right)\left(F(\vec{p})+\vec{x}_0^{\,T}dF(\vec{p})+jF(\vec{p})\,T_3(\vec{x}_0,\vec{p})+O\Big(|\vec{x}_0|^6 j^2 e^{\epsilon j|\vec{x}_0|^2}+j|\vec{x}_0|^4+|\vec{x}_0|^2\Big)\right) \tag{3.18}$$

where the error is uniform in $\vec{x} \in [1-\epsilon,1+\epsilon]^m$ and all $j > 0$. Inserting (3.18) into the integral (3.14) and changing variables $\vec{u} = j^{1/2}(\vec{x}-\vec{p})$ shows that

$$I_{j+l_1,j+l_2} = j^{-m/2}e^{j\Phi(\vec{p})}\int_0^{\infty}\int_{\mathbb{R}^{m-1}}e^{\frac{1}{2}\vec{u}^{\,T}H_m\vec{u}}\left(F(\vec{p})+j^{-1/2}\vec{u}^{\,T}dF(\vec{p})+j^{-1/2}F(\vec{p})\,T_3(\vec{u},\vec{p})\right)d\vec{u}+O\big(j^{-m/2-1}e^{j\Phi(\vec{p})}\big). \tag{3.19}$$

Note that $e^{j\Phi(\vec{p})} = 4^{-mj}$, which appears in the claimed asymptotics. The quantity $T_3(\vec{u},\vec{p})$ can be calculated explicitly:

$$T_3(\vec{u},\vec{p}) = \frac{u_m^3}{4}+\sum_{p=1}^{m-1}\left(\frac{u_p^3}{2}-\frac{u_p^2u_{p+1}}{4}-\frac{u_pu_{p+1}^2}{4}\right). \tag{3.20}$$

Furthermore,

$$F(\vec{p})+\vec{u}^{\,T}dF(\vec{p})\,j^{-1/2} = 2^{m/2-m(l_1+l_2)}\left(1-j^{-1/2}\left(\frac{3+2(l_1-l_2)}{4}\,u_m+\sum_{p=1}^{m-1}u_p\right)\right). \tag{3.21}$$

Inserting (3.21) and (3.20) into (3.19) shows that it remains to compute on the order of $m$ Gaussian integrals with some low order polynomials in the integrand. All such integrals can be computed in terms of the following generating function:

$$F_{p,l}(t,s) = \int_0^{\infty}x_m^l\,e^{-x_m^2/4}\int_{\mathbb{R}^{m-1}}\exp\left(\frac{1}{2}\vec{x}^{\,T}H_{m-1}\vec{x}+\mu_p(t,s)^T\vec{x}\right)d\vec{x}\,dx_m \tag{3.22}$$

where $\mu_p(t,s)$ is a vector which is entirely zero except for the entries $\mu_p = t$, $\mu_{p+1} = s$ and $\mu_{m-1} = x_m/2$, and here $\vec{x} = (x_1,\dots,x_{m-1})$.
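As a consistency check of the Taylor data above, the quadratic and cubic terms in (3.17) and (3.20) can be reproduced symbolically. The sketch below (ours, not part of the paper) does this for the arbitrary small case $m = 2$, where $H_2 = \begin{pmatrix}-1 & 1/2\\ 1/2 & -1/2\end{pmatrix}$:

```python
# Symbolic check, for m = 2, that the expansion of Phi around p = (1,1) has
# constant Phi(p) = -4 log 2, quadratic part (1/2) u^T H_2 u with
# H_2 = [[-1, 1/2], [1/2, -1/2]], and cubic part T_3 as in (3.20).
import sympy as sp

e, u1, u2 = sp.symbols('epsilon u1 u2')
x1, x2 = 1 + e*u1, 1 + e*u2
Phi = 2*sp.log(x1) - 2*sp.log(1 + x1) - sp.log(x2) - 2*sp.log(1 + x1/x2)
ser = sp.expand(sp.series(Phi, e, 0, 4).removeO())

quadratic = sp.Rational(1, 2)*(-u1**2 + u1*u2 - sp.Rational(1, 2)*u2**2)
cubic = (u2**3/sp.Integer(4) + u1**3/sp.Integer(2)
         - u1**2*u2/sp.Integer(4) - u1*u2**2/sp.Integer(4))
expected = sp.expand(-4*sp.log(2) + e**2*quadratic + e**3*cubic)
residual = sp.simplify(ser - expected)
print(residual)
```

The residual vanishes; note in particular the absence of a linear term, reflecting the saddle point condition at $\vec{p}$.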
In terms of the partial derivatives

$$F^{(n,m)}_{p,l} := \frac{\partial^{n+m}}{\partial t^n\partial s^m}F_{p,l}(t,s)\bigg|_{t=s=0}, \tag{3.23}$$

we can rewrite the terms in the expansion (3.19) in terms of the generating function (3.22) as follows:

$$\begin{aligned} I_{j+l_1,j+l_2} ={}& j^{-m/2}e^{j\Phi(\vec{p})}F(\vec{p})\,F_{0,0}(0,0) \\ &+j^{-(m+1)/2}e^{j\Phi(\vec{p})}F(\vec{p})\Bigg(\sum_{p=1}^{m-2}\Big(\tfrac{1}{2}F^{(3,0)}_{p,0}-\tfrac{1}{4}F^{(2,1)}_{p,0}-\tfrac{1}{4}F^{(1,2)}_{p,0}-F^{(1,0)}_{p,0}\Big) \\ &\qquad+\tfrac{1}{2}F^{(3,0)}_{m-1,0}-\tfrac{1}{4}F^{(2,0)}_{m-1,1}-\tfrac{1}{4}F^{(1,0)}_{m-1,2}-\frac{3+2(l_1-l_2)}{4}F^{(0,0)}_{0,1}-F^{(1,0)}_{m-1,0}+\frac{F_{0,3}(0,0)}{4}\Bigg) \\ &+O\big(j^{-m/2-1}e^{j\Phi(\vec{p})}\big). \end{aligned} \tag{3.24}$$

It remains to compute the generating function and the required derivatives. Standard facts about Gaussian integration allow the integral in (3.22) over $\mathbb{R}^{m-1}$ to be done explicitly. We obtain $F_{p,l}(t,s) = K_m\,G_{p,l}(t,s)\,I_{p,l}(t,s)$, where

$$\begin{aligned} I_{p,l}(t,s) &= \int_0^{\infty}z^l\,e^{-z^2/m+tzp/m+sz(p+1)/m}\,dz, \\ G_{p,l}(t,s) &= \exp\left(\frac{t^2p(m-p)+s^2(p+1)(m-(p+1))+2tsp(m-(p+1))}{m}\right), \end{aligned} \tag{3.25}$$

and

$$K_m = \sqrt{\frac{(2\pi)^{m-1}}{\det(-H_{m-1})}} = 2^{m-1}\pi^{(m-1)/2}m^{-1/2}. \tag{3.26}$$

Consequently, the derivatives (3.23) to any order can be computed explicitly (most easily in a computer algebra package such as MAPLE). Inserting the results of the computation into (3.24) and using the formula $e^{j\Phi(\vec{p})} = 4^{-mj}$ gives the result stated in the proposition.

References

[1] G. Akemann and J. R. Ipsen. Recent exact and asymptotic results for products of independent random matrices. Acta Phys. Polon. B, 46(9):1747–1784, 2015.

[2] C. W. J. Beenakker, J. M. Edge, J. P. Dahlhaus, D. I. Pikulin, S. Mi, and M. Wimmer. Wigner–Poisson statistics of topological transitions in a Josephson junction. Phys. Rev. Lett., 111:037001, 2013.
