Level-Spacing Distributions and the Bessel Kernel

Craig A. Tracy
Department of Mathematics and Institute of Theoretical Dynamics, University of California, Davis, CA 95616, USA (e-mail: [email protected])

Harold Widom
Department of Mathematics, University of California, Santa Cruz, CA 95064, USA (e-mail: [email protected])

Scaling models of random $N \times N$ hermitian matrices and passing to the limit $N \to \infty$ leads to integral operators whose Fredholm determinants describe the statistics of the spacing of the eigenvalues of hermitian matrices of large order. For the Gaussian Unitary Ensemble, and for many others as well, the kernel one obtains by scaling in the "bulk" of the spectrum is the "sine kernel" $\frac{\sin\pi(x-y)}{\pi(x-y)}$. Rescaling the GUE at the "edge" of the spectrum leads to the kernel $\frac{\mathrm{Ai}(x)\,\mathrm{Ai}'(y)-\mathrm{Ai}'(x)\,\mathrm{Ai}(y)}{x-y}$, where $\mathrm{Ai}$ is the Airy function. In previous work we found several analogies between properties of this "Airy kernel" and known properties of the sine kernel: a system of partial differential equations associated with the logarithmic differential of the Fredholm determinant when the underlying domain is a union of intervals; a representation of the Fredholm determinant in terms of a Painlevé transcendent in the case of a single interval; and, also in this case, asymptotic expansions for these determinants and related quantities, achieved with the help of a differential operator which commutes with the integral operator. In this paper we show that there are completely analogous properties for a class of kernels which arise when one rescales the Laguerre or Jacobi ensembles at the edge of the spectrum, namely

$$\frac{J_\alpha(\sqrt{x})\,\sqrt{y}\,J_\alpha'(\sqrt{y}) - \sqrt{x}\,J_\alpha'(\sqrt{x})\,J_\alpha(\sqrt{y})}{2(x-y)},$$

where $J_\alpha(z)$ is the Bessel function of order $\alpha$. In the cases $\alpha = \mp\frac{1}{2}$ these become, after a variable change, the kernels which arise when taking scaling limits in the bulk of the spectrum for the Gaussian orthogonal and symplectic ensembles. In particular, an asymptotic expansion we derive will generalize ones found by Dyson for the Fredholm determinants of these kernels.

I. INTRODUCTION AND STATEMENT OF RESULTS

A. Introduction

Scaling models of random $N \times N$ hermitian matrices and passing to the limit $N \to \infty$ leads to integral operators whose Fredholm determinants describe the statistics of the spacing of the eigenvalues of hermitian matrices of large order [18,25]. Which integral operators (or, more precisely, which kernels of integral operators) result depends on the matrix model one starts with and on the location in the spectrum where the scaling takes place.

For the simplest model, the Gaussian Unitary Ensemble (GUE), and for many others as well (see, e.g., [16,17,23,24]), the kernel one obtains by scaling in the "bulk" of the spectrum is the "sine kernel"

$$\frac{\sin\pi(x-y)}{\pi(x-y)}.$$

Precisely, this comes about as follows. If $\{\varphi_k(x)\}_{k=0}^{\infty}$ is the sequence obtained by orthonormalizing the sequence $\{x^k e^{-x^2/2}\}$ over $(-\infty,\infty)$ and if

$$K_N(x,y) = \sum_{k=0}^{N-1} \varphi_k(x)\,\varphi_k(y), \tag{1.1}$$

then in the GUE the probability density that $n$ of the eigenvalues (irrespective of order) lie in infinitesimal intervals about $x_1,\dots,x_n$ is equal to

$$R_n(x_1,\dots,x_n) = \det\left(K_N(x_i,x_j)\right)_{i,j=1,\dots,n}.$$

The density of eigenvalues at a fixed point $z$ is $R_1(z)$, and this is $\sim \sqrt{2N}/\pi$ as $N \to \infty$. Rescaling at $z$ leads to the sine kernel because of the relation

$$\lim_{N\to\infty} \frac{\pi}{\sqrt{2N}}\, K_N\!\left(z + \frac{\pi x}{\sqrt{2N}},\; z + \frac{\pi y}{\sqrt{2N}}\right) = \frac{\sin\pi(x-y)}{\pi(x-y)}.$$

Rescaling the GUE at the "edge" of the spectrum, however, leads to a different kernel. The edge corresponds to $z \sim \sqrt{2N}$, at which point the density is $\sim 2^{1/2}N^{1/6}$, and we have there the scaling limit [3,11,22]

$$\lim_{N\to\infty} \frac{1}{2^{1/2}N^{1/6}}\, K_N\!\left(\sqrt{2N} + \frac{x}{2^{1/2}N^{1/6}},\; \sqrt{2N} + \frac{y}{2^{1/2}N^{1/6}}\right) = \frac{\mathrm{Ai}(x)\,\mathrm{Ai}'(y) - \mathrm{Ai}'(x)\,\mathrm{Ai}(y)}{x-y},$$

where $\mathrm{Ai}$ is the Airy function.
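The bulk scaling relation above can be checked numerically. The following sketch (our illustration, not part of the paper) builds $K_N$ of (1.1) from the orthonormal Hermite functions via their stable three-term recurrence and compares the rescaled kernel at $z = 0$ with the sine kernel; the choices $N = 400$ and the sample points are arbitrary.

```python
import numpy as np

def hermite_functions(n_max, x):
    """phi_k obtained by orthonormalizing x^k e^{-x^2/2} over (-inf, inf),
    computed by the stable three-term recurrence for Hermite functions."""
    x = np.asarray(x, dtype=float)
    phi = np.zeros((n_max, x.size))
    phi[0] = np.pi ** -0.25 * np.exp(-x**2 / 2)
    if n_max > 1:
        phi[1] = np.sqrt(2.0) * x * phi[0]
    for k in range(2, n_max):
        phi[k] = np.sqrt(2.0 / k) * x * phi[k-1] - np.sqrt((k - 1) / k) * phi[k-2]
    return phi

def K_N(N, x, y):
    """Hermite kernel (1.1) at scalar points x, y."""
    px = hermite_functions(N, np.array([x]))[:, 0]
    py = hermite_functions(N, np.array([y]))[:, 0]
    return np.dot(px, py)

N = 400
z = 0.0
X, Y = 0.3, 0.1
scale = np.pi / np.sqrt(2 * N)
bulk = scale * K_N(N, z + scale * X, z + scale * Y)
sine = np.sin(np.pi * (X - Y)) / (np.pi * (X - Y))
print(bulk, sine)  # the two values should agree to a few decimal places
```

The agreement improves as $N$ grows, consistent with the limit being approached at a rate that is polynomial in $1/N$.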
In previous work [28] we found several analogies between properties of this "Airy kernel" and known properties of the sine kernel: a system of partial differential equations associated with the logarithmic differential of the Fredholm determinant when the underlying domain is a union of intervals [15]; a representation of the Fredholm determinant in terms of a Painlevé transcendent in the case of a single interval [15]; and, also in this case, asymptotic expansions for these determinants and related quantities [5,2,29,6,20], achieved with the help of a differential operator which commutes with the integral operator. (See [27] for further discussion of these properties of the sine kernel.)

In this paper we show that there are completely analogous properties for a class of kernels which arise when one rescales the Laguerre or Jacobi ensembles at the edge of the spectrum. For the Laguerre ensemble the analogue of the sequence of functions $\{\varphi_k(x)\}$ in (1.1) is obtained by orthonormalizing the sequence

$$\{x^k\, x^{\alpha/2} e^{-x/2}\}$$

over $(0,\infty)$ (here $\alpha > -1$), whereas for Jacobi one orthonormalizes

$$\{x^k\,(1-x)^{\alpha/2}(1+x)^{\beta/2}\}$$

over $(-1,1)$. (Here $\alpha,\beta > -1$.) In the Laguerre ensemble of (positive) hermitian $N \times N$ matrices the eigenvalue density satisfies [4,23], for a fixed $x < 1$,

$$R_1(4Nx) \sim \frac{1}{2\pi}\sqrt{\frac{1-x}{x}}.$$

This limiting law is to be contrasted with the well-known Wigner semi-circle law in the GUE. The new feature here is the "hard edge" at $x \sim 0$. At this edge we have the scaling limit [11]

$$\lim_{N\to\infty} \frac{1}{4N}\, K_N\!\left(\frac{x}{4N},\, \frac{y}{4N}\right) = \frac{J_\alpha(\sqrt{x})\,\sqrt{y}\,J_\alpha'(\sqrt{y}) - \sqrt{x}\,J_\alpha'(\sqrt{x})\,J_\alpha(\sqrt{y})}{2(x-y)},$$

where $J_\alpha(z)$ is the Bessel function of order $\alpha$. Both limits follow from the asymptotic formulas for the generalized Laguerre polynomials. (Scaling in the bulk will just lead to the sine kernel, and scaling at the "soft edge," $x \sim 1$, will lead to the Airy kernel.) The same kernel arises when scaling the Jacobi ensemble at $1$ or $-1$. (Recall that in the Jacobi ensemble both $\pm 1$ are hard edges; see e.g. [23].)
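The hard-edge limit just stated can also be probed numerically. The sketch below (our illustration under arbitrary choices $N = 200$, $\alpha = 0$, and sample points) builds the Laguerre kernel from the orthonormalized functions and compares the rescaled kernel with the Bessel kernel on the right-hand side of the limit.

```python
import numpy as np
from scipy.special import eval_genlaguerre, gammaln, jv

def laguerre_phi(k, alpha, x):
    """Orthonormal functions from x^k * x^{alpha/2} e^{-x/2} over (0, inf);
    the norm sqrt(k! / Gamma(k+alpha+1)) is taken in log form for stability."""
    lognorm = 0.5 * (gammaln(k + 1) - gammaln(k + alpha + 1))
    return np.exp(lognorm) * eval_genlaguerre(k, alpha, x) * x**(alpha / 2) * np.exp(-x / 2)

def K_N(N, alpha, x, y):
    return sum(laguerre_phi(k, alpha, x) * laguerre_phi(k, alpha, y)
               for k in range(N))

def bessel_kernel(alpha, x, y):
    """[J_a(sqrt x) sqrt(y) J_a'(sqrt y) - sqrt(x) J_a'(sqrt x) J_a(sqrt y)] / (2(x-y))."""
    a, b = np.sqrt(x), np.sqrt(y)
    Jp = lambda z: 0.5 * (jv(alpha - 1, z) - jv(alpha + 1, z))  # J_alpha'
    return (jv(alpha, a) * b * Jp(b) - a * Jp(a) * jv(alpha, b)) / (2 * (x - y))

N, alpha = 200, 0.0
x, y = 1.0, 2.0
scaled = K_N(N, alpha, x / (4 * N), y / (4 * N)) / (4 * N)
limit = bessel_kernel(alpha, x, y)
print(scaled, limit)  # should agree to roughly O(1/N)
```

The hard-edge scaling $x/4N$ pushes all evaluation points toward the origin, where the Laguerre weight's $x^{\alpha/2}$ factor is responsible for the Bessel (rather than sine or Airy) behavior.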
For later convenience we introduce now a parameter $\lambda$ and define our "Bessel kernel" by

$$K(x,y) := \lambda\, \frac{J_\alpha(\sqrt{x})\,\sqrt{y}\,J_\alpha'(\sqrt{y}) - \sqrt{x}\,J_\alpha'(\sqrt{x})\,J_\alpha(\sqrt{y})}{2(x-y)} \quad (x \neq y), \tag{1.2a}$$

$$K(x,x) = \frac{\lambda}{4}\left( J_\alpha(\sqrt{x})^2 - J_{\alpha+1}(\sqrt{x})\,J_{\alpha-1}(\sqrt{x}) \right). \tag{1.2b}$$

Before stating our results, we mention that in the cases $\alpha = \mp\frac{1}{2}$ we have, when $\lambda = 1$,

$$2\sqrt{xy}\; K(x^2, y^2) = \frac{\sin(x-y)}{\pi(x-y)} \pm \frac{\sin(x+y)}{\pi(x+y)}, \tag{1.3}$$

which are kernels which arise when taking scaling limits in the bulk of the spectrum for the Gaussian orthogonal and symplectic ensembles [18]. In particular, an asymptotic expansion we derive will generalize ones found by Dyson [5] for the Fredholm determinants of these kernels. We now state the results we have obtained.

B. The System of Partial Differential Equations

We set

$$J := \bigcup_{j=1}^{m} (a_{2j-1}, a_{2j}) \quad (a_j \geq 0) \tag{1.4}$$

and write $D(J;\lambda)$ for the Fredholm determinant of $K$ (the operator with kernel $K(x,y)$) acting on $J$. If we think of this as a function of $a = (a_1,\dots,a_{2m})$ then

$$d\log D(J;\lambda) = -\sum_{j=1}^{2m} (-1)^j R(a_j,a_j)\, da_j, \tag{1.5}$$

where $R(x,y)$ is the kernel of $K(I-K)^{-1}$. We introduce the notations

$$\varphi(x) := \sqrt{\lambda}\, J_\alpha(\sqrt{x}), \qquad \psi(x) := x\,\varphi'(x) \tag{1.6}$$

and the quantities

$$q_j := (I-K)^{-1}\varphi\,(a_j), \qquad p_j := (I-K)^{-1}\psi\,(a_j) \quad (j = 1,\dots,2m), \tag{1.7}$$

$$u := \left(\varphi,\,(I-K)^{-1}\varphi\right), \qquad v := \left(\varphi,\,(I-K)^{-1}\psi\right), \tag{1.8}$$

where the inner products refer to the domain $J$. The differential equations are

$$\frac{\partial q_j}{\partial a_k} = (-1)^k\, \frac{q_j p_k - p_j q_k}{a_j - a_k}\, q_k \quad (j \neq k), \tag{1.9}$$

$$\frac{\partial p_j}{\partial a_k} = (-1)^k\, \frac{q_j p_k - p_j q_k}{a_j - a_k}\, p_k \quad (j \neq k), \tag{1.10}$$

$$a_j \frac{\partial q_j}{\partial a_j} = p_j + \frac{1}{4}\, q_j u - \sum_{k \neq j} (-1)^k a_k\, \frac{q_j p_k - p_j q_k}{a_j - a_k}\, q_k, \tag{1.11}$$

$$a_j \frac{\partial p_j}{\partial a_j} = \frac{1}{4}\left(\alpha^2 - a_j + 2v\right) q_j - \frac{1}{4}\, p_j u - \sum_{k \neq j} (-1)^k a_k\, \frac{q_j p_k - p_j q_k}{a_j - a_k}\, p_k, \tag{1.12}$$

$$\frac{\partial u}{\partial a_j} = (-1)^j q_j^2, \tag{1.13}$$

$$\frac{\partial v}{\partial a_j} = (-1)^j p_j q_j. \tag{1.14}$$

Moreover the quantities $R(a_j,a_j)$ appearing in (1.5) are given by

$$a_j R(a_j,a_j) = \sum_{k \neq j} (-1)^k a_k\, \frac{(q_j p_k - p_j q_k)^2}{a_j - a_k} + p_j^2 - \frac{1}{4}\left(\alpha^2 - a_j + 2v\right) q_j^2 + \frac{1}{2}\, p_j q_j u. \tag{1.15}$$

These equations are quite similar to eqs. (1.4)–(1.9) of [28], as is their derivation.

C. The ordinary differential equation

For the special case $J = (0,s)$ the above equations can be used to show that $q(s;\lambda)$, the quantity $q$ of the last section corresponding to the endpoint $s$, satisfies

$$s\,(q^2 - 1)\,(s q')' = q\,(s q')^2 + \frac{1}{4}(s - \alpha^2)\, q + \frac{1}{4}\, s\, q^3 (q^2 - 2) \qquad \left({}' = \frac{d}{ds}\right), \tag{1.16}$$

with boundary condition

$$q(s;\lambda) \sim \frac{\sqrt{\lambda}}{2^\alpha\, \Gamma(1+\alpha)}\; s^{\alpha/2}, \qquad s \to 0. \tag{1.17}$$

This equation is reducible to a special case of the $P_V$ differential equation;³ explicitly, if $q(s) = (1+y(x))/(1-y(x))$ with $s = x^2$, then $y(x)$ satisfies $P_V$ with $\alpha' = \beta' = \alpha^2/8$, $\gamma' = 0$ and $\delta' = -2$. (We have primed the usual $P_V$ parameters to avoid confusion with the $\alpha$ in our kernel. We mention that this special $P_V$ can be expressed algebraically in terms of a third Painlevé transcendent and its first derivative [13]. We mention also that an argument can be given that (1.16) must be reducible to one of the 50 canonical types of differential equations found by Painlevé, without an explicit verification being necessary. This will be discussed at the end of section II B.)

It is sometimes convenient to transform (1.16) by making the substitution $q(s) = \cos\psi(s)$, so that $\psi$ satisfies

$$\psi'' + \frac{1}{s}\,\psi' = \frac{1}{8s}\,\sin(2\psi) - \frac{\alpha^2}{4s^2}\,\frac{\cos\psi}{\sin^3\psi}. \tag{1.18}$$

The Fredholm determinant is expressible in terms of $q$ by the formula

$$D(J;\lambda) = \exp\left( -\frac{1}{4} \int_0^s \log\frac{s}{t}\; q(t)^2\, dt \right). \tag{1.19}$$

Denoting by $R(s)$ minus the logarithmic derivative of $D(J;\lambda)$ with respect to $s$, we have also the representation

$$R(s) = \frac{1}{4}\cos^2\psi(s) + s\left(\frac{d\psi}{ds}\right)^{\!2} - \frac{\alpha^2}{4s}\,\cot^2\psi(s). \tag{1.20}$$

Furthermore, $R(s)$ itself satisfies a differential equation which in the Jimbo–Miwa–Okamoto $\sigma$ notation for Painlevé III (see, in particular, (3.13) in [14]) is

$$(s\sigma'')^2 + \sigma'\,(\sigma - s\sigma')\,(4\sigma' - 1) - \alpha^2 (\sigma')^2 = 0, \tag{1.21}$$

where $\sigma(s) = s R(s)$; it has small-$s$ expansion

$$\sigma(s;\lambda) = c_\alpha\, s^{1+\alpha}\left[ 1 - \frac{1}{2(2+\alpha)}\, s + \frac{3+2\alpha}{16(1+\alpha)(2+\alpha)(3+\alpha)}\, s^2 + \cdots \right]
+ \frac{c_\alpha^2}{1+\alpha}\, s^{2+2\alpha}\left[ 1 - \frac{3+2\alpha}{2(2+\alpha)^2}\, s + \cdots \right]
+ \frac{c_\alpha^3}{(1+\alpha)^2}\, s^{3+3\alpha}\left[ 1 + \cdots \right] + \cdots, \tag{1.22}$$

where

$$c_\alpha = \frac{\lambda}{2^{2\alpha+2}\,\Gamma(1+\alpha)\,\Gamma(2+\alpha)}.$$

We mention that in the special case $\alpha = 0$ and $\lambda = 1$ we have $D(J;1) = e^{-s/4}$, $q(s;1) = 1$, $\psi(s;1) = 0$, and $\sigma(s;1) = s/4$ exactly [7,8,11].

³ The Painlevé V differential equation is
$$\frac{d^2y}{dx^2} = \left( \frac{1}{2y} + \frac{1}{y-1} \right)\left(\frac{dy}{dx}\right)^{\!2} - \frac{1}{x}\,\frac{dy}{dx} + \frac{(y-1)^2}{x^2}\left( \alpha' y + \frac{\beta'}{y} \right) + \frac{\gamma' y}{x} + \frac{\delta'\, y(y+1)}{y-1},$$
where $\alpha'$, $\beta'$, $\gamma'$, and $\delta'$ are constants.

D. Asymptotics

Again we take $J = (0,s)$ and consider asymptotics as $s \to \infty$. From the random matrix point of view the interesting quantities are

$$E(n;s) := \frac{(-1)^n}{n!}\, \frac{\partial^n}{\partial\lambda^n}\, D(J;\lambda)\Big|_{\lambda=1}. \tag{1.23}$$

This is the probability that exactly $n$ eigenvalues lie in $J$. The asymptotics of $E(0;s) = D(J;1)$ are obtained from (1.19) using the asymptotics of $q(s;1)$, or equivalently of $\sigma(s;1)$, obtained from (1.16) or (1.21), respectively. (Our derivation is heuristic since, as far as we are aware, the corresponding Painlevé connection problem has not been rigorously solved.) We find that as $s \to \infty$,

$$E(0;s) = \tau_\alpha\, \frac{e^{-s/4 + \alpha\sqrt{s}}}{s^{\alpha^2/4}} \times \left( 1 + \frac{\alpha}{8}\, s^{-1/2} + \frac{9\alpha^2}{128}\, s^{-1} + \left( \frac{3\alpha}{128} + \frac{51\alpha^3}{1024} \right) s^{-3/2} + \left( \frac{75\alpha^2}{1024} + \frac{1275\alpha^4}{32768} \right) s^{-2} + \cdots \right), \tag{1.24}$$

where $\tau_\alpha$ is a constant which cannot be determined from the asymptotics of $q$ (or $\sigma$) alone. However, as we mentioned above, when $\alpha = \mp\frac{1}{2}$ this expansion must agree with those obtained from formulas (12.2.6) of [18] (see also (12.6.17)–(12.6.19) in [18]) after replacing $s$ by $\pi^2 t^2$. This leads to the conjecture

$$\tau_\alpha = \frac{G(1+\alpha)}{(2\pi)^{\alpha/2}},$$

where $G$ is the Barnes G-function [1]. This conjecture is further supported by numerical work similar to that described for the analogous conjecture in [28].

As in [28], there are two approaches to the asymptotics of $E(n;s)$ for general $n$. We use the notation

$$r(n;s) := \frac{E(n;s)}{E(0;s)}. \tag{1.25}$$

In the first approach (see also [2,27]) one differentiates (1.21) successively with respect to $\lambda$.
Using the known asymptotics of $\sigma(s;1)$ and the differential equation (1.21) satisfied by $\sigma(s;\lambda)$ for all $\lambda$, one can find asymptotic expansions for the quantities

$$\sigma_n(s) := \frac{\partial^n \sigma}{\partial\lambda^n}\Big|_{\lambda=1},$$

and these in turn can be used to find expansions for the $r(n;s)$. This approach is inherently incomplete since yet another undetermined constant enters the picture. And there are also computational problems: when one expresses the $r(n;s)$ in terms of the $\sigma_n(s)$ a large amount of cancellation occurs, with the result that even the first-order asymptotics of $r(n;s)$ are out of reach by this method when $n$ is large.

The second approach uses the easily established identity

$$r(n;s) = \sum_{i_1 < \cdots < i_n} \frac{\lambda_{i_1} \cdots \lambda_{i_n}}{(1-\lambda_{i_1}) \cdots (1-\lambda_{i_n})}, \tag{1.26}$$

where $\lambda_0 > \lambda_1 > \cdots$ are the eigenvalues of the integral operator $K$ with $\lambda = 1$ acting on $(0,s)$. It turns out that this operator, rescaled so that it acts on $(0,1)$, commutes with the differential operator $\mathcal{L}$ defined by

$$\mathcal{L} f(x) = \left( x(1-x)\, f'(x) \right)' - \left( \frac{\alpha^2}{4x} + \frac{sx}{4} \right) f(x),$$

with appropriate boundary conditions on $f$. Applying the WKB method to the equation, and a simple relationship between the eigenvalues of $K$ (as functions of $s$) and its eigenfunctions, we are able to derive the following asymptotic formula for the eigenvalues as $s \to \infty$:

$$1 - \lambda_i \sim \frac{2\pi}{\Gamma(\alpha+i+1)\; i!}\; 2^{4i+2\alpha+2}\; s^{\,i+(\alpha+1)/2}\; e^{-2\sqrt{s}}. \tag{1.27}$$

From this and (1.26) we deduce

$$r(n;s) \sim \left( \prod_{k=0}^{n-1} \Gamma(\alpha+k+1)\, k! \right) \pi^{-n}\; 2^{-n(2n+2\alpha+1)}\; s^{-n^2/2 - \alpha n/2}\; e^{2n\sqrt{s}}. \tag{1.28}$$

For the special case $\alpha = 0$, the quantity $r(1;s)$ can be expressed exactly in terms of Bessel functions (see (2.30) below).

II. DIFFERENTIAL EQUATIONS

A. Derivation of the system of equations

We shall use two representations for our kernel. The first is just our definition (1.2a) in the notation (1.6),

$$K(x,y) = \frac{\varphi(x)\psi(y) - \psi(x)\varphi(y)}{x-y}. \tag{2.1}$$

The second is the integral representation

$$K(x,y) = \frac{1}{4} \int_0^1 \varphi(xt)\,\varphi(yt)\, dt. \tag{2.2}$$

This follows from the differentiation formula $z J_\alpha'(z) = \alpha J_\alpha(z) - z J_{\alpha+1}(z)$, which gives the alternative representation

$$\lambda\, \frac{\sqrt{x}\, J_{\alpha+1}(\sqrt{x})\, J_\alpha(\sqrt{y}) - J_\alpha(\sqrt{x})\, \sqrt{y}\, J_{\alpha+1}(\sqrt{y})}{2(x-y)}$$

for $K(x,y)$, and the Christoffel–Darboux type formula (7.14.1(9)) of [9].

Our derivation will use, several times, the commutator identity

$$[L,\,(I-K)^{-1}] = (I-K)^{-1}\,[L,K]\,(I-K)^{-1}, \tag{2.3}$$

which holds for arbitrary operators $K$ and $L$, and the differentiation formula

$$\frac{d}{da}\,(I-K)^{-1} = (I-K)^{-1}\,\frac{dK}{da}\,(I-K)^{-1}, \tag{2.4}$$

which holds for an arbitrary operator depending smoothly on a parameter $a$. We shall also use the notations $M$ = multiplication by the independent variable and $D$ = differentiation; a subscript on an operator indicates the variable on which it acts.

It will be convenient to think of our operator $K$ as acting, not on $J$, but on $(0,\infty)$, and to have kernel

$$K(x,y)\,\chi_J(y),$$

where $\chi_J$ is the characteristic function of $J$. We continue to denote the resolvent kernel of $K$ by $R(x,y)$ and note that it is smooth in $x$ but discontinuous at $y = a_j$. The quantities $R(a_j,a_j)$ appearing in (1.5) are interpreted to mean

$$\lim_{\substack{y \to a_j \\ y \in J}} R(a_j, y),$$

and similarly for $p_j$ and $q_j$ in formulas (1.7). The definitions (1.8) of $u$ and $v$ must be modified to read

$$u = (\varphi\chi_J,\,(I-K)^{-1}\varphi), \qquad v = (\varphi\chi_J,\,(I-K)^{-1}\psi), \tag{2.5}$$

where now the inner products are taken over $(0,\infty)$. Notice that since

$$(I-K)^{-1}\xi = (I-K)^{-1}\xi\chi_J \quad \text{in } J$$

for any function $\xi$, this agrees with the original definitions (1.8) of $u$ and $v$.

We have, by (2.2),

$$\left( (MD)_x + (MD)_y \right) K(x,y) = \frac{1}{4}\int_0^1 t\,\frac{\partial}{\partial t}\left( \varphi(xt)\varphi(yt) \right) dt = \frac{1}{4}\,\varphi(x)\varphi(y) - K(x,y).$$

But it is easy to see that

$$[MD,\,L] \doteq \left( (MD)_x + (MD)_y + I \right) L(x,y) \tag{2.6}$$

for any operator $L$ with kernel $L(x,y)$, where "$\doteq$" means "has kernel". Taking $L(x,y) = K(x,y)\chi_J(y)$ gives

$$[MD,\,K] \doteq \frac{1}{4}\,\varphi(x)\varphi(y)\chi_J(y) - \sum_k (-1)^k a_k\, K(x,a_k)\,\delta(y - a_k).$$

(Recall the form (1.4) of $J$.) It follows from this and (2.3) that

$$[MD,\,(I-K)^{-1}] \doteq \frac{1}{4}\, Q(x)\, \left( (I-K^t)^{-1}\chi_J\varphi \right)(y) - \sum_k (-1)^k a_k\, R(x,a_k)\,\rho(a_k,y), \tag{2.7}$$

where $Q(x)$, and an analogous function $P(x)$, are defined by

$$Q(x) := (I-K)^{-1}\varphi, \qquad P(x) := (I-K)^{-1}\psi, \tag{2.8}$$

where $\rho(x,y) = R(x,y) + \delta(x-y)$ is the distributional kernel of $(I-K)^{-1}$, and where $K^t$ is the transpose of the operator $K$. (Note that $K$ takes smooth functions to smooth functions while its transpose takes distributions to distributions.) Observe that

$$q_j = Q(a_j), \qquad p_j = P(a_j).$$

Next we consider commutators with $M$ and use the first representation (2.1) of $K(x,y)$. We have immediately

$$[M,\,K] \doteq \left( \varphi(x)\psi(y) - \psi(x)\varphi(y) \right)\chi_J(y),$$

and so, by (2.3) again,

$$[M,\,(I-K)^{-1}] \doteq Q(x)\,\left( (I-K^t)^{-1}\psi\chi_J \right)(y) - P(x)\,\left( (I-K^t)^{-1}\varphi\chi_J \right)(y). \tag{2.9}$$

Notice that since

$$(I-K^t)^{-1}\psi\chi_J = (I-K)^{-1}\psi = P \quad \text{on } J,$$

and similarly for $\varphi$, $Q$, we deduce

$$R(x,y) = \frac{Q(x)P(y) - P(x)Q(y)}{x-y} \quad (x,y \in J).$$

In particular we have

$$R(a_j,a_k) = \frac{q_j p_k - p_j q_k}{a_j - a_k} \quad (j \neq k), \tag{2.10}$$

$$R(x,x) = Q'(x)P(x) - P'(x)Q(x) \quad (x \in J). \tag{2.11}$$

In order to compute $R(a_j,a_j)$, and also the derivatives in (1.11) and (1.12), we must find $Q'(x)$ and $P'(x)$. We begin with the obvious

$$x\,Q'(x) = MD(I-K)^{-1}\varphi\,(x) = (I-K)^{-1}MD\varphi\,(x) + [MD,\,(I-K)^{-1}]\,\varphi\,(x).$$

Using (2.7), and recalling (1.6) and (2.5), we find that

$$x\,Q'(x) = P(x) + \frac{1}{4}\,Q(x)\,u - \sum_k (-1)^k a_k\, R(x,a_k)\, q_k. \tag{2.12}$$

Similarly, replacing $\varphi$ by $\psi$ in this derivation gives

$$x\,P'(x) = (I-K)^{-1}MD\psi\,(x) + \frac{1}{4}\,Q(x)\,v - \sum_k (-1)^k a_k\, R(x,a_k)\, p_k. \tag{2.13}$$

To evaluate the first term on the right side we use the fact that $\varphi$ satisfies the differential equation

$$x^2\varphi''(x) + x\varphi'(x) + \frac{1}{4}(x - \alpha^2)\,\varphi(x) = 0, \tag{2.14}$$

which may be rewritten $MD\psi(x) = \frac{1}{4}(\alpha^2 - x)\varphi$. Hence

$$(I-K)^{-1}MD\psi\,(x) = \frac{\alpha^2}{4}\,Q(x) - \frac{1}{4}\,(I-K)^{-1}M\varphi\,(x) = \frac{\alpha^2 - x}{4}\,Q(x) + \frac{1}{4}\,[M,\,(I-K)^{-1}]\,\varphi\,(x). \tag{2.15}$$

But we find, using (2.9), that

$$[M,\,(I-K)^{-1}]\,\varphi\,(x) = Q(x)\,v - P(x)\,u,$$

and combining this with (2.13) and (2.15) gives

$$x\,P'(x) = \frac{1}{4}(\alpha^2 - x)\,Q(x) + \frac{1}{2}\,Q(x)\,v - \frac{1}{4}\,P(x)\,u - \sum_k (-1)^k a_k\, R(x,a_k)\, p_k. \tag{2.16}$$

It follows from (2.11), (2.12) and (2.16) that for $x \in J$

$$a_j R(a_j,a_j) = p_j^2 - \frac{1}{4}\left(\alpha^2 - a_j + 2v\right) q_j^2 + \frac{1}{2}\, p_j q_j u + \sum_{k \neq j} (-1)^k a_k\, R(a_j,a_k)\,(q_j p_k - p_j q_k).$$

In view of (2.10) this is equation (1.15).

We now derive the differential equations (1.9)–(1.14). First, we have the easy fact that

$$\frac{\partial K}{\partial a_k} \doteq (-1)^k\, K(x,a_k)\,\delta(y - a_k),$$

and so by (2.4)

$$\frac{\partial}{\partial a_k}\,(I-K)^{-1} \doteq (-1)^k\, R(x,a_k)\,\rho(a_k,y). \tag{2.17}$$

At this point we use the notations $Q(x,a)$, $P(x,a)$ for $Q(x)$ and $P(x)$ to remind ourselves that they are functions of $a$ as well as $x$. We deduce immediately from (2.17) and (2.8) that

$$\frac{\partial}{\partial a_k}\,Q(x,a) = (-1)^k\, R(x,a_k)\, q_k, \qquad \frac{\partial}{\partial a_k}\,P(x,a) = (-1)^k\, R(x,a_k)\, p_k. \tag{2.18}$$

Since $q_j = Q(a_j,a)$ and $p_j = P(a_j,a)$ this gives

$$\frac{\partial q_j}{\partial a_k} = (-1)^k\, R(a_j,a_k)\, q_k, \qquad \frac{\partial p_j}{\partial a_k} = (-1)^k\, R(a_j,a_k)\, p_k \quad (j \neq k).$$

In view of (2.10) again, these are equations (1.9) and (1.10). Moreover

$$\frac{\partial q_j}{\partial a_j} = \left( \frac{\partial}{\partial x} + \frac{\partial}{\partial a_j} \right) Q(x,a)\Big|_{x=a_j}, \qquad \frac{\partial p_j}{\partial a_j} = \left( \frac{\partial}{\partial x} + \frac{\partial}{\partial a_j} \right) P(x,a)\Big|_{x=a_j}.$$

Equations (1.11) and (1.12) follow from this, (2.18), (2.12), (2.16) and (2.10). Finally, using the definition of $u$ in (2.5), the fact

$$\frac{\partial}{\partial a_j}\,\chi_J(y) = (-1)^j\,\delta(y - a_j),$$

and (2.17), we find that

$$\frac{\partial u}{\partial a_j} = (-1)^j\,\varphi(a_j)\,q_j + (-1)^j\,(\varphi\chi_J,\,R(\cdot,a_j))\,q_j.$$

But

$$(\varphi\chi_J,\,R(\cdot,a_j)) = \int_J R(x,a_j)\,\varphi(x)\,dx = \int_J R(a_j,x)\,\varphi(x)\,dx,$$

since $R(x,y) = R(y,x)$ for $x,y \in J$. Since $R(y,x) = 0$ for $x \notin J$ the last integral equals

$$\int_0^\infty R(a_j,x)\,\varphi(x)\,dx = q_j - \varphi(a_j).$$

This gives (1.13), and (1.14) is completely analogous.

We end this section with two relationships (analogues of (2.18) and (2.19) of [28]) which would allow us to express $u$ and $v$ in terms of the $q_j$ and $p_j$ if we wished to do so. (They will also be needed in the next section.) These are

$$2v + \frac{1}{4}\,u^2 + u = \sum_j (-1)^j a_j q_j^2, \tag{2.19}$$

$$u = \sum_j (-1)^j \left( 4p_j^2 - (\alpha^2 - a_j + 2v)\,q_j^2 + 2 p_j q_j u \right). \tag{2.20}$$

To obtain the first of these observe that (1.9) and (1.11) imply

$$\left( \sum_k a_k \frac{\partial}{\partial a_k} \right) q_j = p_j + \frac{1}{4}\,q_j u,$$

while from (1.13) and (1.14),

$$\frac{\partial}{\partial a_j}\left( 2v + \frac{1}{4}\,u^2 \right) = 2\,(-1)^j\,q_j\left( p_j + \frac{1}{4}\,q_j u \right).$$

If we multiply both sides of the previous formula by $(-1)^j a_j q_j$ and sum over $j$, what we obtain may be written

$$\left( \sum_k a_k \frac{\partial}{\partial a_k} \right)\left( \sum_j (-1)^j a_j q_j^2 \right) - \sum_k (-1)^k a_k q_k^2 = \left( \sum_k a_k \frac{\partial}{\partial a_k} \right)\left( 2v + \frac{1}{4}\,u^2 \right),$$

or equivalently

$$\left( \sum_k a_k \frac{\partial}{\partial a_k} \right)\left( \sum_j (-1)^j a_j q_j^2 \right) = \left( \sum_k a_k \frac{\partial}{\partial a_k} \right)\left( 2v + \frac{1}{4}\,u^2 + u \right).$$

It follows that the two sides of (2.19) differ by a function of $(a_1,\dots,a_{2m})$ which is invariant under scalar multiplication. Since, as is easily seen, both sides vanish when all $a_j = 0$, their difference must vanish identically.

To deduce (2.20) we multiply (2.17) by $a_k$ and sum over $k$, and then add the result to (2.7), recalling (2.6), to obtain

$$\left( x\frac{\partial}{\partial x} + y\frac{\partial}{\partial y} + I + \sum_k a_k \frac{\partial}{\partial a_k} \right) R(x,y) = \frac{1}{4}\,Q(x)\,Q(y)$$

for $x,y \in J$. This gives

$$\left( \sum_k a_k \frac{\partial}{\partial a_k} \right)\left( a_j R(a_j,a_j) \right) = \frac{1}{4}\,a_j q_j^2 = \frac{1}{4}\,a_j\,(-1)^j\,\frac{\partial u}{\partial a_j}.$$

If we multiply both sides of this by $(-1)^j$ and sum over $j$ we deduce, by an argument similar to the one just used, that

$$\sum_j (-1)^j a_j R(a_j,a_j) = \frac{1}{4}\,u. \tag{2.21}$$

Substituting for $a_j R(a_j,a_j)$ here the right side of (1.15), we see that the resulting double sum vanishes, and (2.20) results.

B. The ordinary differential equation

In this section we specialize to the case $J = (0,s)$ and derive (among other things) the differential equation (1.16) and the representation (1.19). In the notation of the last section $m = 1$, $a_1 = 0$, $a_2 = s$. We shall write $q(s)$, $p(s)$, $R(s)$ for $q_2$, $p_2$, $R(s,s)$, respectively. Equations (1.11)–(1.14) become

$$s q' = p + \frac{1}{4}\,q u, \tag{2.22}$$

$$s p' = \frac{1}{4}(\alpha^2 - s)\,q + \frac{1}{2}\,q v - \frac{1}{4}\,p u, \tag{2.23}$$

$$u' = q^2, \tag{2.24}$$

$$v' = p q. \tag{2.25}$$

It is immediate from (2.21) and (2.24) that

$$(s R(s))' = \frac{1}{4}\,q(s)^2, \tag{2.26}$$

and since

$$\frac{d}{ds}\,\log D(J;\lambda) = -R(s)$$

(see (1.5)), we obtain the representation (1.19).

To obtain the differential equation (1.16) we apply $s\frac{d}{ds}$ to both sides of (2.22) and use (2.22), (2.23) and (2.24). What results is

$$s\,(s q')' = \frac{1}{4}(\alpha^2 - s)\,q + \frac{1}{16}\,(u^2 + 8v)\,q + \frac{1}{4}\,s q^3. \tag{2.27}$$

But (2.19) in this case is

$$u^2 + 8v = 4 s q^2 - 4u,$$

and so the above can be written

$$s\,(s q')' = \frac{1}{4}(\alpha^2 - s)\,q - \frac{1}{4}\,u q + \frac{1}{2}\,s q^3. \tag{2.28}$$

Next, we square both sides of (2.22) and use (2.20), which now says

$$u = 4p^2 - (\alpha^2 - s + 2v)\,q^2 + 2 p q u,$$

and find that

$$(s q')^2 = \frac{1}{4}\,u + \frac{1}{4}(\alpha^2 - s)\,q^2 + \frac{1}{16}\,q^2 (u^2 + 8v).$$
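As a consistency check (ours, not from the paper), the specialized system (2.22)–(2.25) can be integrated numerically. For $\alpha = 0$, $\lambda = 1$ the exact solution noted at the end of section I C is $q \equiv 1$, and one verifies directly that $u = s$, $p = -s/4$, $v = -s^2/8$ then solve (2.22)–(2.25); integrating from a small $s_0$ with these initial values should therefore stay on this solution.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(s, w, alpha=0.0):
    """Right-hand side of the system (2.22)-(2.25), with w = (q, p, u, v)."""
    q, p, u, v = w
    dq = (p + 0.25 * q * u) / s
    dp = (0.25 * (alpha**2 - s) * q + 0.5 * q * v - 0.25 * p * u) / s
    du = q**2
    dv = p * q
    return [dq, dp, du, dv]

s0, s1 = 1e-4, 2.0
# exact alpha = 0, lambda = 1 values at s0: q = 1, p = -s/4, u = s, v = -s^2/8
w0 = [1.0, -s0 / 4, s0, -s0**2 / 8]
sol = solve_ivp(rhs, (s0, s1), w0, rtol=1e-10, atol=1e-12)
q1, p1, u1, v1 = sol.y[:, -1]
print(q1, u1)  # should remain close to 1 and to s1 = 2
```

With $q \equiv 1$, relation (2.26) gives $sR(s) = u/4 = s/4$, recovering $\sigma(s;1) = s/4$ and hence $D(J;1) = e^{-s/4}$ through (1.19).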
