Unification of Maxwell-Boltzmann, Bose-Einstein statistics and Zipf-Mandelbrot Law with Law of Large Numbers and corresponding Fluctuation theorem

Vassili Kolokoltsov ([email protected]), Tomasz M. Lapinski ([email protected])

University of Warwick, Department of Statistics, CV4 7AL Coventry, UK

January 28, 2015

Abstract

A rigorous version of the result that unifies Maxwell-Boltzmann, Bose-Einstein statistics and the Zipf-Mandelbrot Law is presented in this paper. Additionally, a Fluctuation theorem is included. Both results are set in a rigorous probabilistic framework. The system under consideration is composed of a fixed number of particles, with energy smaller than some prescribed value. The particles are redistributed over a fixed number of energy levels and are indistinguishable within one level. Further, degenerations of the energy levels are included, and their number increases as the number of particles increases. The three distributions mentioned in the title correspond to three asymptotic cases for the rate of increase of the degenerations as a function of the number of particles. When we take the thermodynamic limit in the three cases we essentially obtain the Law of Large Numbers, and the three resulting means are our distributions. The fluctuations, i.e. the deviations from the mean, turn out to depend on the Entropy of the system. When its maximum is in the interior of the domain we get a Gaussian distribution. When it is on the boundary we get a discrete distribution in the direction orthogonal to the boundary on which the maximum is situated, and a Gaussian in the other directions. The proof is similar for both results. Its main part is the properties and asymptotic behaviour of the Entropy. Due to these we can optimize the Entropy and thereby apply probabilistic results whose underlying assumptions overlap with ours, eventually obtaining the desired results.

Key words: Bose-Einstein, Maxwell-Boltzmann, Zipf-Mandelbrot, law of large numbers, fluctuations, Entropy

1 Introduction

Maxwell-Boltzmann and Bose-Einstein statistics are among the best known results of Statistical Mechanics. The Zipf Law, in turn, occurs widely in Complexity Science. For these basic results we develop a rigorous result which unifies them under one framework. Additionally, we provide a corresponding fluctuation theorem and develop our results in a formal probabilistic manner.

The underlying probabilistic system is defined independently of the context of the application. However, for illustrative purposes we consider a physical context, namely the ideal gas. We consider a system of particles in which the average energy per particle is smaller than or equal to some fixed value. It is similar to the microcanonical ensemble, but the energy is not fixed; it can take values in some prescribed range. However, our results can easily be adapted to the case when the value of the energy lies within some small bounded interval. Further, there is a fixed set of energies which particles can have, i.e. energy levels, and the number of these energy levels is also fixed. Each energy level has a number of level degenerations. These degenerations have an influence when counting the possible redistributions of particles over the energy levels. The Entropy of the system is the logarithm of the product of combinatorial formulas counting the possible configurations on each energy level, assuming indistinguishability of particles having the same energy.
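To make the counting concrete (a standard stars-and-bars identity, stated here for the reader's convenience): a single level occupied by $N_i$ indistinguishable particles spread over $G_i$ degenerate states admits

$$\binom{N_i + G_i - 1}{G_i - 1} = \frac{(N_i + G_i - 1)!}{N_i!\,(G_i - 1)!}$$

configurations. For instance, $N_i = 2$ particles on a level with $G_i = 3$ degenerate states give $\binom{4}{2} = 6$ configurations, and the Entropy of the whole system is the sum over the levels of the logarithms of these counts.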
There can be three types of system. We assume that the number of degenerations depends on, and increases with, the number of particles. We consider three cases depending on how strong this increase is: the degenerations can increase at the same rate as the number of particles, faster, or slower. This idea was introduced by Maslov [2005a]. For each of these cases we obtain a different statistic: respectively Bose-Einstein, Maxwell-Boltzmann and the Zipf-Mandelbrot Law. Normally, degenerations correspond to the fact that there are energy levels which differ by very small energy values and are therefore considered as the same level. In our case it is an open question what physical interpretation this changing number of degenerations can have.

The probability space which underlies the results is the following. The sample space consists of the configurations of particles over the energy levels, divided by the total number of particles. These configurations are constrained by the fixed average energy and the total number of particles. The sigma-algebra is discrete, and the probability mass function of a particular element of the sample space is the exponential of its entropy, normalized by the partition function of the system.

Our first result is a weak law of large numbers. The distribution of particles over the energy levels converges to its mean as the number of particles tends to infinity, i.e. in the thermodynamic limit. The three distributions mentioned in the title are obtained for the different rates of change of the number of degenerations. Additionally, we provide a rate of convergence to the mean. A similar but not mathematically rigorous result was introduced in Maslov [2005a] and Maslov [2005b].

The second result gives the distribution of the fluctuations around the mean as the system tends to infinite size. Since the Entropy can have its maximum on the boundary or in the interior of the domain, depending on the relation of the fixed average energy per particle to the average energy of the energy levels, we have two cases. The idea of the two cases was first mentioned in Maslov [2005a]. For the maximum of the Entropy in the interior of the domain we have Gaussian fluctuations. When the maximum is on the boundary we have a discrete distribution in the direction orthogonal to the boundary on which the maximum is situated, and Gaussian in the other directions. Here we also provide a rate of convergence to the limiting distribution.

The proof of both results is similar. The essence of the proof is the properties of the Entropy and its asymptotic behaviour. Due to them we are able to optimize the Entropy, and therefore distinguish the two types of its maximum and find the statistics explicitly. Similar calculations, partially rigorous and not complete, were done in Maslov [2004]. The asymptotic properties of the Entropy and the considered probabilistic system overlap with the assumptions of the probabilistic results in Kolokoltsov and Lapinski [2015]. We apply the theorems from that paper for both types of maximum and for the three cases of degenerations, and obtain our limit theorems.

The paper is divided into several sections and an appendix. The next, second section introduces the mathematical setting: the underlying probabilistic system, a random variable and the three cases of degenerations. In the third section we introduce and prove the main results of the paper, two limit theorems. The fourth section is devoted to the properties of the Entropy, stated as a Lemma with a formal proof. The last section consists of the optimization of the Entropy, with related results needed in the optimization.
In the Appendix we provide some relatively basic results needed throughout the paper.

2 Mathematical setting

For given integers $G, N > 0$, a real number $E > 0$ and a mapping $\varepsilon : \{1,2,\dots,G\} \to \mathbb{R}$ we introduce a probability space. The elementary events are uniformly distributed $G$-dimensional vectors of nonnegative integers $n_i$, $i = 1,\dots,G$, satisfying the constraints

$$N = n_1 + n_2 + \dots + n_G, \qquad (1)$$

$$EN \ge \varepsilon(1)n_1 + \varepsilon(2)n_2 + \dots + \varepsilon(G)n_G. \qquad (2)$$

In physics such a system is called a micro-canonical ensemble. An arbitrary elementary event can be illustrated as a random distribution of $N$ balls in $G$ boxes. Moreover, each box has a 'weight' coefficient $\varepsilon(i)$, and the total 'weight' must be less than or equal to $EN$.

Furthermore, let us denote the image of the function $\varepsilon$ by the set $\{\varepsilon_1,\varepsilon_2,\dots,\varepsilon_m\}$; without loss of generality it can be ordered, $\varepsilon_1 < \varepsilon_2 < \dots < \varepsilon_m$. To each element of the set corresponds a positive integer $G_i$, $i = 1,2,\dots,m$, representing the number of points in the domain of $\varepsilon$ having the value $\varepsilon_i$, so that $G = \sum_{i=1}^m G_i$.

We can use this setting to define the probability space in an alternative way. We consider the values $G_i$ and $\varepsilon_i$, $i = 1,\dots,m$, instead of the mapping $\varepsilon$. Respectively, the conditions (1) and (2) are reformulated as

$$N = N_1 + N_2 + \dots + N_m, \qquad (3)$$

$$EN \ge \varepsilon_1 N_1 + \varepsilon_2 N_2 + \dots + \varepsilon_m N_m, \qquad (4)$$

where $N_i = n_{G_1+\dots+G_{i-1}+1} + n_{G_1+\dots+G_{i-1}+2} + \dots + n_{G_1+\dots+G_{i-1}+G_i}$ for $i = 1,\dots,m$. This situation can be illustrated as distributing $N$ balls over $m$ 'bigger' boxes, to each of which corresponds a unique value $\varepsilon_i$; then within each $i$-th 'bigger' box the balls are distributed over $G_i$ boxes.

For given vectors $\mathcal{N} = (N_1,\dots,N_m)$ and $\mathcal{G} = (G_1,\dots,G_m)$, the logarithm of the number of different combinations which can occur in such a redistribution is denoted by $S(\mathcal{N})$ and called the Entropy. We count those combinations using the combinatorial formula for the number of unordered arrangements of size $r$ drawn from $n$ objects,

$$S(\mathcal{N}) = \ln\prod_{i=1}^m \frac{(N_i+G_i-1)!}{N_i!\,(G_i-1)!}. \qquad (5)$$

Let us consider the discrete random vector $X_N = (X_1,X_2,\dots,X_m)$, where $X_i = N_i/N$, $i = 1,\dots,m$. The corresponding sample space, given by the transformed conditions (3) and (4), is

$$1 = x_1 + x_2 + \dots + x_m,$$
$$E \ge \varepsilon_1 x_1 + \varepsilon_2 x_2 + \dots + \varepsilon_m x_m, \qquad (6)$$
$$x_i \in \left\{0, \frac1N, \frac2N, \dots, \frac{N-1}{N}, 1\right\},$$

and is denoted by $\Omega_{N,E}$; the corresponding entropy function is

$$S(x,N) = \ln\prod_{i=1}^m \frac{(x_iN+G_i-1)!}{(x_iN)!\,(G_i-1)!}.$$

The probability mass function (pmf) of the random variable $X_N$ is given by

$$\Pr(X_N = x) = \frac{1}{Z(N,E)}\,e^{S(x,N)}, \qquad (7)$$

where $Z(N,E)$ is a normalization constant specified by

$$Z(N,E) = \sum_{\Omega_{N,E}} e^{S(x,N)}, \qquad (8)$$

which is the total number of elementary events in the sample space $\Omega_{N,E}$. Sometimes $Z(N,E)$ is called the partition function.

We are interested in the behaviour of the random vector $X_N$ as $N \to \infty$. Since the domain $\Omega_{N,E}$ depends on $N$, the standard pointwise convergence of functions does not apply. However, we overcome that difficulty. For a fixed point $x$ of the domain we choose a sequence of points $x(N)$ as $N \to \infty$; these points are, for each $N$, the closest points of the domain to the fixed point $x$. Since the number of points of the domain increases with $N$ and they are evenly distributed, $x(N) \to x$. This way we obtain convergence at every fixed point of the domain, and therefore have convergence analogous to pointwise convergence.

We consider the particular case where $G = G(N)$ is an increasing function of $N$. Moreover, for each $N$ the components $G_i$ are equally weighted and their number $m$ remains constant, which means that for all $N$, $G_i = g_i G(N)$ for $i = 1,\dots,m$ and some constants $g_i$ such that $\sum_{i=1}^m g_i = 1$.
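To make the setting above concrete, the following minimal Python sketch enumerates the sample space $\Omega_{N,E}$ and evaluates the pmf (7) through the entropy (5) and the partition function (8). The parameter values ($m$, $N$, $E$, $\varepsilon_i$, $G_i$) are illustrative assumptions, not values taken from the paper.

```python
# Minimal numerical sketch of the probability space of Section 2.
# All parameter values below are illustrative assumptions.
from itertools import product
from math import lgamma, exp

m = 3                      # number of distinct energy levels (assumed)
N = 10                     # number of particles (assumed)
E = 2.0                    # bound on the average energy per particle (assumed)
eps = [1.0, 2.0, 3.0]      # level energies epsilon_1 < ... < epsilon_m (assumed)
G = [4, 4, 4]              # level degenerations G_i (assumed)

def entropy(Ns):
    """S(N) = ln prod_i (N_i + G_i - 1)! / (N_i! (G_i - 1)!), eq. (5)."""
    return sum(lgamma(n + g) - lgamma(n + 1) - lgamma(g)
               for n, g in zip(Ns, G))

# Elementary events: occupation vectors (N_1, ..., N_m) satisfying (3) and (4).
states = [Ns for Ns in product(range(N + 1), repeat=m)
          if sum(Ns) == N and sum(e * n for e, n in zip(eps, Ns)) <= E * N]

Z = sum(exp(entropy(Ns)) for Ns in states)          # partition function (8)
pmf = {Ns: exp(entropy(Ns)) / Z for Ns in states}   # pmf (7), with x_i = N_i / N

best = max(pmf, key=pmf.get)
print(f"|Omega| = {len(states)}, most probable x = "
      f"{[n / N for n in best]} with pmf {pmf[best]:.4f}")
```

Computing $S$ through log-Gamma values rather than raw factorials keeps $e^{S(x,N)}$ from overflowing for moderate $N$.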
We distinguish three cases of the function $G(N)$, depending on its asymptotic behaviour as $N \to \infty$:

$$1)\ \frac{G(N)}{N} \to \infty, \qquad 2)\ \frac{G(N)}{N} \to c, \qquad 3)\ \frac{G(N)}{N} \to 0, \qquad (9)$$

where $c$ is some positive constant. The idea of the three asymptotic cases is adopted from the paper Maslov [2005a].

3 Main results

Theorem 1 (Weak law of large numbers). Let $X_N$ be the $m$-dimensional discrete random vector on the sample space $\Omega_{N,E}$ with pmf specified by (7). As $N \to \infty$ the random vector $X_N$ converges in distribution to the constant vector $x^* = (x_1^*, x_2^*, \dots, x_m^*)$. The exact values of the components of $x^*$ depend on the sample space parameter $E$. Let $\overline{g\varepsilon} = \sum_{i=1}^m g_i\varepsilon_i$; then:

a) When $\varepsilon_1 < E < \overline{g\varepsilon}$, the components of $x^*$ are

$$1)\ x_i^* = \frac{g_i}{e^{\lambda\varepsilon_i+\nu}}, \quad \text{if } \frac{G(N)}{N}\to\infty,$$
$$2)\ x_i^* = \frac{g_i}{e^{\lambda\varepsilon_i+\nu}-1}, \quad \text{if } \frac{G(N)}{N}\to c,$$
$$3)\ x_i^* = \frac{g_i}{\lambda\varepsilon_i+\nu}, \quad \text{if } \frac{G(N)}{N}\to 0,$$

for $i = 1,\dots,m$, and the parameters $\lambda$ and $\nu$ are the solution of the system of equations

$$1 = \sum_{i=1}^m x_i^*, \qquad E = \sum_{i=1}^m \varepsilon_i x_i^*.$$

b) When $E \ge \overline{g\varepsilon}$, the components of $x^*$ are $x_i^* = g_i$, $i = 1,\dots,m$.

Further, we have the following estimates for each case of $G(N)$ given by (9):

$$1)\ M_{X_N}(\xi) = e^{\xi^T x^*} + O\!\left(\frac{1}{N^{1/2-3\epsilon}}\right), \ \text{when } \frac{1}{N^{1/2-3\epsilon}} \gg \frac{N}{G(N)},$$
$$\phantom{1)}\ M_{X_N}(\xi) = e^{\xi^T x^*} + O\!\left(\frac{N}{G(N)}\right), \ \text{when } \frac{N}{G(N)} \gg \frac{1}{N^{1/2-3\epsilon}},$$
$$2)\ M_{X_N}(\xi) = e^{\xi^T x^*} + O\!\left(\frac{1}{N^{1/2-3\epsilon}}\right), \ \text{when } \frac{1}{N^{1/2-3\epsilon}} \gg c(N),$$
$$\phantom{2)}\ M_{X_N}(\xi) = e^{\xi^T x^*} + O(c(N)), \ \text{when } c(N) \gg \frac{1}{N^{1/2-3\epsilon}},$$
$$3)\ M_{X_N}(\xi) = e^{\xi^T x^*} + O\!\left(\frac{1}{G(N)^{1/2-3\epsilon}}\right), \ \text{when } \frac{1}{G(N)^{1/2-3\epsilon}} \gg \frac{G(N)}{N},$$
$$\phantom{3)}\ M_{X_N}(\xi) = e^{\xi^T x^*} + O\!\left(\frac{G(N)}{N}\right), \ \text{when } \frac{G(N)}{N} \gg \frac{1}{G(N)^{1/2-3\epsilon}},$$

where $\epsilon \in (0, \min\{1/2m, 1/6\})$ is a constant, $N \to \infty$, and $M_{X_N}(\xi)$ is the moment generating function of the random vector $X_N$. In the second case $\frac{G(N)}{N} = c + O(c(N))$, where $c(N)$ is a positive, decreasing function, and we define $f(x) \gg g(x) \iff \lim_{x\to\infty} f(x)/g(x) = \infty$.

Proof. Let us include the first constraint of the domain $\Omega_{N,E}$ in (6) by substituting the last component of $x$, $x_m = 1 - \sum_{i=1}^{m-1}x_i$, into the function $S(x,N)$. Then we can define a new $(m-1)$-dimensional random vector $X_N' = (X_1,\dots,X_{m-1})$ whose pmf is (7) with the substitution $x_m = 1 - \sum_{i=1}^{m-1}x_i$. The corresponding mgf of $X_N'$ can be obtained from the mgf of $X_N$:

$$M_{X_N}(\xi) = E[e^{\xi^T X_N}] = E[e^{\xi'^T X_N' + \xi_m(1 - X_1 - \dots - X_{m-1})}] = E[e^{(\xi'-\xi_m)^T X_N' + \xi_m}] = e^{\xi_m}M_{X_N'}(\xi' - \xi_m). \qquad (10)$$

Since the function $S(x,N)$ with $x_m = 1 - \sum_{i=1}^{m-1}x_i$ has the properties given by Lemma 1 of the Entropy properties section, we can apply Theorem 1 from Kolokoltsov and Lapinski [2015]. We do this for all the cases of $G(N)$ and obtain our theorem for the random vector $X_N'$, except for the exact values of the maxima. The estimates for the r.v. $X_N'$ are the following:

$$1)\ M_{X_N'}(\xi'-\xi_m) = e^{(\xi'-\xi_m)^T x'^*} + O\!\left(\frac{1}{N^{1/2-3\epsilon}}\right), \ \text{when } \frac{1}{N^{1/2-3\epsilon}} \gg \frac{N}{G(N)},$$
$$\phantom{1)}\ M_{X_N'}(\xi'-\xi_m) = e^{(\xi'-\xi_m)^T x'^*} + O\!\left(\frac{N}{G(N)}\right), \ \text{when } \frac{N}{G(N)} \gg \frac{1}{N^{1/2-3\epsilon}},$$
$$2)\ M_{X_N'}(\xi'-\xi_m) = e^{(\xi'-\xi_m)^T x'^*} + O\!\left(\frac{1}{N^{1/2-3\epsilon}}\right), \ \text{when } \frac{1}{N^{1/2-3\epsilon}} \gg c(N),$$
$$\phantom{2)}\ M_{X_N'}(\xi'-\xi_m) = e^{(\xi'-\xi_m)^T x'^*} + O(c(N)), \ \text{when } c(N) \gg \frac{1}{N^{1/2-3\epsilon}},$$
$$3)\ M_{X_N'}(\xi'-\xi_m) = e^{(\xi'-\xi_m)^T x'^*} + O\!\left(\frac{1}{G(N)^{1/2-3\epsilon}}\right), \ \text{when } \frac{1}{G(N)^{1/2-3\epsilon}} \gg \frac{G(N)}{N},$$
$$\phantom{3)}\ M_{X_N'}(\xi'-\xi_m) = e^{(\xi'-\xi_m)^T x'^*} + O\!\left(\frac{G(N)}{N}\right), \ \text{when } \frac{G(N)}{N} \gg \frac{1}{G(N)^{1/2-3\epsilon}}.$$

We obtain the estimate of our theorem by reversing the transformation (10) on the estimate:

$$e^{\xi_m}e^{(\xi'-\xi_m)^T x'^*} = e^{\xi'^T x'^* + \xi_m(1 - x_1^* - \dots - x_{m-1}^*)} = e^{\xi'^T x'^* + \xi_m x_m^*} = e^{\xi^T x^*},$$

where $x^*$ is the $m$-dimensional vector which is the maximum of $S(x,N)$ in the limit as $N \to \infty$. The exact values of the maximum we obtain from Lemma 2 of the Entropy properties section. Hence we obtain our theorem.
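As an illustration of Theorem 1 a), the parameters $\lambda$ and $\nu$ can be found numerically from the two constraint equations. The sketch below does this for all three cases of $G(N)$; the numeric values of $\varepsilon$, $g$, $E$ and the initial guess for the root finder are assumptions chosen purely for illustration.

```python
# Hedged numerical sketch of Theorem 1 a): solve for (lambda, nu) so that
# the candidate mean x* satisfies sum_i x*_i = 1 and sum_i eps_i x*_i = E.
# The formulas for x*_i are those stated in the theorem; the inputs are assumed.
import numpy as np
from scipy.optimize import fsolve

eps = np.array([1.0, 2.0, 3.0])   # energy levels (assumed)
g = np.array([0.5, 0.3, 0.2])     # degeneration weights, summing to 1 (assumed)
E = 1.5                            # chosen so that eps_1 < E < sum_i g_i eps_i = 1.7

def x_star(lam, nu, case):
    if case == 1:                  # G(N)/N -> infinity: Maxwell-Boltzmann
        return g / np.exp(lam * eps + nu)
    if case == 2:                  # G(N)/N -> c: Bose-Einstein
        return g / (np.exp(lam * eps + nu) - 1.0)
    return g / (lam * eps + nu)    # G(N)/N -> 0: Zipf-Mandelbrot

def constraints(p, case):
    x = x_star(p[0], p[1], case)
    return [x.sum() - 1.0, (eps * x).sum() - E]

for case in (1, 2, 3):
    lam, nu = fsolve(constraints, x0=[1.0, 1.0], args=(case,))  # assumed guess
    x = x_star(lam, nu, case)
    print(f"case {case}: lambda={lam:.4f}, nu={nu:.4f}, x*={np.round(x, 4)}")
```

Under these assumed values case a) of the theorem applies, since $E = 1.5 < \overline{g\varepsilon} = 1.7$; choosing $E \ge \overline{g\varepsilon}$ instead would put the maximum at $x^* = g$ with no equations to solve.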
For the discrete random vector $X_N$ on the sample space $\Omega_{N,E}$ with pmf specified by (7), we define an $m$-dimensional random vector $Y_N$. We consider two cases of $Y_N$, for the two types of maximum of the function $S(x,N)$ which occurs in the pmf. These cases are defined by Lemma 2 of the Entropy properties section; the type of maximum is distinguished by the value of the sample space parameter $E$. The random vector $Y_N$ has the forms

$$a)\ Y_N = h(N)(X_N - x^*), \ \text{when } E \ge \overline{g\varepsilon},$$
$$b)\ Y_N = N(V_1 - v_1^*) + \sqrt{N}(\hat V - \hat v^*), \ \text{when } \varepsilon_1 < E < \overline{g\varepsilon}, \qquad (11)$$

where $x^*$, with $x^* = Tv^* = T(v_1^*, \hat v^*)$, is the maximum of the function $S$ on $\Omega_{N,E}$. The transformation $x = Tv$ is a rotation of the coordinate system such that the axis $v_1$ is orthogonal to the hyperplane of $\Omega_{N,E}$ on which the maximum is attained. In the new coordinate system we have the transformed random vector $X_N = TV = T(V_1, \hat V)$. Additionally, we assume that for the second type of maximum the hyperplane on which the maximum is attained has rational coefficients, i.e. the values $\varepsilon_1,\dots,\varepsilon_m$ are rational.

Theorem 2 (Fluctuation theorem). As $N \to \infty$, the random vector $Y_N$ defined above for the maximum of type a) converges in distribution, for all three cases of $G(N)$, to an $(m-1)$-dimensional random vector with normal distribution $\mathcal{N}(0, D^2s(x^*)^{-1})$, and the $m$-th random component $Y_m$ is given by a combination of the other random components, $Y_m = -\sum_{i=1}^{m-1}Y_i$. For the maximum of type b), $Y_N$ converges, for the first two cases of $G(N)$, to a mixture of an $(m-2)$-dimensional normal distribution $\mathcal{N}(0, \hat D^2 s(x^*)^{-1})$ along the variable $\hat V$ and a discrete distribution with pmf

$$\frac{\exp\{i\,s'(x^*)\}}{\sum_{i=1}^\infty \exp\{i\,s'(x^*)\}},$$

where $i$ is the index of points in the coordinate $v_1$ on the lattice $T(\Omega_N)$, starting from the maximal point $v_1^*$; the $m$-th component of $Y_N$ is given through the other components, $Y_m = -\sum_{i=2}^{m-1}b_iY_i$ in the limit, where the $b_i$ are some constants. The matrix of derivatives $D^2s(x^*)^{-1}$ is $(m-1)\times(m-1)$, $\hat D^2 s(x^*)^{-1}$ is the $(m-2)\times(m-2)$ matrix along $v_i$, $i = 2,\dots,m$, and $s'(x^*)$ is the derivative of $s$ along $v_1$. Additionally we have the restriction that the estimate is valid for a subsequence of integers $N$ whose elements are divisible by some integer $q$, where $\frac{p}{q} = v_1^*$ with $v^* = T^{-1}x^*$.

Furthermore, we have the following estimates for the maximum of type a):

$$1)\ M_{Y_N}(\xi) = e^{\frac12\xi^T D^2s(x^*)^{-1}\xi}\left(1 + O\!\left(\frac{1}{N^{1/2-3\epsilon}}\right)\right), \ \text{when } \frac{1}{N^{1-3\epsilon}} \gg \frac{N}{G(N)},$$
$$\phantom{1)}\ M_{Y_N}(\xi) = e^{\frac12\xi^T D^2s(x^*)^{-1}\xi}\left(1 + O\!\left(\frac{N^{3/2}}{G(N)}\right)\right), \ \text{when } \frac{N}{G(N)} \gg \frac{1}{N^{1-3\epsilon}},$$
$$2)\ M_{Y_N}(\xi) = e^{\frac12\xi^T D^2s(x^*)^{-1}\xi}\left(1 + O\!\left(\frac{1}{N^{1/2-3\epsilon}}\right)\right), \ \text{when } \frac{1}{N^{1-3\epsilon}} \gg c(N),$$
$$\phantom{2)}\ M_{Y_N}(\xi) = e^{\frac12\xi^T D^2s(x^*)^{-1}\xi}\left(1 + O\!\left(\sqrt{G(N)}\,c(N)\right)\right), \ \text{when } c(N) \gg \frac{1}{N^{1-3\epsilon}},$$
$$3)\ M_{Y_N}(\xi) = e^{\frac12\xi^T D^2s(x^*)^{-1}\xi}\left(1 + O\!\left(\frac{1}{G(N)^{1/2-3\epsilon}}\right)\right), \ \text{when } \frac{1}{G(N)^{1-3\epsilon}} \gg \frac{G(N)}{N},$$
$$\phantom{3)}\ M_{Y_N}(\xi) = e^{\frac12\xi^T D^2s(x^*)^{-1}\xi}\left(1 + O\!\left(\frac{G(N)^{3/2}}{N}\right)\right), \ \text{when } \frac{G(N)}{N} \gg \frac{1}{G(N)^{1-3\epsilon}},$$

and for type b):

$$1)\ M_{Y_N}(\xi) = \frac{\sum_{i=1}^\infty e^{i\,s'(x^*)+\xi_1 i}}{\sum_{i=1}^\infty e^{i\,s'(x^*)}}\, e^{\frac12\hat\xi^T\hat D^2s(x^*)^{-1}\hat\xi}\left(1 + O\!\left(\frac{1}{N^{1/2-3\epsilon}}\right)\right), \ \text{when } \frac{1}{N^{1-3\epsilon}} \gg \frac{N}{G(N)},$$
$$\phantom{1)}\ M_{Y_N}(\xi) = \frac{\sum_{i=1}^\infty e^{i\,s'(x^*)+\xi_1 i}}{\sum_{i=1}^\infty e^{i\,s'(x^*)}}\, e^{\frac12\hat\xi^T\hat D^2s(x^*)^{-1}\hat\xi}\left(1 + O\!\left(\frac{N^{3/2}}{G(N)}\right)\right), \ \text{when } \frac{N}{G(N)} \gg \frac{1}{N^{1-3\epsilon}},$$
$$2)\ M_{Y_N}(\xi) = \frac{\sum_{i=1}^\infty e^{i\,s'(x^*)+\xi_1 i}}{\sum_{i=1}^\infty e^{i\,s'(x^*)}}\, e^{\frac12\hat\xi^T\hat D^2s(x^*)^{-1}\hat\xi}\left(1 + O\!\left(\frac{1}{N^{1/2-3\epsilon}}\right)\right), \ \text{when } \frac{1}{N^{1-3\epsilon}} \gg c(N),$$
$$\phantom{2)}\ M_{Y_N}(\xi) = \frac{\sum_{i=1}^\infty e^{i\,s'(x^*)+\xi_1 i}}{\sum_{i=1}^\infty e^{i\,s'(x^*)}}\, e^{\frac12\hat\xi^T\hat D^2s(x^*)^{-1}\hat\xi}\left(1 + O\!\left(\sqrt{G(N)}\,c(N)\right)\right), \ \text{when } c(N) \gg \frac{1}{N^{1-3\epsilon}},$$

as $N \to \infty$, where $\epsilon \in (0,\min\{1/2m,1/6\})$ is some arbitrarily small constant and $M_{Y_N}$ is the moment generating function of the random vector $Y_N$.
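For orientation (a standard Gaussian mgf computation, added here for the reader's convenience): the leading factor in the type a) estimates is precisely the moment generating function of the limiting normal law. If $Y \sim \mathcal{N}(0,\Sigma)$ with $\Sigma = D^2s(x^*)^{-1}$, then

$$M_Y(\xi) = E\!\left[e^{\xi^T Y}\right] = \int_{\mathbb{R}^{m-1}} e^{\xi^T y}\,\frac{e^{-\frac12 y^T\Sigma^{-1}y}}{\sqrt{(2\pi)^{m-1}\det\Sigma}}\,dy = e^{\frac12\xi^T\Sigma\xi},$$

so the convergence $M_{Y_N}(\xi) \to e^{\frac12\xi^T D^2s(x^*)^{-1}\xi}$ identifies the limiting distribution, while the $O(\cdot)$ factors quantify the speed of this convergence.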
Proof. We start with the proof of case a). We reduce the dimension of the underlying random vector $X_N$. Using the constraint of the sample space (6) we reduce the dimension from $m$ to $m-1$, i.e. we apply the relation $x_m = 1 - x_1 - x_2 - \dots - x_{m-1}$. Then the first $m-1$ components of $Y_N$ are unchanged, while the $m$-th component equals

$$Y_m = \sqrt{N}(x_m - x_m^*) = \sqrt{N}\left(1 - \sum_{i=1}^{m-1}x_i - x_m^*\right) = \sqrt{N}\left(\sum_{i=1}^{m-1}x_i^* - \sum_{i=1}^{m-1}x_i\right) = -\sum_{i=1}^{m-1}Y_i,$$

where $x_m^* = 1 - \sum_{i=1}^{m-1}x_i^*$. Since the function $S(x,N)$ with $x_m = 1 - \sum_{i=1}^{m-1}x_i$ in the pmf (7) has the properties given by Lemma 1 in the Entropy properties section, the random vector $(Y_1,Y_2,\dots,Y_{m-1})$ with the considered pmf and sample space $\Omega_{N,E}$ fulfils, for the three cases of $G(N)$, the requirements of Theorem 2 from Kolokoltsov and Lapinski [2015]. We can apply that theorem and obtain the stated results.

For case b) we again reduce the dimension of the underlying random vector $X_N$. The vector $Y_N$ can alternatively be represented as

$$Y_N = NT_1^{-1}(x - x^*) + \sqrt{N}\hat T^{-1}(x - x^*),$$

where $T^{-1}x = v$, $T_1^{-1}$ is the first row of the matrix $T^{-1}$, and $\hat T^{-1}$ is composed of its rows $2,3,\dots,m$. If we set $x_m = 1 - \sum_{i=1}^{m-1}x_i$, then we can introduce an altered transformation $T'$ such that the random vector $(Y_1,Y_2,\dots,Y_{m-1}) = Y_N'$ can be represented as

$$Y_N' = N(v_1' - v_1'^*) + \sqrt{N}(\hat v' - \hat v'^*) = NT_1'^{-1}(x' - x'^*) + \sqrt{N}\hat T'^{-1}(x' - x'^*),$$

where $x' = (x_1,\dots,x_{m-1})$ and the prime generally denotes a reduction in dimension. Then we have for each component

$$Y_1 = NT_1'^{-1}(x' - x'^*), \qquad Y_j = \sqrt{N}T_j'^{-1}(x' - x'^*),$$

where $j = 2,\dots,m-1$. The component $Y_m$ we can represent as

$$Y_m = \sqrt{N}T_m^{-1}(x-x^*) = \sqrt{N}\sum_{i=1}^m t_{m,i}^{-1}(x_i - x_i^*) = \sqrt{N}\left(\sum_{i=1}^{m-1}t_{m,i}^{-1}(x_i-x_i^*) + t_{m,m}^{-1}(x_m - x_m^*)\right)$$
$$= \sqrt{N}\left(\sum_{i=1}^{m-1}t_{m,i}^{-1}(x_i-x_i^*) + t_{m,m}^{-1}\left(1 - \sum_{i=1}^{m-1}x_i - x_m^*\right)\right) = \sqrt{N}\left(\sum_{i=1}^{m-1}t_{m,i}^{-1}(x_i-x_i^*) - t_{m,m}^{-1}\sum_{i=1}^{m-1}(x_i-x_i^*)\right)$$
$$= \sqrt{N}\sum_{i=1}^{m-1}(t_{m,i}^{-1} - t_{m,m}^{-1})(x_i - x_i^*) = \sqrt{N}\,a^T(x' - x'^*),$$

where $a$ is some constant vector and the $t_{j,i}^{-1}$ are the components of $T^{-1}$. The resulting expression can, however, be represented as the linear combination

$$Y_m = \frac{b_1}{\sqrt{N}}Y_1 + \sum_{i=2}^{m-1}b_iY_i,$$

where $b_1,b_2,\dots,b_m$ are some constants. In the limit, as $N \to \infty$, the first term tends to 0. Since the function $S(x,N)$ with $x_m = 1 - \sum_{i=1}^{m-1}x_i$ in the pmf (7) has the properties given by Lemma 1 in the Entropy properties section, the random vector $Y_N'$ with the considered pmf and sample space $\Omega_{N,E}$ fulfils, for the first two cases of $G(N)$, the requirements of Theorem 2 from Kolokoltsov and Lapinski [2015]. We can apply that theorem and obtain the stated results.

4 Entropy properties

Lemma 1. The Entropy $S(x,N)$, as $N \to \infty$, for each case of $G(N)$ given by (9) and for all $x \in \Omega_E$, has the following properties:

$$\frac{\partial^2}{\partial x_i^2}S(x,N) < 0, \qquad \frac{\partial^2}{\partial x_i\partial x_j}S(x,N) = 0 \ \text{when } i \ne j, \ \text{for all } N, \qquad (12)$$

$$\lim_{N\to\infty}\frac{\partial^2}{\partial x_i^2}S(x,N) < 0. \qquad (13)$$

Additionally, for the first two cases of $G(N)$ and for the third, respectively, we have

$$1),2)\ DS(x,N) = N\left[Ds_l(x) + \sigma(x)\epsilon(N)\right], \ l = 1,2,$$
$$3)\ DS(x,N) = G(N)\left[Ds_3(x) + \sigma(x)\epsilon(N)\right], \qquad (14)$$

as $N \to \infty$, where $D$ is the differential operator, $\sigma(x)$ is some twice differentiable function of $x$, and

$$1)\ s_1(x) = \sum_{i=1}^m\left[x_i\ln\frac{g_i}{x_i} + x_i\right], \qquad (15)$$

$$2)\ s_2(x) = \sum_{i=1}^m\left[(x_i + g_ic)\ln(x_i + g_ic) - x_i\ln x_i\right], \qquad (16)$$

$$3)\ s_3(x) = \sum_{i=1}^m\left[g_i\ln x_i + g_i\right]. \qquad (17)$$
When $x_i = 0$ for some $i$-th component of $x$, the corresponding summand in the formula for $s_l$, $l = 1,2,3$, is equal to 0. The error term $\epsilon(N)$, for each case of $G(N)$ respectively, is defined as

$$1)\ \epsilon(N) = O\!\left(\frac1N\right), \ \text{when } \frac1N \gg \frac{N}{G(N)}, \qquad \epsilon(N) = O\!\left(\frac{N}{G(N)}\right), \ \text{when } \frac1N \ll \frac{N}{G(N)},$$
$$2)\ \epsilon(N) = O\!\left(\frac1N\right), \ \text{when } \frac1N \gg c(N), \qquad \epsilon(N) = O(c(N)), \ \text{when } \frac1N \ll c(N), \qquad (18)$$
$$3)\ \epsilon(N) = O\!\left(\frac1{G(N)}\right), \ \text{when } \frac1{G(N)} \gg \frac{G(N)}{N}, \qquad \epsilon(N) = O\!\left(\frac{G(N)}{N}\right), \ \text{when } \frac1{G(N)} \ll \frac{G(N)}{N},$$

where in the second case $\frac{G(N)}{N} = c + O(c(N))$, $c(N)$ is a positive, decreasing function, and we define $f(x) \gg g(x) \iff \lim_{x\to\infty}f(x)/g(x) = \infty$. The properties of $S(x,N)$ remain the same if we consider it as an $(m-1)$-dimensional function with the $m$-th component of $x$ defined as $x_m = 1 - \sum_{i=1}^{m-1}x_i$.

Proof. Since $\Gamma(N) = (N-1)!$, we can write

$$\frac{(x_iN + g_iG(N) - 1)!}{(x_iN)!\,(g_iG(N)-1)!} = \frac{\Gamma(x_iN + g_iG(N))}{\Gamma(x_iN+1)\,\Gamma(g_iG(N))},$$

for $i = 1,\dots,m$. Further, let us introduce

$$\Phi_i(N) = -x_iN - g_iG(N) + \left(x_iN + g_iG(N) - \frac12\right)\ln\left(x_iN + g_iG(N)\right),$$
$$\Psi_i(N) = -x_iN - 1 + \left(x_iN + \frac12\right)\ln\left(x_iN + 1\right), \qquad (19)$$
$$\Theta_i(N) = -g_iG(N) + \left(g_iG(N) - \frac12\right)\ln g_iG(N).$$

The first order approximation of the gamma function, by Theorem 1 in Appendix A, has the asymptotic expansion

$$\Gamma(\lambda) \sim \sqrt{2\pi}\,e^{-\lambda + (\lambda - \frac12)\ln\lambda}\left[1 + \frac{1}{12\lambda} + \frac{1}{288\lambda^2} + \dots\right], \quad \lambda \to \infty.$$

For $x_i > 0$ the expressions $x_iN + g_iG(N)$, $x_iN + 1$ and $g_iG(N)$ are positive and increasing functions of $N$, so the corresponding Gamma functions can be approximated using the above asymptotic expansion:

$$\Gamma(x_iN + g_iG(N)) \sim \sqrt{2\pi}\,e^{\Phi_i(N)}\left[1 + \frac{1}{12(x_iN + g_iG(N))} + \frac{1}{288(x_iN + g_iG(N))^2} + \dots\right], \quad x_iN + g_iG(N) \to \infty,$$
$$\Gamma(x_iN + 1) \sim \sqrt{2\pi}\,e^{\Psi_i(N)}\left[1 + \frac{1}{12(x_iN+1)} + \frac{1}{288(x_iN+1)^2} + \dots\right], \quad x_iN + 1 \to \infty,$$
$$\Gamma(g_iG(N)) \sim \sqrt{2\pi}\,e^{\Theta_i(N)}\left[1 + \frac{1}{12g_iG(N)} + \frac{1}{288(g_iG(N))^2} + \dots\right], \quad g_iG(N) \to \infty,$$

where $\Phi_i(N), \Psi_i(N), \Theta_i(N)$ are given by (19). From the definition of an asymptotic expansion we obtain a first order approximation, valid for $x_i > 0$ and correspondingly large $N$:

$$\left|\Gamma(x_iN + g_iG(N)) - \sqrt{2\pi}\,e^{\Phi_i(N)}\right| \le K_{i,\Phi}\,\frac{1}{12(x_iN + g_iG(N))}\sqrt{2\pi}\,e^{\Phi_i(N)},$$
$$\left|\Gamma(x_iN + 1) - \sqrt{2\pi}\,e^{\Psi_i(N)}\right| \le K_{i,\Psi}\,\frac{1}{12(x_iN + 1)}\sqrt{2\pi}\,e^{\Psi_i(N)},$$
$$\left|\Gamma(g_iG(N)) - \sqrt{2\pi}\,e^{\Theta_i(N)}\right| \le K_{i,\Theta}\,\frac{1}{12g_iG(N)}\sqrt{2\pi}\,e^{\Theta_i(N)},$$

where $K_{i,\Phi}, K_{i,\Psi}, K_{i,\Theta}$ are some positive constants. Note that we can increase these constants so that the above inequalities are also valid for $x_i = 0$ and all $N$. Hence we can represent
