
arXiv:math/0606496v2 [math.CO] 21 Jan 2007

Asymptotic enumeration of dense 0-1 matrices with specified line sums

E. Rodney Canfield* (Department of Computer Science, University of Georgia, Athens, GA 30602, USA; [email protected])
Catherine Greenhill† (School of Mathematics and Statistics, University of New South Wales, Sydney, Australia 2052; [email protected])
Brendan D. McKay‡ (Department of Computer Science, Australian National University, Canberra ACT 0200, Australia; [email protected])

*Research supported by the NSA Mathematical Sciences Program.
†Research supported by the UNSW Faculty Research Grants Scheme.
‡Research supported by the Australian Research Council.

Abstract

Let $s = (s_1, s_2, \ldots, s_m)$ and $t = (t_1, t_2, \ldots, t_n)$ be vectors of non-negative integers with $\sum_{i=1}^m s_i = \sum_{j=1}^n t_j$. Let $B(s,t)$ be the number of $m \times n$ matrices over $\{0,1\}$ with $j$-th row sum equal to $s_j$ for $1 \le j \le m$ and $k$-th column sum equal to $t_k$ for $1 \le k \le n$. Equivalently, $B(s,t)$ is the number of bipartite graphs with $m$ vertices in one part with degrees given by $s$, and $n$ vertices in the other part with degrees given by $t$. Most research on the asymptotics of $B(s,t)$ has focused on the sparse case, where the best result is that of Greenhill, McKay and Wang (2006). In the case of dense matrices, the only precise result is for the case of equal row sums and equal column sums (Canfield and McKay, 2005). This paper extends the analytic methods used by the latter paper to the case where the row and column sums can vary within certain limits. Interestingly, the result can be expressed by the same formula which holds in the sparse case.

1 Introduction

Let $s = (s_1, s_2, \ldots, s_m)$ and $t = (t_1, t_2, \ldots, t_n)$ be vectors of positive integers with $\sum_{i=1}^m s_i = \sum_{j=1}^n t_j$. Let $B(s,t)$ be the number of $m \times n$ matrices over $\{0,1\}$ with $j$-th row sum equal to $s_j$ for $1 \le j \le m$ and $k$-th column sum equal to $t_k$ for $1 \le k \le n$.
Equivalently, $B(s,t)$ is the number of labelled bipartite graphs with $m$ vertices in one part of the bipartition with degrees given by $s$, and $n$ vertices in the other part of the bipartition with degrees given by $t$. Let $\bar{s}$ be the average value of $s_1, s_2, \ldots, s_m$ and let $\bar{t}$ be the average value of $t_1, t_2, \ldots, t_n$. Define the density $\lambda = \bar{s}/n = \bar{t}/m$, which is the fraction of entries in the matrix which equal 1.

The asymptotic value of $B(s,t)$ has been much studied, especially since the celebrated Gale–Ryser Theorem [10] that characterises $(s,t)$ such that $B(s,t) > 0$. Various authors have considered the semiregular case, where $s_j = s$ for $1 \le j \le m$ and $t_k = t$ for $1 \le k \le n$. Write $B(m,s;n,t)$ for $B(s,t)$ in this case. For the sparse (low-$\lambda$) semiregular case, the best result is by McKay and Wang [7], who gave an asymptotic expression for $B(m,s;n,t)$ which holds when $st = o\bigl((mn)^{1/2}\bigr)$. In the dense ($\lambda$ not close to 0 or 1) semiregular case, Canfield and McKay [1] used analytic methods to obtain an asymptotic expression for $B(m,s;n,t)$ in two ranges: in the first, the matrix is relatively square and the density is not too close to 0 or 1, while in the second, the matrix is much wider than high (or vice-versa) but the density is arbitrary. For the sparse irregular case, the best result is that of Greenhill, McKay and Wang [2], who gave an asymptotic expression for $B(s,t)$ which holds when $\max_j\{s_j\}\,\max_k\{t_k\} = o\bigl((\lambda mn)^{2/3}\bigr)$. See [1], [2] and [7] for a more extensive historical survey.

The contribution of this paper is to adapt the approach of [1] to the dense irregular case when the matrix is relatively square and the density is not too close to 0 or 1. See McKay and Wormald [8] for the corresponding calculation for symmetric matrices. In keeping with these earlier papers, the asymptotic value of $B(s,t)$ can be expressed by a very nice formula involving binomial coefficients. We now state our theorem.

Theorem 1.
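Before any asymptotics, it may help to see $B(s,t)$ computed directly. The following sketch (our own illustration, not part of the paper) enumerates the possible supports of the ones row by row; the cost is exponential in $m$, so it is usable only for very small cases:

```python
from itertools import combinations, product

def brute_force_B(s, t):
    """Count m x n 0-1 matrices with row sums s and column sums t
    by enumerating each row's set of 1-positions. Small cases only."""
    m, n = len(s), len(t)
    assert sum(s) == sum(t)
    count = 0
    # each row j is a choice of s[j] columns to receive a 1
    for rows in product(*(list(combinations(range(n), sj)) for sj in s)):
        col_sums = [0] * n
        for support in rows:
            for k in support:
                col_sums[k] += 1
        if col_sums == list(t):
            count += 1
    return count

# B((1,1),(1,1)) = 2: the two 2x2 permutation matrices
print(brute_force_B((1, 1), (1, 1)))  # -> 2
```

In the bipartite-graph language, the same count is the number of labelled bipartite graphs with the given degree sequences.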
Let $s = s(m,n) = (s_1, s_2, \ldots, s_m)$ and $t = t(m,n) = (t_1, t_2, \ldots, t_n)$ be vectors of positive integers such that $\sum_{j=1}^m s_j = \sum_{k=1}^n t_k$ for all $m,n$. Define $\bar{s} = m^{-1}\sum_{j=1}^m s_j$, $\bar{t} = n^{-1}\sum_{k=1}^n t_k$, $\lambda = \bar{s}/n = \bar{t}/m$ and $A = \frac{1}{2}\lambda(1-\lambda)$. For some $\varepsilon > 0$, suppose that $|s_j - \bar{s}| = O(n^{1/2+\varepsilon})$ uniformly for $1 \le j \le m$, and $|t_k - \bar{t}| = O(m^{1/2+\varepsilon})$ uniformly for $1 \le k \le n$. Define $R = \sum_{j=1}^m (s_j - \bar{s})^2$ and $C = \sum_{k=1}^n (t_k - \bar{t})^2$. Let $a, b > 0$ be constants such that $a + b < 1$. Suppose that $m, n \to \infty$ with $n = o(A^2 m^{1+\varepsilon})$, $m = o(A^2 n^{1+\varepsilon})$ and
$$\frac{(1-2\lambda)^2}{8A}\Bigl(1 + \frac{5m}{6n} + \frac{5n}{6m}\Bigr) \le a \log n.$$
Then, provided $\varepsilon > 0$ is small enough, we have
$$B(s,t) = \binom{mn}{\lambda mn}^{-1} \prod_{j=1}^m \binom{n}{s_j} \prod_{k=1}^n \binom{m}{t_k} \exp\Bigl(-\frac{1}{2}\Bigl(1 - \frac{R}{2Amn}\Bigr)\Bigl(1 - \frac{C}{2Amn}\Bigr) + O(n^{-b})\Bigr).$$

Proof. The proof of this theorem is the topic of the paper; here we will summarize the main phases and draw their conclusions together. The basic idea is to identify $B(s,t)$ as a coefficient in a multivariable generating function and to extract that coefficient using the saddle-point method. In Section 2, equation (1), we write $B(s,t) = P(s,t)I(s,t)$, where $P(s,t)$ is a rational expression and $I(s,t)$ is an integral in $m+n$ complex dimensions. Both depend on the location of the saddle point, which is the solution of some nonlinear equations. Those equations are solved in Section 3, and this leads to the value of $P(s,t)$ in (20). In Section 4, the integral $I(s,t)$ is estimated in a small region $\mathcal{R}'$ defined in (33). The result is given by Theorem 2 together with (24). Finally, in Section 5, it is shown that the integral $I(s,t)$ restricted to the exterior of $\mathcal{R}'$ is negligible. The present theorem thus follows from (1), (20), Theorems 2–3 and (24).

Note that the error term in the above slightly improves the error term for the semiregular case proved in [1].

Theorem 1 has an instructive interpretation.
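Although Theorem 1 is an asymptotic statement, its main term is elementary to evaluate for any given $(s,t)$. The sketch below (ours, for illustration; the $O(n^{-b})$ term is simply dropped) computes the right-hand side:

```python
from math import comb, exp

def theorem1_estimate(s, t):
    """Main term of Theorem 1 (error term omitted):
    binom(mn, lam*mn)^{-1} * prod binom(n, s_j) * prod binom(m, t_k)
      * exp(-(1/2)(1 - R/(2Amn))(1 - C/(2Amn)))."""
    m, n = len(s), len(t)
    total = sum(s)                      # = lambda * m * n
    sbar, tbar = total / m, total / n
    lam = total / (m * n)               # density
    A = 0.5 * lam * (1 - lam)
    R = sum((sj - sbar) ** 2 for sj in s)
    C = sum((tk - tbar) ** 2 for tk in t)
    est = 1 / comb(m * n, total)
    for sj in s:
        est *= comb(n, sj)
    for tk in t:
        est *= comb(m, tk)
    return est * exp(-0.5 * (1 - R / (2 * A * m * n)) * (1 - C / (2 * A * m * n)))
```

Already at $m = n = 4$ with all line sums equal to 2, where the exact count is $B = 90$, the estimate is about 79.2, i.e. within roughly 12%; the agreement improves rapidly as $m, n$ grow.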
Write it as $B(s,t) = N P_1 P_2 E$, where
$$N = \binom{mn}{\lambda mn}, \qquad P_1 = N^{-1}\prod_{j=1}^m \binom{n}{s_j}, \qquad P_2 = N^{-1}\prod_{k=1}^n \binom{m}{t_k},$$
$$E = \exp\Bigl(-\frac{1}{2}\Bigl(1 - \frac{R}{2Amn}\Bigr)\Bigl(1 - \frac{C}{2Amn}\Bigr) + O(n^{-b})\Bigr).$$
Clearly, $N$ is the number of $m \times n$ binary matrices with $\lambda mn$ ones. $P_1$ is the probability that a matrix randomly chosen from this class has row sums $s$, while $P_2$ is the probability of the similar event of having column sums $t$. If these two events were independent, we would have $E = 1$, so $E$ can be taken as a measure of their non-independence. For the case when $s$ and $t$ are vectors of constants, that is, $R = C = 0$, Ordentlich and Roth [9] proved that $E \le 1$.

It is proved in [2] that the same formula for $B(s,t)$ modulo the error term also holds in the sparse case. Specifically, it holds with a different vanishing error term whenever $\max_j\{s_j\}\max_k\{t_k\} = o\bigl((\lambda mn)^{2/3}\bigr)$, $R + C = O\bigl((\lambda mn)^{4/3}\bigr)$ and $RC = O\bigl((\lambda mn)^{7/3}\bigr)$. In [1], evidence is presented that the formula is universal in the semiregular case ($R = C = 0$) and it is tempting to conjecture that the same is true in the irregular case for a wide range of $R, C$ values.

We will use a shorthand notation for summation over doubly subscripted variables. If $x_{jk}$ is a variable for $1 \le j \le m$ and $1 \le k \le n$, then
$$x_{j\bullet} = \sum_{k=1}^n x_{jk}, \qquad x_{\bullet k} = \sum_{j=1}^m x_{jk}, \qquad x_{\bullet\bullet} = \sum_{j=1}^m \sum_{k=1}^n x_{jk},$$
$$x_{j*} = \sum_{k=1}^{n-1} x_{jk}, \qquad x_{*k} = \sum_{j=1}^{m-1} x_{jk}, \qquad x_{**} = \sum_{j=1}^{m-1}\sum_{k=1}^{n-1} x_{jk},$$
for $1 \le j \le m$ and $1 \le k \le n$.

Throughout the paper, the asymptotic notation $O(f(m,n))$ refers to the passage of $m$ and $n$ to $\infty$. We also use a modified notation $\widetilde{O}(f(m,n))$, which is to be taken as a shorthand for $O\bigl(f(m,n)\,n^{O(1)\varepsilon}\bigr)$. In this case it is important that the $O(1)$ factor is uniform over $\varepsilon$ provided $\varepsilon$ is small enough; for example we cannot write $f(m,n)\,n^{\varepsilon^{-1}\varepsilon}$ as $\widetilde{O}(f(m,n))$ even though $\varepsilon^{-1} = O(1)$ ($\varepsilon$ being defined as a constant).
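Returning to the decomposition $B = N P_1 P_2 E$: on a small case $E$ can be measured exactly. The sketch below (our illustration) computes $E = B/(N P_1 P_2)$ with $B$ obtained by brute-force enumeration, and the regular $4 \times 4$ example can be compared with the prediction $e^{-1/2} \approx 0.607$ of Theorem 1 for $R = C = 0$:

```python
from itertools import combinations, product
from math import comb

def exact_E(s, t):
    """E = B(s,t) / (N * P1 * P2), with N = comb(mn, total),
    P1 = N^{-1} prod comb(n, s_j), P2 = N^{-1} prod comb(m, t_k),
    and B computed by direct enumeration (small cases only)."""
    m, n = len(s), len(t)
    B = 0
    for rows in product(*(list(combinations(range(n), sj)) for sj in s)):
        cols = [0] * n
        for support in rows:
            for k in support:
                cols[k] += 1
        B += (cols == list(t))
    N = comb(m * n, sum(s))
    P1 = 1 / N
    for sj in s:
        P1 *= comb(n, sj)
    P2 = 1 / N
    for tk in t:
        P2 *= comb(m, tk)
    return B / (N * P1 * P2)

print(exact_E((2, 2, 2, 2), (2, 2, 2, 2)))  # ≈ 0.690, vs exp(-1/2) ≈ 0.607
```

Consistently with the Ordentlich–Roth bound, the measured $E$ is below 1 here.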
Under the assumptions of Theorem 1, we have $m = \widetilde{O}(n)$ and $n = \widetilde{O}(m)$. We also have that $8 \le A^{-1} = O(\log n)$, so $A^{-1} = \widetilde{O}(1)$. More generally, $A^{c_1} m^{c_2+c_3\varepsilon} n^{c_4+c_5\varepsilon} = \widetilde{O}(n^{c_2+c_4})$ if $c_1, c_2, c_3, c_4, c_5$ are constants.

2 Expressing the desired quantity as an integral

In this section we express $B(s,t)$ as a contour integral in $(m+n)$-dimensional complex space, then begin to estimate its value using the saddle-point method. Firstly, notice that $B(s,t)$ is the coefficient of $x_1^{s_1}\cdots x_m^{s_m}\, y_1^{t_1}\cdots y_n^{t_n}$ in the function
$$\prod_{j=1}^m \prod_{k=1}^n (1 + x_j y_k).$$
By Cauchy's coefficient theorem this equals
$$B(s,t) = \frac{1}{(2\pi i)^{m+n}} \oint \cdots \oint \frac{\prod_{j=1}^m \prod_{k=1}^n (1 + x_j y_k)}{x_1^{s_1+1}\cdots x_m^{s_m+1}\, y_1^{t_1+1}\cdots y_n^{t_n+1}}\, dx_1 \cdots dx_m\, dy_1 \cdots dy_n,$$
where each integral is along a simple closed contour enclosing the origin anticlockwise. It will suffice to take each contour to be a circle; specifically, we will write
$$x_j = q_j e^{i\theta_j} \quad\text{and}\quad y_k = r_k e^{i\phi_k}$$
for $1 \le j \le m$ and $1 \le k \le n$. Also define
$$\lambda_{jk} = \frac{q_j r_k}{1 + q_j r_k}$$
for $1 \le j \le m$ and $1 \le k \le n$. Then $1 + x_j y_k = (1 + q_j r_k)\bigl(1 + \lambda_{jk}(e^{i(\theta_j+\phi_k)} - 1)\bigr)$, so
$$B(s,t) = \frac{\prod_{j=1}^m \prod_{k=1}^n (1 + q_j r_k)}{(2\pi)^{m+n} \prod_{j=1}^m q_j^{s_j} \prod_{k=1}^n r_k^{t_k}} \times \int_{-\pi}^{\pi} \!\cdots\! \int_{-\pi}^{\pi} \frac{\prod_{j=1}^m \prod_{k=1}^n \bigl(1 + \lambda_{jk}(e^{i(\theta_j+\phi_k)} - 1)\bigr)}{\exp\bigl(i\sum_{j=1}^m s_j\theta_j + i\sum_{k=1}^n t_k\phi_k\bigr)}\, d\boldsymbol{\theta}\, d\boldsymbol{\phi}, \tag{1}$$
where $\boldsymbol{\theta} = (\theta_1, \ldots, \theta_m)$ and $\boldsymbol{\phi} = (\phi_1, \ldots, \phi_n)$. Write $B(s,t) = P(s,t)I(s,t)$ where $P(s,t)$ denotes the factor in front of the integral in (1) and $I(s,t)$ denotes the integral. We will choose the radii $q_j, r_k$ so that there is no linear term in the logarithm of the integrand of $I(s,t)$ when expanded for small $\boldsymbol{\theta}, \boldsymbol{\phi}$. This gives the equation
$$\sum_{j=1}^m \sum_{k=1}^n \lambda_{jk}(\theta_j + \phi_k) - \sum_{j=1}^m s_j\theta_j - \sum_{k=1}^n t_k\phi_k = 0.$$
For this to hold for all $\boldsymbol{\theta}, \boldsymbol{\phi}$, we require
$$\lambda_{j\bullet} = s_j \ (1 \le j \le m), \qquad \lambda_{\bullet k} = t_k \ (1 \le k \le n). \tag{2}$$
In Section 3 we show that (2) has a solution, and determine to sufficient accuracy the various functions of the radii, such as $P(s,t)$, that we require.
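As a numerical aside (ours, not from the paper): for a Laurent polynomial, contour integrals of this kind can be replaced exactly by finite averages over roots of unity, since $\frac{1}{N}\sum_{a=0}^{N-1}\omega^{ad}$ is 1 when $N \mid d$ and 0 otherwise. The sketch below recovers $B(s,t)$ this way on unit-circle contours for tiny cases; the $N^{m+n}$ cost makes it a sanity check only, not a method:

```python
from cmath import exp, pi
from itertools import product

def B_by_dft(s, t, N=None):
    """Extract the coefficient of prod x_j^{s_j} y_k^{t_k} in
    prod (1 + x_j y_k) by averaging over N-th roots of unity on
    each contour. Exact (up to rounding) once N exceeds the degree
    in every variable (degree n in each x_j, degree m in each y_k)."""
    m, n = len(s), len(t)
    if N is None:
        N = max(m, n) + 1
    roots = [exp(2j * pi * a / N) for a in range(N)]
    total = 0.0 + 0.0j
    for idx in product(range(N), repeat=m + n):
        x = [roots[a] for a in idx[:m]]
        y = [roots[a] for a in idx[m:]]
        val = 1.0 + 0.0j
        for j in range(m):
            for k in range(n):
                val *= 1 + x[j] * y[k]
        for j in range(m):
            val /= x[j] ** s[j]
        for k in range(n):
            val /= y[k] ** t[k]
        total += val
    return total.real / N ** (m + n)

print(round(B_by_dft((1, 1), (1, 1))))  # -> 2
```

The saddle-point method replaces this brute-force average by an analytic estimate on well-chosen circles of radii $q_j, r_k$.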
In Section 4 we evaluate the integral $I(s,t)$ within a certain region $\mathcal{R}$ defined in (22). Section 5 contains the proof that the integral is concentrated within the region $\mathcal{R}$.

3 Locating the saddle-point

In this section we solve (2) and derive some of the consequences of the solution. As with the whole paper, we work under the assumptions of Theorem 1. Change variables to $\{a_j\}_{j=1}^m$, $\{b_k\}_{k=1}^n$ as follows:
$$q_j = \frac{r(1 + a_j)}{1 - r^2 a_j}, \qquad r_k = \frac{r(1 + b_k)}{1 - r^2 b_k}, \tag{3}$$
where $r > 0$ satisfies
$$r^2 = \frac{\lambda}{1-\lambda}.$$
Equation (2) is slightly underdetermined, which we will exploit to impose an additional condition. If $\{q_j\}, \{r_k\}$ satisfy (2) and $c > 0$ is a constant, then $\{cq_j\}, \{r_k/c\}$ also satisfy (2). From this we can see that, if there is a solution to (2) at all, there is one for which $\sum_{j=1}^m a_j < 0$ and $\sum_{k=1}^n b_k > 0$, and also a solution for which $\sum_{j=1}^m a_j > 0$ and $\sum_{k=1}^n b_k < 0$. It follows from the Intermediate Value Theorem that there is a solution for which
$$n\sum_{j=1}^m a_j = m\sum_{k=1}^n b_k, \tag{4}$$
so we will seek a common solution to (2) and (4).

From (3) we find that
$$\lambda_{jk}/\lambda = 1 + a_j + b_k + Z_{jk}, \tag{5}$$
where
$$Z_{jk} = \frac{a_j b_k (1 - r^2 - r^2 a_j - r^2 b_k)}{1 + r^2 a_j b_k}, \tag{6}$$
and that equations (2) can be rewritten as
$$a_j = \frac{s_j - \bar{s}}{\lambda n} - \frac{1}{n}\sum_{k=1}^n b_k - \frac{Z_{j\bullet}}{n} \quad (1 \le j \le m),$$
$$b_k = \frac{t_k - \bar{t}}{\lambda m} - \frac{1}{m}\sum_{j=1}^m a_j - \frac{Z_{\bullet k}}{m} \quad (1 \le k \le n). \tag{7}$$
Summing (7) over $j, k$, we find in both cases that
$$n\sum_{j=1}^m a_j + m\sum_{k=1}^n b_k = -Z_{\bullet\bullet}. \tag{8}$$
Equations (4) and (8) together imply that
$$n\sum_{j=1}^m a_j = m\sum_{k=1}^n b_k = -\tfrac{1}{2} Z_{\bullet\bullet}.$$
Substituting back into (7), we obtain
$$a_j = A_j(a_1, \ldots, a_m, b_1, \ldots, b_n), \qquad b_k = B_k(a_1, \ldots, a_m, b_1, \ldots, b_n), \tag{9}$$
for $1 \le j \le m$, $1 \le k \le n$, where
$$A_j(a_1, \ldots, a_m, b_1, \ldots, b_n) = \frac{s_j - \bar{s}}{\lambda n} - \frac{Z_{j\bullet}}{n} + \frac{Z_{\bullet\bullet}}{2mn},$$
$$B_k(a_1, \ldots, a_m, b_1, \ldots, b_n) = \frac{t_k - \bar{t}}{\lambda m} - \frac{Z_{\bullet k}}{m} + \frac{Z_{\bullet\bullet}}{2mn}.$$
Equation (9) suggests an iteration.
Start with $a_j^{(0)} = b_k^{(0)} = 0$ for all $j, k$, and, for each $\ell \ge 0$, define
$$a_j^{(\ell+1)} = A_j\bigl(a_1^{(\ell)}, \ldots, a_m^{(\ell)}, b_1^{(\ell)}, \ldots, b_n^{(\ell)}\bigr), \qquad b_k^{(\ell+1)} = B_k\bigl(a_1^{(\ell)}, \ldots, a_m^{(\ell)}, b_1^{(\ell)}, \ldots, b_n^{(\ell)}\bigr), \tag{10}$$
where $Z_{j\bullet} = Z_{j\bullet}^{(\ell)} = Z_{j\bullet}\bigl(a_1^{(\ell)}, \ldots, a_m^{(\ell)}, b_1^{(\ell)}, \ldots, b_n^{(\ell)}\bigr)$ and similarly for $Z_{\bullet k} = Z_{\bullet k}^{(\ell)}$ and $Z_{\bullet\bullet} = Z_{\bullet\bullet}^{(\ell)}$. We will show that this iteration converges to a solution of (9) using a standard contraction-mapping argument.

Recall that $A^{-1} = O(\log n)$ under the assumptions of Theorem 1 (which we are adopting throughout). This implies that $r^2 = O(\log n)$. Therefore, within the region $\mathcal{A}$ defined by $|a_j|, |b_k| \le n^{-1/3}$ for all $j, k$, we have that
$$\frac{\partial Z_{jk}}{\partial a_j} = o(m^{-1/4}) \quad\text{and}\quad \frac{\partial Z_{jk}}{\partial b_k} = o(n^{-1/4}),$$
which imply that, in the same region, we have
$$\frac{\partial A_j}{\partial a_{j'}} = \begin{cases} o(m^{-1/4}) & (j' = j) \\ o(m^{-5/4}) & (j' \ne j), \end{cases} \qquad \frac{\partial A_j}{\partial b_k} = o(n^{-5/4}),$$
$$\frac{\partial B_k}{\partial a_j} = o(m^{-5/4}), \qquad \frac{\partial B_k}{\partial b_{k'}} = \begin{cases} o(n^{-1/4}) & (k' = k) \\ o(n^{-5/4}) & (k' \ne k). \end{cases}$$
Therefore, by the mean value theorem, we have for $\ell \ge 1$ that
$$\max_j \bigl|a_j^{(\ell+1)} - a_j^{(\ell)}\bigr| + \max_k \bigl|b_k^{(\ell+1)} - b_k^{(\ell)}\bigr| = o(m^{-1/4})\max_j \bigl|a_j^{(\ell)} - a_j^{(\ell-1)}\bigr| + o(n^{-1/4})\max_k \bigl|b_k^{(\ell)} - b_k^{(\ell-1)}\bigr|,$$
provided $\{a_j^{(\ell-1)}\} \cup \{b_k^{(\ell-1)}\} \cup \{a_j^{(\ell)}\} \cup \{b_k^{(\ell)}\} \subseteq \mathcal{A}$.

Applying the iteration once, we have
$$a_j^{(1)} = (s_j - \bar{s})/(\lambda n) \quad\text{and}\quad b_k^{(1)} = (t_k - \bar{t})/(\lambda m).$$
Since $\{a_j^{(0)}\}, \{b_k^{(0)}\}$ and $\{a_j^{(1)}\}, \{b_k^{(1)}\}$ lie inside $\frac{1}{2}\mathcal{A}$, we find by induction that $\{a_j^{(\ell)}\}, \{b_k^{(\ell)}\}$ lie in $\frac{\ell}{\ell+1}\mathcal{A}$ for all $\ell$. Moreover, the iteration is Cauchy-convergent in the maximum norm, and the error in stopping at $\{a_j^{(\ell)}\}, \{b_k^{(\ell)}\}$ is at most $\max_j \bigl|a_j^{(\ell)} - a_j^{(\ell-1)}\bigr| + \max_k \bigl|b_k^{(\ell)} - b_k^{(\ell-1)}\bigr|$.

When we carry out this iteration, we find that all the encountered $a_j^{(\ell)}$ and $b_k^{(\ell)}$ values are $\widetilde{O}(n^{-1/2})$. It helps to know that the following approximation holds in that case:
$$Z_{jk} = (1 - r^2)a_j b_k - r^2 a_j b_k^2 - r^2 a_j^2 b_k - r^2(1 - r^2)a_j^2 b_k^2 + \widetilde{O}(n^{-5/2}).$$
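The iteration (10) is straightforward to run numerically. The sketch below (our own implementation; names follow the paper) iterates $A_j, B_k$ with $Z_{jk}$ as in (6), then reconstructs $q_j, r_k$ via (3) and the resulting $\lambda_{jk}$; at the fixed point the saddle-point equations (2), $\lambda_{j\bullet} = s_j$ and $\lambda_{\bullet k} = t_k$, hold to machine precision:

```python
def solve_saddle(s, t, iters=200):
    """Fixed-point iteration (10) for the saddle-point parameters a_j, b_k.
    Returns (a, b, lam_mat) with lam_mat[j][k] = q_j r_k / (1 + q_j r_k)."""
    m, n = len(s), len(t)
    lam = sum(s) / (m * n)              # density lambda
    sbar, tbar = sum(s) / m, sum(t) / n
    r2 = lam / (1 - lam)                # r^2, where r = sqrt(lam/(1-lam))
    a, b = [0.0] * m, [0.0] * n
    for _ in range(iters):
        # Z_{jk} as in (6), evaluated at the current a, b
        Z = [[a[j] * b[k] * (1 - r2 - r2 * a[j] - r2 * b[k]) / (1 + r2 * a[j] * b[k])
              for k in range(n)] for j in range(m)]
        Zrow = [sum(Z[j]) for j in range(m)]
        Zcol = [sum(Z[j][k] for j in range(m)) for k in range(n)]
        Zall = sum(Zrow)
        # the maps A_j, B_k of (9)
        a = [(s[j] - sbar) / (lam * n) - Zrow[j] / n + Zall / (2 * m * n)
             for j in range(m)]
        b = [(t[k] - tbar) / (lam * m) - Zcol[k] / m + Zall / (2 * m * n)
             for k in range(n)]
    r = r2 ** 0.5
    q = [r * (1 + a[j]) / (1 - r2 * a[j]) for j in range(m)]    # (3)
    rr = [r * (1 + b[k]) / (1 - r2 * b[k]) for k in range(n)]
    lam_mat = [[q[j] * rr[k] / (1 + q[j] * rr[k]) for k in range(n)]
               for j in range(m)]
    return a, b, lam_mat

# quick self-check on a small near-regular instance (our own test data)
s = (5, 5, 4, 4, 4, 4, 3, 3)
t = (5, 4, 4, 4, 4, 4, 4, 3)
a, b, lam_mat = solve_saddle(s, t)
print(max(abs(sum(lam_mat[j]) - s[j]) for j in range(len(s))))  # tiny residual
```

Note that the hypotheses of Theorem 1 guarantee contraction only asymptotically; for a small hand-picked instance like the one above the iteration happens to converge quickly, but that is an empirical observation, not part of the proof.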
Using the fact that $\sum_{j=1}^m a_j^{(1)} = 0$ and $\sum_{k=1}^n b_k^{(1)} = 0$, we find that
$$Z_{j\bullet}^{(1)} = -r^2 a_j^{(1)} \sum_{k=1}^n \bigl(b_k^{(1)}\bigr)^2 + \widetilde{O}(n^{-1}), \qquad Z_{\bullet k}^{(1)} = -r^2 b_k^{(1)} \sum_{j=1}^m \bigl(a_j^{(1)}\bigr)^2 + \widetilde{O}(n^{-1}), \qquad Z_{\bullet\bullet}^{(1)} = \widetilde{O}(1).$$
Therefore
$$a_j^{(2)} = \frac{s_j - \bar{s}}{\lambda n} + \frac{(s_j - \bar{s})C}{\lambda^2(1-\lambda)m^2n^2} + \widetilde{O}(n^{-2}) \quad (1 \le j \le m),$$
$$b_k^{(2)} = \frac{t_k - \bar{t}}{\lambda m} + \frac{(t_k - \bar{t})R}{\lambda^2(1-\lambda)m^2n^2} + \widetilde{O}(n^{-2}) \quad (1 \le k \le n).$$
Similarly,
$$Z_{j\bullet}^{(2)} = -r^2 a_j^{(2)} \sum_{k=1}^n \bigl(b_k^{(2)}\bigr)^2 - r^2(1-r^2)\bigl(a_j^{(2)}\bigr)^2 \sum_{k=1}^n \bigl(b_k^{(2)}\bigr)^2 + \widetilde{O}(n^{-3/2}),$$
$$Z_{\bullet k}^{(2)} = -r^2 b_k^{(2)} \sum_{j=1}^m \bigl(a_j^{(2)}\bigr)^2 - r^2(1-r^2)\bigl(b_k^{(2)}\bigr)^2 \sum_{j=1}^m \bigl(a_j^{(2)}\bigr)^2 + \widetilde{O}(n^{-3/2}),$$
$$Z_{\bullet\bullet}^{(2)} = -r^2(1-r^2) \sum_{j=1}^m \bigl(a_j^{(2)}\bigr)^2 \sum_{k=1}^n \bigl(b_k^{(2)}\bigr)^2 + \widetilde{O}(n^{-1/2}),$$
which gives
$$a_j^{(3)} = \frac{s_j - \bar{s}}{\lambda n} + \frac{(s_j - \bar{s})C}{\lambda^2(1-\lambda)m^2n^2} + \frac{(1-2\lambda)(s_j - \bar{s})^2 C}{\lambda^3(1-\lambda)^2 m^2 n^3} - \frac{(1-2\lambda)RC}{2\lambda^3(1-\lambda)^2 m^3 n^3} + \widetilde{O}(n^{-5/2}) \quad (1 \le j \le m),$$
$$b_k^{(3)} = \frac{t_k - \bar{t}}{\lambda m} + \frac{(t_k - \bar{t})R}{\lambda^2(1-\lambda)m^2n^2} + \frac{(1-2\lambda)(t_k - \bar{t})^2 R}{\lambda^3(1-\lambda)^2 m^3 n^2} - \frac{(1-2\lambda)RC}{2\lambda^3(1-\lambda)^2 m^3 n^3} + \widetilde{O}(n^{-5/2}) \quad (1 \le k \le n). \tag{11}$$
Further iterations make no change to this accuracy, so we have that $a_j = a_j^{(3)} + \widetilde{O}(n^{-5/2})$ and $b_k = b_k^{(3)} + \widetilde{O}(n^{-5/2})$. We also have that
$$Z_{jk} = \frac{(1-2\lambda)(s_j-\bar{s})(t_k-\bar{t})}{\lambda^2(1-\lambda)mn} - \frac{(s_j-\bar{s})(t_k-\bar{t})^2}{\lambda^2(1-\lambda)m^2n} - \frac{(s_j-\bar{s})^2(t_k-\bar{t})}{\lambda^2(1-\lambda)mn^2} - \frac{(1-2\lambda)(s_j-\bar{s})^2(t_k-\bar{t})^2}{\lambda^3(1-\lambda)^2m^2n^2} + \frac{(1-2\lambda)(s_j-\bar{s})(t_k-\bar{t})R}{\lambda^3(1-\lambda)^2m^2n^3} + \frac{(1-2\lambda)(s_j-\bar{s})(t_k-\bar{t})C}{\lambda^3(1-\lambda)^2m^3n^2} + \widetilde{O}(n^{-5/2}). \tag{12}$$
A sufficient approximation of $\lambda_{jk}$ is given by substituting (11) and (12) into (5).

In evaluating the integral $I(s,t)$, the following approximations will be required:
$$\lambda_{jk}(1-\lambda_{jk}) = \lambda(1-\lambda) + \frac{(1-2\lambda)(s_j-\bar{s})}{n} + \frac{(1-2\lambda)(t_k-\bar{t})}{m} - \frac{(s_j-\bar{s})^2}{n^2} - \frac{(t_k-\bar{t})^2}{m^2} + \frac{(1-6\lambda+6\lambda^2)(s_j-\bar{s})(t_k-\bar{t})}{\lambda(1-\lambda)mn} + \widetilde{O}(n^{-3/2}), \tag{13}$$
$$\lambda_{jk}(1-\lambda_{jk})(1-2\lambda_{jk}) = \lambda(1-\lambda)(1-2\lambda) + \frac{(1-6\lambda+6\lambda^2)(s_j-\bar{s})}{n} + \frac{(1-6\lambda+6\lambda^2)(t_k-\bar{t})}{m} + \widetilde{O}(n^{-1}), \tag{14}$$
$$\lambda_{jk}(1-\lambda_{jk})\bigl(1-6\lambda_{jk}+6\lambda_{jk}^2\bigr) = \lambda(1-\lambda)\bigl(1-6\lambda+6\lambda^2\bigr) + \widetilde{O}(n^{-1/2}). \tag{15}$$
3.1 Estimating the factor P(s,t)

Let
$$\Lambda = \prod_{j=1}^m \prod_{k=1}^n \lambda_{jk}^{\lambda_{jk}} (1 - \lambda_{jk})^{1-\lambda_{jk}}.$$
Then
$$\Lambda^{-1} = \prod_{j=1}^m \prod_{k=1}^n \Bigl(\frac{1 + q_j r_k}{q_j r_k}\Bigr)^{\lambda_{jk}} (1 + q_j r_k)^{1-\lambda_{jk}} = \prod_{j=1}^m \prod_{k=1}^n (1 + q_j r_k) \Bigl(\prod_{j=1}^m q_j^{\lambda_{j\bullet}} \prod_{k=1}^n r_k^{\lambda_{\bullet k}}\Bigr)^{-1} = \prod_{j=1}^m \prod_{k=1}^n (1 + q_j r_k) \prod_{j=1}^m q_j^{-s_j} \prod_{k=1}^n r_k^{-t_k}$$
using (2). Therefore the factor $P(s,t)$ in front of the integral in (1) is given by
$$P(s,t) = (2\pi)^{-(m+n)} \Lambda^{-1}.$$

We proceed to estimate $\Lambda$. Writing $\lambda_{jk} = \lambda(1 + x_{jk})$, we have
$$\log\frac{\lambda_{jk}^{\lambda_{jk}}(1-\lambda_{jk})^{1-\lambda_{jk}}}{\lambda^{\lambda}(1-\lambda)^{1-\lambda}} = \lambda x_{jk}\log\Bigl(\frac{\lambda}{1-\lambda}\Bigr) + \frac{\lambda}{2(1-\lambda)}x_{jk}^2 - \frac{\lambda(1-2\lambda)}{6(1-\lambda)^2}x_{jk}^3 + \frac{\lambda(1-3\lambda+3\lambda^2)}{12(1-\lambda)^3}x_{jk}^4 + O\Bigl(\frac{x_{jk}^5}{(1-\lambda)^4}\Bigr). \tag{16}$$
We know from (2) that $\lambda_{\bullet\bullet} = mn\lambda$, which implies that $x_{\bullet\bullet} = 0$, hence the first term on the right side of (16) does not contribute to $\Lambda$. Now using (5) we can write $x_{jk} = a_j + b_k + Z_{jk}$ and apply the estimates in (11) and (12) to obtain
$$\Lambda = \bigl(\lambda^{\lambda}(1-\lambda)^{1-\lambda}\bigr)^{mn} \exp\Bigl(\frac{R}{4An} + \frac{C}{4Am} + \frac{RC}{8A^2m^2n^2} - \frac{(1-2\lambda)R_3}{24A^2n^2} - \frac{(1-2\lambda)C_3}{24A^2m^2} + \frac{(1-3\lambda+3\lambda^2)R_4}{96A^3n^3} + \frac{(1-3\lambda+3\lambda^2)C_4}{96A^3m^3} + \widetilde{O}(n^{-1/2})\Bigr), \tag{17}$$
where $R_\ell = \sum_{j=1}^m (s_j - \bar{s})^\ell$ and $C_\ell = \sum_{k=1}^n (t_k - \bar{t})^\ell$ for any $\ell$. Note that $R_2 = R$ and $C_2 = C$.

To match the formula from the sparse case solved in [2], we will write (17) in terms of binomial coefficients. First, by Stirling's expansion of the logarithm of the gamma function, we have that
$$\bigl(x^{x+d}(1-x)^{1-x-d}\bigr)^{-N} = \binom{N}{(x+d)N}\, 2\sqrt{\pi X N}\, \exp\Bigl(\frac{1-2X}{24XN} + \frac{d^2N}{4X} + \frac{(1-2x)d}{4X} - \frac{(1-4X)d^2}{16X^2} - \frac{(1-2x)d^3N}{24X^2} + \frac{(1-6X)d^4N}{96X^3} + O\Bigl(\frac{d^5N}{X^4} + \frac{d}{X^2N} + \frac{1}{X^3N^3}\Bigr)\Bigr) \tag{18}$$
as $N \to \infty$, provided $x = x(N)$, $X = X(N) = \frac{1}{2}x(1-x)$ and $d = d(N)$ are such that $0 < x < 1$, $0 < x + d < 1$ and provided that the error term in the above is $o(1)$.
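The $d = 0$ case of (18) is the familiar entropy approximation of a binomial coefficient, $\binom{N}{xN} \approx \bigl(x^x(1-x)^{1-x}\bigr)^{-N}/\sqrt{2\pi x(1-x)N}$ (note $2\sqrt{\pi XN} = \sqrt{2\pi x(1-x)N}$). A quick numerical check of that special case (our sketch; $N$ is kept modest so that the floating-point exponential does not overflow):

```python
from math import comb, exp, log, pi, sqrt

def entropy_ratio(N, x):
    """Ratio of comb(N, xN) to its entropy approximation; it tends to 1
    as N grows, the leading correction being the (1-2X)/(24XN) term of (18)."""
    k = round(x * N)
    approx = exp(-N * (x * log(x) + (1 - x) * log(1 - x))) \
        / sqrt(2 * pi * x * (1 - x) * N)
    return comb(N, k) / approx

print(entropy_ratio(500, 0.3))  # close to 1, slightly below
```

The full strength of (18), with $d \ne 0$, is what lets the row and column products in (17) be traded for binomial coefficients below.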
From this we infer that
$$\bigl(\lambda^{\lambda}(1-\lambda)^{1-\lambda}\bigr)^{-mn} = \binom{mn}{\lambda mn}^{-1} \prod_{j=1}^m \binom{n}{s_j} \prod_{k=1}^n \binom{m}{t_k}\, (4\pi A)^{(m+n-1)/2}\, m^{(n-1)/2}\, n^{(m-1)/2} \times \exp\Bigl(\frac{R}{4An} + \frac{C}{4Am} + \frac{1-2A}{24A}\Bigl(\frac{m}{n} + \frac{n}{m}\Bigr) - \frac{1-4A}{16A^2}\Bigl(\frac{R}{n^2} + \frac{C}{m^2}\Bigr) - \frac{1-2\lambda}{24A^2}\Bigl(\frac{R_3}{n^2} + \frac{C_3}{m^2}\Bigr) + \frac{1-6A}{96A^3}\Bigl(\frac{R_4}{n^3} + \frac{C_4}{m^3}\Bigr) + \widetilde{O}(n^{-1/2})\Bigr). \tag{19}$$