Modularity spectra, eigen-subspaces, and structure of weighted graphs

Marianna Bolla *
Institute of Mathematics, Budapest University of Technology and Economics, and Inter-University Centre for Telecommunications and Informatics, Debrecen

Abstract

The role of the normalized modularity matrix in finding homogeneous cuts will be presented. We also discuss the testability of the structural eigenvalues and that of the subspace spanned by the corresponding eigenvectors of this matrix. In the presence of a spectral gap between the $k-1$ largest absolute value eigenvalues and the remainder of the spectrum, this in turn implies the testability of the sum of the inner variances of the $k$ clusters that are obtained by applying the $k$-means algorithm to the appropriately chosen vertex representatives.

Key words: Normalized modularity, Volume regularity, Spectral clustering, Testable weighted graph parameters

* Research supported in part by the Hungarian National Research Grants OTKA 76481 and OTKA-KTIA 77778; further, by the TÁMOP-4.2.2.C-11/1/KONV-2012-0001 project. The latter project has been supported by the European Union, co-financed by the European Social Fund.
Email address: [email protected] (Marianna Bolla).

1 Introduction

The purpose of this paper is to summarize the spectral properties, and the testability of the spectrum and spectral subspaces, of the normalized modularity matrix, introduced in [9] to find regular vertex partitions. We will generalize the Laplacian-based spectral clustering methods to recover so-called volume regular cluster pairs such that the information flow between the pairs and within the clusters is as homogeneous as possible. For this purpose, we take into consideration both ends of the normalized Laplacian spectrum, i.e., the large absolute value, so-called structural eigenvalues of our normalized modularity matrix, introduced just for this convenience.

In Theorem 3, we estimate the constant of volume regularity in terms of the gap between the structural and the other eigenvalues, and in terms of the $k$-variance of the optimal vertex representatives constructed from the eigenvectors corresponding to the structural eigenvalues. Here we give a more detailed proof of this statement than in [10]. This theorem implies that for a general edge-weighted graph, the existence of $k-1$ structural eigenvalues of the normalized modularity matrix, separated from 0, is an indication of a $k$-cluster structure such that the cluster pairs are volume regular with a constant depending on the spectral gap and on the above $k$-variance. The clusters themselves can be recovered by applying the $k$-means algorithm to the vertex representatives. Hence, Theorem 3 implies that spectral clustering of the vertices into $k$ parts gives a satisfactory partition in the sense of volume regularity.

Furthermore, in Theorems 8 and 10, we prove the testability of the structural eigenvalues and of the corresponding eigen-subspace of the normalized modularity matrix in the sense of [12]. In view of this, spectral clustering methods can be performed on a smaller part of the underlying graph and still give a good approximation of the cluster structure.

2 Preliminaries

Throughout the paper, we use the general framework of an edge-weighted graph. Let $G = G_n = (V, W)$ be an edge-weighted graph on the vertex-set $V$ ($|V| = n$) with an $n \times n$ symmetric weight-matrix $W$ of non-negative real entries and zero diagonal. We will call the numbers $d_i = \sum_{j=1}^{n} w_{ij}$ ($i = 1, \dots, n$) generalized degrees, and the diagonal matrix $D = \mathrm{diag}(d_1, \dots, d_n)$ the degree matrix. In this and the next section, without loss of generality, $\mathrm{Vol}(V) = 1$ will be assumed, where the volume of a vertex-subset $U \subseteq V$ is $\mathrm{Vol}(U) = \sum_{i \in U} d_i$. In the sequel, we only consider connected graphs, which means that $W$ is irreducible.

In [9], we defined the normalized version of the modularity matrix (introduced in [21]) as $M_D = D^{-1/2} W D^{-1/2} - \sqrt{d}\sqrt{d}^{T}$, where $\sqrt{d} = (\sqrt{d_1}, \dots, \sqrt{d_n})^{T}$, and we called it the normalized modularity matrix. The spectrum of this matrix lies in the $[-1, 1]$ interval, and 0 is always an eigenvalue with unit-norm eigenvector $\sqrt{d}$. Indeed, in [5] we proved that 1 is a single eigenvalue of $D^{-1/2} W D^{-1/2}$ with corresponding unit-norm eigenvector $\sqrt{d}$, provided our graph is connected. This becomes a zero eigenvalue of $M_D$ with the same eigenvector, whence 1 cannot be an eigenvalue of $M_D$ if $G$ is connected. In fact, the introduction of this matrix is rather technical: the spectral gap, as well as Lemma 1 and Theorem 3, can be formulated more conveniently with it. It can also be obtained from the normalized Laplacian by subtracting the latter from the identity and depriving the result of its trivial factor. The normalized Laplacian was used for spectral clustering in several papers (e.g., [3,5,6,14,20]), the idea of which can be summarized by means of the spectral decomposition of the normalized modularity matrix.
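To fix ideas, here is a minimal numerical sketch (in Python with NumPy; both the library choice and the small weight matrix are illustrative assumptions of this note, not objects from the paper) that builds $M_D$ and checks the two spectral facts just stated:

```python
import numpy as np

# A small symmetric weight matrix with zero diagonal (arbitrary connected example).
W = np.array([[0., 2., 1., 0.],
              [2., 0., 3., 1.],
              [1., 3., 0., 2.],
              [0., 1., 2., 0.]])

d = W.sum(axis=1)            # generalized degrees
W = W / d.sum()              # rescale so that Vol(V) = sum of degrees = 1
d = W.sum(axis=1)            # degrees after rescaling

sqrt_d = np.sqrt(d)
D_inv_sqrt = np.diag(1.0 / sqrt_d)

# Normalized modularity matrix: M_D = D^{-1/2} W D^{-1/2} - sqrt(d) sqrt(d)^T
M_D = D_inv_sqrt @ W @ D_inv_sqrt - np.outer(sqrt_d, sqrt_d)

# sqrt(d) is an eigenvector with eigenvalue 0, and the spectrum lies in [-1, 1]:
assert np.allclose(M_D @ sqrt_d, 0.0)
eigvals = np.linalg.eigvalsh(M_D)
assert eigvals.min() >= -1 - 1e-12 and eigvals.max() <= 1 + 1e-12
```

The rescaling step enforces the standing assumption $\mathrm{Vol}(V) = 1$.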
We introduce the following notation: the weighted cut between the vertex-subsets $X, Y \subseteq V$ is $w(X, Y) = \sum_{i \in X} \sum_{j \in Y} w_{ij}$. We will frequently refer to the following facts.

(a) The spectral decomposition of $M_D$ solves the following quadratic placement problem. For a given positive integer $k$ ($1 < k < n$), we want to minimize $Q_k = \sum_{i<j} w_{ij} \| r_i - r_j \|^2$ under the conditions

  $\sum_{i=1}^{n} d_i r_i r_i^T = I_{k-1}$ and $\sum_{i=1}^{n} d_i r_i = 0$,   (1)

where the vectors $r_1, \dots, r_n$ are $(k-1)$-dimensional representatives of the vertices, forming the row vectors of the $n \times (k-1)$ matrix $X$. Denote the eigenvalues of $M_D$, in decreasing order, by $1 > \lambda_1 \ge \dots \ge \lambda_n \ge -1$, with corresponding unit-norm, pairwise orthogonal eigenvectors $u_1, \dots, u_n$. In [5], we proved that the minimum of $Q_k$ subject to (1) is $k - 1 - \sum_{i=1}^{k-1} \lambda_i$ and is attained by the representation whose optimum vertex representatives $r_1^*, \dots, r_n^*$ are the row vectors of the matrix $X^* = (D^{-1/2} u_1, \dots, D^{-1/2} u_{k-1})$. Instead of $X$, the augmented $n \times k$ matrix $\tilde{X}$ can be used as well, obtained from $X$ by inserting the column $x_0 = \mathbf{1}$ of all 1's. In fact, $x_0 = D^{-1/2} u_0$, where $u_0 = \sqrt{d}$ is the eigenvector corresponding to the eigenvalue 1 of $D^{-1/2} W D^{-1/2}$. Then

  $Q_k = \mathrm{tr}\, (D^{1/2} \tilde{X})^T (I_n - D^{-1/2} W D^{-1/2}) (D^{1/2} \tilde{X})$,

and minimizing $Q_k$ under the constraint (1) is equivalent to minimizing the above expression subject to $\tilde{X}^T D \tilde{X} = I_k$. This problem is the continuous relaxation of minimizing

  $Q_k(P_k) = \mathrm{tr}\, (D^{1/2} \tilde{X}(P_k))^T (I_n - D^{-1/2} W D^{-1/2}) (D^{1/2} \tilde{X}(P_k))$

over the set $\mathcal{P}_k$ of $k$-partitions $P_k = (V_1, \dots, V_k)$ of the vertices, where $P_k$ is planted into $\tilde{X}$ in the way that the columns of $\tilde{X}(P_k)$ are the so-called normalized partition-vectors belonging to $P_k$: the coordinates of the $i$th column are zeros, except those indexing vertices of $V_i$, which are equal to $\frac{1}{\sqrt{\mathrm{Vol}(V_i)}}$ ($i = 1, \dots, k$). In fact, this is the normalized cut problem, which is discussed in [20] for $k = 2$, and in [3] and [6] for general $k$, and the solution is based on the above continuous relaxation.
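As a concrete illustration of (a), the following sketch (continuing the hypothetical NumPy setup above, with `M_D` and `d` as computed there) extracts the optimum representatives and the minimum of $Q_k$:

```python
import numpy as np

def optimal_representatives(M_D, d, k):
    """Rows of X* = (D^{-1/2} u_1, ..., D^{-1/2} u_{k-1}), where u_1, ..., u_{k-1}
    are unit-norm eigenvectors of M_D belonging to its k-1 largest eigenvalues."""
    eigvals, eigvecs = np.linalg.eigh(M_D)     # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]          # indices in decreasing order
    U = eigvecs[:, order[:k - 1]]              # u_1, ..., u_{k-1}
    X_star = U / np.sqrt(d)[:, None]           # apply D^{-1/2} row-wise
    # Minimum of Q_k subject to (1): k - 1 - (sum of the k-1 largest eigenvalues)
    Q_k_min = (k - 1) - eigvals[order[:k - 1]].sum()
    return X_star, Q_k_min
```

Appending the trivial column $x_0 = \mathbf{1}$ reproduces the augmented matrix $\tilde{X}$; it changes neither the constraint nor the minimum.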
(b) Now, let us maximize the normalized Newman-Girvan modularity of $G$ induced by $P_k$, defined in [9] as

  $M_k(P_k) = \sum_{a=1}^{k} \frac{1}{\mathrm{Vol}(V_a)} \sum_{i,j \in V_a} (w_{ij} - d_i d_j) = \sum_{a=1}^{k} \frac{w(V_a, V_a)}{\mathrm{Vol}(V_a)} - 1$,

over the set of $k$-partitions of $V$. It is easy to see that $M_k(P_k) = k - 1 - Q_k(P_k)$, and hence the above task has the same spectral relaxation as the normalized cut problem. Let $M_k = \max_{P_k \in \mathcal{P}_k} M_k(P_k)$ denote the maximum $k$-way normalized Newman-Girvan modularity of the weighted graph $G$.

(c) Finally, from the above considerations it is straightforward that $M_k \le \sum_{i=1}^{k-1} \lambda_i$; equivalently, the minimum normalized $k$-way cut is at least the sum of the $k-1$ smallest positive normalized Laplacian eigenvalues. As for the minimum normalized $k$-way cut, in [6] we also gave an upper estimate: constant times the sum of the $k-1$ smallest positive normalized Laplacian eigenvalues, where the constant depends on the so-called $k$-variance of the vertex representatives, defined in the following way:

  $S_k^2(X) = \min_{P_k \in \mathcal{P}_k} S_k^2(X, P_k) = \min_{P_k = (V_1, \dots, V_k)} \sum_{a=1}^{k} \sum_{j \in V_a} d_j \| r_j - c_a \|^2$,   (2)

where $c_a = \frac{1}{\mathrm{Vol}(V_a)} \sum_{j \in V_a} d_j r_j$ is the weighted center of cluster $V_a$, and $r_1, \dots, r_n \in \mathbb{R}^{k-1}$ are the rows of $X$. (The augmented $\tilde{X}$ would give the same $k$-variance.) The constant of our estimation depended on $S_k^2(X^*)$, and it was close to 1 if this $k$-variance of the optimum $(k-1)$-dimensional vertex representatives was small enough. Note that $S_k^2(X, P_k)$ is the objective function of the weighted $k$-means algorithm.

In this way, we showed that large positive eigenvalues of the normalized modularity matrix are responsible for clusters with high intra- and low inter-cluster densities. Likewise, when $Q_k(P_k)$ is maximized over $\mathcal{P}_k$ instead of minimized, small negative eigenvalues of the normalized modularity matrix are responsible for clusters with low intra- and high inter-cluster densities (see [9]). Our idea is that by taking into account eigenvalues from both ends of the normalized modularity spectrum, we can recover so-called regular cluster pairs. For this purpose, we use the notion of volume regularity, to be introduced in the next section.

3 Normalized modularity and volume regularity

With the normalized modularity matrix, the well-known Expander Mixing Lemma (for simple graphs see, e.g., [17]) is formulated for edge-weighted graphs in the following way (see [8]).

Lemma 1 Provided $\mathrm{Vol}(V) = 1$, for all $X, Y \subseteq V$,

  $| w(X,Y) - \mathrm{Vol}(X)\mathrm{Vol}(Y) | \le \| M_D \| \cdot \sqrt{\mathrm{Vol}(X)\mathrm{Vol}(Y)}$,

where $\| M_D \|$ denotes the spectral norm of the normalized modularity matrix of $G = (V, W)$.

Since the spectral gap of $G$ is $1 - \| M_D \|$, a large spectral gap indicates small discrepancy, a quasi-random property discussed in [15]. If there is a gap not at the ends of the spectrum, we want to partition the vertices into clusters so that a relation similar to the above property holds for the edge-densities between the cluster pairs. For this purpose, we use a slightly modified version of the notion of volume regularity introduced in [2].

Definition 2 Let $G = (V, W)$ be an edge-weighted graph with $\mathrm{Vol}(V) = 1$. The disjoint pair $A, B \subseteq V$ is $\alpha$-volume regular if for all $X \subseteq A$, $Y \subseteq B$ we have

  $| w(X,Y) - \rho(A,B)\, \mathrm{Vol}(X)\mathrm{Vol}(Y) | \le \alpha \sqrt{\mathrm{Vol}(A)\mathrm{Vol}(B)}$,

where $\rho(A,B) = \frac{w(A,B)}{\mathrm{Vol}(A)\mathrm{Vol}(B)}$ is the relative inter-cluster density of $(A,B)$.
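Before moving on, it may help to see how the $k$-variance (2) is evaluated in practice. The sketch below implements a plain Lloyd-type weighted $k$-means (an assumption of this note; the paper only presupposes some weighted $k$-means routine) on the rows of a representative matrix:

```python
import numpy as np

def weighted_kmeans(X, d, k, n_iter=100, seed=0):
    """Weighted k-means minimizing S_k^2(X, P_k) of (2): cluster the rows
    r_1, ..., r_n of X using the vertex weights d_1, ..., d_n."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # assignment step: each representative goes to its nearest center
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for a in range(k):
            mask = labels == a
            if mask.any():   # update step: weighted center c_a of cluster V_a
                centers[a] = (d[mask, None] * X[mask]).sum(0) / d[mask].sum()
    # objective S_k^2(X, P_k) = sum_a sum_{j in V_a} d_j ||r_j - c_a||^2
    S2 = sum(d[labels == a] @ ((X[labels == a] - centers[a]) ** 2).sum(-1)
             for a in range(k))
    return labels, S2
```

Running it on the rows of $X^*$ from the earlier sketch yields the partition and the quantity $S_k^2(X^*)$ that Theorem 3 below refers to.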
In the ideal $k$-cluster case, let us consider the following generalized random simple graph model: given the partition $(V_1, \dots, V_k)$ of $V$ ($|V| = n$), vertices $i \in V_a$ and $j \in V_b$ are connected with probability $p_{ab}$, independently of each other ($1 \le a, b \le k$). We can think of the probability $p_{ab}$ as the inter-cluster density of the pair $(V_a, V_b)$. Since generalized random graphs can be viewed as edge-weighted graphs with a special block-structure burdened with random noise, based on [7] we are able to give the following spectral characterization of them. Fixing $k$ and letting $n$ tend to infinity in such a way that the cluster sizes grow at the same rate, there exists a positive number $\theta < 1$, independent of $n$, such that for every $0 < \tau < 1/2$ there are exactly $k-1$ eigenvalues of $M_D$ greater than $\theta - n^{-\tau}$, while all the others are at most $n^{-\tau}$ in absolute value. Further, the $k$-variance of the vertex representatives constructed by the $k-1$ transformed structural eigenvectors is $\mathcal{O}(n^{-2\tau})$, and the cluster pairs are $\alpha$-volume regular with any small $\alpha$, almost surely. Note that the generalized quasirandom graphs defined in [18] are deterministic counterparts of generalized random graphs with the same spectral properties.

Theorem 3 Let $G = (V, W)$ be a connected edge-weighted graph on $n$ vertices, with generalized degrees $d_1, \dots, d_n$ and degree matrix $D$. Assume that $\mathrm{Vol}(V) = 1$ and that there are no dominant vertices, i.e., $d_i = \Theta(1/n)$, $i = 1, \dots, n$, as $n \to \infty$. Let the eigenvalues of $M_D$, enumerated in decreasing absolute values, be

  $1 \ge |\mu_1| \ge \dots \ge |\mu_{k-1}| > \varepsilon \ge |\mu_k| \ge \dots \ge |\mu_n| = 0$.

The partition $(V_1, \dots, V_k)$ of $V$ is defined so that it minimizes the weighted $k$-variance $S_k^2(X^*)$ (defined in (2)) of the optimum vertex representatives, obtained as row vectors of the $n \times (k-1)$ matrix $X^*$ of column vectors $D^{-1/2} u_i$, where $u_i$ is the unit-norm eigenvector corresponding to $\mu_i$ ($i = 1, \dots, k-1$). Assume that there is a constant $0 < K \le 1/k$ such that $|V_i| \ge Kn$, $i = 1, \dots, k$. Then, with the notation $s^2 = S_k^2(X^*)$, the $(V_i, V_j)$ pairs are $\mathcal{O}(\sqrt{2k}\, s + \varepsilon)$-volume regular ($i \ne j$), and for the clusters $V_i$ ($i = 1, \dots, k$) the following holds: for all $X, Y \subset V_i$,

  $| w(X,Y) - \rho(V_i)\, \mathrm{Vol}(X)\mathrm{Vol}(Y) | = \mathcal{O}(\sqrt{2k}\, s + \varepsilon)\, \mathrm{Vol}(V_i)$,

where $\rho(V_i) = \frac{w(V_i, V_i)}{\mathrm{Vol}^2(V_i)}$ is the relative intra-cluster density of $V_i$.

Note that in Section 2 we indexed the eigenvalues of $M_D$ in non-increasing order and denoted them by $\lambda$'s. The set of all $\lambda_i$'s is the same as that of all $\mu_i$'s; nonetheless, we need a different notation for the eigenvalues indexed in decreasing order of their absolute values. Recall that 1 cannot be an eigenvalue of $M_D$ if $G$ is connected. Consequently, $|\mu_1| = 1$ is possible if and only if $\mu_1 = -1$, i.e., if $G$ is bipartite. For example, if the conditions of the above theorem hold with $k = 2$ and $\mu_1 = -1$ ($|\mu_i| \le \varepsilon$, $i \ge 2$), then our graph is a bipartite expander, discussed in detail in [1].

For the proof we need the definition of the cut norm of a matrix (see, e.g., [16]) and the relation between it and the spectral norm.

Definition 4 The cut norm of the real matrix $A$ with row-set $Row$ and column-set $Col$ is

  $\| A \|_\square = \max_{R \subseteq Row,\, C \subseteq Col} \Big| \sum_{i \in R} \sum_{j \in C} a_{ij} \Big|$.

Lemma 5 For the $m \times n$ real matrix $A$,

  $\| A \|_\square \le \sqrt{mn}\, \| A \|$,

where the right-hand side contains the spectral norm, i.e., the largest singular value of $A$.
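Before the proof, the inequality of Lemma 5 can be sanity-checked numerically. The sketch below (again hypothetical NumPy code; the brute-force enumeration is feasible only for tiny matrices) compares the exact cut norm with the spectral-norm bound:

```python
import numpy as np
from itertools import product

def cut_norm_bruteforce(A):
    """Exact cut norm: enumerate all row subsets R and column subsets C
    as 0-1 indicator vectors (exponential cost; tiny matrices only)."""
    m, n = A.shape
    best = 0.0
    for r in product([0, 1], repeat=m):
        for c in product([0, 1], repeat=n):
            best = max(best, abs(np.array(r) @ A @ np.array(c)))
    return best

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 5))
spectral = np.linalg.norm(A, 2)              # largest singular value
assert cut_norm_bruteforce(A) <= np.sqrt(A.size) * spectral + 1e-9
```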
PROOF.

  $\| A \|_\square = \max_{x \in \{0,1\}^m,\, y \in \{0,1\}^n} | x^T A y | = \max_{x \in \{0,1\}^m,\, y \in \{0,1\}^n} \Big| \Big( \tfrac{x}{\|x\|} \Big)^{T} A \Big( \tfrac{y}{\|y\|} \Big) \Big| \cdot \|x\| \cdot \|y\| \le \sqrt{mn} \max_{\|x\| = 1,\, \|y\| = 1} | x^T A y | = \sqrt{mn}\, \| A \|$,

since $\|x\| \le \sqrt{m}$ for $x \in \{0,1\}^m$, and $\|y\| \le \sqrt{n}$ for $y \in \{0,1\}^n$.  □

The definition of the cut norm and the result of the above lemma naturally extend to symmetric matrices with $m = n$, the spectral norm of which is the maximum absolute value of their eigenvalues.

PROOF (Theorem 3). Recall that the spectrum of $D^{-1/2} W D^{-1/2}$ differs from that of $M_D$ only in the following: it contains the eigenvalue $\mu_0 = 1$ with corresponding unit-norm eigenvector $u_0 = \sqrt{d}$ instead of the eigenvalue 0 of $M_D$ with the same eigenvector. If $G$ is connected, 1 is a single eigenvalue. The optimum $(k-1)$-dimensional representatives of the vertices are the row vectors of the matrix $X^* = (x_1^*, \dots, x_{k-1}^*)$, where $x_i^* = D^{-1/2} u_i$ ($i = 1, \dots, k-1$). The representatives can as well be regarded as $k$-dimensional ones, since inserting the vector $x_0^* = D^{-1/2} u_0 = \mathbf{1}$ will not change the $k$-variance $s^2 = S_k^2(X^*)$. Assume that the minimum $k$-variance is attained on the $k$-partition $(V_1, \dots, V_k)$ of the vertices. By an easy analysis of variance argument (see [5]) it follows that

  $s^2 = \sum_{i=0}^{k-1} \mathrm{dist}^2(u_i, F)$,   (3)

where $F = \mathrm{Span}\{ D^{1/2} z_1, \dots, D^{1/2} z_k \}$ with the so-called normalized partition vectors $z_1, \dots, z_k$ of coordinates $z_{ji} = \frac{1}{\sqrt{\mathrm{Vol}(V_i)}}$ if $j \in V_i$, and 0 otherwise ($i = 1, \dots, k$). Note that the vectors $D^{1/2} z_1, \dots, D^{1/2} z_k$ form an orthonormal system. By considerations proved in [5], we can find another orthonormal system $v_0, \dots, v_{k-1} \in F$ such that

  $s^2 \le \sum_{i=0}^{k-1} \| u_i - v_i \|^2 \le 2 s^2$   (4)

($v_0 = u_0$, since $u_0 \in F$). We approximate the matrix $D^{-1/2} W D^{-1/2} = \sum_{i=0}^{n-1} \mu_i u_i u_i^T$ by the rank $k$ matrix $\sum_{i=0}^{k-1} \mu_i v_i v_i^T$ with the following accuracy (in spectral norm):

  $\Big\| \sum_{i=0}^{n-1} \mu_i u_i u_i^T - \sum_{i=0}^{k-1} \mu_i v_i v_i^T \Big\| \le \sum_{i=0}^{k-1} |\mu_i| \cdot \| u_i u_i^T - v_i v_i^T \| + \Big\| \sum_{i=k}^{n-1} \mu_i u_i u_i^T \Big\|$,   (5)

which can be estimated from above by $\sum_{i=0}^{k-1} \sin \alpha_i + \varepsilon \le \sum_{i=0}^{k-1} \| u_i - v_i \| + \varepsilon \le \sqrt{2k}\, s + \varepsilon$, where $\alpha_i$ is the angle between $u_i$ and $v_i$, for which $\sin \frac{\alpha_i}{2} = \frac{1}{2} \| u_i - v_i \|$ holds, $i = 0, \dots, k-1$.
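For completeness, the last chain of inequalities can be unpacked as follows (a routine step, spelled out here because the original only states the result):

```latex
% sin(alpha_i) <= ||u_i - v_i||, by the half-angle identity:
\sin\alpha_i = 2\sin\tfrac{\alpha_i}{2}\cos\tfrac{\alpha_i}{2}
            \le 2\sin\tfrac{\alpha_i}{2} = \|u_i - v_i\| ,
% and the Cauchy--Schwarz inequality together with (4) gives
\sum_{i=0}^{k-1}\|u_i - v_i\|
   \le \sqrt{k}\,\Big(\sum_{i=0}^{k-1}\|u_i - v_i\|^2\Big)^{1/2}
   \le \sqrt{k}\cdot\sqrt{2s^2} = \sqrt{2k}\, s .
```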
Based on these considerations and the relation between the cut norm and the spectral norm (see Lemma 5), the densities to be estimated in the defining formula of volume regularity can be written in terms of stepwise constant vectors in the following way. The vectors $y_i := D^{-1/2} v_i$ are stepwise constant on the partition $(V_1, \dots, V_k)$, $i = 0, \dots, k-1$. The matrix $\sum_{i=0}^{k-1} \mu_i y_i y_i^T$ is therefore a symmetric block-matrix on $k \times k$ blocks belonging to the above partition of the vertices. Let $\hat{w}_{ab}$ denote its entries in the $(a,b)$ block ($a, b = 1, \dots, k$). Using (5), the rank $k$ approximation of the matrix $W$ is performed with the following accuracy of the perturbation $E$:

  $\| E \| = \Big\| W - D \Big( \sum_{i=0}^{k-1} \mu_i y_i y_i^T \Big) D \Big\| = \Big\| D^{1/2} \Big( D^{-1/2} W D^{-1/2} - \sum_{i=0}^{k-1} \mu_i v_i v_i^T \Big) D^{1/2} \Big\|$.

Therefore, the entries of $W$, for $i \in V_a$ and $j \in V_b$, can be decomposed as $w_{ij} = d_i d_j \hat{w}_{ab} + \eta_{ij}$, where the cut norm of the $n \times n$ symmetric error matrix $E = (\eta_{ij})$ restricted to $V_a \times V_b$ (otherwise containing all zero entries), denoted by $E_{ab}$, is estimated as follows:

  $\| E_{ab} \|_\square \le n \| E_{ab} \| \le n \cdot \| D_a^{1/2} \| \cdot (\sqrt{2k}\, s + \varepsilon) \cdot \| D_b^{1/2} \|$
  $\le n \cdot \sqrt{c_1 \tfrac{\mathrm{Vol}(V_a)}{|V_a|}} \cdot \sqrt{c_1 \tfrac{\mathrm{Vol}(V_b)}{|V_b|}} \cdot (\sqrt{2k}\, s + \varepsilon)$
  $= c_1 \sqrt{\mathrm{Vol}(V_a)} \sqrt{\mathrm{Vol}(V_b)}\, (\sqrt{2k}\, s + \varepsilon) \cdot \sqrt{\tfrac{n}{|V_a|}} \cdot \sqrt{\tfrac{n}{|V_b|}}$
  $\le c_1 \tfrac{1}{K} \sqrt{\mathrm{Vol}(V_a)} \sqrt{\mathrm{Vol}(V_b)}\, (\sqrt{2k}\, s + \varepsilon)$
  $= c \sqrt{\mathrm{Vol}(V_a)} \sqrt{\mathrm{Vol}(V_b)}\, (\sqrt{2k}\, s + \varepsilon)$.

Here the diagonal matrix $D_a$ contains the diagonal part of $D$ restricted to $V_a$, otherwise zeros, and the constant $c$ does not depend on $n$. Consequently, for $a, b = 1, \dots, k$ and $X \subseteq V_a$, $Y \subseteq V_b$:

  $| w(X,Y) - \rho(V_a, V_b)\, \mathrm{Vol}(X)\mathrm{Vol}(Y) |$
  $= \Big| \sum_{i \in X} \sum_{j \in Y} (d_i d_j \hat{w}_{ab} + \eta_{ij}) - \tfrac{\mathrm{Vol}(X)\mathrm{Vol}(Y)}{\mathrm{Vol}(V_a)\mathrm{Vol}(V_b)} \sum_{i \in V_a} \sum_{j \in V_b} (d_i d_j \hat{w}_{ab} + \eta_{ij}) \Big|$
  $= \Big| \sum_{i \in X} \sum_{j \in Y} \eta_{ij} - \tfrac{\mathrm{Vol}(X)\mathrm{Vol}(Y)}{\mathrm{Vol}(V_a)\mathrm{Vol}(V_b)} \sum_{i \in V_a} \sum_{j \in V_b} \eta_{ij} \Big| \le 2c\, (\sqrt{2k}\, s + \varepsilon) \sqrt{\mathrm{Vol}(V_a)\mathrm{Vol}(V_b)}$,

which gives the required statement both in the $a = b$ and in the $a \ne b$ case.  □

Note that in the $k = 2$ special case, due to a theorem proved in [5], the 2-variance of the optimum 1-dimensional representatives can be directly estimated from above by the gap between the two largest absolute value eigenvalues of $M_D$, and hence the statement of Theorem 3 simplifies; see [8]. For a general $k$, we can make the following considerations.

Assume that the normalized modularity spectrum (with decreasing absolute values) of $G = (V, W)$ satisfies

  $1 \ge |\mu_1| \ge \dots \ge |\mu_{k-1}| \ge \theta > \varepsilon \ge |\mu_k| \ge \dots \ge |\mu_n| = 0$.

Our purpose is to estimate $s$ with the gap $\delta := \theta - \varepsilon$. We will use the notation of the proof of Theorem 3 and apply the results of [4] for the perturbation of spectral subspaces of the symmetric matrices

  $A = \sum_{i=0}^{n-1} \mu_i u_i u_i^T$ and $B = \sum_{i=0}^{k-1} \mu_i v_i v_i^T$

in the following situation. The subsets $S_1 = \{ \mu_k, \dots, \mu_{n-1} \}$ and $S_2 = \{ \mu_0, \dots, \mu_{k-1} \}$ of the eigenvalues of $D^{-1/2} W D^{-1/2}$ are separated by an annulus, where $\mathrm{dist}(S_1, S_2) = \delta > 0$. Denote by $P_A$ and $P_B$ the projections onto the spectral subspaces of $A$ and $B$ spanned by the eigenvectors corresponding to the eigenvalues in $S_1$ and $S_2$, respectively:

  $P_A(S_1) = \sum_{j=k}^{n-1} u_j u_j^T$, $P_B(S_2) = \sum_{i=0}^{k-1} v_i v_i^T$.

Then Theorem VII.3.4 of [4] implies that

  $\| P_A P_B \|_F \le \frac{1}{\delta} \| P_A (A - B) P_B \|_F$,   (6)

where $\| \cdot \|_F$ denotes the Frobenius norm. On the left-hand side, $\| P_A P_B \|_F = \sqrt{\sum_{i=0}^{k-1} \sin^2 \alpha_i}$, and in view of $\| u_i - v_i \| = 2 \sin \frac{\alpha_i}{2}$ and (4), this is between $\frac{\sqrt{3}}{2}\, s$ and $s$. On the right-hand side,

  $P_A A P_B - P_A B P_B = (P_A A) P_B - P_A (P_B B) = \sum_{i=0}^{k-1} \sum_{j=k}^{n-1} (\mu_j - \mu_i)\, u_j^T (u_i - v_i)\, u_j v_i^T$,

where the Frobenius norm of the rank-1 matrices $u_j v_i^T$ is 1, and the inner product $u_j^T (u_i - v_i)$ is the smaller, the closer the $u_i$'s and the $v_i$'s are ($i = 1, \dots, k-1$). Therefore, by the inequality (6), $s$ is the smaller, the larger $\delta$ is and the closer the differences $|\mu_j - \mu_i|$ ($i = 0, \dots, k-1$; $j = k, \dots, n-1$) are to $\delta$. If $|\mu_k| = \varepsilon$ is small, then $|\mu_1|, \dots, |\mu_{k-1}|$ should be close to each other ($\mu_0 = 1$ does not play an important role, because $u_0 = v_0$).
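The Frobenius-norm bound (6) is easy to probe numerically. The following sketch (hypothetical NumPy code; the test matrices, the perturbation size, and the split point are arbitrary choices of this note) checks it for spectral projections of two symmetric matrices whose relevant eigenvalue sets are separated by a distance $\delta$:

```python
import numpy as np

def spectral_projection(M, selector):
    """Projection onto the span of eigenvectors of symmetric M whose
    eigenvalues satisfy the boolean selector."""
    vals, vecs = np.linalg.eigh(M)
    Uc = vecs[:, selector(vals)]
    return Uc @ Uc.T

rng = np.random.default_rng(2)
n = 8
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = A + 0.05 * rng.standard_normal((n, n)); B = (B + B.T) / 2

t = 0.0                                            # split point between S_1 and S_2
P_A = spectral_projection(A, lambda v: v < t)      # eigenvalues of A below t
P_B = spectral_projection(B, lambda v: v >= t)     # eigenvalues of B at or above t
vals_A = np.linalg.eigvalsh(A); vals_B = np.linalg.eigvalsh(B)
delta = vals_B[vals_B >= t].min() - vals_A[vals_A < t].max()   # dist(S_1, S_2)

lhs = np.linalg.norm(P_A @ P_B, 'fro')
rhs = np.linalg.norm(P_A @ (A - B) @ P_B, 'fro') / delta
assert lhs <= rhs + 1e-9
```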
4 Testability of the normalized modularity spectrum and eigen-subspaces

The authors of [12] defined the testability of simple graph parameters and proved equivalent notions of this testability. They also anticipated that their results remain valid if one considers weighted graph sequences $(G_n)$ with edge-weights in the $[0,1]$ interval and no dominant vertex-weights $\alpha_i(G_n) > 0$ ($i = 1, \dots, n$), i.e., $\max_i \frac{\alpha_i(G_n)}{\alpha_{G_n}} \to 0$ as $n \to \infty$, where $\alpha_{G_n} = \sum_{i=1}^{n} \alpha_i(G_n)$. To this end, in [11], we slightly modified the definition of a testable graph parameter for weighted graphs in the following way.

Definition 6 A weighted graph parameter $f$ is testable if for every $\varepsilon > 0$ there is a positive integer $m < n$ such that if $G_n$ satisfies $\max_i \frac{\alpha_i(G_n)}{\alpha_{G_n}} \le \frac{1}{m}$, then

  $P(| f(G_n) - f(\eta(m, G_n)) | > \varepsilon) \le \varepsilon$,

where $\eta(m, G_n)$ is a random simple graph on $m$ vertices selected randomly from $G_n$ in the following manner: $m$ vertices of $G_n$ are selected with replacement, with respective probabilities proportional to the vertex-weights; given the selected vertex-subset, the edges come into existence conditionally independently, with probabilities equal to the edge-weights.

By the above definition, a testable weighted graph parameter can be consistently estimated based on a fairly large sample. Based on the results of [12] for simple graphs, in [11] we established equivalent statements of this testability, from among which we will use the following.

Fact 7 Let $f$ be a testable weighted graph parameter. Then for every convergent weighted graph sequence $(G_n)$ with no dominant vertex-weights, $f(G_n)$ is also convergent as $n \to \infty$.

The notion of the convergence of a weighted graph sequence is defined in [12], where the authors also describe the limit object as a symmetric, measurable function $W: [0,1] \times [0,1] \to [0,1]$, called a graphon. The so-called cut distance between the graphons $W$ and $U$ is $\delta_\square(W, U) = \inf_\nu \| W - U^\nu \|_\square$, where the cut norm of the graphon $W$ is defined by

  $\| W \|_\square = \sup_{S, T \subseteq [0,1]} \Big| \int_{S \times T} W(x,y)\, dx\, dy \Big|$,

and the above infimum is taken over all measure-preserving bijections $\nu: [0,1] \to [0,1]$, while $U^\nu$ denotes the transformed $U$ after performing the same measure-preserving bijection $\nu$ on both sides of the unit square. Graphons are considered modulo measure-preserving maps, and by a graphon the whole equivalence class is understood. In this way, to a convergent weighted graph sequence $(G_n)$ there is a unique limit graphon $W$ such that $\delta_\square(G_n, W) \to 0$ as $n \to \infty$, where $\delta_\square(G_n, W)$ is defined as $\delta_\square(W_{G_n}, W)$ with the step-function graphon $W_{G_n}$ assigned to $G_n$ in the following way: the sides of the unit square are divided into intervals $I_1, \dots, I_n$ of lengths $\alpha_1(G_n)/\alpha_{G_n}, \dots, \alpha_n(G_n)/\alpha_{G_n}$, and over the rectangle $I_i \times I_j$ the step-function takes the value $w_{ij}(G_n)$.
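The sampling scheme $\eta(m, G_n)$ of Definition 6 translates directly into code; below is a sketch under the assumption of NumPy and edge-weights in $[0,1]$ (so that they can act as edge probabilities), with hypothetical argument names:

```python
import numpy as np

def eta(m, W, alpha, rng=None):
    """Sample the random simple graph eta(m, G_n) of Definition 6 from a
    weighted graph with edge-weight matrix W (entries in [0,1], zero diagonal)
    and positive vertex-weights alpha. Returns a 0-1 adjacency matrix."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(alpha)
    # m vertices chosen with replacement, proportionally to the vertex-weights
    idx = rng.choice(n, size=m, replace=True, p=alpha / alpha.sum())
    # conditionally independent edges with probabilities given by the edge-weights
    A = np.zeros((m, m), dtype=int)
    for a in range(m):
        for b in range(a + 1, m):
            A[a, b] = A[b, a] = rng.random() < W[idx[a], idx[b]]
    return A
```

Since the diagonal of $W$ is zero, repeated selections of the same vertex never produce an edge between the copies, so the sample is indeed a simple graph.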
