A LINEAR k-FOLD CHEEGER INEQUALITY

FRANKLIN KENTER AND MARY RADCLIFFE

arXiv preprint [math.CO], 27 Feb 2015

Abstract. Given an undirected graph G, the classical Cheeger constant, h_G, measures the optimal partition of the vertices into 2 parts with relatively few edges between them, based upon the sizes of the parts. The well-known Cheeger's inequality states that λ_1/2 ≤ h_G ≤ √(2λ_1), where λ_1 is the minimum nontrivial eigenvalue of the normalized Laplacian matrix.

Recent work has generalized the concept of the Cheeger constant when partitioning the vertices of a graph into k > 2 parts. While there are several approaches, recent results have shown these higher-order Cheeger constants to be tightly controlled by λ_{k-1}, the (k-1)th nontrivial eigenvalue, to within a quadratic factor.

We present a new higher-order Cheeger inequality with several new perspectives. First, we use an alternative higher-order Cheeger constant which considers an "average case" approach. We show this measure is related to the average of the first k-1 nontrivial eigenvalues of the normalized Laplacian matrix. Further, using recent techniques, our results provide linear inequalities using the ∞-norms of the corresponding eigenvectors. Consequently, unlike previous results, this result is relevant even when λ_{k-1} → 1.

1. Introduction

Let G = (V,E) be an undirected graph, and let L = I - D^{-1/2} A D^{-1/2} be the normalized Laplacian of G, with eigenvalues 0 = λ_0 ≤ λ_1 ≤ ... ≤ λ_{n-1}. It is a basic fact in spectral graph theory that λ_{k-1} = 0 if and only if G has at least k connected components. Additionally, if λ_1 ≈ 0, then the vertices of G can be partitioned into 2 parts, nearly disconnected from one another. This is formalized through the Cheeger constant and the Cheeger inequality.

The classical Cheeger constant is defined as

    h_G = inf_{S ⊂ V(G)} h(S),  where  h(S) = e(S, S̄) / min{Vol(S), Vol(S̄)},

where e(S, S̄) is the number of edges between S and its complement, and Vol(S) is the sum of vertex degrees in S.
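Since h_G is defined by an infimum over all vertex subsets, it can be computed exactly on very small graphs by brute force. The following sketch (plain Python; the function name and the example graph are ours, not from the paper) only illustrates the definition:

```python
from itertools import combinations

def cheeger_constant(vertices, edges):
    # h_G = min over nonempty proper subsets S of e(S, S-bar) / min(Vol(S), Vol(S-bar)).
    deg = {v: 0 for v in vertices}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    best = float("inf")
    for r in range(1, len(vertices)):
        for subset in combinations(vertices, r):
            S = set(subset)
            cut = sum(1 for u, v in edges if (u in S) != (v in S))
            vol_S = sum(deg[v] for v in S)
            vol_comp = sum(deg[v] for v in vertices if v not in S)
            best = min(best, cut / min(vol_S, vol_comp))
    return best

# 4-cycle: the best cut splits it into two 2-vertex paths, cutting 2 of the 4 edges.
h = cheeger_constant([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])  # h = 2/4 = 0.5
```

The enumeration is exponential in |V|, which is precisely why spectral bounds such as the inequalities below are of interest.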
The classical Cheeger inequality relates h_G to the first eigenvalue of the normalized Laplacian matrix, as follows.

Cheeger Inequality (see for example [3]). Let λ_1 be the first nontrivial eigenvalue of a connected graph G. Then

    λ_1/2 ≤ h_G ≤ √(2λ_1).

Recently, some strengthenings of Cheeger's inequality have appeared (see, for example, [2, 7, 8]). Moreover, several recent results have generalized to a so-called "higher order" Cheeger constant (see, for example, [9, 10, 11]) by considering a partition of V(G) into k > 2 parts. While there are several different definitions of a kth order Cheeger constant, one approach is to define the k-fold Cheeger constant to be

    ĥ_G^(k) = inf_S max_i h(S_i),

where the infimum ranges over all partitions of the vertices S = {S_1, S_2, ..., S_k}. In this case, we have:

Higher-Order Cheeger Inequality (Lee, Gharan, and Trevisan, [9]). Let λ_{k-1} be the (k-1)th nontrivial eigenvalue of a connected graph G. Then

    λ_{k-1}/2 ≤ ĥ_G^(k) ≤ O(k^2) √(2λ_{k-1}).

This result formally demonstrates that if G can be partitioned into k parts which are nearly disconnected from one another, then λ_{k-1} ≈ 0. Similar results for a variant of ĥ_G^(k) can be found in [10].

The Cheeger constant and associated spectral information can be used to find clusters in graphs; that is, subgraphs that are highly connected. This has been a topic of wide interest in both the mathematics and computer science literature (see, for example, [4, 5, 9, 11, 12, 13, 14], among many others).

This article expands upon previous work on the Cheeger constant in two ways. First, we work with the following new notion of a k-fold Cheeger constant. For a given partition S = {S_1, S_2, ..., S_k} of V(G), define the Cheeger constant of the partition, h_G^(k)(S), to be

    h_G^(k)(S) = (1/k) Σ_{i≠j} e(S_i, S_j) / min{Vol(S_i), Vol(S_j)}.

We then define the kth Cheeger constant of G to be h_G^(k) = inf_S h_G^(k)(S).
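The partition version can be illustrated the same way. In the sketch below (illustrative Python, not from the paper), the sum over i ≠ j is taken over unordered pairs, matching the computation for the complete graph in Section 5:

```python
def partition_cheeger(parts, edges, deg):
    # h^(k)(S) = (1/k) * sum over unordered pairs {i, j} of
    #            e(S_i, S_j) / min(Vol(S_i), Vol(S_j)).
    k = len(parts)
    vol = [sum(deg[v] for v in P) for P in parts]
    total = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            cut = sum(1 for u, v in edges
                      if (u in parts[i] and v in parts[j])
                      or (u in parts[j] and v in parts[i]))
            total += cut / min(vol[i], vol[j])
    return total / k

# K_4 split into two pairs: 4 of the 6 edges cross, and each part has volume 6.
k4_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
h2 = partition_cheeger([{0, 1}, {2, 3}], k4_edges, {v: 3 for v in range(4)})  # (1/2)*(4/6) = 1/3
```

Note that, unlike ĥ_G^(k), this quantity averages the pairwise Cheeger ratios rather than taking their maximum.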
Specifically, while previous work focused on generalizing the Cheeger constant using a "worst case" approach, we consider the alternative "average case" approach. That is, ĥ_G^(k) requires all sets in a partition to have a small Cheeger ratio, whereas h_G^(k) can be small even if a small number of the sets have a large Cheeger ratio. We here reproduce a lower bound for h_G^(k) that agrees with the standard Cheeger inequality when k = 2. Second, we extend upon previous work of the first author [7], which gives a linear upper bound at the expense of using eigenvector norms. We prove:

Theorem 1. Fix a constant k. Let G be an undirected graph on n vertices, with maximum degree ∆, and suppose there exists a constant β > 0 such that ∆/Vol(G) = o(n^{-β}). Let 0 = λ_0 ≤ λ_1 ≤ ... ≤ λ_{k-1} be the first k eigenvalues of L, with corresponding harmonic eigenvectors x_0, x_1, ..., x_{k-1}, and suppose that λ_{k-1} ≤ 1. Let α = max{‖x_i‖_∞ | i = 1, 2, ..., k-1}, and let Λ = (1/k) Σ_{i=1}^{k-1} (1 - λ_i). Then the k-fold Cheeger constant h_G^(k) satisfies

    (k-1)/(2k) - Λ/2 ≤ h_G^(k) ≤ [1/2 - 1/(4k) - (k-1)Λ/(4 Vol(G) α^2)] (1 + o(1)).

In addition, if we do not have λ_{k-1} ≤ 1, we have the following related theorem.

Theorem 2. Fix a constant k. Let G be an undirected graph on n vertices, with maximum degree ∆, and suppose there exists a constant β > 0 such that ∆/Vol(G) = o(n^{-β}). Let 0 = λ_0 ≤ λ_1 ≤ ... ≤ λ_{k-1} be the first k eigenvalues of L, with corresponding harmonic eigenvectors x_0, x_1, ..., x_{k-1}. Let α = Σ_{i=1}^{k-1} ‖x_i‖_∞, and let Λ = (1/k) Σ_{i=1}^{k-1} (1 - λ_i). Then the k-fold Cheeger constant h_G^(k) satisfies

    (k-1)/(2k) - Λ/2 ≤ h_G^(k) ≤ [1/2 - 1/(4k) - (k-1)Λ/(4 Vol(G) α^2)] (1 + o(1)).

These results have several interesting features. First, the upper bounds of previous higher-order Cheeger inequalities are generally not applicable when λ_{k-1} ≫ 1/k^2 (consider the complete graph, for example). In contrast, under mild conditions, Theorem 1 or 2 applies even if λ_{k-1} ≫ 1/k^2.
Additionally, the result demonstrates that the "average case" k-fold Cheeger constant is tightly controlled by the average of the first k-1 nontrivial eigenvalues, whereas the previous "worst case" approaches tightly control ĥ_G^(k) with λ_{k-1}. Finally, and perhaps most interesting, Theorem 1 shows that the Cheeger ratio can be elegantly bounded to within a linear factor of the corresponding eigenvalues when the eigenvector norms are considered.

We note that although the bound in Theorem 2 appears much weaker than that of Theorem 1, it is in fact not necessarily weaker at all. Indeed, if Λ is negative, that is, if the average of the first k-1 nontrivial eigenvalues is greater than 1, then Theorem 2 gives a stronger bound than Theorem 1, as the term involving Λ will be positive in this case. Indeed, as seen in Section 5, the bound given in Theorem 2 is quite good for the complete graph K_n.

We present this article as follows. In Section 2, we give relevant background and definitions. Then, we prove the lower bounds of Theorems 1 and 2 in Section 3 and the upper bounds in Section 4. Finally, we conclude with the example of applying our result to K_n in Section 5.

2. Preliminaries

To prove the upper bound in Theorem 1, we shall use tools from both probability theory and graph theory. To begin, we define our graph-theoretic notation.

Given a graph G, define the adjacency matrix A to be the square matrix, indexed by V(G), with A_{u,v} = 1_{u∼v}, the indicator of whether {u,v} ∈ E(G). Define D to be the diagonal matrix indexed by V(G) with D_{u,u} = deg_G(u). For simplicity of notation, if the graph is understood, we write d_u = deg_G(u). The normalized Laplacian matrix, L, is given by L = I - D^{-1/2} A D^{-1/2}. By convention, if G has an isolated vertex u, set (D^{-1/2})_{u,u} = 0. The eigenvalues of L will be written as 0 = λ_0 ≤ λ_1 ≤ ... ≤ λ_{n-1}. For a matrix or vector X, we let X* denote the conjugate transpose of X.

For a subset S ⊂ V(G), define Vol(S) = Σ_{v∈S} d_v. Define Vol(G) = Vol(V(G)).
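As a concrete check of these definitions, the sketch below (illustrative Python, not from the paper) assembles L = I - D^{-1/2} A D^{-1/2} for the complete graph K_5 and verifies the standard fact that every nontrivial normalized Laplacian eigenvalue of K_n equals n/(n-1), a fact used again in Section 5:

```python
from math import sqrt

def normalized_laplacian(n, edges):
    # L = I - D^{-1/2} A D^{-1/2}, built as a dense list-of-lists matrix.
    A = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = A[v][u] = 1.0
    d = [sum(row) for row in A]
    return [[(1.0 if i == j else 0.0) - A[i][j] / sqrt(d[i] * d[j])
             for j in range(n)] for i in range(n)]

def matvec(M, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

# For K_5, every nontrivial eigenvalue is n/(n-1) = 5/4, with e_0 - e_1 an eigenvector.
n = 5
L = normalized_laplacian(n, [(i, j) for i in range(n) for j in range(i + 1, n)])
v = [1.0, -1.0, 0.0, 0.0, 0.0]
Lv = matvec(L, v)  # equals (5/4) * v up to rounding
```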
Write 1_S to denote the vector indexed by V(G) with 1_S(v) = 1_{v∈S}, the indicator of whether v is an element of S. For convenience of notation, we write 1 = 1_{V(G)}. Given two subsets S, T ⊂ V(G), define e(S,T) to be the number of edges with one incident vertex in S and the other incident vertex in T. It is a standard exercise in graph theory to verify the following:

Proposition 1. For S, T subsets of V(G), we have

    e(S,T) = (D^{1/2} 1_S)* (I - L) (D^{1/2} 1_T).

Given an eigenvalue-eigenvector pair (λ, v) for L, define the harmonic eigenvector corresponding to λ to be D^{-1/2} v. Harmonic eigenvectors can be a useful tool for analyzing the normalized Laplacian. More information about harmonic eigenvectors and their uses can be found in [3], for example. For any graph G, there is always an orthonormal basis of eigenvectors v_0, v_1, ..., v_{n-1}; we shall assume throughout that such a basis has been chosen, and the harmonic eigenvectors used have been derived from this basis, so that the harmonic eigenvector corresponding to λ_i will be precisely D^{-1/2} v_i. We note that v_0 = (Vol(G))^{-1/2} D^{1/2} 1.

In addition, we shall require the following tools from probability theory. Given a random variable X, we use E[X] to denote the expected value of X. If A is an event in a probability space, we use 1_A to denote the 0-1 indicator random variable for A. Given a matrix M whose entries are all random variables, E[M] denotes the matrix of entry-wise expectation. We shall use the following result from [7]:

Proposition 2. Let x ∈ C^n be a random vector whose entries are pairwise independent, and let µ = E[x]. If A is an n × n symmetric matrix with A_{ii} = 0 for all i, then

    E[x* A x] = µ* A µ.

Given a random variable X, we say X has a Bernoulli distribution with parameter p if P(X = 1) = p and P(X = 0) = 1 - p. We write X ∼ Ber(p).

In addition, we shall make use of Chernoff bounds. Chernoff bounds are a class of concentration inequalities that consider sums of independent random variables.
Often the variables considered are Bernoulli, though that may not be the case here. There are many versions of Chernoff bounds (see, for example, [1]); we shall use the following:

Proposition 3. For i = 1, 2, ..., k, let X_i be independent nonnegative random variables with X_i ≤ ∆. Let S = Σ_i X_i, and let µ = E[S]. Then for any ε > 0,

    P(|S - µ| > εµ) ≤ 2 exp(-ε^2 µ / (3∆)).

In addition, to prove the lower bound in Theorems 1 and 2, we shall make use of the following linear algebra theorem. This result can be derived as a corollary of the Courant-Fischer Theorem, and can be found, for example, as Corollary 4.3.18 in [6].

Theorem 3. Let M be an n × n Hermitian matrix with eigenvalues λ_0 ≤ λ_1 ≤ ... ≤ λ_{n-1}. Fix k ≤ n, and let U_{n,k} denote the set of n × k complex matrices with orthonormal columns. Then

    Σ_{i=0}^{k-1} λ_i = min_{U ∈ U_{n,k}} tr(U* M U).

Rephrased, this theorem states that if f_1, f_2, ..., f_k is a collection of orthonormal vectors in C^n and M is a Hermitian matrix, then

    Σ_{i=1}^k f_i* M f_i ≥ Σ_{i=0}^{k-1} λ_i.

3. The Lower Bound

In this section, we prove the lower bound from Theorems 1 and 2. Recall that Λ = (1/k) Σ_{i=1}^{k-1} (1 - λ_i), so that Σ_{i=0}^{k-1} λ_i = (k-1) - kΛ.

Theorem 4. Given k ≥ 2,

    h_G^(k) ≥ (k-1)/(2k) - Λ/2.

Proof. Let S = {S_1, S_2, ..., S_k} be a partition of the vertices of G. For 1 ≤ j ≤ k, let g_j = (1/√Vol(S_j)) D^{1/2} 1_{S_j}. Note that as the S_j are all disjoint, we have that the set {g_j} is orthogonal. Moreover, ‖g_j‖^2 = Σ_{v∈S_j} d_v / Vol(S_j) = 1, and hence the set {g_j} is in fact orthonormal. Then by Theorem 3, we have

    Σ_{i=1}^k g_i* L g_i ≥ Σ_{i=0}^{k-1} λ_i = (k-1) - kΛ.

On the other hand, by Proposition 1, we have

    g_i* L g_i = g_i* g_i - g_i* (I - L) g_i = 1 - e(S_i, S_i)/Vol(S_i).

Combining these two results yields

    1 - 1/k - Λ ≤ (1/k) Σ_{i=1}^k (1 - e(S_i, S_i)/Vol(S_i))
    = (1/k) Σ_{i=1}^k [1 - (e(S_i, S_i) + e(S_i, S̄_i))/Vol(S_i) + Σ_{j≠i} e(S_i, S_j)/Vol(S_i)]

    = (1/k) Σ_{i=1}^k [1 - Vol(S_i)/Vol(S_i) + Σ_{j≠i} e(S_i, S_j)/Vol(S_i)]

    = (1/k) Σ_{i≠j} [e(S_i, S_j)/Vol(S_i) + e(S_i, S_j)/Vol(S_j)]

    ≤ 2 h_G^(k),

where in the second equality we use that e(S_i, S_i) + e(S_i, S̄_i) = Vol(S_i), since by Proposition 1 the quantity e(S_i, S_i) counts each edge inside S_i twice. This holds for every partition S, as desired. □

4. The Upper Bound

The proof of the upper bound in Theorem 1 is modeled after the proof of a similar upper bound by the first author in [7] when k = 2. The strategy employed is to choose a vector randomly so that the expectation of this vector has useful algebraic properties, and apply concentration results for the expectation.

Proof of Theorem 1, upper bound. Let x_1, x_2, ..., x_{k-1} be the first k-1 nontrivial harmonic eigenvectors for G, with corresponding eigenvalues λ_1, λ_2, ..., λ_{k-1}. As above, we shall assume that x_i = D^{-1/2} v_i, where the set {v_0, v_1, ..., v_{n-1}} is an orthonormal basis for R^n composed of eigenvectors of L. Recall that v_0 = (1/√Vol(G)) D^{1/2} 1, and hence for each i,

    Σ_{v∈V(G)} x_i(v) d_v = x_i* D 1 = √Vol(G) (D^{-1/2} v_i)* D^{1/2} v_0 = √Vol(G) v_i* v_0 = 0,

so each x_i is orthogonal to D 1. Let α = max{‖x_j‖_∞ | j = 1, 2, ..., k-1}.

Let δ > 0 be a constant that will be defined later. For all v ∈ V(G), define a random variable s_v by

    s_v = j with probability (1-2δ)/(2(k-1)) + x_j(v)/(2(k-1)‖x_j‖_∞), for j = 1, 2, ..., k-1;
    s_v = k otherwise.

Note that the random variables for different vertices are independent. For each j = 1, 2, ..., k, define S_j = {v ∈ V | s_v = j}. Thus we can view the random variables s_v as partitioning the vertices of G into k sets. Let w_j = 1_{S_j}, the indicator vector for the set S_j. As the choice of set S_j for each vertex v is independent of each other vertex, we have that the entries in w_j are pairwise independent (although any pair w_j, w_ℓ are not independent).
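The assignment weights can be sanity-checked numerically. The sketch below (illustrative Python; the choices of K_4, k = 3, and δ = 0.05 are ours, not from the paper) verifies that the orthogonality Σ_v x_j(v) d_v = 0 forces E[Vol(S_j)] = (1-2δ)Vol(G)/(2(k-1)), the computation carried out next:

```python
from math import sqrt

# K_4: d_v = 3 for every v; two orthonormal nontrivial harmonic eigenvectors are
# x_1 = (e_0 - e_1)/sqrt(6) and x_2 = (e_2 - e_3)/sqrt(6), each with sum_v x_j(v) d_v = 0.
k, delta, n = 3, 0.05, 4
d = [3.0] * n
x = {1: [1 / sqrt(6), -1 / sqrt(6), 0.0, 0.0],
     2: [0.0, 0.0, 1 / sqrt(6), -1 / sqrt(6)]}
sup = {j: max(abs(t) for t in x[j]) for j in x}

def weight(v, j):
    # The assignment weight for s_v = j (j = 1, ..., k-1) used in the proof.
    return (1 - 2 * delta) / (2 * (k - 1)) + x[j][v] / (2 * (k - 1) * sup[j])

vol_G = sum(d)
mu = (1 - 2 * delta) / (2 * (k - 1)) * vol_G
# The x_j term sums to zero against the degrees, so each expectation collapses to mu.
exp_vol = {j: sum(d[v] * weight(v, j) for v in range(n)) for j in (1, 2)}
```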
Notice that for j = 1, 2, ..., k-1, we have

    E[Vol(S_j)] = Σ_{v∈V} P(v ∈ S_j) d_v
                = Σ_{v∈V} [(1-2δ)/(2(k-1)) d_v + x_j(v) d_v/(2(k-1)‖x_j‖_∞)]
                = (1-2δ)/(2(k-1)) Vol(G) =: µ,

by orthogonality of x_j to D 1. Note that µ is independent of the choice of j.

On the other hand, for a given j ≤ k-1, we can view the vertices in S_j as chosen by a sequence of independent Bernoulli random variables, X_v, where X_v ∼ Ber((1-2δ)/(2(k-1)) + x_j(v)/(2(k-1)‖x_j‖_∞)). Thus, Vol(S_j) = Σ_{v∈V(G)} d_v X_v, and by Proposition 3, we have that for all ε > 0,

    P(|Vol(S_j) - µ| > εµ) ≤ 2 exp(-ε^2 µ/(3∆)),

where ∆ is the maximum degree of a vertex in G.

Let A be the event that (1-ε)µ < Vol(S_j) < (1+ε)µ for all 1 ≤ j ≤ k-1. By the union bound, we have

    P(A^c) ≤ 2(k-1) exp(-ε^2 µ/(3∆)).

Thus for each j, we have

    E[e(S_j, S_j)] = E[e(S_j, S_j) 1_A + e(S_j, S_j) 1_{A^c}]
                   ≤ E[e(S_j, S_j) 1_A] + Vol(G) (2(k-1) exp(-ε^2 µ/(3∆)))
                   = E[(e(S_j, S_j)/Vol(S_j)) Vol(S_j) 1_A] + 2(k-1) Vol(G) exp(-ε^2 µ/(3∆))
                   = E[((e(S_j, S_j) + e(S_j, S̄_j) - e(S_j, S̄_j))/Vol(S_j)) Vol(S_j) 1_A] + 2(k-1) Vol(G) exp(-ε^2 µ/(3∆))
                   = E[((Vol(S_j) - e(S_j, S̄_j))/Vol(S_j)) Vol(S_j) 1_A] + 2(k-1) Vol(G) exp(-ε^2 µ/(3∆)).
Using linearity of expectation, we can sum over j to obtain

    Σ_{j=1}^{k-1} E[e(S_j, S_j)]
      ≤ Σ_{j=1}^{k-1} E[((Vol(S_j) - e(S_j, S̄_j))/Vol(S_j)) Vol(S_j) 1_A] + 2(k-1)^2 Vol(G) exp(-ε^2 µ/(3∆))
      ≤ (1+ε)µ E[Σ_{j=1}^{k-1} (1 - e(S_j, S̄_j)/Vol(S_j)) 1_A] + 2(k-1)^2 Vol(G) exp(-ε^2 µ/(3∆))
      = (1+ε)µ E[(k - 1 - Σ_{j=1}^{k-1} Σ_{i≠j} e(S_j, S_i)/Vol(S_j)) 1_A] + 2(k-1)^2 Vol(G) exp(-ε^2 µ/(3∆))
      = (1+ε)µ E[(k - 1 - Σ_{i≠j, i,j≤k-1} (e(S_j, S_i)/Vol(S_j) + e(S_i, S_j)/Vol(S_i)) - Σ_{j=1}^{k-1} e(S_j, S_k)/Vol(S_j)) 1_A] + 2(k-1)^2 Vol(G) exp(-ε^2 µ/(3∆))
      ≤ (1+ε)µ E[(k - 1 - Σ_{i≠j} (2 e(S_j, S_i)/min{Vol(S_i), Vol(S_j)}) (1-ε)/(1+ε) + 1) 1_A] + 2(k-1)^2 Vol(G) exp(-ε^2 µ/(3∆))
      ≤ (1+ε)µ (k - 2k h_G^(k) (1-ε)/(1+ε)) + 2(k-1)^2 Vol(G) exp(-ε^2 µ/(3∆)),

using that on the event A each Vol(S_j) lies between (1-ε)µ and (1+ε)µ.

On the other hand, we also have w_j = 1_{S_j},

    E[w_j] = ((1-2δ)/(2(k-1))) 1 + (1/(2‖x_j‖_∞)) x_j,

and thus, as the entries in w_j are independent as noted above, by Proposition 2, we have

    E[e(S_j, S_j)] = E[(D^{1/2} w_j)* (I - L) (D^{1/2} w_j)]
                   = (((1-2δ)/(2(k-1))) 1 + (1/(2‖x_j‖_∞)) x_j)* D^{1/2} (I - L) D^{1/2} (((1-2δ)/(2(k-1))) 1 + (1/(2‖x_j‖_∞)) x_j)
                   = ((1-2δ)/(2(k-1)))^2 Vol(G) + (1 - λ_j)/(4 ‖x_j‖_∞^2).

We therefore obtain

    Σ_{j=1}^{k-1} [((1-2δ)/(2(k-1)))^2 Vol(G) + (1-λ_j)/(4‖x_j‖_∞^2)] ≤ (1+ε)µ (k - 2k h_G^(k) (1-ε)/(1+ε)) + 2(k-1)^2 Vol(G) exp(-ε^2 µ/(3∆)).

Recall that ∆/Vol(G) = o(n^{-β}) by hypothesis; choose δ = ε = n^{-β/3}. Then we have

    exp(-ε^2 µ/(3∆)) = exp(-(n^{-2β/3} (1 - 2n^{-β/3}) Vol(G))/(6(k-1)∆)) ≤ exp(-C(1-o(1)) n^{β/3}),

for C an appropriate constant. As Vol(G) ≤ n^2, we thus have that the error term satisfies

    2(k-1)^2 Vol(G) exp(-ε^2 µ/(3∆)) ≤ 2(k-1)^2 n^2 exp(-C(1-o(1)) n^{β/3}) = o(1).
Moreover, as ‖x_j‖_∞ ≤ α and 1 - λ_j ≥ 0 for all j, we have

(1)    Σ_{j=1}^{k-1} (1-λ_j)/(4‖x_j‖_∞^2) ≥ kΛ/(4α^2).

Therefore,

    Σ_{j=1}^{k-1} [((1-2δ)/(2(k-1)))^2 Vol(G) + (1-λ_j)/(4‖x_j‖_∞^2)] ≤ (1+ε)µ (k - 2k h_G^(k) (1-ε)/(1+ε)) + o(1),

so

    ((1-2δ)^2/(4(k-1))) Vol(G) + (kΛ/(4α^2))(1+o(1)) ≤ (Vol(G)/(2(k-1))) (k - 2k h_G^(k) (1-o(1))) + o(1).

Solving for h_G^(k) yields

    h_G^(k) ≤ [1/2 - 1/(4k) - (k-1)Λ/(4 Vol(G) α^2)] (1+o(1)),

as desired. □

To obtain the proof of Theorem 2, we note that the only inequality that fails above when λ_{k-1} > 1 is (1). To correct for this problem, we shall slightly modify the definition of the random variable s_v for each v.

Proof of Theorem 2, upper bound. As in the proof of Theorem 1, let x_1, x_2, ..., x_{k-1} be the first k-1 nontrivial harmonic eigenvectors for G, with corresponding eigenvalues λ_1 ≤ λ_2 ≤ ... ≤ λ_{k-1}. Let α = Σ_{i=1}^{k-1} ‖x_i‖_∞. Let δ > 0, and for all v ∈ V(G), define a random variable s_v by

    s_v = j with probability (1-2δ)/(2(k-1)) + x_j(v)/(2(k-1)α), for j = 1, 2, ..., k-1;
    s_v = k otherwise.

Proceed with the proof as in Theorem 1, noting that we can replace inequality (1) with the equality

    Σ_{j=1}^{k-1} (1-λ_j)/(4α^2) = kΛ/(4α^2),

and that all else is unchanged. The result then follows. □

5. Example

As an example of an application of Theorem 2, we consider the complete graph K_n. Take k to be a fixed constant, and we shall consider the asymptotics of h_{K_n}^(k) as n → ∞.

As with the standard Cheeger constant, it is quite clear to see that h_{K_n}^(k)(S) will be minimized when the S_i are roughly an equipartition of n; that is, when there are exactly r = n mod k sets of size ⌈n/k⌉ and the rest are of size ⌊n/k⌋.
Letting S be such a partition, we have

    h_{K_n}^(k)(S) = (1/k) Σ_{i≠j} |S_i||S_j| / min{(n-1)|S_i|, (n-1)|S_j|}
                   = (1/(k(n-1))) [ (r choose 2) ⌈n/k⌉ + ((k-r) choose 2) ⌊n/k⌋ + ((k choose 2) - (r choose 2) - ((k-r) choose 2)) ⌈n/k⌉ ]
                   ∼ (1/(k(n-1))) (k choose 2) (n/k)
                   ∼ 1/2 - 1/(2k).

Recall that the normalized Laplacian eigenvalues of K_n are λ_0 = 0 and λ_i = n/(n-1) for all i > 0. Therefore, the lower bound given in Theorem 1 yields

    h_{K_n}^(k) ≥ (1/(2k)) (k-1) (n/(n-1)) ∼ 1/2 - 1/(2k),

a true estimate for the Cheeger ratio. Moreover, Λ = -(k-1)/(k(n-1)). Also, we have v_0, the eigenvector corresponding to 0, is given by v_0 = (1/√n) 1. Thus any vector perpendicular to v_0 is an eigenvector for n/(n-1). Note that as Λ is negative here, we wish to maximize α^2 in order to minimize the upper bound. Thus we take v_i = (1/√2)(e_{2i-1} - e_{2i}). It is clear that these are orthonormal. In addition, this implies that the harmonic eigenvectors are

    x_i = D^{-1/2} v_i = (1/√(2(n-1))) (e_{2i-1} - e_{2i}),

and hence α^2 = ((k-1)/√(2(n-1)))^2 = (k-1)^2/(2(n-1)). Therefore, we obtain as the upper bound from Theorem 2

    h_{K_n}^(k) ≤ [1/2 - 1/(4k) - (k-1)Λ/(4 Vol(G) α^2)] (1+o(1))
               = [1/2 - 1/(4k) + ((k-1)^2/(k(n-1))) / (4n(n-1) · (k-1)^2/(2(n-1)))] (1+o(1))
               = (1/2 - 1/(4k)) (1+o(1)),

compared to the true constant 1/2 - 1/(2k).

References

[1] N. Alon and J. Spencer, The Probabilistic Method, John Wiley & Sons, 3rd ed., 2008.
[2] F. R. Chung, Laplacians of graphs and Cheeger's inequalities, in Proc. Int. Conf. Combinatorics, Paul Erdős is Eighty, Keszthely (Hungary), vol. 2, 1993, p. 116.
[3] F. R. Chung, Spectral Graph Theory, vol. 92, American Mathematical Soc., 1997.
[4] M. Girvan and M. E. Newman, Community structure in social and biological networks, Proceedings of the National Academy of Sciences, 99 (2002), pp. 7821–7826.
[5] F. C. Graham and A. Tsiatas, Finding and visualizing graph clusters using PageRank optimization, in Algorithms and Models for the Web-Graph, Springer, 2010, pp. 86–97.
[6] R. Horn and C. Johnson, Matrix Analysis, Cambridge University Press, 1985.
[7] F. Kenter, A linear Cheeger inequality using eigenvector norms, Journal of Combinatorics, (to appear).
[8] T. C. Kwok, L. C. Lau, Y. T. Lee, S. Oveis Gharan, and L. Trevisan, Improved Cheeger's inequality: analysis of spectral partitioning algorithms through higher order spectral gap, in Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing, ACM, 2013, pp. 11–20.
[9] J. R. Lee, S. Oveis Gharan, and L. Trevisan, Multi-way spectral partitioning and higher-order Cheeger inequalities, in Proceedings of the Forty-fourth Annual ACM Symposium on Theory of Computing, ACM, 2012, pp. 1117–1130.
[10] A. Louis, P. Raghavendra, P. Tetali, and S. Vempala, Algorithmic extensions of Cheeger's inequality to higher eigenvalues and partitions, in Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, Springer, 2011, pp. 315–326.
[11] A. Louis, P. Raghavendra, P. Tetali, and S. Vempala, Many sparse cuts via higher eigenvalues, in Proceedings of the Forty-fourth Annual ACM Symposium on Theory of Computing, ACM, 2012, pp. 1131–1140.
[12] A. Y. Ng, M. I. Jordan, and Y. Weiss, On spectral clustering: analysis and an algorithm, Advances in Neural Information Processing Systems, 2 (2002), pp. 849–856.
[13] S. E. Schaeffer, Graph clustering, Computer Science Review, 1 (2007), pp. 27–64.
[14] D. A. Spielman and S.-H. Teng, A local clustering algorithm for massive graphs and its application to nearly-linear time graph partitioning, arXiv preprint arXiv:0809.3232, 2008.
