ISOMAP using Nyström Method with Incremental Sampling

Hong Yu (1*), Xiaowei Zhao (2), Xianchao Zhang (3), Yuansheng Yang (4)
(1*) School of Electronics & Information Engineering, Dalian University of Technology; Software School, Dalian University of Technology, Dalian, China
(2, 3) Software School, Dalian University of Technology, Dalian, China
(4) School of Electronics & Information Engineering, Dalian University of Technology, Dalian, China
{hongyu, vivian_dlut, xczhang, yangys}@dlut.edu.cn

Advances in Information Sciences and Service Sciences (AISS), Volume 4, Number 12, July 2012. doi: 10.4156/AISS.vol4.issue12.42

Abstract

High dimensionality leads to intractable complexity for machine learning algorithms. ISOMAP is a typical manifold learning technique which extracts intrinsic low-dimensional structure from high-dimensional data. Since the eigen-decomposition in ISOMAP has complexity $O(n^3)$, ISOMAP is coupled with the Nyström method when it is used for large-scale manifold learning. The landmark point set is an important factor in the quality of the Nyström approximation. In this paper, we present an incremental sampling scheme. Experimental results show that the Nyström extension with incremental sampling performs better than with random sampling.

Keywords: ISOMAP, Matrix Approximation, Incremental Sampling

1. Introduction

High dimensionality is a significant problem for machine learning algorithms. Dimensionality reduction techniques [1][2] address this problem by extracting intrinsic low-dimensional structure from high-dimensional data. These techniques typically try to unfold the underlying manifold while minimizing the loss of information, so that in the embedding space applications such as k-means clustering become more effective.

Linear dimensionality reduction techniques such as Principal Component Analysis (PCA) and Multidimensional Scaling (MDS) have been widely used in many problems and have shown good performance at extracting low-dimensional structure from high-dimensional data points, but they basically work well only when the data is linear in nature [3][4]. In contrast to linear techniques, manifold learning methods provide more powerful non-linear dimensionality reduction by preserving both local and global structure. LLE (Locally Linear Embedding) and ISOMAP (Isometric Feature Mapping) are two typical manifold learning techniques. LLE finds linear coefficients that best approximate every data point from its neighbors in the high-dimensional space, and then finds the low-dimensional points that can be linearly approximated by their neighbors with the same coefficients. Different from LLE, ISOMAP [1] argues that only the geodesic distance reflects the intrinsic geometry of the underlying manifold; it captures the low-dimensional structure by preserving the intrinsic geometry of the data. ISOMAP provides a more credible embedding structure than other manifold learning algorithms such as LLE [5][6].

Although ISOMAP has become one of the most popular global manifold learning methods in many dimension reduction tasks, its applicability to large-scale data sets is limited by its computational complexity of $O(n^3)$ and space complexity of $O(n^2)$, where $n$ is the number of data points. To reduce the computational and memory cost of ISOMAP, low-rank matrix approximation is usually employed. The Nyström method [7][8][12] is an efficient technique for generating low-rank matrix approximations and has been widely used in a number of supervised and unsupervised learning tasks.
The Nyström method approximates the eigen-decomposition of an $n \times n$ matrix using a much smaller $m \times m$ matrix, which is formed from $m$ landmark points chosen among the $n$ data points. The most important aspect of the Nyström method is sampling, since different sampling schemes produce different approximation results. Random sampling is the most commonly used sampling technique and is simple to implement [13]. Intuitively, the more information the landmark points carry, the more accurate the Nyström approximation is. In this paper, we extend an incremental sampling scheme [11] to Nyström-based ISOMAP. Incremental sampling starts with two initial landmark points and selects new landmark points one by one such that the variance of the affinity matrix is minimized, until the expected number of landmark points is reached. Our experiments also show that incremental sampling (IS) is more effective than random sampling.

The rest of the paper is organized as follows. In Section 2, we review the two stages of ISOMAP. In Section 3, we give a brief introduction to the Nyström extension of ISOMAP and present an incremental sampling scheme. The evaluation of Nyström-based ISOMAP with incremental sampling is given in Section 4, and we conclude in Section 5.

2. Review of ISOMAP

Given $n$ data points $X = \{x_1, x_2, \ldots, x_n\}$ with $x_i \in \mathbb{R}^d$, the goal of manifold learning is to find an intrinsic low-dimensional embedding representation $Y = \{y_1, y_2, \ldots, y_n\}$, where $y_i \in \mathbb{R}^k$, $k \ll d$. Isomap is a popular manifold learning technique which aims to extract a low-dimensional data representation that best preserves all pairwise distances between input points, as measured by their geodesic distances along the manifold [1]. Isomap is an adaptation of classical Multidimensional Scaling [2] in which Euclidean distances are replaced with geodesic distances. Figure 1 shows that the Euclidean distance may not accurately reflect the intrinsic similarity of a pair of points on the manifold and can consequently lead to an incorrect intrinsic embedding: the Euclidean distance between the circled points in Figure 1(a) is small in the three-dimensional input space, while their geodesic distance on the intrinsic two-dimensional manifold (Figure 1(b)) is large. This problem is remedied by using the geodesic distance.

Figure 1. The Swiss Roll data set: (a) input space; (b) embedding space.

The details of Isomap are as follows [1].

Input: $n$ data points $\{x_1, x_2, \ldots, x_n\}$ in the input space $X$.
Output: coordinate vectors $\{y_1, y_2, \ldots, y_n\}$ in a $d$-dimensional Euclidean space $Y$.
Step 1: Compute the geodesic distances $D_G(i, j)$ as the sums of edge weights along shortest paths between all pairs of points.
Step 2: Construct the $d$-dimensional embedding. Let $\lambda_p$ be the $p$-th eigenvalue (in decreasing order) of the matrix

\[
W = -\frac{1}{2} H D^2 H, \qquad H = I_n - e_n e_n^\top, \qquad e_n = n^{-1/2}[1, 1, \ldots, 1]^\top \in \mathbb{R}^n,
\]

and let $v_i^p$ be the $i$-th component of the $p$-th eigenvector. Then set the $p$-th component of the $d$-dimensional coordinate vector $y_i$ equal to $\sqrt{\lambda_p}\, v_i^p$.

In Step 2, $e_n$ is the uniform vector of unit length, and it centers $W$, so the affinity matrix $W$ can be considered as a "kernel". In the continuum limit of a smooth manifold, the geodesic distance between data points is proportional to the Euclidean distance between the points in the embedding space [9]. It is known that $-\|x - x'\|^{\beta}$ is conditionally positive definite for $0 \le \beta \le 2$ [10], so in the continuum limit $-D^2$ will be conditionally positive definite; this gives the theoretical guarantee that $W$ is positive definite and that $W$ is a kernel.

Isomap takes $O(n^2 \log n)$ time to calculate the geodesic distances in the first step and $O(n^3)$ time for the eigen-decomposition of the symmetric positive semidefinite (SPSD) matrix $W = -\frac{1}{2} H D^2 H$ in the second step. It also requires $O(n^2)$ space to store $W$. These time and space complexities are prohibitive, which limits the application of ISOMAP to large-scale data sets.
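As a concrete illustration of the two steps above, the following NumPy/SciPy sketch computes the classical (exact) Isomap embedding. The k-nearest-neighbor graph construction, the function name, and the parameters d and k are our own illustrative choices, not prescribed by the paper, and the sketch assumes the neighborhood graph is connected (otherwise the shortest-path matrix contains infinities).

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

def isomap(X, d=2, k=10):
    """Embed the rows of X into d dimensions (Steps 1 and 2 above)."""
    n = X.shape[0]
    dist = cdist(X, X)                          # pairwise Euclidean distances
    # Step 1: geodesic distances along a k-nearest-neighbor graph
    # (non-edges are represented by np.inf in the dense graph).
    graph = np.full((n, n), np.inf)
    nn = np.argsort(dist, axis=1)[:, 1:k + 1]   # k nearest neighbors, self excluded
    for i in range(n):
        graph[i, nn[i]] = dist[i, nn[i]]
    graph = np.minimum(graph, graph.T)          # symmetrize the graph
    D = shortest_path(graph, method="D")        # all-pairs shortest paths (Dijkstra)
    # Step 2: W = -H D^2 H / 2, then the top-d eigenpairs give the coordinates.
    H = np.eye(n) - np.ones((n, n)) / n         # H = I_n - e_n e_n^T
    W = -H @ (D ** 2) @ H / 2
    lam, V = np.linalg.eigh(W)                  # eigenvalues in ascending order
    lam, V = lam[::-1][:d], V[:, ::-1][:, :d]   # keep the d largest eigenpairs
    return V * np.sqrt(np.maximum(lam, 0.0))    # y_i^p = sqrt(lambda_p) * v_i^p
```

The $O(n^3)$ eigen-decomposition of the dense $n \times n$ matrix W in this sketch is exactly the step the Nyström extension of the next section avoids.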
3. The Nyström Extension to ISOMAP with Incremental Sampling

3.1 Terminology

Suppose the selected set of $m$ landmark points is $\{x_1, x_2, \ldots, x_m\}$. We can then partition the affinity matrix $W$ as

\[
W = \begin{bmatrix} A & B \\ B^\top & C \end{bmatrix}
\tag{1}
\]

with $A \in \mathbb{R}^{m \times m}$, $B \in \mathbb{R}^{m \times (n-m)}$, $C \in \mathbb{R}^{(n-m) \times (n-m)}$. Here $A$ represents the affinities among the landmark points and is itself an SPSD matrix, $B$ contains the affinities from the landmark points to the rest of the points, and $C$ contains the affinities among all the remaining points. Normally $m \ll n$, so $C$ is large. We define $E$ as

\[
E = \begin{bmatrix} A \\ B^\top \end{bmatrix}
\tag{2}
\]

Let $\Lambda$ be the eigenvalues of $A$ and $U$ the associated eigenvectors, i.e. $A = U \Lambda U^\top$. The Nyström method uses $\Lambda$, $U$ and $E$ to approximate $W$ as

\[
W \approx \widetilde{W} = E A^{+} E^\top = \begin{bmatrix} A & B \\ B^\top & B^\top A^{+} B \end{bmatrix}
\tag{3}
\]

where $A^{+}$ is the pseudoinverse of $A$. As shown in [9], as $m$ increases, $\widetilde{W}$ converges to $W$. The Nyström method approximates the eigenvalues and the associated eigenvectors of $W$ as

\[
\widetilde{\Lambda} = \frac{n}{m} \Lambda, \qquad
\widetilde{U} = \sqrt{\frac{m}{n}}\, E U \Lambda^{+} = \sqrt{\frac{m}{n}} \begin{bmatrix} U \Lambda \Lambda^{+} \\ B^\top U \Lambda^{+} \end{bmatrix}
\tag{4}
\]

When $A$ is positive definite, $\Lambda \Lambda^{+} = I$. The time complexity of the decomposition of $A$ is $O(m^3)$ and that of the matrix multiplication is $O(kmn)$, so the total time complexity of the Nyström method for approximating the top $k$ eigenvalues and eigenvectors of $W$ is $O(m^3 + kmn)$.

3.2 Nyström Extension to ISOMAP

To approximate the low-dimensional embeddings of Isomap with the Nyström method, we compute $A$ as $A = -\frac{1}{2} H_m D_m^2 H_m$, where $D_m$ is the matrix of geodesic distances between all landmark points and $H_m = I_m - e_m e_m^\top$. The matrix $B$ is obtained from the geodesic distances between the landmark points and all other points. We adapt the following method, used in L-ISOMAP [5], to approximate the affinity between a landmark point $x_t$ and a non-landmark point $x_i$:

\[
W(x_t, x_i) = -\frac{1}{2}\left( D_{ti}^2 - \frac{1}{n} \sum_{j=1}^{n} D_{tj}^2 \right)
\tag{5}
\]

With this construction it is no longer necessary to compute the affinities between pairs of non-landmark points.
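The following NumPy sketch assembles A and B as described above and forms the approximate eigenpairs of Eq. (4). It is a minimal sketch under stated assumptions: the input D is the $m \times n$ geodesic distance matrix from the m landmarks to all n points with the landmarks occupying the first m columns, the top-k eigenvalues of A are assumed positive, and the function name is illustrative.

```python
import numpy as np

def nystrom_isomap(D, k):
    """Approximate the top-k eigenpairs of W from landmark geodesic distances.

    D: m x n geodesic distances from the m landmarks to all n points,
    with the landmarks in the first m columns (an assumed layout).
    """
    m, n = D.shape
    D2 = D ** 2
    Hm = np.eye(m) - np.ones((m, m)) / m
    A = -Hm @ D2[:, :m] @ Hm / 2                 # landmark affinities (Section 3.2)
    # Formula (5): affinities between the landmarks and the remaining points.
    B = -(D2[:, m:] - D2.mean(axis=1, keepdims=True)) / 2
    lam, U = np.linalg.eigh(A)                   # A = U Lam U^T, ascending order
    lam, U = lam[::-1][:k], U[:, ::-1][:, :k]    # top-k (assumed positive)
    # Eq. (4): approximate eigenpairs of the full n x n affinity matrix W.
    lam_t = (n / m) * lam
    U_t = np.sqrt(m / n) * np.vstack([U, B.T @ U / lam])
    return lam_t, U_t
```

The approximate Isomap coordinates then follow exactly as in Step 2 of Section 2, by scaling the columns of U_t with the square roots of lam_t.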
3.3 Incremental Sampling Scheme

The sampling set is an important factor in the performance of the Nyström method. To obtain a better low-dimensional embedding structure, the landmark points should represent all the intrinsic attributes of the data. We present an incremental sampling scheme based on variance to select the landmark points; the sampling procedure progressively strengthens the representative power of the landmark set. Since the affinity matrix of ISOMAP can be seen as a "kernel", ISOMAP and spectral clustering are both kernel methods, and we extend the incremental sampling method of [11] to ISOMAP as shown in Figure 2.

Algorithm: Incremental Sampling
Input: $X = \{x_1, x_2, \ldots, x_n\}$: data set; $m$: number of landmark points
Output: $I = \{i_1, i_2, \ldots, i_m\}$: the indices of the sampled points; $A$: the affinity matrix of the sampled points; $B$: the affinity matrix between the sampled points and the remaining points
1:  Randomly choose 2 points from $X$ and add their indices to $I$. Set the current number of landmark points $p$ to 2.
2:  Calculate $A$, the affinity matrix of the chosen points.
3:  Calculate $B$, the affinity matrix between the chosen points and the remaining points, by formula (5).
4:  Calculate $s \in \mathbb{R}^{n-p}$, the column variance of $B$.
5:  while $|I| < m$ do
6:      Find $\min s$ and add its index to $I$, denoting it $i_{p+1}$;
7:      Update $p$ to $p + 1$;
8:      Update $A$, the affinity matrix of the chosen points;
9:      Update $B$, the affinity matrix between the chosen points and the remaining points, by formula (5), so that $B \in \mathbb{R}^{|I| \times (n-|I|)}$;
10:     Update the stored similarities to add those between $x_{i_p}$ and the other points;
11:     Calculate $s \in \mathbb{R}^{n-p}$, the column variance of $B$;
12: end while

Figure 2. Incremental sampling scheme
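To make the procedure concrete, here is a small NumPy sketch of Figure 2. It is a simplification under stated assumptions: the full $n \times n$ geodesic distance matrix D is assumed available up front, and B and the column variances are simply recomputed each round rather than incrementally updated as in steps 8-11; the function name and the seed are illustrative.

```python
import numpy as np

def incremental_sampling(D, m, seed=0):
    """Variance-based landmark selection (Figure 2), given the full
    n x n geodesic distance matrix D (a simplifying assumption)."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    # Affinities for every pair, as in formula (5).
    D2 = D ** 2
    W = -(D2 - D2.mean(axis=1, keepdims=True)) / 2
    I = list(rng.choice(n, size=2, replace=False))   # step 1: two random seeds
    while len(I) < m:
        rest = np.setdiff1d(np.arange(n), I)
        B = W[np.ix_(I, rest)]             # affinities: landmarks -> remaining
        s = B.var(axis=0)                  # steps 4/11: column variance of B
        I.append(int(rest[np.argmin(s)]))  # step 6: take the min-variance column
    rest = np.setdiff1d(np.arange(n), I)
    return np.array(I), W[np.ix_(I, I)], W[np.ix_(I, rest)]
```

The returned A and B can be passed directly to the Nyström approximation of Section 3.1.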
4. Experimental Results

4.1 Evaluation Measures

Manifold learning techniques typically transform the data onto a low-dimensional manifold such that the Euclidean distance in the embedding space between any pair of points is meaningful. Since k-means clustering computes Euclidean distances between all pairs of points, we use k-means clustering to compare the presented sampling scheme with random sampling (RS). To compare clustering performance we use the normalized mutual information (NMI) [14], an information-theoretic criterion for the evaluation of clustering algorithms. Given two clustering results $\Lambda = \{C_1, C_2, \ldots, C_c\}$ and $\Lambda' = \{C'_1, C'_2, \ldots, C'_k\}$ of $X$ ($|X| = n$), let $n_s$ and $n'_t$ be the numbers of objects in clusters $C_s$ and $C'_t$ respectively, and let $n_{st}$ denote the number of objects that are in cluster $C_s$ as well as in cluster $C'_t$. The normalized mutual information of $\Lambda$ and $\Lambda'$ is then

\[
\mathrm{NMI}(\Lambda, \Lambda') =
\frac{\sum_{s=1}^{c} \sum_{t=1}^{k} n_{st} \log\!\left( \dfrac{n\, n_{st}}{n_s\, n'_t} \right)}
{\sqrt{\left( \sum_{s=1}^{c} n_s \log \dfrac{n_s}{n} \right)\left( \sum_{t=1}^{k} n'_t \log \dfrac{n'_t}{n} \right)}}
\tag{6}
\]

Given the true labels $\Lambda$ of $X$ and a clustering result $\Lambda'$, we have $0 \le \mathrm{NMI}(\Lambda, \Lambda') \le 1$, and $\mathrm{NMI}(\Lambda, \Lambda') = 1$ when $\Lambda$ equals $\Lambda'$. The larger the NMI, the better the clustering performance. To give a more reliable evaluation, for each number of landmark points we repeat the experiment 50 times and report the averaged NMI.

4.2 Results on Artificial Data Sets

We first apply Nyström-based ISOMAP with incremental sampling (IS) to four artificial data sets, shown in Figure 3.

Figure 3. Four artificial data sets.

Figure 4 compares the clustering quality of the two sampling schemes as the number of samples increases. The results show that IS performs better than RS on all four data sets, and also that the extended Nyström method gives an attractive approximation even with a small number of landmark points.

Figure 4. NMI values of the clustering results on the artificial data sets: (a) Three Circles; (b) Half Circle and Two Points; (c) Two Spirals; (d) One Circle and Two Points.

4.3 Results on UCI Data Sets

We also demonstrate the effectiveness of IS on two UCI data sets, described in Table 1. These data sets have known labels and are mainly used for classification tasks.

Table 1. UCI data sets description
Dataset                 Iris    Wine
Number of Instances     150     150
Number of Attributes    4       13
Number of Clusters      3       3

The clustering results on the UCI data are shown in Figure 5. They reveal that IS outperforms RS. The NMI values in Figure 5(b), however, are still not satisfying, which may be caused by noise in the data.

Figure 5. NMI values of the clustering results on the UCI data sets: (a) Iris; (b) Wine.

4.4 Results on Image Segmentation

In this section we apply the sampling schemes to image segmentation. We use four images from the Berkeley Segmentation Dataset [16]; every image contains 154,401 pixels. We use the $\chi^2$-distance described in [11] to measure the distance between two pixels:

\[
\chi^2(i, j) = \frac{1}{2} \sum_{q=1}^{L} \frac{\left( h_i(q) - h_j(q) \right)^2}{h_i(q) + h_j(q)}
\tag{7}
\]

where $h_i$ is the histogram of pixel $i$ and $L$ is the number of colors considered. We apply the color quantization scheme described in [14] and compute the histogram of pixel $i$ within a 7×7 pixel window in our experiments. With the $\chi^2$-distance, the distance between pixels $i$ and $j$ is defined as

\[
D(i, j) = \chi^2(i, j)
\tag{8}
\]

To compare the performance of the presented incremental sampling with random sampling, the F-measure is used to evaluate the image segmentation results [18]. Every image in the Berkeley Segmentation Dataset has several human segmentations; we use these as ground-truth results and compute the F-measure with respect to them. Table 3 shows the F-measure values of IS and RS on the four images; in this experiment the number of samples is set to 50. The results show that IS gives better F-measure values than RS does.

Table 3. F-measure
Image ID    IS        RS
12084       0.7104    0.6455
14037       0.8362    0.7898
69015       0.6215    0.6052
159008      0.7442    0.6923
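For reference, the following sketch implements the two evaluation quantities used in this section: the NMI of formula (6) and the $\chi^2$-distance of formulas (7)-(8). The function names and the eps guard against empty histogram bins are our own additions, not from the paper.

```python
import numpy as np

def nmi(labels_a, labels_b):
    """Normalized mutual information, formula (6)."""
    _, a = np.unique(labels_a, return_inverse=True)
    _, b = np.unique(labels_b, return_inverse=True)
    n = len(a)
    n_st = np.zeros((a.max() + 1, b.max() + 1))
    for s, t in zip(a, b):
        n_st[s, t] += 1                       # contingency counts
    n_s, n_t = n_st.sum(axis=1), n_st.sum(axis=0)
    num = sum(n_st[s, t] * np.log(n * n_st[s, t] / (n_s[s] * n_t[t]))
              for s in range(n_st.shape[0])
              for t in range(n_st.shape[1]) if n_st[s, t] > 0)
    # The denominator vanishes only if one clustering has a single cluster.
    den = np.sqrt(np.sum(n_s * np.log(n_s / n)) * np.sum(n_t * np.log(n_t / n)))
    return num / den

def chi_square_distance(h_i, h_j, eps=1e-12):
    """Chi-square distance between two histograms, formulas (7)-(8)."""
    return 0.5 * np.sum((h_i - h_j) ** 2 / (h_i + h_j + eps))
```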
5. Conclusion

In this paper, we have presented an incremental sampling scheme for the Nyström method to obtain low-rank matrix approximations. Experimental results of ISOMAP coupled with the Nyström approximation have shown that the proposed sampling scheme performs better than random sampling. In the future, we will explore the error analysis of the approximation in manifold learning and extend the incremental sampling scheme to other kernel manifold learning techniques.

6. References

[1] T. F. Cox, M. A. A. Cox, "Multidimensional Scaling", Second Edition, Chapman & Hall/CRC, 2000.
[2] J. B. Tenenbaum, V. de Silva, J. C. Langford, "A global geometric framework for nonlinear dimensionality reduction", Science, vol. 290, no. 5500, pp. 2319-2323, 2000.
[3] Xu Huijian, Guo Feipeng, "Combining Rough Set and Principal Component Analysis for Preprocessing on Commercial Data Stream", JCIT: Journal of Convergence Information Technology, vol. 7, no. 2, pp. 132-140, 2012.
[4] Zhi-Sheng Gao, Chun-Zhi Xie, "Fast Face Recognition Algorithm Based on Compact Local Descriptor", AISS: Advances in Information Sciences and Service Sciences, vol. 3, no. 10, pp. 281-289, 2011.
[5] S. T. Roweis, L. K. Saul, "Nonlinear Dimensionality Reduction by Locally Linear Embedding", Science, vol. 290, no. 5500, pp. 2323-2326, 2000.
[6] V. de Silva, J. B. Tenenbaum, "Global versus local methods in nonlinear dimensionality reduction", Neural Information Processing Systems, vol. 15, pp. 705-712, 2003.
[7] Hong Yu, He Jiang, Xianchao Zhang, Yuansheng Yang, "K_Neighbors Path Based Spectral Clustering", International Journal of Advancements in Computing Technology, vol. 4, no. 1, pp. 50-58, 2012.
[8] C. K. I. Williams, M. Seeger, "Using the Nyström method to speed up kernel machines", NIPS, pp. 682-688, 2000.
[9] P. Drineas, M. W. Mahoney, "On the Nyström method for approximating a Gram matrix for improved kernel-based learning", Journal of Machine Learning Research, no. 6, pp. 2153-2175, 2005.
[10] C. Grimes, D. L. Donoho, "When does Isomap recover the natural parametrization of families of articulated images?", Technical Report 27, Stanford University, 2002.
[11] B. Schölkopf, A. J. Smola, "Learning with Kernels", MIT Press, Cambridge, MA, 2002.
[12] Xianchao Zhang, Quanzeng You, "Clusterability Analysis and Incremental Sampling for Nyström Extension Based Spectral Clustering", In 2011 IEEE 11th International Conference on Data Mining, pp. 942-951, 2011.
[13] Ameet Talwalkar, Sanjiv Kumar, Henry Rowley, "Large-scale manifold learning", In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1-8, 2008.
[14] S. Kumar, M. Mohri, A. Talwalkar, "Sampling techniques for the Nyström method", Journal of Machine Learning Research, vol. 5, pp. 304-311, 2009.
[15] A. Strehl, J. Ghosh, "Cluster ensembles: a knowledge reuse framework for combining multiple partitions", Journal of Machine Learning Research, vol. 3, no. 3, pp. 583-617, 2003.
[16] D. Martin, C. Fowlkes, D. Tal, J. Malik, "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics", ICCV, vol. 2, pp. 416-423, 2001.
[17] J. Puzicha, S. Belongie, "Model-based halftoning for color image segmentation", ICPR, vol. 3, pp. 629-632, 2000.
[18] F. Estrada, A. Jepson, "Benchmarking image segmentation algorithms", International Journal of Computer Vision, vol. 85, no. 2, pp. 167-181, 2009.
