kmeans                 package:stats                 R Documentation

K-Means Clustering

Description:

     Perform k-means clustering on a data matrix.

Usage:

     kmeans(x, centers, iter.max = 10, nstart = 1,
            algorithm = c("Hartigan-Wong", "Lloyd", "Forgy", "MacQueen"))

Arguments:

        x: A numeric matrix of data, or an object that can be coerced
           to such a matrix (such as a numeric vector or a data frame
           with all numeric columns).

  centers: Either the number of clusters or a set of initial
           (distinct) cluster centres.  If a number, a random set of
           (distinct) rows in 'x' is chosen as the initial centres.

 iter.max: The maximum number of iterations allowed.

   nstart: If 'centers' is a number, how many random sets should be
           chosen?

algorithm: character: may be abbreviated.

Details:

     The data given by 'x' are clustered by the k-means method, which
     aims to partition the points into k groups such that the sum of
     squares from points to the assigned cluster centres is minimized.
     At the minimum, all cluster centres are at the mean of their
     Voronoi sets (the set of data points which are nearest to the
     cluster centre).

     The algorithm of Hartigan and Wong (1979) is used by default.
     Note that some authors use k-means to refer to a specific
     algorithm rather than the general method: most commonly the
     algorithm given by MacQueen (1967) but sometimes that given by
     Lloyd (1957) and Forgy (1965).  The Hartigan-Wong algorithm
     generally does a better job than either of those, but trying
     several random starts ('nstart' > 1) is often recommended.

     Except for the Lloyd-Forgy method, k clusters will always be
     returned if a number is specified.  If an initial matrix of
     centres is supplied, it is possible that no point will be closest
     to one or more centres, which is currently an error for the
     Hartigan-Wong method.

Value:

     An object of class '"kmeans"' which is a list with components:

  cluster: A vector of integers indicating the cluster to which each
           point is allocated.

  centers: A matrix of cluster centres.

 withinss: The within-cluster sum of squares for each cluster.

     size: The number of points in each cluster.

     There is a 'print' method for this class.

References:

     Forgy, E. W. (1965) Cluster analysis of multivariate data:
     efficiency vs interpretability of classifications.  _Biometrics_
     *21*, 768-769.

     Hartigan, J. A. and Wong, M. A. (1979).  A K-means clustering
     algorithm.  _Applied Statistics_ *28*, 100-108.

     Lloyd, S. P. (1957, 1982) Least squares quantization in PCM.
     Technical Note, Bell Laboratories.  Published in 1982 in _IEEE
     Transactions on Information Theory_ *28*, 128-137.

     MacQueen, J. (1967) Some methods for classification and analysis
     of multivariate observations.  In _Proceedings of the Fifth
     Berkeley Symposium on Mathematical Statistics and Probability_,
     eds L. M. Le Cam & J. Neyman, *1*, pp. 281-297.  Berkeley, CA:
     University of California Press.

Examples:

     require(graphics)

     # a 2-dimensional example
     x <- rbind(matrix(rnorm(100, sd = 0.3), ncol = 2),
                matrix(rnorm(100, mean = 1, sd = 0.3), ncol = 2))
     colnames(x) <- c("x", "y")
     (cl <- kmeans(x, 2))
     plot(x, col = cl$cluster)
     points(cl$centers, col = 1:2, pch = 8, cex = 2)

     ## random starts do help here with too many clusters
     (cl <- kmeans(x, 5, nstart = 25))
     plot(x, col = cl$cluster)
     points(cl$centers, col = 1:5, pch = 8)
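
     ## A minimal sketch of why several random starts are recommended
     ## (see Details): compare the total within-cluster sum of squares
     ## (the sum of the 'withinss' component) for a single start versus
     ## 25 starts.  Reuses the simulated 'x' from above; the object
     ## names 'cl1' and 'cl25' are illustrative only.
     set.seed(1)
     cl1  <- kmeans(x, 5, nstart = 1)
     cl25 <- kmeans(x, 5, nstart = 25)
     sum(cl1$withinss)   # total within-cluster SS, single start
     sum(cl25$withinss)  # best of 25 starts; usually no larger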
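
     ## Sketch: recompute the 'withinss' component by hand to show what
     ## it measures -- for each cluster, the sum of squared distances
     ## from that cluster's points to its centre.  Reuses 'x' from
     ## above; 'by.hand' is an illustrative name.
     cl <- kmeans(x, 2)
     by.hand <- sapply(seq_len(nrow(cl$centers)), function(k) {
         d <- sweep(x[cl$cluster == k, , drop = FALSE], 2, cl$centers[k, ])
         sum(d^2)
     })
     all.equal(as.vector(by.hand), as.vector(cl$withinss))  # TRUE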
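
     ## Sketch of the other calling mode described under 'centers':
     ## supplying an initial matrix of (distinct) centres instead of a
     ## number.  Rows 1 and 100 of the simulated 'x' come from the two
     ## different groups, so each starting centre should attract points;
     ## per Details, a centre with no closest point is currently an
     ## error for the default Hartigan-Wong method.
     init <- x[c(1, 100), ]          # two distinct rows as start values
     cl0  <- kmeans(x, centers = init)
     cl0$size                        # number of points in each cluster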
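
     ## Sketch: as noted under 'algorithm', the method name may be
     ## abbreviated; "Ll" is unambiguous among the four choices and
     ## selects the Lloyd algorithm here.
     cl.lloyd <- kmeans(x, 2, algorithm = "Ll")
     cl.lloyd$centers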