K-means algorithm

The k-means algorithm clusters objects, based on their attributes, into k partitions. It is similar to the expectation-maximization algorithm for mixtures of Gaussians in that both attempt to find the centers of natural clusters in the data. It assumes that the object attributes form a vector space. Its objective is to minimize the total intra-cluster variance, or squared error function


$$V = \sum_{i=1}^{k} \sum_{x_j \in S_i} \|x_j - \mu_i\|^2$$

where there are $$k$$ clusters $$S_i$$, $$i = 1, 2, ..., k$$ and $$\mu_i$$ is the centroid or mean point of all the points $$x_j \in S_i$$.
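As a concrete check of the objective, here is a minimal NumPy sketch (the function name is illustrative) that evaluates $$V$$ for a given assignment of points to clusters:

```python
import numpy as np

def kmeans_objective(X, labels, centroids):
    """Total within-cluster sum of squared distances V."""
    # Sum ||x_j - mu_i||^2 over every point x_j assigned to cluster i.
    return sum(
        np.sum((X[labels == i] - centroids[i]) ** 2)
        for i in range(len(centroids))
    )
```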

The most common form of the algorithm uses an iterative refinement heuristic known as Lloyd's algorithm. Lloyd's algorithm starts by partitioning the input points into k initial sets, either at random or using a heuristic. It then calculates the mean point, or centroid, of each set and constructs a new partition by associating each point with the closest centroid. The centroids are then recalculated for the new clusters, and these two steps are alternated until convergence, which is reached when the points no longer switch clusters (or, equivalently, when the centroids no longer change).
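The two alternating steps can be sketched in NumPy as follows; the function name and the random initialization are illustrative assumptions, not a reference implementation:

```python
import numpy as np

def lloyd_kmeans(X, k, max_iter=100, seed=0):
    """One run of Lloyd's algorithm: random initialization, then
    alternate assignment and update steps until assignments stabilize."""
    rng = np.random.default_rng(seed)
    # Initialize centroids as k distinct data points chosen at random.
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.full(len(X), -1)
    for _ in range(max_iter):
        # Assignment step: each point joins its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break  # converged: no point switched clusters
        labels = new_labels
        # Update step: recompute each centroid as the mean of its cluster.
        for i in range(k):
            if np.any(labels == i):
                centroids[i] = X[labels == i].mean(axis=0)
    return labels, centroids
```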

Lloyd's algorithm and k-means are often used synonymously, but in reality Lloyd's algorithm is a heuristic for solving the k-means problem. With certain combinations of starting points and centroids, Lloyd's algorithm can in fact converge to the wrong answer: a different clustering achieving a lower value of the minimization function above may exist.

Other variations exist, but Lloyd's algorithm has remained popular because it converges extremely quickly in practice. In fact, many have observed that the number of iterations is typically much less than the number of points. Recently, however, David Arthur and Sergei Vassilvitskii showed that there exist certain point sets on which k-means takes superpolynomial time: $$2^{\Omega(\sqrt{n})}$$ to converge. Approximate k-means algorithms have been designed that make use of coresets: small subsets of the original data.

In terms of performance, the algorithm is not guaranteed to return the global optimum. The quality of the final solution depends largely on the initial set of clusters and may, in practice, be much poorer than the global optimum. Since the algorithm is extremely fast, a common method is to run it several times and return the best clustering found.
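The restart strategy can be sketched as follows, as a minimal, self-contained NumPy version in which the function names are illustrative:

```python
import numpy as np

def lloyd_once(X, k, seed):
    """One run of Lloyd's algorithm from a random initialization.
    Returns the cluster labels and the final squared-error objective."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.full(len(X), -1)
    for _ in range(100):
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        new = d.argmin(axis=1)
        if np.array_equal(new, labels):
            break  # assignments stable: converged
        labels = new
        for i in range(k):
            if np.any(labels == i):
                centroids[i] = X[labels == i].mean(axis=0)
    return labels, np.sum((X - centroids[labels]) ** 2)

def kmeans_restarts(X, k, n_restarts=10):
    """Run Lloyd's algorithm from several random initializations and
    keep the clustering with the lowest squared-error objective."""
    runs = [lloyd_once(X, k, seed) for seed in range(n_restarts)]
    return min(runs, key=lambda run: run[1])
```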

Another main drawback of the algorithm is that the number of clusters to find (i.e. $$k$$) must be specified in advance. If the data is not naturally clustered, the results can be poor. The algorithm also works well only when roughly spherical clusters are naturally present in the data.

Demonstration of the algorithm
The following images demonstrate the k-means clustering algorithm in action for the two-dimensional case. The initial centers are generated randomly to show the stages in more detail.

Relation to PCA
It has been shown recently that the relaxed solution of k-means clustering, specified by the cluster indicators, is given by the principal components of PCA (principal component analysis), and that the PCA subspace spanned by the principal directions is identical to the cluster centroid subspace specified by the between-class scatter matrix.

Enhancements
In 2006 a new way of choosing the initial centers was proposed in a paper. The idea is to select centers so that they are already initially close to large quantities of points. The authors use the $$L^2$$ norm in selecting the centers, but a general $$L^n$$ norm may be used to tune the aggressiveness of the seeding.

This seeding method yields considerable improvements in the final error of k-means. Although the initial selection takes extra time, k-means itself converges very quickly after this seeding, so the seeding lowers the overall computation time as well. The authors tested their method with real and synthetic datasets and obtained typically 2-fold to 10-fold improvements in speed, and for certain datasets close to 1000-fold improvements in error. Their tests almost always showed the new method to be at least as good as vanilla k-means in both speed and error.

Additionally, the authors calculate an approximation ratio for their algorithm, something that has not been done for vanilla k-means (although it has for several variations of it). k-means++ is guaranteed to achieve an approximation ratio of $$O(\log k)$$, where $$k$$ is the number of clusters used.
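The seeding rule described above, choosing each new center with probability proportional to its squared $$L^2$$ distance from the nearest existing center, can be sketched as follows (the function name is illustrative, and this is not the authors' code):

```python
import numpy as np

def kmeanspp_seed(X, k, rng=None):
    """k-means++ style seeding: pick the first center uniformly at
    random, then pick each subsequent center with probability
    proportional to its squared distance from the nearest chosen center."""
    if rng is None:
        rng = np.random.default_rng(0)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        # Squared distance of each point to its closest existing center.
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        # Sample the next center with probability proportional to d2.
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)
```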

Variations
The set of squared error minimizing cluster functions also includes the k-medoids algorithm, an approach which restricts the center point of each cluster to be one of the actual data points.
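A minimal sketch of one k-medoids variant, alternating assignment and medoid updates analogously to Lloyd's algorithm, is shown below; this is one possible formulation under assumed names, not a canonical implementation:

```python
import numpy as np

def kmedoids(X, k, max_iter=100, seed=0):
    """Alternating k-medoids: like Lloyd's algorithm, but each cluster
    center is the actual data point minimizing the total distance to
    the other points in its cluster."""
    rng = np.random.default_rng(seed)
    medoid_idx = rng.choice(len(X), size=k, replace=False)
    labels = np.full(len(X), -1)
    for _ in range(max_iter):
        # Assignment step: each point joins its nearest medoid.
        d = np.linalg.norm(X[:, None] - X[medoid_idx][None], axis=2)
        labels = d.argmin(axis=1)
        # Update step: within each cluster, the new medoid is the member
        # with the smallest total distance to its cluster mates.
        new_idx = medoid_idx.copy()
        for i in range(k):
            members = np.flatnonzero(labels == i)
            if len(members) == 0:
                continue
            within = np.linalg.norm(
                X[members][:, None] - X[members][None], axis=2
            ).sum(axis=1)
            new_idx[i] = members[within.argmin()]
        if np.array_equal(new_idx, medoid_idx):
            break  # medoids stable: converged
        medoid_idx = new_idx
    return labels, X[medoid_idx]
```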