Scalable Distributed Approximation of Internal Measures for Clustering Evaluation
- URL: http://arxiv.org/abs/2003.01430v3
- Date: Wed, 20 Jan 2021 09:34:20 GMT
- Title: Scalable Distributed Approximation of Internal Measures for Clustering Evaluation
- Authors: Federico Altieri, Andrea Pietracaprina, Geppino Pucci, Fabio Vandin
- Abstract summary: The most widely used internal measure for clustering evaluation is the silhouette coefficient, whose naive computation requires a quadratic number of distance calculations.
We present the first scalable algorithm to compute a rigorous approximation of the silhouette coefficient for the evaluation of clusterings based on any metric distance.
We also prove that the algorithm can be adapted to obtain rigorous approximations of other internal measures of clustering quality, such as cohesion and separation.
- Score: 5.144809478361603
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The most widely used internal measure for clustering evaluation is the
silhouette coefficient, whose naive computation requires a quadratic number of
distance calculations, which is clearly unfeasible for massive datasets.
Surprisingly, there are no known general methods to efficiently approximate the
silhouette coefficient of a clustering with rigorously provable high accuracy.
In this paper, we present the first scalable algorithm to compute such a
rigorous approximation for the evaluation of clusterings based on any metric
distances. Our algorithm hinges on a Probability Proportional to Size (PPS)
sampling scheme, and, for any fixed $\varepsilon, \delta \in (0,1)$, it
approximates the silhouette coefficient within a mere additive error
$O(\varepsilon)$ with probability $1-\delta$, using a very small number of
distance calculations. We also prove that the algorithm can be adapted to
obtain rigorous approximations of other internal measures of clustering
quality, such as cohesion and separation. Importantly, we provide a distributed
implementation of the algorithm using the MapReduce model, which runs in
constant rounds and requires only sublinear local space at each worker, which
makes our estimation approach applicable to big data scenarios. We perform an
extensive experimental evaluation of our silhouette approximation algorithm,
comparing its performance to a number of baseline heuristics on real and
synthetic datasets. The experiments provide evidence that, unlike other
heuristics, our estimation strategy not only provides tight theoretical
guarantees but is also able to return highly accurate estimations while running
in a fraction of the time required by the exact computation, and that its
distributed implementation is highly scalable, thus enabling the computation of
internal measures for very large datasets for which the exact computation is
prohibitive.
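For concreteness, the following is a minimal Python sketch (not the authors' code) contrasting the exact, quadratic-cost silhouette computation with a sampled estimate in which each per-cluster distance sum is replaced by a small fixed-size sample. The uniform sampling used here is a deliberate simplification: the paper's algorithm draws the samples via Probability Proportional to Size (PPS), which is what yields the $O(\varepsilon)$ additive error with probability $1-\delta$. The function names and the sample-size parameter t are illustrative only.
```python
import math
import random
from collections import defaultdict


def dist(x, y):
    """Euclidean distance; the paper's guarantees hold for any metric distance."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))


def silhouette_exact(points, labels):
    """Exact silhouette coefficient: Theta(n^2) distance computations."""
    clusters = defaultdict(list)
    for p, c in zip(points, labels):
        clusters[c].append(p)
    total = 0.0
    for p, c in zip(points, labels):
        own = clusters[c]
        # a(p): mean distance from p to the other points of its own cluster
        a = sum(dist(p, q) for q in own) / max(len(own) - 1, 1)
        # b(p): smallest mean distance from p to any other cluster
        b = min(sum(dist(p, q) for q in other) / len(other)
                for lab, other in clusters.items() if lab != c)
        total += (b - a) / max(a, b) if max(a, b) > 0 else 0.0
    return total / len(points)


def silhouette_sampled(points, labels, t=64, seed=0):
    """Sampled estimate: each per-cluster distance sum is replaced by a
    fixed-size sample, cutting the cost from Theta(n^2) to Theta(n*k*t)
    distance computations. Uniform sampling is used here for simplicity;
    the paper draws the samples with PPS probabilities instead, which is
    what gives the rigorous additive-error guarantee."""
    rng = random.Random(seed)
    clusters = defaultdict(list)
    for p, c in zip(points, labels):
        clusters[c].append(p)
    samples = {c: rng.sample(pts, min(t, len(pts))) for c, pts in clusters.items()}
    total = 0.0
    for p, c in zip(points, labels):
        n_own, s_own = len(clusters[c]), samples[c]
        # rescale the sampled sum to estimate the full within-cluster sum
        a = sum(dist(p, q) for q in s_own) * n_own / len(s_own) / max(n_own - 1, 1)
        b = min(sum(dist(p, q) for q in samples[lab]) / len(samples[lab])
                for lab in clusters if lab != c)
        total += (b - a) / max(a, b) if max(a, b) > 0 else 0.0
    return total / len(points)
```
On a clustering with n points and k clusters, silhouette_exact performs a quadratic number of distance evaluations, while silhouette_sampled performs on the order of n*k*t of them, which is why even a crude sampled estimate runs in a small fraction of the time needed by the exact computation.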
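The abstract also states that the estimator admits a MapReduce implementation running in a constant number of rounds with sublinear local space at each worker. The sketch below simulates one plausible way such a round structure could be organised (each worker first summarises its local partition by small per-cluster samples, the merged summaries are broadcast, and each worker then scores its own points); it illustrates the constant-round pattern only and is not the authors' actual protocol.
```python
import random
from collections import defaultdict


def round1_sample(partition, t, seed):
    """Round 1 map: summarise a local partition by per-cluster counts and samples."""
    rng = random.Random(seed)
    by_cluster = defaultdict(list)
    for p, c in partition:
        by_cluster[c].append(p)
    return {c: (len(pts), rng.sample(pts, min(t, len(pts))))
            for c, pts in by_cluster.items()}


def merge_summaries(summaries):
    """Round 1 reduce: merge per-worker summaries into global counts and samples."""
    merged = {}
    for s in summaries:
        for c, (n, sample) in s.items():
            n0, s0 = merged.get(c, (0, []))
            merged[c] = (n0 + n, s0 + sample)
    return merged


def round2_score(partition, merged, dist):
    """Round 2 map: score local points against the broadcast samples."""
    total, count = 0.0, 0
    for p, c in partition:
        n_own, s_own = merged[c]
        a = sum(dist(p, q) for q in s_own) * n_own / len(s_own) / max(n_own - 1, 1)
        b = min(sum(dist(p, q) for q in s) / len(s)
                for lab, (n, s) in merged.items() if lab != c)
        total += (b - a) / max(a, b) if max(a, b) > 0 else 0.0
        count += 1
    return total, count


def silhouette_distributed(partitions, dist, t=64):
    """Driver: a constant number of rounds; each worker holds only its own
    partition plus the merged per-cluster samples, whose size is sublinear in n
    when the sample size t and the number of clusters k are small."""
    merged = merge_summaries(round1_sample(part, t, seed=i)
                             for i, part in enumerate(partitions))
    results = [round2_score(part, merged, dist) for part in partitions]
    return sum(s for s, _ in results) / sum(c for _, c in results)
```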
Related papers
- Randomized Polar Codes for Anytime Distributed Machine Learning [66.46612460837147]
We present a novel distributed computing framework that is robust to slow compute nodes, and is capable of both approximate and exact computation of linear operations.
We propose a sequential decoding algorithm designed to handle real valued data while maintaining low computational complexity for recovery.
We demonstrate the potential applications of this framework in various contexts, such as large-scale matrix multiplication and black-box optimization.
arXiv Detail & Related papers (2023-09-01T18:02:04Z) - Linearized Wasserstein dimensionality reduction with approximation guarantees [65.16758672591365]
LOT Wassmap is a computationally feasible algorithm to uncover low-dimensional structures in the Wasserstein space.
We show that LOT Wassmap attains correct embeddings and that the quality improves with increased sample size.
We also show how LOT Wassmap significantly reduces the computational cost when compared to algorithms that depend on pairwise distance computations.
arXiv Detail & Related papers (2023-02-14T22:12:16Z) - Statistical, Robustness, and Computational Guarantees for Sliced Wasserstein Distances [18.9717974398864]
Sliced Wasserstein distances preserve properties of classic Wasserstein distances while being more scalable for computation and estimation in high dimensions.
We quantify this scalability from three key aspects: (i) empirical convergence rates; (ii) robustness to data contamination; and (iii) efficient computational methods.
arXiv Detail & Related papers (2022-10-17T15:04:51Z) - Near-optimal estimation of smooth transport maps with kernel sums-of-squares [81.02564078640275]
Under smoothness conditions, the squared Wasserstein distance between two distributions can be efficiently computed with appealing statistical error upper bounds.
The object of interest for applications such as generative modeling is the underlying optimal transport map.
We propose the first tractable algorithm for which the statistical $L^2$ error on the maps nearly matches the existing minimax lower bounds for smooth map estimation.
arXiv Detail & Related papers (2021-12-03T13:45:36Z) - Estimating leverage scores via rank revealing methods and randomization [50.591267188664666]
We study algorithms for estimating the statistical leverage scores of rectangular dense or sparse matrices of arbitrary rank.
Our approach is based on combining rank revealing methods with compositions of dense and sparse randomized dimensionality reduction transforms.
arXiv Detail & Related papers (2021-05-23T19:21:55Z) - Manifold learning with approximate nearest neighbors [1.8477401359673706]
We use a broad range of approximate nearest neighbor algorithms within manifold learning algorithms and evaluate their impact on embedding accuracy.
Via a thorough empirical investigation based on the benchmark MNIST dataset, it is shown that approximate nearest neighbors lead to substantial improvements in computational time.
This application demonstrates how the proposed methods can be used to visualize and identify anomalies and uncover underlying structure within high-dimensional data.
arXiv Detail & Related papers (2021-02-22T12:04:23Z) - $\gamma$-ABC: Outlier-Robust Approximate Bayesian Computation Based on a Robust Divergence Estimator [95.71091446753414]
We propose to use a nearest-neighbor-based $\gamma$-divergence estimator as a data discrepancy measure.
Our method achieves significantly higher robustness than existing discrepancy measures.
arXiv Detail & Related papers (2020-06-13T06:09:27Z) - Instability, Computational Efficiency and Statistical Accuracy [101.32305022521024]
We develop a framework that yields statistical accuracy guarantees based on the interplay between the deterministic convergence rate of the algorithm at the population level and its degree of (in)stability when applied to an empirical object based on $n$ samples.
We provide applications of our general results to several concrete classes of models, including Gaussian mixture estimation, non-linear regression models, and informative non-response models.
arXiv Detail & Related papers (2020-05-22T22:30:52Z) - Optimal Distributed Subsampling for Maximum Quasi-Likelihood Estimators with Massive Data [20.79270369203348]
Existing methods mostly focus on subsampling with replacement due to its high computational efficiency.
We first derive optimal subsampling probabilities in the context of quasi-likelihood estimation.
We develop a distributed subsampling framework, in which statistics are computed simultaneously on smaller partitions of the full data.
arXiv Detail & Related papers (2020-05-21T02:46:56Z)