Gradient Based Clustering
- URL: http://arxiv.org/abs/2202.00720v1
- Date: Tue, 1 Feb 2022 19:31:15 GMT
- Title: Gradient Based Clustering
- Authors: Aleksandar Armacki, Dragana Bajovic, Dusan Jakovetic, Soummya Kar
- Abstract summary: We propose a general approach for distance-based clustering, using the gradient of the cost function that measures clustering quality.
The approach is an iterative two-step procedure (alternating between cluster assignment and cluster center updates) and is applicable to a wide range of functions.
- Score: 72.15857783681658
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a general approach for distance-based clustering, using the
gradient of the cost function that measures clustering quality with respect to
cluster assignments and cluster center positions. The approach is an iterative
two-step procedure (alternating between cluster assignment and cluster center
updates) and is applicable to a wide range of functions, satisfying some mild
assumptions. The main advantage of the proposed approach is a simple and
computationally cheap update rule. Unlike previous methods that specialize to a
specific formulation of the clustering problem, our approach is applicable to a
wide range of costs, including non-Bregman clustering methods based on the
Huber loss. We analyze the convergence of the proposed algorithm, and show that
it converges to the set of appropriately defined fixed points, under arbitrary
center initialization. In the special case of Bregman cost functions, the
algorithm converges to the set of centroidal Voronoi partitions, which is
consistent with prior works. Numerical experiments on real data demonstrate the
effectiveness of the proposed method.
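To make the update rule concrete, here is a minimal, illustrative sketch of the two-step procedure for a Huber cost: points are assigned to the nearest center, and each center then takes a gradient step on the Huber loss of its residual norms. The step size `lr`, the threshold `delta`, and all function names are hypothetical choices for this sketch, not the paper's exact algorithm or tuning.

```python
# Sketch: gradient-based clustering with a Huber cost (illustrative only).
import numpy as np

def huber_center_grad(points, center, delta):
    """Gradient w.r.t. the center of sum_i huber(||x_i - center||)."""
    r = points - center
    norms = np.linalg.norm(r, axis=1, keepdims=True)
    # Quadratic region (||r|| <= delta): gradient term -(x - c).
    # Linear region (||r|| > delta): gradient term -delta * (x - c) / ||x - c||.
    scale = np.where(norms <= delta, 1.0, delta / np.maximum(norms, 1e-12))
    return -(scale * r).sum(axis=0)

def gradient_clustering(X, k, delta=1.0, lr=0.01, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iters):
        # Step 1: cluster assignment to the nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 2: a cheap gradient step on each center's cost.
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] -= lr * huber_center_grad(pts, centers[j], delta)
    return labels, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 1.0, size=(100, 2)) for m in ([0, 0], [6, 6], [0, 6])])
labels, centers = gradient_clustering(X, k=3)
```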
Related papers
- Anchor-free Clustering based on Anchor Graph Factorization [17.218481911995365]
We introduce a novel method termed Anchor-free Clustering based on Anchor Graph Factorization (AFCAGF).
AFCAGF innovates in learning the anchor graph, requiring only the computation of pairwise distances between samples.
We evolve the concept of the membership matrix between cluster centers and samples in FKM into an anchor graph encompassing multiple anchor points and samples.
arXiv Detail & Related papers (2024-02-24T02:16:42Z)
- Deep Embedding Clustering Driven by Sample Stability [16.53706617383543]
We propose a deep embedding clustering algorithm driven by sample stability (DECS).
Specifically, we start by constructing the initial feature space with an autoencoder and then learn the cluster-oriented embedding feature constrained by sample stability.
The experimental results on five datasets illustrate that the proposed method achieves superior performance compared to state-of-the-art clustering approaches.
arXiv Detail & Related papers (2024-01-29T09:19:49Z)
- Robust and Automatic Data Clustering: Dirichlet Process meets Median-of-Means [18.3248037914529]
We present an efficient and automatic clustering technique by integrating the principles of model-based and centroid-based methodologies.
Statistical guarantees on the upper bound of clustering error suggest the advantages of our proposed method over existing state-of-the-art clustering algorithms.
arXiv Detail & Related papers (2023-11-26T19:01:15Z)
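The median-of-means estimator that this entry builds on is easy to illustrate. The sketch below is the generic estimator (split the sample into blocks, average within each block, take the coordinate-wise median of the block means); the block count `n_blocks` is a hypothetical parameter, and this is not the paper's full Dirichlet-process procedure.

```python
# Generic median-of-means (MoM): robust mean estimate via block medians.
import numpy as np

def median_of_means(X, n_blocks=10, seed=0):
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(len(X)), n_blocks)
    block_means = np.array([X[b].mean(axis=0) for b in blocks])
    return np.median(block_means, axis=0)  # outlying blocks get voted down

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))
X[:25] += 50.0                          # a few gross outliers
print(median_of_means(X, n_blocks=20))  # stays close to the true mean (0, 0)
```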
- Rethinking Clustering-Based Pseudo-Labeling for Unsupervised Meta-Learning [146.11600461034746]
CACTUs, a method for unsupervised meta-learning, is a clustering-based approach with pseudo-labeling.
This approach is model-agnostic and can be combined with supervised algorithms to learn from unlabeled data.
We prove that the core reason for its limited performance is the lack of a clustering-friendly property in the embedding space.
arXiv Detail & Related papers (2022-09-27T19:04:36Z)
- A One-shot Framework for Distributed Clustered Learning in Heterogeneous Environments [54.172993875654015]
The paper proposes a family of communication-efficient methods for distributed learning in heterogeneous environments.
A one-shot approach, based on local computations at the users and a clustering-based aggregation step at the server, is shown to provide strong learning guarantees.
For strongly convex problems it is shown that, as long as the number of data points per user is above a threshold, the proposed approach achieves order-optimal mean-squared error rates in terms of the sample size.
arXiv Detail & Related papers (2022-09-22T09:04:10Z)
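The one-shot communication pattern described here can be sketched schematically: each user sends a single local estimate, and the server clusters those estimates and averages within clusters. This is a hedged illustration under simple assumptions (local estimates are per-user means, the server uses plain k-means), not the paper's algorithm or its guarantees.

```python
# Schematic one-shot clustered aggregation: one message per user, then a
# clustering-based aggregation step at the server.
import numpy as np
from sklearn.cluster import KMeans

def one_shot_aggregate(local_estimates, n_clusters):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    user_cluster = km.fit_predict(local_estimates)
    return km.cluster_centers_, user_cluster  # one aggregated model per cluster

# Twenty users drawn from two latent groups; each estimates a mean locally.
rng = np.random.default_rng(0)
truth = np.repeat([[0.0, 0.0], [5.0, 5.0]], 10, axis=0)
local_estimates = rng.normal(truth, 1.0)
models, assignment = one_shot_aggregate(local_estimates, n_clusters=2)
```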
- Strong Consistency for a Class of Adaptive Clustering Procedures [0.0]
We show that all clustering procedures in this class are strongly consistent under IID samples.
In the adaptive setting, our work provides a strong consistency result that is the first of its kind.
arXiv Detail & Related papers (2022-02-27T18:56:41Z)
- Weighted Sparse Subspace Representation: A Unified Framework for Subspace Clustering, Constrained Clustering, and Active Learning [0.3553493344868413]
We first propose a novel spectral-based subspace clustering algorithm that seeks to represent each point as a sparse convex combination of a few nearby points.
We then extend the algorithm to constrained clustering and active learning settings.
Our motivation for developing such a framework stems from the fact that typically either a small amount of labelled data is available in advance, or it is possible to label some points at a cost.
arXiv Detail & Related papers (2021-06-08T13:39:43Z)
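The core step above, writing each point as a sparse convex combination of a few nearby points, can be illustrated with a small simplex-constrained least-squares solver. This is a generic projected-gradient sketch (the solver, step size, and toy data are all assumptions), not the authors' optimization method.

```python
# Represent x as a convex combination of nearby points:
#   min_c ||x - N^T c||^2  s.t.  c >= 0, sum(c) = 1,
# via projected gradient descent onto the probability simplex.
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {c : c >= 0, sum(c) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    return np.maximum(v - (css[rho] - 1.0) / (rho + 1), 0.0)

def convex_coding(x, neighbors, n_iters=200):
    c = np.full(len(neighbors), 1.0 / len(neighbors))   # start at simplex center
    step = 1.0 / (2 * np.linalg.norm(neighbors, 2) ** 2 + 1e-12)
    for _ in range(n_iters):
        grad = 2 * neighbors @ (neighbors.T @ c - x)
        c = project_simplex(c - step * grad)
    return c  # mass concentrates on a few rows, i.e. a sparse code

x = np.array([1.0, 1.0])
neighbors = np.array([[2.0, 0.0], [0.0, 2.0], [-3.0, -3.0]])  # rows: nearby points
print(convex_coding(x, neighbors))  # approx. [0.5, 0.5, 0.0]
```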
- Determinantal consensus clustering [77.34726150561087]
We propose the use of determinantal point processes (DPPs) for the random restart of clustering algorithms.
DPPs favor diversity of the center points within subsets.
We show through simulations that, contrary to DPPs, the usual uniformly random initialization fails both to ensure diversity and to obtain good coverage of all data facets.
arXiv Detail & Related papers (2021-02-07T23:48:24Z)
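Exact DPP sampling requires an eigendecomposition of the similarity kernel; the sketch below substitutes a greedy log-determinant heuristic that captures the same diversity intuition when picking restart centers. The kernel choice (`gamma`) and the greedy rule are assumptions for illustration, not the paper's sampler.

```python
# Greedy, DPP-inspired selection of diverse restart centers: grow the
# subset that maximizes the log-determinant of its RBF kernel submatrix.
import numpy as np

def rbf_kernel(X, gamma=0.5):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def greedy_diverse_centers(X, k, gamma=0.5):
    K = rbf_kernel(X, gamma)
    chosen = [0]                                  # arbitrary deterministic start
    while len(chosen) < k:
        best, best_val = None, -np.inf
        for i in range(len(X)):
            if i in chosen:
                continue
            sub = np.ix_(chosen + [i], chosen + [i])
            val = np.linalg.slogdet(K[sub])[1]    # log-volume of candidate subset
            if val > best_val:
                best, best_val = i, val
        chosen.append(best)
    return X[chosen]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
centers = greedy_diverse_centers(X, k=5)          # well-spread initial centers
```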
- Scalable Hierarchical Agglomerative Clustering [65.66407726145619]
Existing scalable hierarchical clustering methods sacrifice quality for speed.
We present a scalable, agglomerative method for hierarchical clustering that does not sacrifice quality and scales to billions of data points.
arXiv Detail & Related papers (2020-10-22T15:58:35Z)
- Clustering Binary Data by Application of Combinatorial Optimization Heuristics [52.77024349608834]
We study clustering methods for binary data, first defining aggregation criteria that measure the compactness of clusters.
Five new and original methods are introduced, using neighborhoods and population behavior optimization metaheuristics.
On a set of 16 data tables generated by a quasi-Monte Carlo experiment, a comparison is performed for one of the aggregation criteria under the L1 dissimilarity, against hierarchical clustering and a version of k-means, partitioning around medoids (PAM).
arXiv Detail & Related papers (2020-01-06T23:33:31Z)
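For binary data, the L1 dissimilarity reduces to the Hamming distance, and the PAM baseline mentioned above can be sketched as a plain alternating k-medoids loop. This is a generic sketch for intuition (initialization, iteration count, and the toy data are assumptions), not the paper's metaheuristics.

```python
# k-medoids with L1 (Hamming) dissimilarity on binary rows: alternate
# between assigning rows to the nearest medoid and re-picking each medoid
# as the member minimizing total within-cluster distance.
import numpy as np

def k_medoids_l1(X, k, n_iters=50, seed=0):
    rng = np.random.default_rng(seed)
    D = np.abs(X[:, None, :] - X[None, :, :]).sum(-1)   # pairwise L1 distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iters):
        labels = D[:, medoids].argmin(axis=1)            # nearest-medoid assignment
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if len(members):
                within = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[within.argmin()]
        if np.array_equal(new_medoids, medoids):
            break                                        # medoids stabilized
        medoids = new_medoids
    return labels, medoids

rng = np.random.default_rng(0)
X = (rng.random((120, 10)) > 0.5).astype(int)
labels, medoids = k_medoids_l1(X, k=4)
```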
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.