CCFC: Bridging Federated Clustering and Contrastive Learning
- URL: http://arxiv.org/abs/2401.06634v1
- Date: Fri, 12 Jan 2024 15:26:44 GMT
- Title: CCFC: Bridging Federated Clustering and Contrastive Learning
- Authors: Jie Yan, Jing Liu and Zhong-Yuan Zhang
- Abstract summary: We propose a new federated clustering method named cluster-contrastive federated clustering (CCFC).
CCFC shows superior performance in handling device failures from a practical viewpoint.
- Score: 9.91610928326645
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated clustering, an essential extension of centralized clustering for
federated scenarios, enables multiple data-holding clients to collaboratively
group data while keeping their data local. In centralized scenarios,
clustering driven by representation learning has made significant advancements
in handling high-dimensional complex data. However, the combination of
federated clustering and representation learning remains underexplored. To
bridge this gap, we first tailor a cluster-contrastive model for learning
clustering-friendly representations. Then, we harness this model as the
foundation for proposing a new federated clustering method, named
cluster-contrastive federated clustering (CCFC). Benefiting from representation
learning, the clustering performance of CCFC even doubles that of the best
baseline methods in some cases. Compared with the most closely related
baseline, this benefit yields substantial NMI score improvements of up to
0.4155 in the most conspicuous case. Moreover, CCFC shows superior
performance in handling device failures from a practical viewpoint.
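As a concrete illustration, a minimal sketch of what a cluster-contrastive loss could look like in PyTorch is given below. The abstract does not specify CCFC's exact formulation, so this variant, which attracts each embedding to its assigned cluster representative and repels it from the others, is an assumption; the function name and temperature parameter are illustrative.

```python
import torch
import torch.nn.functional as F

def cluster_contrastive_loss(z, centroids, assignments, temperature=0.5):
    """Sketch of a cluster-contrastive loss: each embedding is attracted to
    its assigned cluster representative and repelled from all other clusters.
    z:           (N, D) sample embeddings
    centroids:   (K, D) cluster representatives
    assignments: (N,)   hard cluster labels (LongTensor) in [0, K)
    """
    z = F.normalize(z, dim=1)
    centroids = F.normalize(centroids, dim=1)
    logits = z @ centroids.T / temperature  # (N, K) cosine similarities
    # Treat the assigned cluster as the positive "class" for each sample
    return F.cross_entropy(logits, assignments)
```

In a federated round, each client would plausibly minimize such a loss on its local data before the server aggregates the model parameters, FedAvg-style.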
Related papers
- Adaptive Self-supervised Robust Clustering for Unstructured Data with Unknown Cluster Number [12.926206811876174]
We introduce a novel self-supervised deep clustering approach tailored for unstructured data, termed Adaptive Self-supervised Robust Clustering (ASRC).
ASRC adaptively learns the graph structure and edge weights to capture both local and global structural information.
ASRC even outperforms methods that rely on prior knowledge of the number of clusters, highlighting its effectiveness in addressing the challenges of clustering unstructured data.
arXiv Detail & Related papers (2024-07-29T15:51:09Z)
- Dynamically Weighted Federated k-Means [0.0]
Federated clustering enables multiple data sources to collaboratively cluster their data, maintaining decentralization and preserving privacy.
We introduce a novel federated clustering algorithm named Dynamically Weighted Federated k-means (DWF k-means) based on Lloyd's method for k-means clustering.
We conduct experiments on multiple datasets and data distribution settings to evaluate the performance of our algorithm in terms of clustering score, accuracy, and v-measure.
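The summary does not state the paper's exact weighting rule, so the sketch below shows one plausible round of federated k-means in which the server weights each client's centroid contribution by its local cluster sizes; all function names are illustrative, not from the paper.

```python
import numpy as np

def local_kmeans_step(X, centroids):
    """One Lloyd step on a client's local data: assign points to the
    nearest centroid, then return per-cluster sums and counts."""
    labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    k = centroids.shape[0]
    sums = np.zeros_like(centroids)
    counts = np.zeros(k)
    for j in range(k):
        mask = labels == j
        counts[j] = mask.sum()
        if counts[j] > 0:
            sums[j] = X[mask].sum(axis=0)
    return sums, counts

def server_aggregate(all_sums, all_counts):
    """Weight each client's contribution by its local cluster sizes,
    one plausible form of dynamic weighting."""
    total_sums = sum(all_sums)
    total_counts = sum(all_counts)
    return total_sums / np.maximum(total_counts, 1)[:, None]
```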
arXiv Detail & Related papers (2023-10-23T12:28:21Z)
- DivClust: Controlling Diversity in Deep Clustering [47.85350249697335]
DivClust produces consensus clustering solutions that consistently outperform single-clustering baselines.
Our method effectively controls diversity across frameworks and datasets with very small additional computational cost.
arXiv Detail & Related papers (2023-04-03T14:45:43Z)
- Efficient Distribution Similarity Identification in Clustered Federated Learning via Principal Angles Between Client Data Subspaces [59.33965805898736]
Clustered federated learning (FL) has been shown to produce promising results by grouping clients into clusters.
Existing FL algorithms essentially try to group together clients with similar data distributions.
Prior FL algorithms estimate these similarities indirectly during training.
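For intuition, principal angles between two clients' data subspaces can be computed from the SVD of the product of their orthonormal bases; the generic sketch below is not the paper's implementation, and the subspace rank `p` is an illustrative parameter.

```python
import numpy as np

def principal_angles(X1, X2, p=3):
    """Principal angles between the rank-p subspaces spanned by two
    clients' data matrices (rows = samples). Smaller angles indicate
    more similar data distributions."""
    U1, _, _ = np.linalg.svd(X1.T, full_matrices=False)  # basis, client 1
    U2, _, _ = np.linalg.svd(X2.T, full_matrices=False)  # basis, client 2
    s = np.linalg.svd(U1[:, :p].T @ U2[:, :p], compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))  # angles in radians
```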
arXiv Detail & Related papers (2022-09-21T17:37:54Z)
- DeepCluE: Enhanced Image Clustering via Multi-layer Ensembles in Deep Neural Networks [53.88811980967342]
This paper presents a Deep Clustering via Ensembles (DeepCluE) approach.
It bridges the gap between deep clustering and ensemble clustering by harnessing the power of multiple layers in deep neural networks.
Experimental results on six image datasets confirm the advantages of DeepCluE over the state-of-the-art deep clustering approaches.
arXiv Detail & Related papers (2022-06-01T09:51:38Z)
- Learning Statistical Representation with Joint Deep Embedded Clustering [2.1267423178232407]
StatDEC is an unsupervised framework for joint statistical representation learning and clustering.
Our experiments show that using these representations, one can considerably improve results on imbalanced image clustering across a variety of image datasets.
arXiv Detail & Related papers (2021-09-11T09:26:52Z)
- Very Compact Clusters with Structural Regularization via Similarity and Connectivity [3.779514860341336]
We propose an end-to-end deep clustering algorithm, Very Compact Clusters (VCC), for general datasets.
Our proposed approach achieves better clustering performance than most state-of-the-art clustering methods.
arXiv Detail & Related papers (2021-06-09T23:22:03Z)
- You Never Cluster Alone [150.94921340034688]
We extend the mainstream contrastive learning paradigm to a cluster-level scheme, where all the data assigned to the same cluster contribute to a unified representation.
We define a set of categorical variables as clustering assignment confidence, which links the instance-level learning track with the cluster-level one.
By reparametrizing the assignment variables, TCC is trained end-to-end, requiring no alternating steps.
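A standard way to reparametrize categorical assignment variables for end-to-end training is the Gumbel-softmax relaxation; the summary does not confirm that TCC uses exactly this form, so the snippet below is only a sketch of the general technique.

```python
import torch
import torch.nn.functional as F

# Clustering-head logits for a batch of 8 samples over 10 clusters
logits = torch.randn(8, 10)

# Differentiable relaxation of a categorical sample: gradients flow through
# the soft assignment, so no alternating optimization step is needed.
q = F.gumbel_softmax(logits, tau=0.5, hard=False)  # (8, 10), rows sum to 1
```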
arXiv Detail & Related papers (2021-06-03T14:59:59Z)
- Graph Contrastive Clustering [131.67881457114316]
We propose a novel graph contrastive learning framework and apply it to the clustering task, yielding the Graph Contrastive Clustering (GCC) method.
Specifically, on the one hand, a graph Laplacian-based contrastive loss is proposed to learn more discriminative and clustering-friendly features.
On the other hand, a novel graph-based contrastive learning strategy is proposed to learn more compact clustering assignments.
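The summary does not give GCC's exact loss, but the graph-Laplacian idea behind it can be sketched as a smoothness penalty that encourages neighboring samples in an affinity graph to share similar features; the helper below is a generic illustration, not the paper's code.

```python
import torch

def laplacian_smoothness(z, adj):
    """Generic graph-Laplacian penalty on embeddings.
    z:   (N, D) embeddings
    adj: (N, N) symmetric affinity matrix (e.g., from a k-NN graph)
    Returns trace(Z^T L Z) = 0.5 * sum_ij adj_ij * ||z_i - z_j||^2.
    """
    deg = torch.diag(adj.sum(dim=1))
    lap = deg - adj  # unnormalized graph Laplacian
    return torch.trace(z.T @ lap @ z)
```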
arXiv Detail & Related papers (2021-04-03T15:32:49Z)
- Contrastive Clustering [57.71729650297379]
We propose Contrastive Clustering (CC), which explicitly performs instance- and cluster-level contrastive learning.
In particular, CC achieves an NMI of 0.705 (0.431) on the CIFAR-10 (CIFAR-100) dataset, an improvement of up to 19% (39%) over the best baseline.
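NMI scores like those quoted above can be computed from predicted cluster assignments and ground-truth labels with scikit-learn; a quick self-contained example:

```python
from sklearn.metrics import normalized_mutual_info_score

# Toy example: the prediction is a perfect clustering up to label permutation
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [1, 1, 0, 0, 2, 2]
print(normalized_mutual_info_score(y_true, y_pred))  # 1.0
```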
arXiv Detail & Related papers (2020-09-21T08:54:40Z)