Representation Learning via Consistent Assignment of Views to Clusters
- URL: http://arxiv.org/abs/2112.15421v1
- Date: Fri, 31 Dec 2021 12:59:23 GMT
- Title: Representation Learning via Consistent Assignment of Views to Clusters
- Authors: Thalles Silva and Adín Ramírez Rivera
- Abstract summary: Consistent Assignment for Representation Learning (CARL) is an unsupervised learning method to learn visual representations.
By viewing contrastive learning from a clustering perspective, CARL learns unsupervised representations by learning a set of general prototypes.
Unlike contemporary work on contrastive learning with deep clustering, CARL proposes to learn the set of general prototypes in an online fashion.
- Score: 0.7614628596146599
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We introduce Consistent Assignment for Representation Learning (CARL), an
unsupervised learning method to learn visual representations by combining ideas
from self-supervised contrastive learning and deep clustering. By viewing
contrastive learning from a clustering perspective, CARL learns unsupervised
representations by learning a set of general prototypes that serve as energy
anchors, enforcing that different views of a given image are assigned to the same
prototype. Unlike contemporary work on contrastive learning with deep
clustering, CARL learns the set of general prototypes in an online fashion,
using gradient descent and without requiring non-differentiable algorithms or
K-Means to solve the cluster assignment problem. CARL surpasses its competitors
on many representation learning benchmarks, including linear evaluation,
semi-supervised learning, and transfer learning.
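To make the objective concrete, here is a minimal sketch of a consistent-assignment loss in the spirit of the abstract, assuming a PyTorch setup; the function name, temperature, and symmetric stop-gradient cross-entropy are illustrative assumptions, not CARL's exact formulation.

```python
import torch
import torch.nn.functional as F

def consistent_assignment_loss(z1, z2, prototypes, temperature=0.1):
    """Encourage the two augmented views of each image to be softly
    assigned to the same prototype (illustrative, not CARL's exact loss)."""
    # z1, z2: (batch, dim) embeddings of two views; prototypes: (K, dim).
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    protos = F.normalize(prototypes, dim=1)

    # Soft assignment of each view over the K prototypes.
    p1 = F.softmax(z1 @ protos.t() / temperature, dim=1)
    p2 = F.softmax(z2 @ protos.t() / temperature, dim=1)

    # Symmetric cross-entropy between the two assignment distributions;
    # detaching the target side keeps the gradients stable.
    loss12 = -(p2.detach() * torch.log(p1 + 1e-8)).sum(dim=1).mean()
    loss21 = -(p1.detach() * torch.log(p2 + 1e-8)).sum(dim=1).mean()
    return 0.5 * (loss12 + loss21)

# The prototypes are ordinary parameters, optimized by gradient descent
# jointly with the encoder (sizes here are placeholders).
prototypes = torch.nn.Parameter(torch.randn(3000, 128))
```

Note that a consistency term alone can collapse all images onto one prototype; CARL's full objective avoids such degenerate solutions, which this sketch does not attempt to reproduce.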
Related papers
- Discriminative Anchor Learning for Efficient Multi-view Clustering [59.11406089896875]
We propose discriminative anchor learning for multi-view clustering (DALMC).
We learn discriminative view-specific feature representations from the original dataset.
We build anchors from the different views based on these representations, which increases the quality of the shared anchor graph.
arXiv Detail & Related papers (2024-09-25T13:11:17Z)
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Semi-supervised learning made simple with self-supervised clustering [65.98152950607707]
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations.
We propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods into semi-supervised learners.
arXiv Detail & Related papers (2023-06-13T01:09:18Z)
- Using Representation Expressiveness and Learnability to Evaluate Self-Supervised Learning Methods [61.49061000562676]
We introduce Cluster Learnability (CL) to assess learnability.
CL is measured in terms of the performance of a KNN trained to predict labels obtained by clustering the representations with K-means.
We find that CL better correlates with in-distribution model performance than other competing recent evaluation schemes.
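As a rough sketch of how the CL score described above could be computed, assuming scikit-learn and a matrix of precomputed representations; the cluster count, neighbor count, and split ratio are placeholder assumptions rather than the paper's settings.

```python
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def cluster_learnability(embeddings, n_clusters=100, n_neighbors=5, seed=0):
    """Cluster the representations with K-means, then measure how well a
    KNN trained on one half predicts the cluster labels of the other half."""
    pseudo_labels = KMeans(n_clusters=n_clusters, n_init=10,
                           random_state=seed).fit_predict(embeddings)
    x_tr, x_te, y_tr, y_te = train_test_split(
        embeddings, pseudo_labels, test_size=0.5, random_state=seed)
    knn = KNeighborsClassifier(n_neighbors=n_neighbors).fit(x_tr, y_tr)
    return knn.score(x_te, y_te)  # higher = more learnable cluster structure
```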
arXiv Detail & Related papers (2022-06-02T19:05:13Z)
- Graph Contrastive Clustering [131.67881457114316]
We propose a novel graph contrastive learning framework and apply it to the clustering task, yielding the Graph Contrastive Clustering (GCC) method.
Specifically, on the one hand, the graph Laplacian based contrastive loss is proposed to learn more discriminative and clustering-friendly features.
On the other hand, a novel graph-based contrastive learning strategy is proposed to learn more compact clustering assignments.
arXiv Detail & Related papers (2021-04-03T15:32:49Z)
- CLASTER: Clustering with Reinforcement Learning for Zero-Shot Action Recognition [52.66360172784038]
We propose a clustering-based model, which considers all training samples at once, instead of optimizing for each instance individually.
We call the proposed method CLASTER and observe that it consistently improves over the state-of-the-art in all standard datasets.
arXiv Detail & Related papers (2021-01-18T12:46:24Z)
- Consensus Clustering With Unsupervised Representation Learning [4.164845768197489]
We study the clustering ability of Bootstrap Your Own Latent (BYOL) and observe that features learnt using BYOL may not be optimal for clustering.
We propose a novel consensus-clustering-based loss function and train BYOL with it end to end, which improves clustering ability and outperforms similar clustering-based methods.
arXiv Detail & Related papers (2020-10-03T01:16:46Z)
- Clustering based Contrastive Learning for Improving Face Representations [34.75646290505793]
We present Clustering-based Contrastive Learning (CCL), a new clustering-based representation learning approach.
CCL uses labels obtained from clustering along with video constraints to learn discriminative face features.
arXiv Detail & Related papers (2020-04-05T13:11:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.