Unsupervised Visual Representation Learning by Synchronous Momentum Grouping
- URL: http://arxiv.org/abs/2207.06167v1
- Date: Wed, 13 Jul 2022 13:04:15 GMT
- Title: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping
- Authors: Bo Pang, Yifan Zhang, Yaoyi Li, Jia Cai, Cewu Lu
- Abstract summary: A group-level contrastive visual representation learning method whose linear evaluation performance on ImageNet surpasses vanilla supervised learning. Exhaustive experiments show that SMoG surpasses the current SOTA unsupervised representation learning methods.
- Score: 47.48803765951601
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a genuine group-level contrastive visual representation learning method whose linear evaluation performance on ImageNet surpasses vanilla supervised learning. The two mainstream unsupervised learning schemes are instance-level contrastive frameworks and clustering-based schemes. The former adopts extremely fine-grained instance-level discrimination, whose supervisory signal is inefficient due to false negatives. Although clustering-based schemes avoid this problem, they commonly come with restrictions that hurt performance. To integrate the advantages of both, we design the SMoG method. SMoG follows the framework of contrastive learning but replaces the contrastive unit from instances to groups, mimicking clustering-based methods. To achieve this, we propose a momentum grouping scheme that conducts feature grouping synchronously with representation learning. In this way, SMoG avoids the supervisory-signal hysteresis that clustering-based methods usually face and reduces the false negatives of instance-level contrastive methods. Exhaustive experiments show that SMoG works well on both CNN and Transformer backbones and surpasses the current SOTA unsupervised representation learning methods. Moreover, its linear evaluation results surpass those obtained by vanilla supervised learning, and the learned representations transfer well to downstream tasks.
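To make the momentum grouping idea concrete, below is a minimal PyTorch sketch of one training step built only from the description in the abstract. It is not the authors' released implementation: the function name, the hard nearest-group assignment, and the hyperparameters `beta` and `tau` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def momentum_grouping_step(features, groups, beta=0.99, tau=0.1):
    """Hypothetical group-level contrastive step (sketch, not SMoG's exact rule).

    features: (B, D) encoder outputs for a batch
    groups:   (K, D) group representations carried across batches
    """
    features = F.normalize(features, dim=1)
    groups = F.normalize(groups, dim=1)

    # Assign each instance to its nearest group: the group, not the
    # instance, is the contrastive unit.
    sim = features @ groups.t()        # (B, K) cosine similarities
    assign = sim.argmax(dim=1)         # (B,) hard group indices

    # Group-level contrastive loss: the assigned group is the positive and
    # all other groups act as negatives, avoiding instance-level false negatives.
    loss = F.cross_entropy(sim / tau, assign)

    # Synchronous momentum update of the groups, so grouping evolves together
    # with representation learning instead of lagging behind it the way an
    # offline clustering phase would.
    with torch.no_grad():
        for k in assign.unique():
            mean_feat = features[assign == k].mean(dim=0)
            groups[k] = F.normalize(beta * groups[k] + (1 - beta) * mean_feat, dim=0)

    return loss, groups
```

In a real training loop, `loss` would be back-propagated into the encoder while `groups` are carried across batches; the exact assignment and update rules in SMoG may differ from this sketch.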
Related papers
- CARL-G: Clustering-Accelerated Representation Learning on Graphs [18.763104937800215]
We propose a novel clustering-based framework for graph representation learning that uses a loss inspired by Cluster Validation Indices (CVIs).
CARL-G is adaptable to different clustering methods and CVIs, and we show that with the right choice of clustering method and CVI, CARL-G outperforms node classification baselines on 4/5 datasets with up to a 79x training speedup compared to the best-performing baseline.
arXiv Detail & Related papers (2023-06-12T08:14:42Z)
- Clustering-Aware Negative Sampling for Unsupervised Sentence Representation [24.15096466098421]
ClusterNS is a novel method that incorporates cluster information into contrastive learning for unsupervised sentence representation learning.
We apply a modified K-means clustering algorithm to supply hard negatives and recognize in-batch false negatives during training.
arXiv Detail & Related papers (2023-05-17T02:06:47Z)
- Alleviating Over-smoothing for Unsupervised Sentence Representation [96.19497378628594]
We present a simple method named Self-Contrastive Learning (SSCL) to alleviate the over-smoothing issue.
Our proposed method is quite simple and can be easily extended to various state-of-the-art models for performance boosting.
arXiv Detail & Related papers (2023-05-09T11:00:02Z)
- Rethinking Clustering-Based Pseudo-Labeling for Unsupervised Meta-Learning [146.11600461034746]
CACTUs, a method for unsupervised meta-learning, is a clustering-based approach with pseudo-labeling.
This approach is model-agnostic and can be combined with supervised algorithms to learn from unlabeled data.
We prove that the core reason for its limited performance is the lack of a clustering-friendly property in the embedding space.
arXiv Detail & Related papers (2022-09-27T19:04:36Z)
- Joint Debiased Representation and Image Clustering Learning with Self-Supervision [3.1806743741013657]
We develop a novel joint clustering and contrastive learning framework.
We adapt the debiased contrastive loss to avoid under-clustering minority classes of imbalanced datasets.
arXiv Detail & Related papers (2022-09-14T21:23:41Z)
- Weakly Supervised Contrastive Learning [68.47096022526927]
We introduce a weakly supervised contrastive learning framework (WCL) to tackle this issue.
WCL achieves 65% and 72% ImageNet Top-1 Accuracy using ResNet50, which is even higher than SimCLRv2 with ResNet101.
arXiv Detail & Related papers (2021-10-10T12:03:52Z)
- Clustering by Maximizing Mutual Information Across Views [62.21716612888669]
We propose a novel framework for image clustering that incorporates joint representation learning and clustering.
Our method significantly outperforms state-of-the-art single-stage clustering methods across a variety of image datasets.
arXiv Detail & Related papers (2021-07-24T15:36:49Z)
- Solving Inefficiency of Self-supervised Representation Learning [87.30876679780532]
Existing contrastive learning methods suffer from very low learning efficiency.
Under-clustering and over-clustering problems are major obstacles to learning efficiency.
We propose a novel self-supervised learning framework using a median triplet loss.
arXiv Detail & Related papers (2021-04-18T07:47:10Z)