Can Semantic Labels Assist Self-Supervised Visual Representation Learning?
- URL: http://arxiv.org/abs/2011.08621v1
- Date: Tue, 17 Nov 2020 13:25:00 GMT
- Title: Can Semantic Labels Assist Self-Supervised Visual Representation Learning?
- Authors: Longhui Wei, Lingxi Xie, Jianzhong He, Jianlong Chang, Xiaopeng Zhang,
Wengang Zhou, Houqiang Li, Qi Tian
- Abstract summary: We present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN).
In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods.
Our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
- Score: 194.1681088693248
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, contrastive learning has largely advanced the progress of
unsupervised visual representation learning. Pre-trained on ImageNet, some
self-supervised algorithms reported higher transfer learning performance
compared to fully-supervised methods, seeming to deliver the message that human
labels hardly contribute to learning transferable visual features. In this
paper, we defend the usefulness of semantic labels but point out that
fully-supervised and self-supervised methods are pursuing different kinds of
features. To alleviate this issue, we present a new algorithm named Supervised
Contrastive Adjustment in Neighborhood (SCAN) that maximally prevents the
semantic guidance from damaging the appearance feature embedding. In a series
of downstream tasks, SCAN achieves superior performance compared to previous
fully-supervised and self-supervised methods, and sometimes the gain is
significant. More importantly, our study reveals that semantic labels are
useful in assisting self-supervised methods, opening a new direction for the
community.
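The abstract stops short of spelling out the SCAN formulation, so the sketch below is only an illustration of the stated idea, not the paper's method: a supervised contrastive (SupCon-style) loss in which label information acts only among an anchor's k nearest embedding neighbors, so that distant same-class samples cannot distort the appearance embedding. The neighborhood size k, the temperature, and all function names are assumptions.

```python
import torch
import torch.nn.functional as F

def neighborhood_supcon_loss(z, labels, k=5, temperature=0.1):
    """z: (N, D) embeddings; labels: (N,) integer class ids.
    Hypothetical illustration, not the paper's exact SCAN loss."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                  # (N, N) cosine similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float('-inf'))      # exclude self-pairs

    # Restrict positives to the k most similar embeddings that also share
    # the anchor's label, so the semantic signal only acts locally.
    neighbors = torch.zeros(n, n, dtype=torch.bool, device=z.device)
    neighbors.scatter_(1, sim.topk(k, dim=1).indices, True)
    positives = (labels.unsqueeze(0) == labels.unsqueeze(1)) & neighbors

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_log_prob = log_prob.masked_fill(~positives, 0.0)
    pos_count = positives.sum(dim=1)
    loss = -pos_log_prob.sum(dim=1) / pos_count.clamp(min=1)
    return loss[pos_count > 0].mean()              # anchors with >=1 positive
```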
Related papers
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- A Study of Forward-Forward Algorithm for Self-Supervised Learning [65.268245109828]
We study the performance of forward-forward vs. backpropagation for self-supervised representation learning.
Our main finding is that while the forward-forward algorithm performs comparably to backpropagation during (self-supervised) training, the transfer performance is significantly lagging behind in all the studied settings.
arXiv Detail & Related papers (2023-09-21T10:14:53Z)
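The summary above only names the algorithm, so here is a minimal single-layer sketch of the forward-forward objective being compared against backpropagation (following Hinton's formulation; the threshold and optimizer choices are assumptions): each layer is trained locally to push the "goodness", the sum of squared activations, above a threshold for positive data and below it for negative data, with no gradient crossing layer boundaries.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    """One locally trained layer; stacking several gives the full model."""
    def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.linear.parameters(), lr=lr)

    def forward(self, x):
        # Length-normalise the input so the previous layer's goodness
        # cannot leak into this layer's decision.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return F.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)  # goodness of positives
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)  # goodness of negatives
        # Logistic loss: push positives above the threshold, negatives below.
        loss = (F.softplus(self.threshold - g_pos)
                + F.softplus(g_neg - self.threshold)).mean()
        self.opt.zero_grad()
        loss.backward()  # gradients stay inside this layer
        self.opt.step()
        # Hand detached activations to the next layer: no cross-layer backprop.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()
```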
- Semi-supervised learning made simple with self-supervised clustering [65.98152950607707]
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations.
We propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods into semi-supervised learners.
arXiv Detail & Related papers (2023-06-13T01:09:18Z)
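As an illustration of the general recipe (a simplification; the summary does not give the paper's exact method), a set of learnable cluster prototypes can double as a classifier head: unlabeled images are trained against the soft cluster assignment of another augmented view, labeled images against their ground-truth class, so annotations slot directly into a clustering-based self-supervised objective. Soft-target cross-entropy requires PyTorch 1.10+.

```python
import torch
import torch.nn.functional as F

def semisup_cluster_loss(z1, z2, labels, prototypes, temp=0.1):
    """z1, z2: (N, D) embeddings of two augmented views.
    labels: (N,) class ids, with -1 marking unlabeled samples.
    prototypes: (K, D) learnable nn.Parameter, K = number of classes.
    Hypothetical sketch, not the paper's exact formulation."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    p = F.normalize(prototypes, dim=1)
    logits1 = z1 @ p.t() / temp
    logits2 = z2 @ p.t() / temp

    labeled = labels >= 0
    loss = torch.zeros((), device=z1.device)
    if labeled.any():
        # Labeled images: the prototypes double as classifier weights.
        loss = loss + F.cross_entropy(logits1[labeled], labels[labeled])
    if (~labeled).any():
        # Unlabeled images: one view predicts the other view's soft
        # cluster assignment (targets detached, swapped-prediction style).
        targets = F.softmax(logits2[~labeled], dim=1).detach()
        loss = loss + F.cross_entropy(logits1[~labeled], targets)
    return loss
```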
- On minimal variations for unsupervised representation learning [19.055611167696238]
Unsupervised representation learning aims at describing raw data efficiently to solve various downstream tasks.
Revealing minimal variations as a guiding principle behind unsupervised representation learning paves the way to better practical guidelines for self-supervised learning algorithms.
arXiv Detail & Related papers (2022-11-07T18:57:20Z)
- Recent Advancements in Self-Supervised Paradigms for Visual Feature Representation [0.41436032949434404]
Supervised learning requires a large amount of labeled data to reach state-of-the-art performance.
To avoid the cost of labeling data, self-supervised methods were proposed to make use of largely available unlabeled data.
This study conducts a comprehensive and insightful survey and analysis of recent developments in the self-supervised paradigm for feature representation.
arXiv Detail & Related papers (2021-11-03T07:02:34Z)
- Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z)
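A minimal sketch of the rehearsal component (the buffer policy and batch composition here are generic assumptions, not Co$^2$L's exact recipe): a small reservoir-sampled buffer of past-task examples is mixed into each batch so the contrastive representation keeps covering old tasks while the new one is learned.

```python
import random
import torch

class ReplayBuffer:
    """Reservoir-sampled buffer of past-task examples (illustrative)."""
    def __init__(self, capacity=500):
        self.capacity = capacity
        self.data = []   # list of (image_tensor, label) pairs
        self.seen = 0

    def add(self, x, y):
        # Reservoir sampling keeps a uniform sample over everything seen.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, n):
        batch = random.sample(self.data, min(n, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.tensor(ys)
```

At each step, a batch drawn from the buffer would be concatenated with the current-task batch before the contrastive loss is computed, so old-task structure keeps shaping the representation.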
- Improving Few-Shot Learning with Auxiliary Self-Supervised Pretext Tasks [0.0]
Recent work on few-shot learning shows that quality of learned representations plays an important role in few-shot classification performance.
On the other hand, the goal of self-supervised learning is to recover useful semantic information of the data without the use of class labels.
We exploit the complementarity of both paradigms via a multi-task framework where we leverage recent self-supervised methods as auxiliary tasks.
arXiv Detail & Related papers (2021-01-24T23:21:43Z)
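A minimal sketch of the multi-task framework described above (the specific auxiliary task and loss weighting are assumptions): the few-shot classification loss is combined with a self-supervised auxiliary loss, here rotation prediction, computed on the same backbone.

```python
import torch
import torch.nn.functional as F

def multitask_loss(backbone, cls_head, rot_head, images, labels, lam=0.5):
    """images: (N, C, H, W); labels: (N,) episode class ids.
    cls_head / rot_head: linear heads on the shared backbone (hypothetical)."""
    # Main task: ordinary classification on the episode's images.
    cls_loss = F.cross_entropy(cls_head(backbone(images)), labels)

    # Auxiliary self-supervised task: predict which of 4 rotations
    # (0/90/180/270 degrees) was applied to each image.
    rot_labels = torch.randint(0, 4, (images.size(0),), device=images.device)
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(images, rot_labels)])
    rot_loss = F.cross_entropy(rot_head(backbone(rotated)), rot_labels)

    return cls_loss + lam * rot_loss  # lam trades off the two objectives
```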
- Heterogeneous Contrastive Learning: Encoding Spatial Information for Compact Visual Representations [183.03278932562438]
This paper presents an effective approach that adds spatial information to the encoding stage to alleviate the learning inconsistency between the contrastive objective and strong data augmentation operations.
We show that our approach learns more efficient visual representations, delivering a key message to inspire future research on self-supervised visual representation learning.
arXiv Detail & Related papers (2020-11-19T16:26:25Z)
- A Survey on Contrastive Self-supervised Learning [0.0]
Self-supervised learning has gained popularity because of its ability to avoid the cost of annotating large-scale datasets.
Contrastive learning has recently become a dominant component in self-supervised learning methods for computer vision, natural language processing (NLP), and other domains.
This paper provides an extensive review of self-supervised methods that follow the contrastive approach.
arXiv Detail & Related papers (2020-10-31T21:05:04Z)
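For reference, most of the surveyed methods build on the InfoNCE (NT-Xent) loss; a standard formulation, not specific to any single paper, looks like this: two augmented views of the same image are pulled together while all other samples in the batch act as negatives.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """z1, z2: (N, D) embeddings of two augmented views of N images."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                 # (2N, 2N) similarities
    sim.fill_diagonal_(float('-inf'))             # a sample is not its own pair
    # Row i's positive is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```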