Do Self-Supervised and Supervised Methods Learn Similar Visual Representations?
- URL: http://arxiv.org/abs/2110.00528v1
- Date: Fri, 1 Oct 2021 16:51:29 GMT
- Title: Do Self-Supervised and Supervised Methods Learn Similar Visual Representations?
- Authors: Tom George Grigg, Dan Busbridge, Jason Ramapuram, Russ Webb
- Abstract summary: We compare a contrastive self-supervised algorithm (SimCLR) to supervision for simple image data in a common architecture.
We find that the methods learn similar intermediate representations through dissimilar means, and that the representations diverge rapidly in the final few layers.
Our work particularly highlights the importance of the learned intermediate representations, and raises important questions for auxiliary task design.
- Score: 3.1594831736896025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the success of a number of recent techniques for visual
self-supervised deep learning, there remains limited investigation into the
representations that are ultimately learned. By using recent advances in
comparing neural representations, we explore this direction by comparing a
contrastive self-supervised algorithm (SimCLR) to supervision for simple image
data in a common architecture. We find that the methods learn similar
intermediate representations through dissimilar means, and that the
representations diverge rapidly in the final few layers. We investigate this
divergence, finding that it is caused by these layers strongly fitting to the
distinct learning objectives. We also find that SimCLR's objective implicitly
fits the supervised objective in intermediate layers, but that the reverse is
not true. Our work particularly highlights the importance of the learned
intermediate representations, and raises important questions for auxiliary task
design.
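The abstract does not name the specific representation-comparison tool, but recent work in this area typically measures per-layer similarity with linear centered kernel alignment (CKA). The sketch below is a minimal, hypothetical illustration under that assumption; the random activation matrices merely stand in for real per-layer features from a SimCLR-trained and a supervised network evaluated on the same batch.

import numpy as np

def linear_cka(x, y):
    # Linear centered kernel alignment (CKA) between two activation
    # matrices of shape (n_examples, n_features). Returns a value in [0, 1].
    x = x - x.mean(axis=0, keepdims=True)
    y = y - y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(y.T @ x, ord="fro") ** 2
    return cross / (np.linalg.norm(x.T @ x, ord="fro")
                    * np.linalg.norm(y.T @ y, ord="fro"))

# Hypothetical stand-ins for per-layer activations of a SimCLR-trained
# and a supervised network on the same batch of images.
rng = np.random.default_rng(0)
acts_simclr = rng.normal(size=(256, 512))
acts_supervised = rng.normal(size=(256, 512))
print(linear_cka(acts_simclr, acts_supervised))

Computing such a score at every layer of both networks is the kind of analysis that would expose the pattern reported above: similar intermediate representations, with divergence concentrated in the final few layers.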
Related papers
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Semi-supervised learning made simple with self-supervised clustering [65.98152950607707]
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations.
We propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods into semi-supervised learners.
arXiv Detail & Related papers (2023-06-13T01:09:18Z)
- From Patches to Objects: Exploiting Spatial Reasoning for Better Visual Representations [2.363388546004777]
We propose a novel auxiliary pretraining method that is based on spatial reasoning.
Our proposed method takes advantage of a more flexible formulation of contrastive learning by introducing spatial reasoning as an auxiliary task for discriminative self-supervised methods.
arXiv Detail & Related papers (2023-05-21T07:46:46Z)
- Unsupervised Part Discovery from Contrastive Reconstruction [90.88501867321573]
The goal of self-supervised visual representation learning is to learn strong, transferable image representations.
We propose an unsupervised approach to object part discovery and segmentation.
Our method yields semantic parts consistent across fine-grained but visually distinct categories.
arXiv Detail & Related papers (2021-11-11T17:59:42Z)
- Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z)
- Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations [78.12377360145078]
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection.
In this paper, we first study how biases in the dataset affect existing methods.
We show that current contrastive approaches work surprisingly well across: (i) object- versus scene-centric, (ii) uniform versus long-tailed and (iii) general versus domain-specific datasets.
arXiv Detail & Related papers (2021-06-10T17:59:13Z)
- Heterogeneous Contrastive Learning: Encoding Spatial Information for Compact Visual Representations [183.03278932562438]
This paper presents an effective approach that adds spatial information to the encoding stage to alleviate the learning inconsistency between the contrastive objective and strong data augmentation operations.
We show that our approach achieves higher efficiency in visual representations and thus delivers a key message to inspire the future research of self-supervised visual representation learning.
arXiv Detail & Related papers (2020-11-19T16:26:25Z)
- A Survey on Contrastive Self-supervised Learning [0.0]
Self-supervised learning has gained popularity because of its ability to avoid the cost of annotating large-scale datasets.
Contrastive learning has recently become a dominant component in self-supervised learning methods for computer vision, natural language processing (NLP), and other domains.
This paper provides an extensive review of self-supervised methods that follow the contrastive approach.
arXiv Detail & Related papers (2020-10-31T21:05:04Z)
- Similarity Analysis of Self-Supervised Speech Representations [44.33287205296597]
We quantify the similarities between different self-supervised representations using existing similarity measures.
We also design probing tasks to study the correlation between the models' pre-training loss and the amount of specific speech information contained in their learned representations.
arXiv Detail & Related papers (2020-10-22T07:02:21Z)
- Anatomy of Catastrophic Forgetting: Hidden Representations and Task Semantics [24.57617154267565]
We investigate how forgetting affects representations in neural network models.
We find that deeper layers are disproportionately the source of forgetting.
We also introduce a novel CIFAR-100 based task approximating realistic input distribution shift.
arXiv Detail & Related papers (2020-07-14T23:31:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.