Demystifying Contrastive Self-Supervised Learning: Invariances,
Augmentations and Dataset Biases
- URL: http://arxiv.org/abs/2007.13916v2
- Date: Wed, 29 Jul 2020 05:38:11 GMT
- Title: Demystifying Contrastive Self-Supervised Learning: Invariances,
Augmentations and Dataset Biases
- Authors: Senthil Purushwalkam, Abhinav Gupta
- Abstract summary: Recent gains in performance come from training instance classification models, treating each image and its augmented versions as samples of a single class.
We demonstrate that approaches like MOCO and PIRL learn occlusion-invariant representations.
Second, we demonstrate that these approaches obtain further gains from access to a clean object-centric training dataset like ImageNet.
- Score: 34.02639091680309
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised representation learning approaches have recently surpassed
their supervised learning counterparts on downstream tasks like object
detection and image classification. Somewhat mysteriously, the recent gains in
performance come from training instance classification models, treating each
image and its augmented versions as samples of a single class. In this work,
we first present quantitative experiments to demystify these gains. We
demonstrate that approaches like MOCO and PIRL learn occlusion-invariant
representations. However, they fail to capture viewpoint and category instance
invariance which are crucial components for object recognition. Second, we
demonstrate that these approaches obtain further gains from access to a clean
object-centric training dataset like ImageNet. Finally, we propose an approach
to leverage unstructured videos to learn representations that possess higher
viewpoint invariance. Our results show that the learned representations
outperform MOCOv2 trained on the same data in terms of invariances encoded and
the performance on downstream image classification and semantic segmentation
tasks.
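The instance-discrimination objective described above (each image and its augmented views treated as one class) is commonly trained with an InfoNCE-style contrastive loss: the anchor view should be more similar to its positive (augmented) view than to negatives drawn from other images. A minimal sketch in plain Python, not the authors' code; the embeddings and the temperature value are illustrative assumptions:

```python
import math

def info_nce_loss(anchor, positive, negatives, temperature=0.07):
    """InfoNCE-style contrastive loss for one anchor embedding.

    anchor and positive are embeddings of two augmented views of the
    same image; negatives come from other images. Vectors are plain
    Python lists of floats.
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def cos_sim(u, v):
        return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

    # Temperature-scaled similarities: positive first, then negatives.
    logits = [cos_sim(anchor, positive) / temperature]
    logits += [cos_sim(anchor, n) / temperature for n in negatives]

    # Cross-entropy with the positive as the "correct class" (index 0),
    # computed with the numerically stable log-sum-exp trick.
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_sum_exp - logits[0]

# A well-aligned positive pair yields a lower loss than a mismatched one.
anchor = [1.0, 0.0]
good_positive = [0.9, 0.1]   # nearly the same direction as the anchor
bad_positive = [0.0, 1.0]    # orthogonal to the anchor
negatives = [[0.0, 1.0], [-1.0, 0.2]]
assert info_nce_loss(anchor, good_positive, negatives) < \
       info_nce_loss(anchor, bad_positive, negatives)
```

Minimizing this loss pulls augmented views of the same image together while pushing other images apart, which is why the invariances a model learns are determined largely by the augmentations used to generate the positive pairs.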
Related papers
- LeOCLR: Leveraging Original Images for Contrastive Learning of Visual Representations [4.680881326162484]
Contrastive instance discrimination methods outperform supervised learning in downstream tasks such as image classification and object detection.
A common augmentation technique in contrastive learning is random cropping followed by resizing.
We introduce LeOCLR, a framework that employs a novel instance discrimination approach and an adapted loss function.
arXiv Detail & Related papers (2024-03-11T15:33:32Z)
- CIPER: Combining Invariant and Equivariant Representations Using Contrastive and Predictive Learning [6.117084972237769]
We introduce Contrastive Invariant and Predictive Equivariant Representation learning (CIPER).
CIPER comprises both invariant and equivariant learning objectives using one shared encoder and two different output heads on top of the encoder.
We evaluate our method on static image tasks and time-augmented image datasets.
arXiv Detail & Related papers (2023-02-05T07:50:46Z)
- LEAD: Self-Supervised Landmark Estimation by Aligning Distributions of Feature Similarity [49.84167231111667]
Existing works in self-supervised landmark detection are based on learning dense (pixel-level) feature representations from an image.
We introduce an approach to enhance the learning of dense equivariant representations in a self-supervised fashion.
We show that having such a prior in the feature extractor helps in landmark detection, even with a drastically limited number of annotations.
arXiv Detail & Related papers (2022-04-06T17:48:18Z)
- MixSiam: A Mixture-based Approach to Self-supervised Representation Learning [33.52892899982186]
Recently contrastive learning has shown significant progress in learning visual representations from unlabeled data.
We propose MixSiam, a mixture-based approach upon the traditional siamese network.
arXiv Detail & Related papers (2021-11-04T08:12:47Z)
- Weakly Supervised Contrastive Learning [68.47096022526927]
We introduce a weakly supervised contrastive learning framework (WCL) to tackle this issue.
WCL achieves 65% and 72% ImageNet Top-1 Accuracy using ResNet50, which is even higher than SimCLRv2 with ResNet101.
arXiv Detail & Related papers (2021-10-10T12:03:52Z)
- Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations [78.12377360145078]
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection.
In this paper, we first study how biases in the dataset affect existing methods.
We show that current contrastive approaches work surprisingly well across: (i) object- versus scene-centric, (ii) uniform versus long-tailed and (iii) general versus domain-specific datasets.
arXiv Detail & Related papers (2021-06-10T17:59:13Z)
- Few-Shot Learning with Part Discovery and Augmentation from Unlabeled Images [79.34600869202373]
We show that inductive bias can be learned from a flat collection of unlabeled images, and instantiated as transferable representations among seen and unseen classes.
Specifically, we propose a novel part-based self-supervised representation learning scheme to learn transferable representations.
Our method yields impressive results, outperforming the previous best unsupervised methods by 7.74% and 9.24%.
arXiv Detail & Related papers (2021-05-25T12:22:11Z)
- CoCon: Cooperative-Contrastive Learning [52.342936645996765]
Self-supervised visual representation learning is key for efficient video analysis.
Recent success in learning image representations suggests contrastive learning is a promising framework to tackle this challenge.
We introduce a cooperative variant of contrastive learning to utilize complementary information across views.
arXiv Detail & Related papers (2021-04-30T05:46:02Z)
- Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
arXiv Detail & Related papers (2020-04-14T16:29:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.