With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations
- URL: http://arxiv.org/abs/2104.14548v1
- Date: Thu, 29 Apr 2021 17:56:08 GMT
- Title: With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations
- Authors: Debidatta Dwibedi, Yusuf Aytar, Jonathan Tompson, Pierre Sermanet, Andrew Zisserman
- Abstract summary: Using the nearest-neighbor as positive in contrastive losses improves performance significantly on ImageNet classification.
We demonstrate empirically that our method is less reliant on complex data augmentations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning algorithms based on instance discrimination train
encoders to be invariant to pre-defined transformations of the same instance.
While most methods treat different views of the same image as positives for a
contrastive loss, we are interested in using positives from other instances in
the dataset. Our method, Nearest-Neighbor Contrastive Learning of visual
Representations (NNCLR), samples the nearest neighbors from the dataset in the
latent space, and treats them as positives. This provides more semantic
variations than pre-defined transformations.
We find that using the nearest-neighbor as positive in contrastive losses
improves performance significantly on ImageNet classification, from 71.7% to
75.6%, outperforming previous state-of-the-art methods. On semi-supervised
learning benchmarks we improve performance significantly when only 1% ImageNet
labels are available, from 53.8% to 56.5%. On transfer learning benchmarks our
method outperforms state-of-the-art methods (including supervised learning with
ImageNet) on 8 out of 12 downstream datasets. Furthermore, we demonstrate
empirically that our method is less reliant on complex data augmentations. We
see a relative reduction of only 2.1% ImageNet Top-1 accuracy when we train
using only random crops.
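The core mechanism in the abstract — replacing the standard augmented-view positive with its nearest neighbor retrieved from a support set in latent space, then applying an InfoNCE-style contrastive loss — can be sketched as follows. This is a minimal NumPy illustration under assumptions of my own (toy dimensions, a random support queue); the function and variable names are illustrative and not taken from the paper's released code.

```python
import numpy as np

def nnclr_loss(z1, z2, support_set, temperature=0.1):
    """InfoNCE-style loss where each z1 embedding is swapped for its
    nearest neighbor from a support set before contrasting with z2."""
    # L2-normalize embeddings and the support set.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    support = support_set / np.linalg.norm(support_set, axis=1, keepdims=True)
    # Nearest-neighbor lookup in latent space: for each z1 embedding,
    # take the most similar support embedding as the positive.
    nn_idx = np.argmax(z1 @ support.T, axis=1)
    nn = support[nn_idx]                       # (batch, dim), already normalized
    # Cosine-similarity logits between the NN positives and the second view.
    logits = (nn @ z2.T) / temperature         # (batch, batch)
    # Cross-entropy with the matching batch index as the label:
    # the diagonal entries are the positive pairs.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_prob).mean()

# Toy usage: two augmented views of a batch plus a random support queue.
rng = np.random.default_rng(0)
z1 = rng.standard_normal((4, 8))
z2 = rng.standard_normal((4, 8))
queue = rng.standard_normal((32, 8))
loss = nnclr_loss(z1, z2, queue)
```

In the paper the support set is maintained as a queue of past embeddings; here a fixed random matrix stands in for it, which is enough to show how the nearest-neighbor lookup substitutes for the usual same-instance positive.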
Related papers
- Local Manifold Learning for No-Reference Image Quality Assessment [68.9577503732292]
We propose an innovative framework that integrates local manifold learning with contrastive learning for No-Reference Image Quality Assessment (NR-IQA).
Our approach demonstrates better performance compared to state-of-the-art methods on 7 standard datasets.
arXiv Detail & Related papers (2024-06-27T15:14:23Z)
- Asymmetric Patch Sampling for Contrastive Learning [17.922853312470398]
Asymmetric appearance between positive pairs effectively reduces the risk of representation degradation in contrastive learning.
We propose a novel asymmetric patch sampling strategy for contrastive learning, to boost the appearance asymmetry for better representations.
arXiv Detail & Related papers (2023-06-05T13:10:48Z)
- Mix-up Self-Supervised Learning for Contrast-agnostic Applications [33.807005669824136]
We present the first mix-up self-supervised learning framework for contrast-agnostic applications.
We address the low variance across images based on cross-domain mix-up and build the pretext task based on image reconstruction and transparency prediction.
arXiv Detail & Related papers (2022-04-02T16:58:36Z)
- Few-Shot Learning with Part Discovery and Augmentation from Unlabeled Images [79.34600869202373]
We show that inductive bias can be learned from a flat collection of unlabeled images, and instantiated as transferable representations among seen and unseen classes.
Specifically, we propose a novel part-based self-supervised representation learning scheme to learn transferable representations.
Our method yields impressive results, outperforming the previous best unsupervised methods by 7.74% and 9.24%.
arXiv Detail & Related papers (2021-05-25T12:22:11Z)
- Contrastive Learning with Stronger Augmentations [63.42057690741711]
We propose a general framework called Contrastive Learning with Stronger Augmentations (CLSA) to complement current contrastive learning approaches.
Here, the distribution divergence between the weakly and strongly augmented images over the representation bank is adopted to supervise the retrieval of strongly augmented queries.
Experiments showed the information from the strongly augmented images can significantly boost the performance.
arXiv Detail & Related papers (2021-04-15T18:40:04Z)
- Boosting Contrastive Self-Supervised Learning with False Negative Cancellation [40.71224235172881]
A fundamental problem in contrastive learning is mitigating the effects of false negatives.
We propose novel approaches to identify false negatives, as well as two strategies to mitigate their effect.
Our method exhibits consistent improvements over existing contrastive learning-based methods.
arXiv Detail & Related papers (2020-11-23T22:17:21Z)
- Dense Contrastive Learning for Self-Supervised Visual Pre-Training [102.15325936477362]
We present dense contrastive learning, which implements self-supervised learning by optimizing a pairwise contrastive (dis)similarity loss at the pixel level between two views of input images.
Compared to the baseline method MoCo-v2, our method introduces negligible computation overhead (only 1% slower).
arXiv Detail & Related papers (2020-11-18T08:42:32Z)
- Unsupervised Representation Learning by Invariance Propagation [34.53866045440319]
In this paper, we propose Invariance Propagation to focus on learning representations invariant to category-level variations.
With a ResNet-50 as the backbone, our method achieves 71.3% top-1 accuracy on ImageNet linear classification and 78.2% top-5 accuracy fine-tuning on only 1% labels.
We also achieve state-of-the-art performance on other downstream tasks, including linear classification on Places205 and Pascal VOC, and transfer learning on small scale datasets.
arXiv Detail & Related papers (2020-10-07T13:00:33Z)
- Unsupervised Learning of Visual Features by Contrasting Cluster Assignments [57.33699905852397]
We propose an online algorithm, SwAV, that takes advantage of contrastive methods without requiring pairwise comparisons to be computed.
Our method simultaneously clusters the data while enforcing consistency between cluster assignments.
Our method can be trained with large and small batches and can scale to unlimited amounts of data.
arXiv Detail & Related papers (2020-06-17T14:00:42Z)
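The SwAV entry above describes clustering the data while enforcing consistency between the cluster assignments of two views, instead of comparing embeddings pairwise. A rough NumPy sketch of that swapped-prediction idea follows; the helper names, the equal-partition Sinkhorn normalization, and all dimensions are my own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sinkhorn(scores, eps=0.05, n_iters=3):
    """Turn view-to-prototype scores into soft cluster assignments whose
    mass is balanced across prototypes (Sinkhorn-Knopp iterations)."""
    Q = np.exp(scores / eps)                  # (batch, prototypes)
    Q /= Q.sum()
    B, K = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(axis=0, keepdims=True); Q /= K  # balance prototypes
        Q /= Q.sum(axis=1, keepdims=True); Q /= B  # balance samples
    return Q * B                              # each row sums to 1

def swav_loss(z1, z2, prototypes, temperature=0.1):
    """Swapped prediction: each view predicts the other's assignment."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    C = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    s1, s2 = z1 @ C.T, z2 @ C.T               # view-to-prototype similarities
    q1, q2 = sinkhorn(s1), sinkhorn(s2)       # target codes (fixed in practice)
    p1, p2 = softmax(s1 / temperature), softmax(s2 / temperature)
    # Cross-entropy of each view's prediction against the other's code.
    return -0.5 * ((q2 * np.log(p1)).sum(1) + (q1 * np.log(p2)).sum(1)).mean()

# Toy usage: two views of a batch and a random prototype matrix.
rng = np.random.default_rng(0)
z1 = rng.standard_normal((4, 8))
z2 = rng.standard_normal((4, 8))
protos = rng.standard_normal((8, 8))
loss = swav_loss(z1, z2, protos)
```

Because the loss only compares each view against a small set of prototypes rather than against every other sample, it avoids the batch-by-batch pairwise comparisons of standard contrastive losses, which is what lets the method scale to large and small batches alike.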
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.