Boosting Contrastive Self-Supervised Learning with False Negative
Cancellation
- URL: http://arxiv.org/abs/2011.11765v2
- Date: Sun, 2 Jan 2022 10:45:31 GMT
- Title: Boosting Contrastive Self-Supervised Learning with False Negative
Cancellation
- Authors: Tri Huynh, Simon Kornblith, Matthew R. Walter, Michael Maire, Maryam
Khademi
- Abstract summary: A fundamental problem in contrastive learning is mitigating the effects of false negatives.
We propose novel approaches to identify false negatives, as well as two strategies to mitigate their effect.
Our method exhibits consistent improvements over existing contrastive learning-based methods.
- Score: 40.71224235172881
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised representation learning has made significant leaps fueled by
progress in contrastive learning, which seeks to learn transformations that
embed positive input pairs nearby, while pushing negative pairs far apart.
While positive pairs can be generated reliably (e.g., as different views of the
same image), it is difficult to accurately establish negative pairs, defined as
samples from different images regardless of their semantic content or visual
features. A fundamental problem in contrastive learning is mitigating the
effects of false negatives. Contrasting false negatives induces two critical
issues in representation learning: discarding semantic information and slow
convergence. In this paper, we propose novel approaches to identify false
negatives, as well as two strategies to mitigate their effect, i.e. false
negative elimination and attraction, while systematically performing rigorous
evaluations to study this problem in detail. Our method exhibits consistent
improvements over existing contrastive learning-based methods. Without labels,
we identify false negatives with 40% accuracy among 1000 semantic classes on
ImageNet, and achieve 5.8% absolute improvement in top-1 accuracy over the
previous state-of-the-art when finetuning with 1% labels. Our code is available
at https://github.com/google-research/fnc.
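Code sketch (illustrative): below is a minimal, single-anchor PyTorch sketch of the two mitigation strategies named in the abstract, false negative elimination and attraction, together with a simple similarity-threshold heuristic for detecting false negatives. It is not the authors' released implementation (see the repository linked above); the names detect_false_negatives, fnc_loss, and false_neg_mask, the threshold value, and the max-over-support-views aggregation are assumptions made for illustration only.

```python
import torch
import torch.nn.functional as F


def detect_false_negatives(anchor_support, negatives, threshold=0.7):
    """Heuristic false negative detection (illustrative; not the paper's exact rule).

    anchor_support: (S, D) embeddings of extra augmented "support" views of the anchor.
    negatives:      (N, D) embeddings of candidate negatives.
    Returns a boolean mask that is True where a candidate is judged to be a
    false negative, based on its similarity to the anchor's support views.
    """
    anchor_support = F.normalize(anchor_support, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    sims = negatives @ anchor_support.T            # (N, S) cosine similarities
    scores = sims.max(dim=1).values                # aggregate over support views
    return scores > threshold


def fnc_loss(anchor, positive, negatives, false_neg_mask,
             temperature=0.1, strategy="elimination"):
    """Single-anchor InfoNCE loss with false negative elimination or attraction.

    anchor, positive: (D,) embeddings of two views of the same image.
    negatives:        (N, D) embeddings of candidate negatives.
    false_neg_mask:   (N,) boolean mask of detected false negatives.
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_sim = (anchor @ positive) / temperature    # scalar logit for the true positive
    neg_sim = (negatives @ anchor) / temperature   # (N,) logits for the negatives
    target = torch.zeros(1, dtype=torch.long, device=anchor.device)

    if strategy == "elimination":
        # Drop detected false negatives from the denominator entirely.
        logits = torch.cat([pos_sim.unsqueeze(0), neg_sim[~false_neg_mask]])
        return F.cross_entropy(logits.unsqueeze(0), target)

    # "attraction": treat detected false negatives as additional positives and
    # average their InfoNCE terms with that of the true positive.
    true_neg_sim = neg_sim[~false_neg_mask]
    pos_sims = torch.cat([pos_sim.unsqueeze(0), neg_sim[false_neg_mask]])
    losses = [F.cross_entropy(torch.cat([p.unsqueeze(0), true_neg_sim]).unsqueeze(0),
                              target)
              for p in pos_sims]
    return torch.stack(losses).mean()
```

In this sketch, elimination only removes the spurious repulsion from the loss denominator, while attraction additionally pulls detected false negatives toward the anchor, mirroring the two issues the abstract attributes to false negatives (discarded semantic information and slow convergence).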
Related papers
- Exploring Negatives in Contrastive Learning for Unpaired Image-to-Image
Translation [12.754320302262533]
We introduce a new negative Pruning technique for Unpaired image-to-image Translation (PUT) that sparsifies and ranks the patches.
The proposed algorithm is efficient and flexible, and enables the model to stably learn essential information from corresponding patches.
arXiv Detail & Related papers (2022-04-23T08:31:18Z)
- Robust Contrastive Learning Using Negative Samples with Diminished Semantics [23.38896719740166]
We show that by generating carefully designed negative samples, contrastive learning can learn more robust representations.
We develop two methods, texture-based and patch-based augmentations, to generate negative samples.
We also analyze our method and the generated texture-based samples, showing that texture features are indispensable in classifying particular ImageNet classes.
arXiv Detail & Related papers (2021-10-27T05:38:00Z)
- A Theory-Driven Self-Labeling Refinement Method for Contrastive Representation Learning [111.05365744744437]
Unsupervised contrastive learning labels crops of the same image as positives, and other image crops as negatives.
In this work, we first prove that for contrastive learning, inaccurate label assignment heavily impairs its generalization for semantic instance discrimination.
Inspired by this theory, we propose a novel self-labeling refinement approach for contrastive learning.
arXiv Detail & Related papers (2021-06-28T14:24:52Z)
- Incremental False Negative Detection for Contrastive Learning [95.68120675114878]
We introduce a novel incremental false negative detection for self-supervised contrastive learning.
During contrastive learning, we discuss two strategies to explicitly remove the detected false negatives.
Our proposed method outperforms other self-supervised contrastive learning frameworks on multiple benchmarks with limited compute.
arXiv Detail & Related papers (2021-06-07T15:29:14Z)
- With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations [87.72779294717267]
Using the nearest neighbor as a positive in contrastive losses significantly improves performance on ImageNet classification (a minimal code sketch of this idea appears after this list).
We demonstrate empirically that our method is less reliant on complex data augmentations.
arXiv Detail & Related papers (2021-04-29T17:56:08Z)
- ISD: Self-Supervised Learning by Iterative Similarity Distillation [39.60300771234578]
We introduce a self-supervised learning algorithm in which we use a soft similarity for the negative images rather than a binary distinction between positive and negative pairs.
Our method achieves better results than state-of-the-art models such as BYOL and MoCo in transfer learning settings.
arXiv Detail & Related papers (2020-12-16T20:50:17Z)
- Contrastive Learning with Hard Negative Samples [80.12117639845678]
We develop a new family of unsupervised sampling methods for selecting hard negative samples.
A limiting case of this sampling results in a representation that tightly clusters each class, and pushes different classes as far apart as possible.
The proposed method improves downstream performance across multiple modalities, requires only a few additional lines of code to implement, and introduces no computational overhead.
arXiv Detail & Related papers (2020-10-09T14:18:53Z)
- Delving into Inter-Image Invariance for Unsupervised Visual Representations [108.33534231219464]
We present a study to better understand the role of inter-image invariance learning.
Online labels converge faster than offline labels.
Semi-hard negative samples are more reliable and unbiased than hard negative samples.
arXiv Detail & Related papers (2020-08-26T17:44:23Z)
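Code sketch (illustrative), referenced from the nearest-neighbor entry above: a minimal PyTorch sketch of using the nearest neighbor from a support queue as the positive in an InfoNCE loss, assuming embeddings are already computed. The names nn_positive, nn_contrastive_loss, and support_queue are illustrative assumptions, not that paper's API.

```python
import torch
import torch.nn.functional as F


def nn_positive(query, support_queue):
    """Return, for each query embedding, its nearest neighbour in the support queue.

    query:         (B, D) normalized embeddings of one augmented view.
    support_queue: (Q, D) normalized embeddings of previously seen examples.
    """
    sims = query @ support_queue.T        # (B, Q) cosine similarities
    idx = sims.argmax(dim=1)              # index of the nearest neighbour per query
    return support_queue[idx]             # (B, D) nearest-neighbour positives


def nn_contrastive_loss(view1, view2, support_queue, temperature=0.1):
    """InfoNCE loss where the positive for view2 is view1's nearest neighbour."""
    z1 = F.normalize(view1, dim=-1)
    z2 = F.normalize(view2, dim=-1)
    queue = F.normalize(support_queue, dim=-1).detach()  # no gradient through the queue
    nn1 = nn_positive(z1, queue)

    logits = nn1 @ z2.T / temperature     # (B, B); diagonal entries are the positives
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```

In practice the support queue would be maintained as a FIFO of recent embeddings and refreshed each step; here it is simply passed in as a tensor argument.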
This list is automatically generated from the titles and abstracts of the papers in this site.