Conditional Negative Sampling for Contrastive Learning of Visual Representations
- URL: http://arxiv.org/abs/2010.02037v1
- Date: Mon, 5 Oct 2020 14:17:32 GMT
- Title: Conditional Negative Sampling for Contrastive Learning of Visual Representations
- Authors: Mike Wu, Milan Mosse, Chengxu Zhuang, Daniel Yamins, Noah Goodman
- Abstract summary: We show that choosing difficult negatives, or those more similar to the current instance, can yield stronger representations.
We introduce a family of mutual information estimators that sample negatives conditionally -- in a "ring" around each positive.
We prove that these estimators lower-bound mutual information, with higher bias but lower variance than NCE.
- Score: 19.136685699971864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent methods for learning unsupervised visual representations, dubbed
contrastive learning, optimize the noise-contrastive estimation (NCE) bound on
mutual information between two views of an image. NCE uses randomly sampled
negative examples to normalize the objective. In this paper, we show that
choosing difficult negatives, or those more similar to the current instance,
can yield stronger representations. To do this, we introduce a family of mutual
information estimators that sample negatives conditionally -- in a "ring"
around each positive. We prove that these estimators lower-bound mutual
information, with higher bias but lower variance than NCE. Experimentally, we
find that our approach, applied on top of existing models (IR, CMC, and MoCo),
improves accuracy by 2-5 percentage points in each case, measured by linear evaluation on
four standard image datasets. Moreover, we find continued benefits when
transferring features to a variety of new image distributions from the
Meta-Dataset collection and to a variety of downstream tasks such as object
detection, instance segmentation, and keypoint detection.
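
The "ring" estimator is concrete enough to sketch. Below is a minimal NumPy illustration of conditional negative sampling, assuming L2-normalized embeddings, a memory bank of candidate negatives, and a percentile parameterization of the ring; the function name, defaults, and fallback rule are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch of "ring" conditional negative sampling for InfoNCE.
# Assumptions (not from the paper's code): percentile-based ring bounds,
# a uniform fallback when the ring is too thin, a NumPy memory bank.
import numpy as np

def ring_infonce(anchor, positive, bank, lower=0.7, upper=0.99, k=64,
                 tau=0.07, rng=None):
    """InfoNCE loss with negatives drawn from a similarity 'ring'.

    anchor, positive: (d,) L2-normalized embeddings of two views.
    bank: (n, d) L2-normalized candidate negatives (other instances).
    lower, upper: percentile bounds of the ring; harder negatives sit
        near `upper`. k: number of negatives. tau: temperature.
    """
    rng = rng or np.random.default_rng(0)
    sims = bank @ anchor                        # similarity to each candidate
    lo, hi = np.quantile(sims, [lower, upper])  # ring boundaries
    ring = np.flatnonzero((sims >= lo) & (sims <= hi))
    if len(ring) < k:                           # ring too thin: fall back
        ring = np.arange(len(bank))             # to uniform NCE sampling
    neg = bank[rng.choice(ring, size=k, replace=False)]
    logits = np.concatenate(([anchor @ positive], neg @ anchor)) / tau
    logits -= logits.max()                      # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

Pushing `upper` toward 1 concentrates sampling on the hardest negatives; consistent with the abstract, tightening the ring trades variance for bias while the objective still lower-bounds mutual information.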
Related papers
- Synthetic Hard Negative Samples for Contrastive Learning [8.776888865665024]
This paper proposes a novel feature-level method: sampling synthetic hard negative samples for contrastive learning (SSCL).
We generate more and harder negative samples by mixing existing negatives, then sample among them by controlling the contrast of the anchor sample against the other negatives (a minimal mixing sketch appears after this list).
The proposed method improves classification performance on several image datasets and can be readily integrated into existing methods.
arXiv Detail & Related papers (2023-04-06T09:54:35Z)
- Modulated Contrast for Versatile Image Synthesis [60.304183493234376]
MoNCE is a versatile metric that introduces image contrast to learn a calibrated measure of multifaceted inter-image distances.
We introduce optimal transport in MoNCE to modulate the pushing force of negative samples collaboratively across multiple contrastive objectives.
arXiv Detail & Related papers (2022-03-17T14:03:46Z)
- Investigating the Role of Negatives in Contrastive Representation Learning [59.30700308648194]
Noise contrastive learning is a popular technique for unsupervised representation learning.
We focus on disambiguating the role of one of these parameters: the number of negative examples.
We find that the results broadly agree with our theory, though our vision experiments are murkier, with performance sometimes insensitive to the number of negatives.
arXiv Detail & Related papers (2021-06-18T06:44:16Z)
- Incremental False Negative Detection for Contrastive Learning [95.68120675114878]
We introduce a novel incremental false-negative detection method for self-supervised contrastive learning.
During contrastive learning, we discuss two strategies to explicitly remove the detected false negatives.
The proposed method outperforms other self-supervised contrastive learning frameworks on multiple benchmarks within a limited compute budget.
arXiv Detail & Related papers (2021-06-07T15:29:14Z)
- Rethinking InfoNCE: How Many Negative Samples Do You Need? [54.146208195806636]
We study how many negative samples are optimal for InfoNCE in different scenarios via a semi-quantitative theoretical framework.
We estimate the optimal negative sampling ratio using the $K$ value that maximizes the training effectiveness function.
arXiv Detail & Related papers (2021-05-27T08:38:29Z)
- Contrastive Attraction and Contrastive Repulsion for Representation Learning [131.72147978462348]
Contrastive learning (CL) methods learn data representations in a self-supervised manner, where the encoder contrasts each positive sample against multiple negative samples.
Recent CL methods have achieved promising results when pretrained on large-scale datasets, such as ImageNet.
We propose a doubly CL strategy that separately compares positive and negative samples within their own groups, and then proceeds with a contrast between positive and negative groups.
arXiv Detail & Related papers (2021-05-08T17:25:08Z)
- Doubly Contrastive Deep Clustering [135.7001508427597]
We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, two contrastive losses successfully constrain the clustering results of mini-batch samples in both sample and class level.
arXiv Detail & Related papers (2021-03-09T15:15:32Z)
- On Mutual Information in Contrastive Learning for Visual Representations [19.136685699971864]
Unsupervised "contrastive" learning algorithms in vision have been shown to learn representations that perform remarkably well on transfer tasks.
We show that this family of algorithms maximizes a lower bound on the mutual information between two or more "views" of an image.
We find that the choice of negative samples and views are critical to the success of these algorithms.
arXiv Detail & Related papers (2020-05-27T04:21:53Z)
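
Several of the entries above hinge on constructing harder negatives; the SSCL item in particular mixes existing negatives in feature space. Here is a minimal sketch of that mixing idea, assuming L2-normalized features; the top-k hardness selection, Beta-distributed mixing weights, and parameter names are hypothetical choices for illustration, not the paper's exact procedure.

```python
# Hedged sketch of feature-level synthetic hard negatives (SSCL-style):
# convexly mix the hardest existing negatives and renormalize.
# `n_synth` and `beta` are illustrative assumptions, not paper values.
import numpy as np

def synthesize_hard_negatives(anchor, negatives, n_synth=16, beta=0.5, rng=None):
    """Augment (n, d) L2-normalized negatives with n_synth mixed ones."""
    rng = rng or np.random.default_rng(0)
    sims = negatives @ anchor                   # hardness = similarity to anchor
    hard = negatives[np.argsort(-sims)[:max(2, n_synth)]]
    i = rng.integers(0, len(hard), size=n_synth)
    j = rng.integers(0, len(hard), size=n_synth)
    lam = rng.beta(beta, beta, size=(n_synth, 1))   # mixing coefficients
    synth = lam * hard[i] + (1 - lam) * hard[j]
    synth /= np.linalg.norm(synth, axis=1, keepdims=True)  # back to unit sphere
    return np.concatenate([negatives, synth], axis=0)
```

The augmented set can then feed any NCE-style objective, for example in place of the uniformly sampled `neg` in the `ring_infonce` sketch above.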