Synthetic Hard Negative Samples for Contrastive Learning
- URL: http://arxiv.org/abs/2304.02971v2
- Date: Mon, 17 Apr 2023 15:44:10 GMT
- Title: Synthetic Hard Negative Samples for Contrastive Learning
- Authors: Hengkui Dong, Xianzhong Long, Yun Li, Lei Chen
- Abstract summary: This paper proposes a novel feature-level method, namely sampling synthetic hard negative samples for contrastive learning (SSCL).
We generate more, and harder, negative samples by mixing existing negative samples, and then sample them by controlling the contrast of the anchor sample with the other negative samples.
Our proposed method improves the classification performance on different image datasets and can be readily integrated into existing methods.
- Score: 8.776888865665024
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive learning has emerged as an essential approach for self-supervised
learning in visual representation learning. The central objective of
contrastive learning is to maximize the similarities between two augmented
versions of an image (positive pairs), while minimizing the similarities
between different images (negative pairs). Recent studies have demonstrated
that harder negative samples, i.e., those that are more difficult to
differentiate from the anchor sample, perform a more crucial function in
contrastive learning. This paper proposes a novel feature-level method, namely
sampling synthetic hard negative samples for contrastive learning (SSCL), to
exploit harder negative samples more effectively. Specifically, 1) we generate
more and harder negative samples by mixing negative samples, and then sample
them by controlling the contrast of the anchor sample with the other negative
samples; 2) considering the possibility of false negative samples, we further
debias the negative samples. Our proposed method improves the classification
performance on different image datasets and can be readily integrated into
existing methods.
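The two steps described in the abstract (mixing negatives into harder synthetic ones, sampling them by their contrast with the anchor, and debiasing against false negatives) can be illustrated with a minimal PyTorch-style sketch. The function names, mixing scheme, and hyperparameters (temperature `t`, prior `tau_plus`, `n_mix`, `top_k`) below are illustrative assumptions, not the paper's exact SSCL formulation.

```python
# Minimal sketch of the steps summarized above: mix negatives into harder synthetic
# ones, keep those most similar to the anchor, and debias the contrastive loss.
import math
import torch
import torch.nn.functional as F

def synthesize_hard_negatives(anchor, negatives, n_mix=64, top_k=32, t=0.5):
    """Mix random pairs of negatives, then keep those most similar to the anchor."""
    n = negatives.size(0)
    i, j = torch.randint(n, (n_mix,)), torch.randint(n, (n_mix,))
    alpha = torch.rand(n_mix, 1)                                 # convex mixing coefficients
    mixed = F.normalize(alpha * negatives[i] + (1 - alpha) * negatives[j], dim=1)
    pool = torch.cat([negatives, mixed], dim=0)                  # original + synthetic negatives
    sim = pool @ anchor / t                                      # contrast with the anchor
    return sim.topk(top_k).values                                # harder = more similar to anchor

def sscl_style_loss(anchor, positive, negatives, t=0.5, tau_plus=0.1):
    """InfoNCE-style loss over synthetic hard negatives with a simple debiasing term."""
    pos = torch.exp(anchor @ positive / t)
    neg = torch.exp(synthesize_hard_negatives(anchor, negatives, t=t))
    k = neg.numel()
    # Subtract the expected false-negative mass, clamped to stay positive.
    neg_term = torch.clamp((neg.sum() - k * tau_plus * pos) / (1 - tau_plus),
                           min=k * math.exp(-1.0 / t))
    return -torch.log(pos / (pos + neg_term))

# Toy usage with unit-normalized random embeddings.
anchor = F.normalize(torch.randn(128), dim=0)
positive = F.normalize(torch.randn(128), dim=0)
negatives = F.normalize(torch.randn(256, 128), dim=1)
print(sscl_style_loss(anchor, positive, negatives))
```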
Related papers
- Contrastive Learning with Negative Sampling Correction [52.990001829393506]
We propose a novel contrastive learning method named Positive-Unlabeled Contrastive Learning (PUCL).
PUCL treats the generated negative samples as unlabeled samples and uses information from positive samples to correct the bias in the contrastive loss.
PUCL can be applied to general contrastive learning problems and outperforms state-of-the-art methods on various image and graph classification tasks.
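On one reading of the summary above, the correction resembles positive-unlabeled learning applied to the InfoNCE negative term: sampled "negatives" are treated as unlabeled, and a positive-class prior is used to subtract the expected false-negative mass. A hedged sketch under that reading follows; the prior `pi` and all names are assumptions, not PUCL's exact estimator.

```python
# Sketch of a PU-style correction to the InfoNCE negative term (assumed reading).
import torch
import torch.nn.functional as F

def pu_corrected_infonce(z_anchor, z_pos, z_unlabeled, t=0.5, pi=0.1):
    """Treat sampled negatives as unlabeled; subtract the expected false-negative mass."""
    pos = torch.exp(z_anchor @ z_pos / t)                  # known positive pair
    unl = torch.exp(z_unlabeled @ z_anchor / t)            # unlabeled (mostly negative) pairs
    n = unl.numel()
    # True-negative contribution = unlabeled mass minus prior-weighted positive mass.
    neg_term = torch.clamp((unl.sum() - pi * n * pos) / (1.0 - pi), min=1e-8)
    return -torch.log(pos / (pos + neg_term))

z_a = F.normalize(torch.randn(64), dim=0)
z_p = F.normalize(torch.randn(64), dim=0)
z_u = F.normalize(torch.randn(128, 64), dim=1)
print(pu_corrected_infonce(z_a, z_p, z_u))
```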
arXiv Detail & Related papers (2024-01-13T11:18:18Z)
- Rethinking Samples Selection for Contrastive Learning: Mining of Potential Samples [5.586563813796839]
Contrastive learning predicts whether two images belong to the same category by training a model to make their feature representations as close together or as far apart as possible.
We take both positive and negative samples into account, and mine potential samples from both aspects.
Our method achieves 88.57%, 61.10%, and 36.69% top-1 accuracy on CIFAR10, CIFAR100, and TinyImagenet, respectively.
arXiv Detail & Related papers (2023-11-01T08:08:06Z)
- Generating Counterfactual Hard Negative Samples for Graph Contrastive Learning [22.200011046576716]
Graph contrastive learning is a powerful tool for unsupervised graph representation learning.
Recent works usually sample negative samples from the same training batch as the positive samples, or from an external irrelevant graph.
We propose a novel method that utilizes a counterfactual mechanism to generate artificial hard negative samples for contrastive learning.
arXiv Detail & Related papers (2022-07-01T02:19:59Z)
- Hard Negative Sampling Strategies for Contrastive Representation Learning [4.1531215150301035]
UnReMix is a hard negative sampling strategy that takes into account anchor similarity, model uncertainty and representativeness.
Experimental results on several benchmarks show that UnReMix improves negative sample selection, and subsequently downstream performance when compared to state-of-the-art contrastive learning methods.
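The summary lists three factors (anchor similarity, model uncertainty, representativeness) but not how UnReMix defines or combines them, so the scoring below is only an illustrative sketch of turning such factors into negative-sampling weights; every term and name is an assumption.

```python
# Illustrative scoring of candidate negatives by three factors; the exact
# UnReMix definitions are not reproduced here.
import torch
import torch.nn.functional as F

def score_negatives(anchor, negatives, loss_per_negative):
    """Combine anchor similarity, an uncertainty proxy, and representativeness."""
    sim = negatives @ anchor                               # anchor similarity (harder = higher)
    uncertainty = loss_per_negative                        # e.g. per-sample loss as a proxy
    centroid = F.normalize(negatives.mean(dim=0), dim=0)
    representativeness = negatives @ centroid              # closeness to the batch centroid
    return F.softmax(sim + uncertainty + representativeness, dim=0)

anchor = F.normalize(torch.randn(64), dim=0)
negs = F.normalize(torch.randn(32, 64), dim=1)
weights = score_negatives(anchor, negs, torch.rand(32))
idx = torch.multinomial(weights, num_samples=8, replacement=False)  # pick 8 hard negatives
```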
arXiv Detail & Related papers (2022-06-02T17:55:15Z)
- Incremental False Negative Detection for Contrastive Learning [95.68120675114878]
We introduce a novel incremental false negative detection method for self-supervised contrastive learning.
We discuss two strategies to explicitly remove the detected false negatives during contrastive learning.
Our proposed method outperforms other self-supervised contrastive learning frameworks on multiple benchmarks with limited compute.
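The summary does not spell out the two removal strategies, so the sketch below shows one generic way to handle suspected false negatives: flag in-batch negatives whose similarity to the anchor exceeds a threshold that is tightened incrementally over training, and mask them out of the InfoNCE denominator. The threshold schedule and names are assumptions.

```python
# Generic false-negative masking in InfoNCE; the threshold schedule is an assumption.
import torch
import torch.nn.functional as F

def infonce_with_fn_mask(anchor, positive, negatives, step, t=0.5,
                         start_thresh=0.95, end_thresh=0.7, total_steps=10000):
    """Mask out negatives that look too similar to the anchor (suspected false negatives)."""
    # Incrementally lower the threshold so more candidates are flagged as training progresses.
    frac = min(step / total_steps, 1.0)
    thresh = start_thresh + frac * (end_thresh - start_thresh)
    sim = negatives @ anchor
    keep = sim < thresh                                    # True for negatives we trust
    pos = torch.exp(anchor @ positive / t)
    neg = torch.exp(sim[keep] / t).sum()
    return -torch.log(pos / (pos + neg))

a = F.normalize(torch.randn(64), dim=0)
p = F.normalize(torch.randn(64), dim=0)
n = F.normalize(torch.randn(128, 64), dim=1)
print(infonce_with_fn_mask(a, p, n, step=500))
```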
arXiv Detail & Related papers (2021-06-07T15:29:14Z)
- Rethinking InfoNCE: How Many Negative Samples Do You Need? [54.146208195806636]
We study how many negative samples are optimal for InfoNCE in different scenarios via a semi-quantitative theoretical framework.
We estimate the optimal negative sampling ratio using the $K$ value that maximizes the training effectiveness function.
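For reference, the InfoNCE objective whose negative-sample count is being analyzed can be written for an anchor representation $z_i$, its positive $z_j$, $K$ negatives $z_k$, and temperature $\tau$ as:

$$\mathcal{L}_{\mathrm{InfoNCE}} = -\log \frac{\exp\left(\mathrm{sim}(z_i, z_j)/\tau\right)}{\exp\left(\mathrm{sim}(z_i, z_j)/\tau\right) + \sum_{k=1}^{K} \exp\left(\mathrm{sim}(z_i, z_k)/\tau\right)}$$

The $K$ referred to above is the number of terms in this sum; the paper chooses it via a training effectiveness function whose exact form is not given in the summary.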
arXiv Detail & Related papers (2021-05-27T08:38:29Z)
- Contrastive Attraction and Contrastive Repulsion for Representation Learning [131.72147978462348]
Contrastive learning (CL) methods learn data representations in a self-supervised manner, where the encoder contrasts each positive sample against multiple negative samples.
Recent CL methods have achieved promising results when pretrained on large-scale datasets, such as ImageNet.
We propose a doubly CL strategy that separately compares positive and negative samples within their own groups, and then proceeds with a contrast between positive and negative groups.
arXiv Detail & Related papers (2021-05-08T17:25:08Z)
- Doubly Contrastive Deep Clustering [135.7001508427597]
We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, the two contrastive losses successfully constrain the clustering results of mini-batch samples at both the sample and class levels.
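The two views can be illustrated on a batch of softmax cluster assignments: the sample view contrasts rows (each sample's class distribution across the two augmentations), and the class view contrasts columns (each class's distribution over the batch). A minimal sketch follows, using a generic InfoNCE helper; normalization and other details follow common contrastive-clustering practice rather than DCDC's exact implementation.

```python
# Sketch of sample-view and class-view contrastive losses over cluster assignments.
import torch
import torch.nn.functional as F

def info_nce(a, b, t=0.5):
    """Contrast matching rows of a and b against all other rows of b."""
    a, b = F.normalize(a, dim=1), F.normalize(b, dim=1)
    logits = a @ b.T / t                       # (n, n) similarity matrix
    labels = torch.arange(a.size(0))           # row i matches row i
    return F.cross_entropy(logits, labels)

def doubly_contrastive_loss(p1, p2, t=0.5):
    """p1, p2: (batch, classes) softmax assignments of a batch and its augmentation."""
    sample_view = info_nce(p1, p2, t)          # rows: per-sample class distributions
    class_view = info_nce(p1.T, p2.T, t)       # columns: per-class sample distributions
    return sample_view + class_view

logits1, logits2 = torch.randn(32, 10), torch.randn(32, 10)
p1, p2 = logits1.softmax(dim=1), logits2.softmax(dim=1)
print(doubly_contrastive_loss(p1, p2))
```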
arXiv Detail & Related papers (2021-03-09T15:15:32Z)
- Contrastive Learning with Hard Negative Samples [80.12117639845678]
We develop a new family of unsupervised sampling methods for selecting hard negative samples.
A limiting case of this sampling results in a representation that tightly clusters each class, and pushes different classes as far apart as possible.
The proposed method improves downstream performance across multiple modalities, requires only a few additional lines of code to implement, and introduces no computational overhead.
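Since the summary notes the method needs only a few extra lines over a standard contrastive loss, the core idea can be sketched as importance-weighting the in-batch negatives so that harder ones (those more similar to the anchor) count more in the denominator. The concentration parameter `beta` and the weighting form are assumptions based on this description, not a verbatim reproduction of the paper's sampler.

```python
# Sketch: reweight in-batch negatives so harder ones dominate the InfoNCE denominator.
import torch
import torch.nn.functional as F

def hard_negative_infonce(anchor, positive, negatives, t=0.5, beta=1.0):
    """Importance-weight negatives by exp(beta * similarity) before summing."""
    pos = torch.exp(anchor @ positive / t)
    sim = negatives @ anchor
    weights = torch.exp(beta * sim)
    weights = weights / weights.mean()                     # keep the overall scale comparable
    neg = (weights * torch.exp(sim / t)).sum()
    return -torch.log(pos / (pos + neg))

a = F.normalize(torch.randn(64), dim=0)
p = F.normalize(torch.randn(64), dim=0)
n = F.normalize(torch.randn(128, 64), dim=1)
print(hard_negative_infonce(a, p, n))
```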
arXiv Detail & Related papers (2020-10-09T14:18:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.