Contrastive Learning with Negative Sampling Correction
- URL: http://arxiv.org/abs/2401.08690v1
- Date: Sat, 13 Jan 2024 11:18:18 GMT
- Title: Contrastive Learning with Negative Sampling Correction
- Authors: Lu Wang, Chao Du, Pu Zhao, Chuan Luo, Zhangchi Zhu, Bo Qiao, Wei
Zhang, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, Qi Zhang
- Abstract summary: We propose a novel contrastive learning method named Positive-Unlabeled Contrastive Learning (PUCL).
PUCL treats the generated negative samples as unlabeled samples and uses information from positive samples to correct the bias in the contrastive loss.
PUCL can be applied to general contrastive learning problems and outperforms state-of-the-art methods on various image and graph classification tasks.
- Score: 52.990001829393506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As one of the most effective self-supervised representation learning methods,
contrastive learning (CL) relies on multiple negative pairs to contrast against
each positive pair. In the standard practice of contrastive learning, data
augmentation methods are utilized to generate both positive and negative pairs.
While existing works have focused on improving positive sampling, the negative
sampling process is often overlooked. In fact, the generated negative samples
are often polluted by positive samples, which leads to a biased loss and
degraded performance. To correct this negative sampling bias, we propose
a novel contrastive learning method named Positive-Unlabeled Contrastive
Learning (PUCL). PUCL treats the generated negative samples as unlabeled
samples and uses information from positive samples to correct bias in
contrastive loss. We prove that the corrected loss used in PUCL only incurs a
negligible bias compared to the unbiased contrastive loss. PUCL can be applied
to general contrastive learning problems and outperforms state-of-the-art
methods on various image and graph classification tasks. The code of PUCL is in
the supplementary file.
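As a rough illustration of the correction idea (not the exact PUCL objective, whose estimator and bias bound are given in the paper), the sketch below treats the sampled negatives as unlabeled and subtracts the expected positive contamination, governed by an assumed class prior `tau_plus`, from the negative term of an InfoNCE-style loss:

```python
import math
import torch

def pu_corrected_info_nce(anchor, positive, negatives, tau_plus=0.1, t=0.5):
    """Sketch of a positive-unlabeled (debiased) InfoNCE loss.

    anchor, positive: (B, D) L2-normalized embeddings of two views
    negatives:        (B, N, D) L2-normalized embeddings sampled as "negatives"
    tau_plus:         assumed prior probability that an unlabeled sample is positive
    t:                temperature
    """
    pos = torch.exp((anchor * positive).sum(dim=-1) / t)                         # (B,)
    neg = torch.exp(torch.bmm(negatives, anchor.unsqueeze(-1)).squeeze(-1) / t)  # (B, N)
    n = negatives.shape[1]

    # Treat the N sampled negatives as unlabeled: remove the expected positive
    # contribution, rescale, and clamp to the theoretical minimum of the term.
    ng = (neg.sum(dim=-1) - n * tau_plus * pos) / (1.0 - tau_plus)
    ng = torch.clamp(ng, min=n * math.exp(-1.0 / t))

    return -torch.log(pos / (pos + ng)).mean()
```

Here `tau_plus`, the temperature `t`, and the clamp are illustrative choices borrowed from the general debiased-contrastive formulation rather than the paper's own estimator.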
Related papers
- Your Negative May not Be True Negative: Boosting Image-Text Matching
with False Negative Elimination [62.18768931714238]
We propose a novel False Negative Elimination (FNE) strategy to select negatives via sampling.
The results demonstrate the superiority of our proposed false negative elimination strategy.
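A hedged sketch of the general idea, with hypothetical names and thresholds: discard the candidates most similar to the anchor as likely false negatives, then sample informative negatives from the rest. The actual FNE sampling distribution is defined in that paper.

```python
import torch

def sample_negatives(anchor, candidates, k=16, drop_top=0.01, temperature=0.1):
    """Hypothetical false-negative-aware negative sampling (illustration only).

    anchor:     (D,) L2-normalized embedding
    candidates: (M, D) L2-normalized candidate negatives
    drop_top:   fraction of the most similar candidates to discard, on the
                assumption that they are likely false negatives
    """
    sim = candidates @ anchor                                  # (M,) cosine similarities
    m = candidates.shape[0]
    keep = sim.argsort(descending=True)[int(drop_top * m):]    # drop likely false negatives
    # Among the remaining candidates, prefer harder (more similar) ones.
    probs = torch.softmax(sim[keep] / temperature, dim=0)
    idx = keep[torch.multinomial(probs, num_samples=k, replacement=False)]
    return idx
```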
arXiv Detail & Related papers (2023-08-08T16:31:43Z)
- Robust Positive-Unlabeled Learning via Noise Negative Sample Self-correction [48.929877651182885]
Learning from positive and unlabeled data is known as positive-unlabeled (PU) learning in the literature.
We propose a new robust PU learning method with a training strategy motivated by the nature of human learning.
arXiv Detail & Related papers (2023-08-01T04:34:52Z)
- Clustering-Aware Negative Sampling for Unsupervised Sentence Representation [24.15096466098421]
ClusterNS is a novel method that incorporates cluster information into contrastive learning for unsupervised sentence representation learning.
We apply a modified K-means clustering algorithm to supply hard negatives and recognize in-batch false negatives during training.
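A minimal sketch of how such clustering information could be used, assuming plain K-means on the batch embeddings (the paper uses a modified variant): samples sharing the anchor's cluster are flagged as in-batch false negatives, and the nearest other centroid serves as an extra hard negative. All names here are illustrative.

```python
import torch
import torch.nn.functional as F

def cluster_aware_masks(embeddings, num_clusters=32, iters=10):
    """Illustrative K-means step for negative selection (assumes batch >= num_clusters).

    embeddings: (B, D) L2-normalized batch embeddings
    Returns a (B, B) mask of likely in-batch false negatives (same cluster as the
    anchor) and one hard negative per sample (nearest centroid of another cluster).
    """
    emb = embeddings.detach()                                    # cluster without gradients
    centroids = emb[torch.randperm(emb.shape[0])[:num_clusters]]
    for _ in range(iters):
        assign = (emb @ centroids.T).argmax(dim=1)               # nearest-centroid assignment
        for c in range(num_clusters):
            members = emb[assign == c]
            if len(members) > 0:
                centroids[c] = F.normalize(members.mean(0), dim=0)
    sim = emb @ centroids.T                                      # (B, K)
    assign = sim.argmax(dim=1)
    false_neg_mask = assign.unsqueeze(0) == assign.unsqueeze(1)  # same cluster => likely false negative
    sim.scatter_(1, assign.unsqueeze(1), float("-inf"))          # exclude each sample's own cluster
    hard_negs = centroids[sim.argmax(dim=1)]                     # nearest other centroid as hard negative
    return false_neg_mask, hard_negs
```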
arXiv Detail & Related papers (2023-05-17T02:06:47Z)
- Synthetic Hard Negative Samples for Contrastive Learning [8.776888865665024]
This paper proposes a novel feature-level method, namely sampling synthetic hard negative samples for contrastive learning (SSCL).
We generate more and harder negative samples by mixing existing negatives, and then sample them by controlling the contrast between the anchor and the other negative samples.
Our proposed method improves the classification performance on different image datasets and can be readily integrated into existing methods.
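A short sketch of the mixing idea, assuming synthetic negatives are formed as convex combinations of the hardest existing negatives in feature space; the exact sampling and contrast-control scheme is the paper's own.

```python
import torch
import torch.nn.functional as F

def synthesize_hard_negatives(anchor, negatives, num_synth=8, top_k=16):
    """Illustrative feature-level mixing of hard negatives (assumes N >= top_k).

    anchor:    (D,) L2-normalized embedding
    negatives: (N, D) L2-normalized negative embeddings
    Returns (num_synth, D) synthetic negatives built by mixing random pairs
    of the top_k negatives closest to the anchor (the "hardest" ones).
    """
    sim = negatives @ anchor
    hard = negatives[sim.topk(top_k).indices]          # (top_k, D) hardest negatives
    i = torch.randint(0, top_k, (num_synth,))
    j = torch.randint(0, top_k, (num_synth,))
    alpha = torch.rand(num_synth, 1)
    mixed = alpha * hard[i] + (1 - alpha) * hard[j]    # convex combinations
    return F.normalize(mixed, dim=1)                   # project back onto the unit sphere
```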
arXiv Detail & Related papers (2023-04-06T09:54:35Z)
- Contrastive Attraction and Contrastive Repulsion for Representation Learning [131.72147978462348]
Contrastive learning (CL) methods learn data representations in a self-supervised manner, where the encoder contrasts each positive sample against multiple negative samples.
Recent CL methods have achieved promising results when pretrained on large-scale datasets, such as ImageNet.
We propose a doubly CL strategy that separately compares positive and negative samples within their own groups, and then proceeds with a contrast between positive and negative groups.
arXiv Detail & Related papers (2021-05-08T17:25:08Z)
- Doubly Contrastive Deep Clustering [135.7001508427597]
We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, the two contrastive losses constrain the clustering results of mini-batch samples at both the sample and class levels.
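A compact sketch of the two views, assuming each branch outputs a (batch x classes) probability matrix: the sample view contrasts matching rows of the two augmented batches, and the class view contrasts matching columns. Shapes and names are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def doubly_contrastive_loss(p1, p2, t=0.5):
    """Illustrative sample-view + class-view contrastive loss.

    p1, p2: (B, K) class-probability matrices for a batch and its augmentation.
    Row i of p1 and row i of p2 form a positive pair (sample view);
    column k of p1 and column k of p2 form a positive pair (class view).
    """
    def info_nce(a, b):
        a = F.normalize(a, dim=1)
        b = F.normalize(b, dim=1)
        logits = a @ b.T / t                                   # pairwise similarities
        labels = torch.arange(a.shape[0], device=a.device)     # matching index is the positive
        return F.cross_entropy(logits, labels)

    sample_view = info_nce(p1, p2)      # contrast rows (per-sample class distributions)
    class_view = info_nce(p1.T, p2.T)   # contrast columns (per-class sample distributions)
    return sample_view + class_view
```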
arXiv Detail & Related papers (2021-03-09T15:15:32Z)
- Contrastive Learning with Hard Negative Samples [80.12117639845678]
We develop a new family of unsupervised sampling methods for selecting hard negative samples.
A limiting case of this sampling results in a representation that tightly clusters each class, and pushes different classes as far apart as possible.
The proposed method improves downstream performance across multiple modalities, requires only a few additional lines of code to implement, and introduces no computational overhead.
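The reweighting can indeed be expressed in a few lines; the sketch below assumes negatives are importance-weighted by exp(beta * similarity), so a larger `beta` emphasizes harder negatives. The published method additionally combines this with a debiasing correction described in that paper.

```python
import torch

def hard_negative_weighted_term(anchor, negatives, beta=1.0, t=0.5):
    """Illustrative importance-weighted negative term for an InfoNCE-style loss.

    anchor:    (B, D) L2-normalized embeddings
    negatives: (B, N, D) L2-normalized embeddings
    Larger beta up-weights harder negatives (those most similar to the anchor).
    """
    sim = torch.bmm(negatives, anchor.unsqueeze(-1)).squeeze(-1)   # (B, N) similarities
    exp_sim = torch.exp(sim / t)
    weights = torch.exp(beta * sim)
    weights = weights / weights.mean(dim=1, keepdim=True)          # self-normalize so E[w] = 1
    return (weights * exp_sim).sum(dim=1)                          # reweighted negative sum per anchor
```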
arXiv Detail & Related papers (2020-10-09T14:18:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.