CaCo: Both Positive and Negative Samples are Directly Learnable via
Cooperative-adversarial Contrastive Learning
- URL: http://arxiv.org/abs/2203.14370v1
- Date: Sun, 27 Mar 2022 18:50:39 GMT
- Title: CaCo: Both Positive and Negative Samples are Directly Learnable via
Cooperative-adversarial Contrastive Learning
- Authors: Xiao Wang, Yuhang Huang, Dan Zeng, Guo-Jun Qi
- Abstract summary: We train an encoder by distinguishing positive samples from negative ones given query anchors.
We show that the positive and negative samples can be cooperatively and adversarially learned by minimizing and maximizing the contrastive loss, respectively.
The proposed method achieves 71.3% and 75.3% top-1 accuracy after 200 and 800 epochs, respectively, of pre-training a ResNet-50 backbone on ImageNet1K.
- Score: 45.68097757313092
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As a representative self-supervised method, contrastive learning has achieved
great success in unsupervised training of representations. It trains an
encoder by distinguishing positive samples from negative ones given query
anchors. These positive and negative samples play critical roles in defining
the objective to learn a discriminative encoder, preventing it from learning
trivial features. While existing methods heuristically choose these samples, we
present a principled method where both positive and negative samples are
directly learnable end-to-end with the encoder. We show that the positive and
negative samples can be cooperatively and adversarially learned by minimizing
and maximizing the contrastive loss, respectively. This yields cooperative
positives and adversarial negatives with respect to the encoder, which are
updated to continuously track the learned representation of the query anchors
over mini-batches. The proposed method achieves 71.3% and 75.3% top-1
accuracy after 200 and 800 epochs, respectively, of pre-training a ResNet-50
backbone on ImageNet1K without tricks such as multi-crop or stronger
augmentations. With multi-crop, it can be further boosted to 75.7%. The
source code and pre-trained model are released at
https://github.com/maple-research-lab/caco.
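To make the cooperative-adversarial update concrete, the following PyTorch sketch illustrates one plausible reading of the abstract: a pool of positive samples is updated by gradient descent on the contrastive loss (cooperating with the encoder), while a pool of negative samples is updated by gradient ascent on the same loss (opposing it). The pool sizes, temperature, learning rates, and the use of separate positive/negative pools are illustrative assumptions, not the released implementation; see the repository above for the official code.

```python
# Hedged sketch: cooperative positives minimize, adversarial negatives maximize
# the same InfoNCE-style contrastive loss. All hyperparameters are assumptions.
import torch
import torch.nn.functional as F

K, dim, tau = 4096, 128, 0.1
positives = torch.randn(K, dim, requires_grad=True)   # learnable positive pool
negatives = torch.randn(K, dim, requires_grad=True)   # learnable negative pool

def contrastive_loss(query, pos, negs):
    """InfoNCE loss for a batch of query anchors against learnable samples."""
    q, p, n = (F.normalize(x, dim=1) for x in (query, pos, negs))
    l_pos = (q * p).sum(dim=1, keepdim=True) / tau     # (B, 1) anchor-positive logits
    l_neg = q @ n.t() / tau                            # (B, K) anchor-negative logits
    logits = torch.cat([l_pos, l_neg], dim=1)
    labels = torch.zeros(q.size(0), dtype=torch.long)  # the positive sits at index 0
    return F.cross_entropy(logits, labels)

def update_samples(query, pos_idx, lr_pos=3.0, lr_neg=3.0):
    """One cooperative-adversarial step on the sample pools (encoder step not shown)."""
    loss = contrastive_loss(query, positives[pos_idx], negatives)
    g_pos, g_neg = torch.autograd.grad(loss, [positives, negatives])
    with torch.no_grad():
        positives -= lr_pos * g_pos   # cooperative: descend, tracking the anchors
        negatives += lr_neg * g_neg   # adversarial: ascend, staying hard to separate
```

In the paper, the encoder is trained jointly by minimizing the same loss with respect to its parameters, so positives, negatives, and encoder are all updated end-to-end over mini-batches.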
Related papers
- Contrastive Learning with Negative Sampling Correction [52.990001829393506]
We propose a novel contrastive learning method named Positive-Unlabeled Contrastive Learning (PUCL).
PUCL treats the generated negative samples as unlabeled samples and uses information from positive samples to correct bias in contrastive loss.
PUCL can be applied to general contrastive learning problems and outperforms state-of-the-art methods on various image and graph classification tasks.
arXiv Detail & Related papers (2024-01-13T11:18:18Z)
- Your Negative May not Be True Negative: Boosting Image-Text Matching with False Negative Elimination [62.18768931714238]
We propose a novel False Negative Elimination (FNE) strategy to select negatives via sampling.
The results demonstrate the superiority of our proposed false negative elimination strategy.
arXiv Detail & Related papers (2023-08-08T16:31:43Z)
- Better Sampling of Negatives for Distantly Supervised Named Entity Recognition [39.264878763160766]
We propose a simple and straightforward approach that selects for training the top negative samples with high similarity to all the positive samples.
Our method achieves consistent performance improvements on four distantly supervised NER datasets.
arXiv Detail & Related papers (2023-05-22T15:35:39Z)
- Doubly Contrastive Deep Clustering [135.7001508427597]
We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, the two contrastive losses successfully constrain the clustering results of mini-batch samples at both the sample and class levels.
arXiv Detail & Related papers (2021-03-09T15:15:32Z)
- AdCo: Adversarial Contrast for Efficient Learning of Unsupervised Representations from Self-Trained Negative Adversaries [55.059844800514774]
We propose an Adversarial Contrastive (AdCo) model to train representations that are hard to discriminate against positive queries.
Experimental results demonstrate that the proposed Adversarial Contrastive (AdCo) model achieves superior performance.
arXiv Detail & Related papers (2020-11-17T05:45:46Z)
- Contrastive Learning with Hard Negative Samples [80.12117639845678]
We develop a new family of unsupervised sampling methods for selecting hard negative samples.
A limiting case of this sampling results in a representation that tightly clusters each class, and pushes different classes as far apart as possible.
The proposed method improves downstream performance across multiple modalities, requires only a few additional lines of code to implement, and introduces no computational overhead (a simplified sketch of the hardness reweighting appears after this list).
arXiv Detail & Related papers (2020-10-09T14:18:53Z)
- SCE: Scalable Network Embedding from Sparsest Cut [20.08464038805681]
Large-scale network embedding learns a latent representation for each node in an unsupervised manner.
A key to the success of such contrastive learning methods is how positive and negative samples are drawn.
In this paper, we propose SCE for unsupervised network embedding using only negative samples for training.
arXiv Detail & Related papers (2020-06-30T03:18:15Z)
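For the "Contrastive Learning with Hard Negative Samples" entry above, the sketch below shows one simplified way to reweight negatives toward the hardest (most query-similar) ones inside an InfoNCE loss. The concentration parameter beta and the omission of the paper's debiasing correction are assumptions made for brevity, not the authors' exact estimator.

```python
# Hedged sketch of hardness-weighted negatives in an InfoNCE-style loss.
import torch
import torch.nn.functional as F

def hard_negative_infonce(query, positive, negatives, tau=0.5, beta=1.0):
    """InfoNCE with negatives reweighted toward the hardest (most similar) ones.

    Simplified: hardness weights follow exp(beta * similarity); the debiasing
    correction described in the paper is omitted here.
    """
    q = F.normalize(query, dim=1)
    p = F.normalize(positive, dim=1)
    n = F.normalize(negatives, dim=1)
    pos_sim = (q * p).sum(dim=1) / tau               # (B,)  anchor-positive similarity
    neg_sim = q @ n.t() / tau                        # (B, K) anchor-negative similarities
    weights = F.softmax(beta * neg_sim, dim=1)       # hardness weights, rows sum to 1
    # importance-weighted estimate of the negative term in the denominator
    neg_term = (weights * neg_sim.exp()).sum(dim=1) * n.size(0)
    return (-pos_sim + torch.log(pos_sim.exp() + neg_term)).mean()
```

With beta = 0 the weights become uniform and the loss reduces to standard InfoNCE over the same negatives; larger beta concentrates the penalty on negatives closest to the query.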
This list is automatically generated from the titles and abstracts of the papers on this site.