AdCo: Adversarial Contrast for Efficient Learning of Unsupervised
Representations from Self-Trained Negative Adversaries
- URL: http://arxiv.org/abs/2011.08435v5
- Date: Fri, 5 Mar 2021 07:01:19 GMT
- Title: AdCo: Adversarial Contrast for Efficient Learning of Unsupervised
Representations from Self-Trained Negative Adversaries
- Authors: Qianjiang Hu, Xiao Wang, Wei Hu, Guo-Jun Qi
- Abstract summary: We propose an Adversarial Contrastive (AdCo) model that trains representations against self-learned negative adversaries that are hard to discriminate against positive queries.
Experimental results demonstrate that the proposed Adversarial Contrastive (AdCo) model achieves superior performance.
- Score: 55.059844800514774
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive learning relies on constructing a collection of negative examples
that are sufficiently hard to discriminate against positive queries when their
representations are self-trained. Existing contrastive learning methods either
maintain a queue of negative samples over minibatches while only a small
portion of them are updated in an iteration, or only use the other examples
from the current minibatch as negatives. The former cannot closely track the
change of the learned representation over iterations, since the queue is never
updated as a whole, while the latter discards the useful information from past
minibatches. Alternatively, we propose to directly learn a set of negative adversaries
playing against the self-trained representation. Two players, the
representation network and negative adversaries, are alternately updated to
obtain the most challenging negative examples against which the representation
of positive queries will be trained to discriminate. We further show that the
negative adversaries are updated towards a weighted combination of positive
queries by maximizing the adversarial contrastive loss, thereby allowing them
to closely track the change of representations over time. Experimental results
demonstrate that the proposed Adversarial Contrastive (AdCo) model not only
achieves superior performance (a top-1 accuracy of 73.2% over 200 epochs and
75.7% over 800 epochs with linear evaluation on ImageNet), but can also be
pre-trained more efficiently with fewer epochs.
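To make the two-player update concrete, below is a minimal PyTorch-style sketch, not the authors' released code: the encoder is updated by gradient descent on an InfoNCE-style contrastive loss, while a bank of learnable negatives is updated by gradient ascent on the same loss, which (as noted above) pushes each negative toward a weighted combination of the positive queries. The function name adco_step, the stop-gradient key branch, the temperature, and the learning rates are illustrative assumptions; the released implementation may differ (e.g., by using a momentum key encoder).

```python
import torch
import torch.nn.functional as F

def adco_step(encoder, neg_bank, x_q, x_k, opt_net, opt_neg, tau=0.12):
    """One two-player update: the encoder minimizes the contrastive loss,
    while the learnable negative adversaries maximize the same loss."""
    q = F.normalize(encoder(x_q), dim=1)            # queries, (B, D)
    with torch.no_grad():
        k = F.normalize(encoder(x_k), dim=1)        # keys, no gradient
    n = F.normalize(neg_bank, dim=1)                # K negative adversaries

    l_pos = (q * k).sum(dim=1, keepdim=True)        # (B, 1) positive logits
    l_neg = q @ n.t()                               # (B, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    loss = F.cross_entropy(logits, labels)          # InfoNCE-style loss

    opt_net.zero_grad()
    opt_neg.zero_grad()
    loss.backward()
    # Gradient ascent for the adversaries: flip the sign of their gradient.
    # The ascent direction moves each negative toward a softmax-weighted
    # combination of the queries that currently assign it high probability.
    neg_bank.grad.neg_()
    opt_net.step()
    opt_neg.step()
    return loss.item()

# Illustrative setup (sizes and learning rates are assumptions):
# neg_bank = torch.nn.Parameter(torch.randn(65536, 128))
# opt_neg = torch.optim.SGD([neg_bank], lr=3.0)
# opt_net = torch.optim.SGD(encoder.parameters(), lr=0.03, momentum=0.9)
```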
Related papers
- The Bad Batches: Enhancing Self-Supervised Learning in Image Classification Through Representative Batch Curation [1.519321208145928]
The pursuit of learning robust representations without human supervision is a longstanding challenge.
This paper attempts to alleviate the influence of false positive and false negative pairs by employing pairwise similarity calculations through the Fréchet ResNet Distance (FRD).
The effectiveness of the proposed method is substantiated by empirical results, where a linear classifier trained on self-supervised contrastive representations achieved an impressive 87.74% top-1 accuracy on STL10 and 99.31% on the Flower102 dataset.
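For readers unfamiliar with Fréchet-style feature distances, here is a rough, hedged sketch of the classical Fréchet distance between Gaussian statistics fitted to two sets of ResNet features (the same closed form used by FID). Whether the paper's FRD matches this exact formulation is an assumption, and the function name frechet_distance is illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_a, feats_b):
    """Classical Frechet distance between Gaussians fitted to two feature
    sets of shape (N, D); smaller values mean more similar feature sets."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    cov_sqrt = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(cov_sqrt):     # drop tiny imaginary parts left by
        cov_sqrt = cov_sqrt.real      # numerical error in sqrtm
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * cov_sqrt))
```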
arXiv Detail & Related papers (2024-03-28T17:04:07Z)
- Better Sampling of Negatives for Distantly Supervised Named Entity Recognition [39.264878763160766]
We propose a simple and straightforward approach for selecting the top negative samples that have high similarities with all the positive samples for training.
Our method achieves consistent performance improvements on four distantly supervised NER datasets.
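A minimal sketch of the selection heuristic described above, assuming cosine similarity between embedding matrices; the function name select_top_negatives and the use of the mean similarity across positives are illustrative choices rather than details from the paper.

```python
import torch
import torch.nn.functional as F

def select_top_negatives(candidates, positives, k):
    """Return the k candidate negatives whose mean cosine similarity to all
    positive samples is highest (i.e., the hardest negatives for training)."""
    c = F.normalize(candidates, dim=1)    # (N, D) candidate embeddings
    p = F.normalize(positives, dim=1)     # (M, D) positive embeddings
    mean_sim = (c @ p.t()).mean(dim=1)    # (N,) avg similarity to positives
    top_idx = mean_sim.topk(k).indices
    return candidates[top_idx], top_idx
```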
arXiv Detail & Related papers (2023-05-22T15:35:39Z)
- Debiased Contrastive Learning of Unsupervised Sentence Representations [88.58117410398759]
Contrastive learning is effective in improving pre-trained language models (PLMs) to derive high-quality sentence representations.
Previous works mostly adopt in-batch negatives or sample from training data at random.
We present a new framework, DCLR, to alleviate the influence of these improper negatives.
arXiv Detail & Related papers (2022-05-02T05:07:43Z)
- CaCo: Both Positive and Negative Samples are Directly Learnable via Cooperative-adversarial Contrastive Learning [45.68097757313092]
We train an encoder by distinguishing positive samples from negative ones given query anchors.
We show that the positive and negative samples can be cooperatively and adversarially learned by minimizing and maximizing the contrastive loss.
The proposed method achieves 71.3% and 75.3% top-1 accuracy over 200 and 800 epochs, respectively, of pre-training a ResNet-50 backbone on ImageNet1K.
arXiv Detail & Related papers (2022-03-27T18:50:39Z)
- Investigating the Role of Negatives in Contrastive Representation Learning [59.30700308648194]
Noise contrastive learning is a popular technique for unsupervised representation learning.
We focus on disambiguating the role of one key parameter: the number of negative examples.
We find that the results broadly agree with our theory, while our vision experiments are murkier, with performance sometimes even being insensitive to the number of negatives.
arXiv Detail & Related papers (2021-06-18T06:44:16Z)
- Incremental False Negative Detection for Contrastive Learning [95.68120675114878]
We introduce a novel incremental false negative detection method for self-supervised contrastive learning.
We discuss two strategies to explicitly remove the detected false negatives during contrastive learning.
Our proposed method outperforms other self-supervised contrastive learning frameworks on multiple benchmarks within a limited compute budget.
arXiv Detail & Related papers (2021-06-07T15:29:14Z)
- Contrastive Learning with Hard Negative Samples [80.12117639845678]
We develop a new family of unsupervised sampling methods for selecting hard negative samples.
A limiting case of this sampling results in a representation that tightly clusters each class, and pushes different classes as far apart as possible.
The proposed method improves downstream performance across multiple modalities, requires only a few additional lines of code to implement, and introduces no computational overhead (a simplified sketch follows this entry).
arXiv Detail & Related papers (2020-10-09T14:18:53Z)
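As referenced in the entry above, here is a hedged, simplified sketch of one way to emphasize hard negatives inside an InfoNCE-style loss by importance-weighting each anchor's negatives in proportion to exp(beta * similarity); the debiasing correction from the paper is omitted, and beta, tau, and the function name are illustrative.

```python
import torch
import torch.nn.functional as F

def hard_negative_infonce(q, k, negs, tau=0.5, beta=1.0):
    """InfoNCE-style loss in which each anchor's negatives are reweighted by
    importance weights proportional to exp(beta * similarity), so harder
    negatives contribute more. Simplified: no debiasing correction."""
    q = F.normalize(q, dim=1)        # (B, D) anchor embeddings
    k = F.normalize(k, dim=1)        # (B, D) positive embeddings
    n = F.normalize(negs, dim=2)     # (B, N, D) negatives per anchor

    pos = torch.exp((q * k).sum(dim=1) / tau)              # (B,)
    sim = torch.bmm(n, q.unsqueeze(2)).squeeze(2) / tau    # (B, N)
    w = torch.softmax(beta * sim, dim=1)                   # hardness weights
    neg = sim.size(1) * (w * torch.exp(sim)).sum(dim=1)    # reweighted sum
    return -torch.log(pos / (pos + neg)).mean()
```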
This list is automatically generated from the titles and abstracts of the papers on this site.