Adversarial Contrastive Self-Supervised Learning
- URL: http://arxiv.org/abs/2202.13072v1
- Date: Sat, 26 Feb 2022 05:57:45 GMT
- Title: Adversarial Contrastive Self-Supervised Learning
- Authors: Wentao Zhu, Hang Shang, Tingxun Lv, Chao Liao, Sen Yang, Ji Liu
- Abstract summary: We present a novel self-supervised deep learning paradigm based on online hard negative pair mining.
We derive a new triplet-like loss considering both positive sample pairs and mined hard negative sample pairs.
- Score: 13.534367890379853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, learning from vast unlabeled data, especially self-supervised
learning, has been emerging and attracted widespread attention. Self-supervised
learning followed by the supervised fine-tuning on a few labeled examples can
significantly improve label efficiency and outperform standard supervised
training using fully annotated data. In this work, we present a novel
self-supervised deep learning paradigm based on online hard negative pair
mining. Specifically, we design a student-teacher network to generate
multiple views of the data for self-supervised learning and integrate hard negative
pair mining into the training. Then we derive a new triplet-like loss
considering both positive sample pairs and mined hard negative sample pairs.
Extensive experiments demonstrate the effectiveness of the proposed method and
its components on ILSVRC-2012.
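The abstract describes a triplet-like loss over positive sample pairs and online-mined hard negative pairs, but does not give the exact formulation. A minimal sketch of the general idea might look like the following; the function name, the margin value, and the choice of cosine similarity over L2-normalized embeddings are all assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def triplet_loss_with_hard_negatives(anchors, positives, candidates, margin=0.2):
    """Triplet-like loss sketch: pull each anchor toward its positive view and
    push it away from the hardest (most similar) candidate negative.
    Embeddings are L2-normalized so dot products are cosine similarities.
    NOTE: hypothetical illustration, not the paper's actual loss."""
    def normalize(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    a, p, c = normalize(anchors), normalize(positives), normalize(candidates)
    pos_sim = np.sum(a * p, axis=1)      # similarity to each anchor's positive view
    neg_sims = a @ c.T                   # similarity to every candidate negative
    hard_neg_sim = neg_sims.max(axis=1)  # online hard negative mining: keep the hardest
    # hinge: penalize when the hard negative is within `margin` of the positive
    return np.maximum(0.0, hard_neg_sim - pos_sim + margin).mean()
```

With well-separated negatives the hinge is inactive and the loss is zero; only negatives within the margin of the positive contribute gradient, which is the point of mining hard pairs.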
Related papers
- Time-Series Contrastive Learning against False Negatives and Class Imbalance [17.43801009251228]
We conduct a theoretical analysis and find that prior work has overlooked two fundamental issues: false negatives and class imbalance inherent in the InfoNCE loss-based framework.
We introduce a straightforward modification, grounded in the SimCLR framework, that applies universally to models engaged in the instance discrimination task.
We perform semi-supervised consistency classification and enhance the representative ability of minority classes.
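The false-negative issue above arises because InfoNCE treats every other sample in the batch as a negative, including same-class samples. As a hedged illustration only (not this paper's method), the sketch below shows a single-anchor InfoNCE loss where suspected false negatives can be masked out of the denominator; all names and the temperature value are assumptions:

```python
import numpy as np

def info_nce(pos_sim, neg_sims, false_neg_mask=None, temperature=0.1):
    """InfoNCE loss for one anchor. `false_neg_mask[i] = True` marks
    neg_sims[i] as a suspected false negative and drops it from the
    denominator, so likely same-class samples stop being pushed apart.
    NOTE: hypothetical sketch, not the paper's actual modification."""
    neg_sims = np.asarray(neg_sims, dtype=float)
    if false_neg_mask is not None:
        neg_sims = neg_sims[~np.asarray(false_neg_mask)]
    logits = np.concatenate(([pos_sim], neg_sims)) / temperature
    logits -= logits.max()  # numerical stability before exponentiating
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

Masking a high-similarity false negative strictly lowers the loss, since it removes a large competing term from the denominator.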
arXiv Detail & Related papers (2023-12-19T08:38:03Z) - Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the light-weight active learner which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z) - Contrastive Learning with Boosted Memorization [36.957895270908324]
Self-supervised learning has achieved a great success in the representation learning of visual and textual data.
Recent attempts at self-supervised long-tailed learning rebalance from either the loss perspective or the model perspective.
We propose a novel Boosted Contrastive Learning (BCL) method to enhance long-tailed learning in the label-unaware context.
arXiv Detail & Related papers (2022-05-25T11:54:22Z) - Reducing Label Effort: Self-Supervised meets Active Learning [32.4747118398236]
Recent developments in self-training have achieved very impressive results rivaling supervised learning on some datasets.
Our experiments reveal that self-training is remarkably more efficient than active learning at reducing the labeling effort.
The performance gap between active learning trained either with self-training or from scratch diminishes as we approach the point where almost half of the dataset is labeled.
arXiv Detail & Related papers (2021-08-25T20:04:44Z) - Co-learning: Learning from Noisy Labels with Self-supervision [28.266156561454327]
Self-supervised learning works in the absence of labels and thus eliminates the negative impact of noisy labels.
Motivated by co-training with both supervised learning view and self-supervised learning view, we propose a simple yet effective method called Co-learning for learning with noisy labels.
arXiv Detail & Related papers (2021-08-05T06:20:51Z) - Mind Your Outliers! Investigating the Negative Impact of Outliers on Active Learning for Visual Question Answering [71.15403434929915]
We show that across 5 models and 4 datasets on the task of visual question answering, a wide variety of active learning approaches fail to outperform random selection.
We identify the problem as collective outliers -- groups of examples that active learning methods prefer to acquire but models fail to learn.
We show that active learning sample efficiency increases significantly as the number of collective outliers in the active learning pool decreases.
arXiv Detail & Related papers (2021-07-06T00:52:11Z) - Incremental False Negative Detection for Contrastive Learning [95.68120675114878]
We introduce a novel incremental false negative detection for self-supervised contrastive learning.
During contrastive learning, we discuss two strategies to explicitly remove the detected false negatives.
Our proposed method outperforms other self-supervised contrastive learning frameworks on multiple benchmarks within a limited compute budget.
arXiv Detail & Related papers (2021-06-07T15:29:14Z) - Solving Inefficiency of Self-supervised Representation Learning [87.30876679780532]
Existing contrastive learning methods suffer from very low learning efficiency.
Under-clustering and over-clustering problems are major obstacles to learning efficiency.
We propose a novel self-supervised learning framework using a median triplet loss.
arXiv Detail & Related papers (2021-04-18T07:47:10Z) - Probing Negative Sampling Strategies to Learn Graph Representations via Unsupervised Contrastive Learning [4.909151538536424]
Graph representation learning has long been an important yet challenging task for various real-world applications.
Inspired by recent advances in unsupervised contrastive learning, this paper investigates how node-wise contrastive learning can be performed.
arXiv Detail & Related papers (2021-04-13T15:53:48Z) - Adversarial Examples for Unsupervised Machine Learning Models [71.81480647638529]
Adversarial examples causing evasive predictions are widely used to evaluate and improve the robustness of machine learning models.
We propose a framework of generating adversarial examples for unsupervised models and demonstrate novel applications to data augmentation.
arXiv Detail & Related papers (2021-03-02T17:47:58Z) - Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.