Contrastive Learning with Adversarial Examples
- URL: http://arxiv.org/abs/2010.12050v1
- Date: Thu, 22 Oct 2020 20:45:10 GMT
- Title: Contrastive Learning with Adversarial Examples
- Authors: Chih-Hui Ho, Nuno Vasconcelos
- Abstract summary: Contrastive learning (CL) is a popular technique for self-supervised learning (SSL) of visual representations.
This paper introduces a new family of adversarial examples for contrastive learning and uses these examples to define a new adversarial training algorithm for SSL, denoted as CLAE.
- Score: 79.39156814887133
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive learning (CL) is a popular technique for self-supervised learning
(SSL) of visual representations. It uses pairs of augmentations of unlabeled
training examples to define a classification task for pretext learning of a
deep embedding. Despite extensive work on augmentation procedures, prior works
do not address the selection of challenging negative pairs, as images within a
sampled batch are treated independently. This paper addresses the problem by
introducing a new family of adversarial examples for contrastive learning and
using these examples to define a new adversarial training algorithm for SSL,
denoted as CLAE. When compared to standard CL, the use of adversarial examples
creates more challenging positive pairs and adversarial training produces
harder negative pairs by accounting for all images in a batch during the
optimization. CLAE is compatible with many CL methods in the literature.
Experiments show that it improves the performance of several existing CL
baselines on multiple datasets.
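As a rough illustration of the approach described in the abstract, here is a minimal sketch (assumed details, not the authors' released code) of adversarial contrastive training: one augmented view is perturbed with an FGSM-style step that increases an InfoNCE-style loss, which couples all images in the batch, and the encoder is then trained on the resulting harder pairs. The encoder, optimizer, perturbation budget epsilon, and exact loss form are illustrative assumptions.

```python
# Illustrative sketch only: adversarially perturb one augmented view to
# maximize an InfoNCE-style contrastive loss, then train on the harder pairs.
# The encoder, step size and loss form are assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """Batch-wise InfoNCE: matching rows are positives, other rows are negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                 # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def adversarial_contrastive_step(encoder, optimizer, x1, x2, epsilon=0.03):
    # FGSM-style perturbation of the second view that increases the
    # contrastive loss; the loss couples every image in the batch.
    x2_adv = x2.clone().detach().requires_grad_(True)
    loss = info_nce(encoder(x1), encoder(x2_adv))
    grad, = torch.autograd.grad(loss, x2_adv)
    x2_adv = (x2_adv + epsilon * grad.sign()).clamp(0, 1).detach()  # assumes inputs in [0, 1]

    # Train the encoder on the harder positive pair (x1, x2_adv).
    optimizer.zero_grad()
    info_nce(encoder(x1), encoder(x2_adv)).backward()
    optimizer.step()
```

Because the contrastive loss is computed over the whole batch, the perturbation applied to each image depends on every other image in the batch, which is how harder negative pairs arise alongside the harder positives.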
Related papers
- Words Matter: Leveraging Individual Text Embeddings for Code Generation in CLIP Test-Time Adaptation [21.20806568508201]
We show how to leverage class text information to mitigate distribution drifts encountered by vision-language models (VLMs) during test-time inference.
We propose to generate pseudo-labels for the test-time samples by exploiting generic class text embeddings as fixed centroids of a label assignment problem (see the sketch after this list).
Experiments on multiple popular test-time adaptation benchmarks presenting diverse complexity empirically show the superiority of CLIP-OT.
arXiv Detail & Related papers (2024-11-26T00:15:37Z)
- ParaICL: Towards Robust Parallel In-Context Learning [74.38022919598443]
Large language models (LLMs) have become the norm in natural language processing.
Few-shot in-context learning (ICL) relies on the choice of few-shot demonstration examples.
We propose a novel method named parallel in-context learning (ParaICL).
arXiv Detail & Related papers (2024-03-31T05:56:15Z)
- Decoupled Contrastive Learning for Long-Tailed Recognition [58.255966442426484]
Supervised Contrastive Loss (SCL) is popular in visual representation learning.
In the scenario of long-tailed recognition, where the number of samples in each class is imbalanced, treating two types of positive samples equally leads to the biased optimization for intra-category distance.
We propose a patch-based self distillation to transfer knowledge from head to tail classes to relieve the under-representation of tail classes.
arXiv Detail & Related papers (2024-03-10T09:46:28Z)
- Learning with Noisy Labels Using Collaborative Sample Selection and Contrastive Semi-Supervised Learning [76.00798972439004]
Collaborative Sample Selection (CSS) removes noisy samples from identified clean set.
We introduce a co-training mechanism with a contrastive loss in semi-supervised learning.
arXiv Detail & Related papers (2023-10-24T05:37:20Z)
- Clustering-Aware Negative Sampling for Unsupervised Sentence Representation [24.15096466098421]
ClusterNS is a novel method that incorporates cluster information into contrastive learning for unsupervised sentence representation learning.
We apply a modified K-means clustering algorithm to supply hard negatives and recognize in-batch false negatives during training (a sketch of this step follows the list).
arXiv Detail & Related papers (2023-05-17T02:06:47Z)
- Adaptive Soft Contrastive Learning [19.45520684918576]
This paper proposes an adaptive method that introduces soft inter-sample relations, namely Adaptive Soft Contrastive Learning (ASCL).
As an effective and concise plug-in module for existing self-supervised learning frameworks, ASCL achieves the best performance on several benchmarks.
arXiv Detail & Related papers (2022-07-22T16:01:07Z)
- Adversarial Contrastive Learning via Asymmetric InfoNCE [64.42740292752069]
We propose to treat adversarial samples unequally when contrasted with an asymmetric InfoNCE objective (see the sketch after this list).
In the asymmetric fashion, the adverse impacts of conflicting objectives between CL and adversarial learning can be effectively mitigated.
Experiments show that our approach consistently outperforms existing Adversarial CL methods.
arXiv Detail & Related papers (2022-07-18T04:14:36Z)
- Using Representation Expressiveness and Learnability to Evaluate Self-Supervised Learning Methods [61.49061000562676]
We introduce Cluster Learnability (CL) to assess learnability.
CL is measured in terms of the performance of a KNN trained to predict labels obtained by clustering the representations with K-means (a sketch follows the list).
We find that CL better correlates with in-distribution model performance than other competing recent evaluation schemes.
arXiv Detail & Related papers (2022-06-02T19:05:13Z)
- InsCLR: Improving Instance Retrieval with Self-Supervision [30.36455490844235]
We find that fine-tuning using the recently developed self-supervised (SSL) learning methods, such as SimCLR and MoCo, fails to improve the performance of instance retrieval.
To overcome this problem, we propose InsCLR, a new SSL method that builds on instance-level contrast.
InsCLR achieves similar or even better performance than the state-of-the-art SSL methods on instance retrieval.
arXiv Detail & Related papers (2021-12-02T16:21:27Z)
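For the "Words Matter" (CLIP-OT) entry above, the sketch below shows one hedged reading of using class text embeddings as fixed centroids of a label assignment problem: a Sinkhorn-style balanced assignment of test-time image features to the text centroids. The solver choice, names, and hyperparameters are assumptions, not the authors' method.

```python
# Hypothetical reading of the CLIP-OT summary above: class text embeddings act
# as fixed centroids, and test-time image features are assigned to them with a
# Sinkhorn-style balanced label assignment. Names and hyperparameters are
# illustrative, not the authors' implementation.
import torch

def balanced_pseudo_labels(image_feats, text_centroids, n_iters=3, eps=0.05):
    """image_feats: (N, D), text_centroids: (K, D); both L2-normalized."""
    scores = image_feats @ text_centroids.t()      # cosine similarities (N, K)
    Q = torch.exp(scores / eps)
    for _ in range(n_iters):                       # alternate row/column scaling
        Q = Q / Q.sum(dim=1, keepdim=True)         # each image's assignment sums to 1
        Q = Q / Q.sum(dim=0, keepdim=True)         # spread mass across classes
    return Q.argmax(dim=1)                         # hard pseudo-labels per image
```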
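For the Clustering-Aware Negative Sampling (ClusterNS) entry, a minimal sketch of the clustering step as summarized: K-means over the sentence embeddings supplies cluster centroids as candidate hard negatives and flags same-cluster in-batch pairs as likely false negatives. The number of clusters and the masking scheme are illustrative assumptions.

```python
# Hypothetical sketch of the ClusterNS idea as summarized above; not the
# authors' implementation.
import numpy as np
from sklearn.cluster import KMeans

def cluster_negatives(embeddings, n_clusters=128):
    """embeddings: (N, D) array of sentence embeddings for the current batch/bank."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(embeddings)
    labels = km.labels_                      # cluster id per embedding
    centroids = km.cluster_centers_          # candidate hard negatives

    # In-batch pairs falling in the same cluster are flagged as possible
    # false negatives so the contrastive loss can down-weight or mask them.
    same_cluster = labels[:, None] == labels[None, :]
    np.fill_diagonal(same_cluster, False)
    return centroids, same_cluster
```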
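For the Adversarial Contrastive Learning via Asymmetric InfoNCE entry, the sketch below shows one generic way to make a two-view InfoNCE objective asymmetric: the direction anchored at adversarial samples is down-weighted relative to the clean-anchored direction. The particular weighting is an assumption; the paper's exact objective may differ.

```python
# Generic asymmetric InfoNCE-style loss; weighting scheme is illustrative only.
import torch
import torch.nn.functional as F

def asymmetric_info_nce(z_clean, z_adv, temperature=0.5, adv_weight=0.5):
    """z_clean, z_adv: (B, D) embeddings of clean and adversarial views."""
    z_clean = F.normalize(z_clean, dim=1)
    z_adv = F.normalize(z_adv, dim=1)
    labels = torch.arange(z_clean.size(0), device=z_clean.device)

    # Clean-anchored direction: contrast clean anchors against adversarial views.
    loss_clean_anchor = F.cross_entropy(z_clean @ z_adv.t() / temperature, labels)
    # Adversarial-anchored direction, down-weighted so adversarial samples
    # are treated unequally rather than as full-strength positives.
    loss_adv_anchor = F.cross_entropy(z_adv @ z_clean.t() / temperature, labels)
    return loss_clean_anchor + adv_weight * loss_adv_anchor
```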
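For the Cluster Learnability entry, the summary is concrete enough for a minimal sketch: cluster the representations with K-means, then report how well a KNN trained on one split predicts those cluster labels on a held-out split. The number of clusters, neighbors, and split ratio are illustrative choices.

```python
# Minimal sketch of the cluster-learnability measure as described above;
# hyperparameters are assumptions.
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def cluster_learnability(representations, n_clusters=10, n_neighbors=5, seed=0):
    """representations: (N, D) array of features from the SSL encoder."""
    # Pseudo-labels from K-means clustering of the representations.
    pseudo_labels = KMeans(n_clusters=n_clusters, n_init=10,
                           random_state=seed).fit_predict(representations)
    # Learnability: accuracy of a KNN fit on one split at predicting the
    # cluster labels of a held-out split.
    X_tr, X_te, y_tr, y_te = train_test_split(
        representations, pseudo_labels, test_size=0.5, random_state=seed)
    knn = KNeighborsClassifier(n_neighbors=n_neighbors).fit(X_tr, y_tr)
    return knn.score(X_te, y_te)  # higher means more learnable clusters
```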
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.