Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularization
- URL: http://arxiv.org/abs/2305.00374v2
- Date: Mon, 23 Oct 2023 12:46:38 GMT
- Title: Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularization
- Authors: Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan
Kankanhalli
- Abstract summary: Adversarial contrastive learning (ACL) is a technique that enhances standard contrastive learning (SCL) by incorporating adversarial data.
In this paper, we propose adversarial invariant regularization (AIR) to enforce independence from style factors.
- Score: 59.77647907277523
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial contrastive learning (ACL) is a technique that enhances standard
contrastive learning (SCL) by incorporating adversarial data to learn a robust
representation that can withstand adversarial attacks and common corruptions
without requiring costly annotations. To improve transferability, prior work
introduced the standard invariant regularization (SIR) to impose the
style-independence property on SCL, which removes the impact of nuisance
style factors from the standard representation. However, it is unclear how the
style-independence property benefits ACL-learned robust representations. In
this paper, we leverage the technique of causal reasoning to interpret the ACL
and propose adversarial invariant regularization (AIR) to enforce independence
from style factors. We regulate the ACL using both SIR and AIR to output the
robust representation. Theoretically, we show that AIR implicitly encourages
the representational distance between different views of natural data and their
adversarial variants to be independent of style factors. Empirically, our
experimental results show that invariant regularization significantly improves
the performance of state-of-the-art ACL methods in terms of both standard
generalization and robustness on downstream tasks. To the best of our
knowledge, we are the first to apply causal reasoning to interpret ACL and
develop AIR for enhancing ACL-learned robust representations. Our source code
is at https://github.com/GodXuxilie/Enhancing_ACL_via_AIR.
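The abstract states that AIR implicitly encourages the representational distance between views of natural data and their adversarial variants to be independent of style factors. As a loose illustration only, the toy NumPy sketch below penalizes how much that natural-to-adversarial distance varies across randomly styled views. The linear encoder, the scalar "style" augmentations, and all names here are illustrative assumptions, not the authors' implementation (the paper's actual AIR objective is derived via causal reasoning; see the linked source code).

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy linear encoder with L2-normalized outputs (stand-in for a trained network)."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def style_views(x, n_views=4):
    """Hypothetical style augmentations: random brightness/contrast-like perturbations."""
    return [x * rng.uniform(0.8, 1.2) + rng.uniform(-0.1, 0.1) for _ in range(n_views)]

def air_penalty(x_nat, x_adv, W):
    """Illustrative stand-in for AIR: the variance, across style views, of the mean
    representational distance between styled natural data and adversarial variants.
    A style-independent representation drives this variance toward zero."""
    z_adv = encode(x_adv, W)
    dists = [np.linalg.norm(encode(v, W) - z_adv, axis=-1).mean()
             for v in style_views(x_nat)]
    return float(np.var(dists))

# Toy data: x_adv stands in for a PGD-style adversarial variant of x_nat.
x_nat = rng.normal(size=(8, 16))
x_adv = x_nat + 0.05 * rng.normal(size=(8, 16))
W = rng.normal(size=(16, 32))
penalty = air_penalty(x_nat, x_adv, W)
```

In the paper's setting, a term of this flavor would be added (alongside SIR and the ACL contrastive loss) to the pre-training objective, so minimizing it pushes the encoder toward style-independent robust representations.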
Related papers
- On the Effectiveness of Supervision in Asymmetric Non-Contrastive Learning [5.123232962822044]
asymmetric non-contrastive learning (ANCL) often outperforms its contrastive learning counterpart in self-supervised representation learning.
We study ANCL for supervised representation learning, coined SupSiam and SupBYOL, leveraging labels in ANCL to achieve better representations.
Our analysis reveals that providing supervision to ANCL reduces intra-class variance, and the contribution of supervision should be adjusted to achieve the best performance.
arXiv Detail & Related papers (2024-06-16T06:43:15Z) - RecDCL: Dual Contrastive Learning for Recommendation [65.6236784430981]
We propose a dual contrastive learning recommendation framework -- RecDCL.
In RecDCL, the FCL objective is designed to eliminate redundant solutions on user-item positive pairs.
The BCL objective is utilized to generate contrastive embeddings on output vectors for enhancing the robustness of the representations.
arXiv Detail & Related papers (2024-01-28T11:51:09Z) - Relaxed Contrastive Learning for Federated Learning [48.96253206661268]
We propose a novel contrastive learning framework to address the challenges of data heterogeneity in federated learning.
Our framework outperforms all existing federated learning approaches by large margins on the standard benchmarks.
arXiv Detail & Related papers (2024-01-10T04:55:24Z) - Positional Information Matters for Invariant In-Context Learning: A Case
Study of Simple Function Classes [39.08988313527199]
In-context learning (ICL) refers to the ability of a model to condition on a few in-context demonstrations to generate the answer for a new query input.
Despite the impressive ICL ability of LLMs, ICL in LLMs is sensitive to input demonstrations and limited to short context lengths.
arXiv Detail & Related papers (2023-11-30T02:26:55Z) - Supervised Adversarial Contrastive Learning for Emotion Recognition in
Conversations [24.542445315345464]
We propose a framework for learning class-spread structured representations in a supervised manner.
It can effectively utilize label-level feature consistency and retain fine-grained intra-class features.
Under the framework with CAT, we develop a sequence-based SACL-LSTM to learn label-consistent and context-robust features.
arXiv Detail & Related papers (2023-06-02T12:52:38Z) - What and How does In-Context Learning Learn? Bayesian Model Averaging,
Parameterization, and Generalization [111.55277952086155]
We study In-Context Learning (ICL) by addressing several open questions.
We show that, without updating the neural network parameters, ICL implicitly implements the Bayesian model averaging algorithm.
We prove that the error of pretrained model is bounded by a sum of an approximation error and a generalization error.
arXiv Detail & Related papers (2023-05-30T21:23:47Z) - On the Effectiveness of Equivariant Regularization for Robust Online
Continual Learning [17.995662644298974]
Continual Learning (CL) approaches seek to bridge this gap by facilitating the transfer of knowledge to both previous tasks and future ones.
Recent research has shown that self-supervision can produce versatile models that can generalize well to diverse downstream tasks.
We propose Continual Learning via Equivariant Regularization (CLER), an OCL approach that leverages equivariant tasks for self-supervision.
arXiv Detail & Related papers (2023-05-05T16:10:31Z) - Efficient Adversarial Contrastive Learning via Robustness-Aware Coreset
Selection [59.77647907277523]
Adversarial contrastive learning (ACL) does not require expensive data annotations but outputs a robust representation that withstands adversarial attacks.
ACL needs tremendous running time to generate the adversarial variants of all training data.
This paper proposes a robustness-aware coreset selection (RCS) method to speed up ACL.
arXiv Detail & Related papers (2023-02-08T03:20:14Z) - When Does Contrastive Learning Preserve Adversarial Robustness from
Pretraining to Finetuning? [99.4914671654374]
We propose AdvCL, a novel adversarial contrastive pretraining framework.
We show that AdvCL is able to enhance cross-task robustness transferability without loss of model accuracy and finetuning efficiency.
arXiv Detail & Related papers (2021-11-01T17:59:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.