Rethinking Robust Contrastive Learning from the Adversarial Perspective
- URL: http://arxiv.org/abs/2302.02502v2
- Date: Tue, 6 Jun 2023 20:08:59 GMT
- Title: Rethinking Robust Contrastive Learning from the Adversarial Perspective
- Authors: Fatemeh Ghofrani, Mehdi Yaghouti, Pooyan Jamshidi
- Abstract summary: We find significant disparities between adversarial and clean representations in standard-trained networks.
Adversarial training mitigates these disparities and fosters the convergence of representations toward a universal set.
- Score: 2.3333090554192615
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To advance the understanding of robust deep learning, we delve into the
effects of adversarial training on self-supervised and supervised contrastive
learning alongside supervised learning. Our analysis uncovers significant
disparities between adversarial and clean representations in standard-trained
networks across various learning algorithms. Remarkably, adversarial training
mitigates these disparities and fosters the convergence of representations
toward a universal set, regardless of the learning scheme used. Additionally,
increasing the similarity between adversarial and clean representations,
particularly near the end of the network, enhances network robustness. These
findings offer valuable insights for designing and training effective and
robust deep learning networks. Our code is released at
https://github.com/softsys4ai/CL-Robustness.
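The analysis compares clean and adversarial representations at matching layers. A minimal sketch of one standard way to make such a comparison, linear centered kernel alignment (CKA), is below; treating it as the paper's exact measurement protocol would be an assumption, and the attack used to produce the adversarial activations is left abstract.

```python
import torch

def linear_cka(feats_a, feats_b):
    """Linear centered kernel alignment between two activation matrices.

    feats_a, feats_b: (n_samples, n_features) activations at the same
    layer for clean and adversarial versions of one batch. Returns a
    scalar in [0, 1]; higher means more similar representations.
    """
    a = feats_a - feats_a.mean(dim=0, keepdim=True)   # center each feature
    b = feats_b - feats_b.mean(dim=0, keepdim=True)
    hsic = (b.T @ a).norm() ** 2                      # ||B^T A||_F^2
    return hsic / ((a.T @ a).norm() * (b.T @ b).norm())

# Usage sketch: flatten spatial activations before comparing, e.g.
#   sim = linear_cka(acts_clean.flatten(1), acts_adv.flatten(1))
```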
Related papers
- Adversarial Training Can Provably Improve Robustness: Theoretical Analysis of Feature Learning Process Under Structured Data [38.44734564565478]
We provide a theoretical understanding of adversarial examples and adversarial training algorithms from the perspective of feature learning theory.
We show that the adversarial training method can provably strengthen the robust feature learning and suppress the non-robust feature learning.
arXiv Detail & Related papers (2024-10-11T03:59:49Z)
- Few-Shot Adversarial Prompt Learning on Vision-Language Models [62.50622628004134]
The vulnerability of deep neural networks to imperceptible adversarial perturbations has attracted widespread attention.
Previous efforts achieved zero-shot adversarial robustness by aligning adversarial visual features with text supervision.
We propose a few-shot adversarial prompt framework in which adapting input sequences with limited data yields significant improvements in adversarial robustness.
arXiv Detail & Related papers (2024-03-21T18:28:43Z)
- Can Self-Supervised Representation Learning Methods Withstand Distribution Shifts and Corruptions? [5.706184197639971]
Self-supervised learning in computer vision aims to leverage the inherent structure and relationships within data to learn meaningful representations.
This work investigates the robustness of representations learned by self-supervised approaches, focusing on distribution shifts and image corruptions.
arXiv Detail & Related papers (2023-07-31T13:07:56Z)
- Understanding Robust Learning through the Lens of Representation Similarities [37.66877172364004]
Robustness to adversarial examples has emerged as a desirable property for deep neural networks (DNNs).
In this paper, we aim to understand how the properties of representations learned by robust training differ from those obtained from standard, non-robust training.
arXiv Detail & Related papers (2022-06-20T16:06:20Z)
- Adversarially robust segmentation models learn perceptually-aligned gradients [0.0]
We show that adversarially-trained semantic segmentation networks can be used to perform image inpainting and generation.
We argue that perceptually-aligned gradients promote a better understanding of a neural network's learned representations and aid in making neural networks more interpretable.
arXiv Detail & Related papers (2022-04-03T16:04:52Z)
- Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework [70.84906094606072]
We present a new methodology for designing integrated contrastive losses that simultaneously achieve good accuracy and robustness on downstream tasks; a generic sketch of a contrastive loss over clean and adversarial views appears after this list.
With the integrated framework, we achieve up to a 6% improvement in standard accuracy and a 17% improvement in adversarial accuracy.
arXiv Detail & Related papers (2021-12-08T18:54:11Z)
- Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness [63.627760598441796]
We provide an in-depth review of the field of adversarial robustness in deep learning.
We highlight the intuitive connection between adversarial examples and the geometry of deep neural networks.
We provide an overview of the main emerging applications of adversarial robustness beyond security.
arXiv Detail & Related papers (2020-10-19T16:03:46Z)
- Stylized Adversarial Defense [105.88250594033053]
Adversarial training creates perturbation patterns and includes them in the training set to robustify the model; a sketch of the standard PGD inner loop appears after this list.
We propose to exploit additional information from the feature space to craft stronger adversaries.
Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses.
arXiv Detail & Related papers (2020-07-29T08:38:10Z)
- Rethinking Clustering for Robustness [56.14672993686335]
ClusTR is a clustering-based and adversary-free training framework to learn robust models.
ClusTR outperforms adversarially-trained networks by up to 4% under strong PGD attacks.
arXiv Detail & Related papers (2020-06-13T16:55:51Z)
- Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes [51.31334977346847]
We train networks to form coarse impressions based on the information in higher bit planes, and use the lower bit planes only to refine their prediction.
We demonstrate that imposing consistency on the representations learned across differently quantized images significantly improves the adversarial robustness of networks; a sketch of one plausible bit-plane consistency penalty appears after this list.
arXiv Detail & Related papers (2020-04-01T09:31:10Z)
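For the integrated contrastive losses in "Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis", here is a minimal sketch of one building block: the standard NT-Xent contrastive loss applied to a clean view and an adversarially perturbed view of the same batch. This is a generic SimCLR-style loss under that pairing assumption, not the paper's integrated objective.

```python
import torch
import torch.nn.functional as F

def nt_xent(z_clean, z_adv, temperature=0.5):
    """NT-Xent loss over two sets of projected embeddings.

    z_clean, z_adv: (batch, dim) projections of a clean view and an
    adversarial view of the same images; matching rows are positives.
    """
    z1 = F.normalize(z_clean, dim=1)
    z2 = F.normalize(z_adv, dim=1)
    z = torch.cat([z1, z2], dim=0)               # (2B, dim)
    logits = z @ z.T / temperature               # cosine similarities
    logits.fill_diagonal_(float("-inf"))         # drop self-similarity
    b = z1.size(0)
    # The positive for row i is its other view: i + B in the first half,
    # i - B in the second half.
    targets = torch.cat([torch.arange(b) + b, torch.arange(b)]).to(z.device)
    return F.cross_entropy(logits, targets)
```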
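The adversarial training referenced under "Stylized Adversarial Defense" generates perturbations and trains on them. A sketch of the standard L-infinity PGD inner loop (Madry-style) follows; the paper's stronger feature-space adversaries are not reproduced here, and the epsilon, step size, and step count are illustrative defaults, not its settings.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft L-infinity adversarial examples by projected gradient ascent."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()   # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)       # project to eps-ball
        x_adv = x_adv.clamp(0, 1)                      # stay a valid image
    return x_adv.detach()

# Adversarial training step: optimize the model on the perturbed batch.
#   loss = F.cross_entropy(model(pgd_attack(model, x, y)), y)
```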
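For "Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes", below is a sketch of one plausible reading: keep only the higher bit planes to form a coarse image, then penalize disagreement between predictions on the full and coarse versions. The quantization helper and the KL form are assumptions for illustration, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def keep_high_bit_planes(x, n_bits=4):
    """Zero the low-order bit planes of an 8-bit image in [0, 1].

    What remains is a coarse image carried by the top n_bits planes.
    """
    pixels = (x * 255.0).round().to(torch.uint8)
    mask = (0xFF << (8 - n_bits)) & 0xFF        # e.g. n_bits=4 -> 0xF0
    return (pixels & mask).float() / 255.0

def bit_plane_consistency(model, x, n_bits=4):
    """KL penalty between predictions on the full image and its coarse,
    high-bit-plane version; differently quantized views should agree."""
    log_p_full = F.log_softmax(model(x), dim=1)
    p_coarse = F.softmax(model(keep_high_bit_planes(x, n_bits)), dim=1)
    return F.kl_div(log_p_full, p_coarse, reduction="batchmean")
```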