Bridged Adversarial Training
- URL: http://arxiv.org/abs/2108.11135v1
- Date: Wed, 25 Aug 2021 09:11:59 GMT
- Title: Bridged Adversarial Training
- Authors: Hoki Kim, Woojin Lee, Sungyoon Lee, Jaewook Lee
- Abstract summary: We show that adversarially trained models might have significantly different characteristics in terms of margin and smoothness, even when they show similar robustness.
Inspired by the observation, we investigate the effect of different regularizers and discover the negative effect of the smoothness regularizer on maximizing the margin.
We propose a new method called bridged adversarial training that mitigates the negative effect by bridging the gap between clean and adversarial examples.
- Score: 6.925055322530057
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Adversarial robustness is considered a required property of deep neural
networks. In this study, we discover that adversarially trained models might
have significantly different characteristics in terms of margin and smoothness,
even when they show similar robustness. Inspired by this observation, we investigate
the effect of different regularizers and discover the negative effect of the
smoothness regularizer on maximizing the margin. Based on the analyses, we
propose a new method called bridged adversarial training that mitigates the
negative effect by bridging the gap between clean and adversarial examples. We
provide theoretical and empirical evidence that the proposed method provides
stable and better robustness, especially for large perturbations.
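To make the proposed objective concrete, below is a minimal PyTorch-style sketch of a bridged adversarial training loss. It assumes the bridge is built by linearly interpolating m points between a clean example and its adversarial counterpart, and that the smoothness penalty is a KL divergence between the predictive distributions at consecutive bridge points; the function name `bridged_loss`, the step count `m`, and the weight `beta` are illustrative assumptions, not the authors' exact implementation.

```python
import torch.nn.functional as F

def bridged_loss(model, x_clean, x_adv, y, m=4, beta=6.0):
    """Sketch of a bridged adversarial training objective (assumed form).

    Builds m bridge points by linear interpolation between the clean
    input and its adversarial example, and penalizes the KL divergence
    between the predictive distributions at consecutive points, so the
    smoothness penalty is spread along the clean-to-adversarial path
    instead of being applied in a single step.
    """
    logits_clean = model(x_clean)
    # Standard cross-entropy on the clean example.
    loss = F.cross_entropy(logits_clean, y)

    prev_log_prob = F.log_softmax(logits_clean, dim=1)
    for k in range(1, m + 1):
        x_k = x_clean + (k / m) * (x_adv - x_clean)  # k/m along the bridge
        log_prob_k = F.log_softmax(model(x_k), dim=1)
        # KL(p_{k-1} || p_k) between consecutive bridge points.
        loss = loss + (beta / m) * F.kl_div(
            log_prob_k, prev_log_prob.exp(), reduction="batchmean"
        )
        prev_log_prob = log_prob_k
    return loss
```

Splitting a single clean-to-adversarial KL term (as in TRADES-style smoothness regularization) into per-segment terms along the path is, on this reading, the "bridging" that the abstract credits with mitigating the regularizer's negative effect on the margin.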
Related papers
- Exploring the Adversarial Frontier: Quantifying Robustness via Adversarial Hypervolume [18.4516572499628]
We propose a new metric, termed adversarial hypervolume, that assesses the robustness of deep learning models comprehensively over a range of perturbation intensities.
We adopt a novel training algorithm that enhances adversarial robustness uniformly across various perturbation intensities.
This research contributes a new measure of robustness and establishes a standard for benchmarking and assessing the resilience of current and future defensive models against adversarial threats (a rough illustration of the multi-intensity evaluation follows after this list).
arXiv Detail & Related papers (2024-03-08T07:03:18Z) - Extreme Miscalibration and the Illusion of Adversarial Robustness [66.29268991629085]
Adversarial training is often used to increase model robustness.
We show that this observed gain in robustness is an illusion of robustness (IOR).
We urge the NLP community to incorporate test-time temperature scaling into their robustness evaluations (see the temperature-scaling sketch after this list).
arXiv Detail & Related papers (2024-02-27T13:49:12Z) - Mitigating Feature Gap for Adversarial Robustness by Feature
Disentanglement [61.048842737581865]
Adversarial fine-tuning methods aim to enhance adversarial robustness by fine-tuning a naturally pre-trained model in an adversarial training manner.
We propose a disentanglement-based approach to explicitly model and remove the latent features that cause the feature gap.
Empirical evaluations on three benchmark datasets demonstrate that our approach surpasses existing adversarial fine-tuning methods and adversarial training baselines.
arXiv Detail & Related papers (2024-01-26T08:38:57Z) - Beyond Empirical Risk Minimization: Local Structure Preserving
Regularization for Improving Adversarial Robustness [28.853413482357634]
In this work, we propose a novel Local Structure Preserving (LSP) regularization, which aims to preserve the local structure of the input space in the learned embedding space.
arXiv Detail & Related papers (2023-03-29T17:18:58Z) - Improving Adversarial Robustness to Sensitivity and Invariance Attacks
with Deep Metric Learning [80.21709045433096]
A standard approach to adversarial robustness assumes a framework that defends against adversarial samples crafted by minimally perturbing clean samples.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariance-based perturbations in our framework improves defense against both invariance and sensitivity attacks.
arXiv Detail & Related papers (2022-11-04T13:54:02Z) - Explicit Tradeoffs between Adversarial and Natural Distributional
Robustness [48.44639585732391]
In practice, models need both adversarial and natural distributional robustness to ensure reliability.
In this work, we show that in fact, explicit tradeoffs exist between adversarial and natural distributional robustness.
arXiv Detail & Related papers (2022-09-15T19:58:01Z) - Clustering Effect of (Linearized) Adversarial Robust Models [60.25668525218051]
We propose a novel understanding of adversarial robustness and apply it to further tasks, including domain adaptation and robustness boosting.
Experimental evaluations demonstrate the rationality and superiority of our proposed clustering strategy.
arXiv Detail & Related papers (2021-11-25T05:51:03Z) - Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge limiting the widespread adoption of deep neural networks has been their fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results on image classification demonstrate the effectiveness of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.