Adversarial and Natural Perturbations for General Robustness
- URL: http://arxiv.org/abs/2010.01401v1
- Date: Sat, 3 Oct 2020 17:53:18 GMT
- Title: Adversarial and Natural Perturbations for General Robustness
- Authors: Sadaf Gulshad, Jan Hendrik Metzen, Arnold Smeulders
- Abstract summary: We evaluate the robustness of neural networks against natural perturbations before and after robustification.
We show that although adversarial training improves the performance of the networks against adversarial perturbations, it leads to a drop in performance on naturally perturbed samples as well as on clean samples.
In contrast, training with natural perturbations like elastic deformations, occlusions and waves not only improves performance against natural perturbations, but also improves performance against adversarial perturbations.
- Score: 11.537633174586956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we aim to explore the general robustness of neural network
classifiers by utilizing adversarial as well as natural perturbations.
Different from previous works which mainly focus on studying the robustness of
neural networks against adversarial perturbations, we also evaluate their
robustness on natural perturbations before and after robustification. After
standardizing the comparison between adversarial and natural perturbations, we
demonstrate that although adversarial training improves the performance of the
networks against adversarial perturbations, it leads to a drop in performance
on naturally perturbed samples as well as on clean samples. In contrast,
training with natural perturbations like elastic deformations, occlusions and
waves not only improves performance against natural perturbations, but also
improves performance against adversarial perturbations. Additionally, it does
not reduce the accuracy on clean images.
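The abstract only names the three natural perturbation families, so below is a minimal sketch of what they could look like as image transforms, assuming standard formulations (a Simard-style elastic deformation, a Cutout-style square occlusion, and a sinusoidal row-shift wave); all parameter values are illustrative assumptions rather than the paper's settings.

```python
# Hedged sketch of the three natural perturbations named in the abstract,
# implemented with NumPy/SciPy; sigma, alpha, size, amplitude and period are
# illustrative values, not the paper's configuration.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates


def elastic_deform(img, alpha=8.0, sigma=3.0, rng=None):
    """Elastic deformation: warp the image with a smooth random displacement field."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.array([ys + dy, xs + dx])
    if img.ndim == 2:
        return map_coordinates(img, coords, order=1, mode="reflect")
    return np.stack(
        [map_coordinates(img[..., c], coords, order=1, mode="reflect")
         for c in range(img.shape[-1])], axis=-1)


def occlude(img, size=8, rng=None):
    """Occlusion: zero out a random square patch (assumes the image is larger than the patch)."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    y, x = rng.integers(0, h - size), rng.integers(0, w - size)
    out = img.copy()
    out[y:y + size, x:x + size] = 0
    return out


def wave(img, amplitude=2.0, period=16.0):
    """Wave: shift each row horizontally by a sinusoidal offset."""
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        shift = int(round(amplitude * np.sin(2 * np.pi * y / period)))
        out[y] = np.roll(img[y], shift, axis=0)
    return out
```

In natural perturbed training, one of these transforms would be applied to each training image before computing the usual classification loss; the exact perturbation parameters and training schedule are those described in the paper, not this sketch.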
Related papers
- Extreme Miscalibration and the Illusion of Adversarial Robustness [66.29268991629085]
Adversarial Training is often used to increase model robustness.
We show that this observed gain in robustness is an illusion of robustness (IOR).
We urge the NLP community to incorporate test-time temperature scaling into their robustness evaluations.
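For context, test-time temperature scaling divides the logits by a scalar temperature before the softmax, changing the model's confidence without changing its predictions; the snippet below is a generic illustration (not the paper's code), and the temperature value is arbitrary.

```python
# Generic illustration of test-time temperature scaling (not the paper's code):
# dividing logits by T > 1 softens the output distribution, which can undo the
# extreme miscalibration that makes a model look adversarially robust.
import numpy as np


def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


def temperature_scale(logits, T=2.0):
    """Apply softmax to logits / T; the argmax (predicted class) is unchanged."""
    return softmax(logits / T)


logits = np.array([[4.0, 1.0, 0.5]])
print(softmax(logits))                # raw, possibly over-confident probabilities
print(temperature_scale(logits, 2.0)) # same ranking, softer confidence
```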
arXiv Detail & Related papers (2024-02-27T13:49:12Z) - Mitigating Feature Gap for Adversarial Robustness by Feature Disentanglement [61.048842737581865]
Adversarial fine-tuning methods aim to enhance adversarial robustness through fine-tuning the naturally pre-trained model in an adversarial training manner.
We propose a disentanglement-based approach to explicitly model and remove the latent features that cause the feature gap.
Empirical evaluations on three benchmark datasets demonstrate that our approach surpasses existing adversarial fine-tuning methods and adversarial training baselines.
arXiv Detail & Related papers (2024-01-26T08:38:57Z) - Towards Improving Robustness Against Common Corruptions in Object Detectors Using Adversarial Contrastive Learning [10.27974860479791]
This paper proposes an innovative adversarial contrastive learning framework to enhance neural network robustness simultaneously against adversarial attacks and common corruptions.
By focusing on improving performance under adversarial and real-world conditions, our approach aims to bolster the robustness of neural networks in safety-critical applications.
arXiv Detail & Related papers (2023-11-14T06:13:52Z) - F$^2$AT: Feature-Focusing Adversarial Training via Disentanglement of Natural and Perturbed Patterns [74.03108122774098]
Deep neural networks (DNNs) are vulnerable to adversarial examples crafted by well-designed perturbations.
This could lead to disastrous results on critical applications such as self-driving cars, surveillance security, and medical diagnosis.
We propose Feature-Focusing Adversarial Training (F$^2$AT), which forces the model to focus on the core features from natural patterns.
arXiv Detail & Related papers (2023-10-23T04:31:42Z) - On the Over-Memorization During Natural, Robust and Catastrophic Overfitting [58.613079045392446]
Overfitting negatively impacts the generalization ability of deep neural networks (DNNs) in both natural and adversarial training.
We propose a general framework, Distraction Over-Memorization (DOM), which explicitly prevents over-memorization.
arXiv Detail & Related papers (2023-10-13T04:14:51Z) - Understanding Robust Overfitting from the Feature Generalization Perspective [61.770805867606796]
Adversarial training (AT) constructs robust neural networks by incorporating adversarial perturbations into natural data.
It is plagued by the issue of robust overfitting (RO), which severely damages the model's robustness.
In this paper, we investigate RO from a novel feature generalization perspective.
arXiv Detail & Related papers (2023-10-01T07:57:03Z) - Wavelets Beat Monkeys at Adversarial Robustness [0.8702432681310401]
We show how physically inspired structures yield new insights into robustness that were previously only thought possible by meticulously mimicking the human cortex.
arXiv Detail & Related papers (2023-04-19T03:41:30Z) - Natural Perturbed Training for General Robustness of Neural Network Classifiers [0.0]
Natural perturbed training shows better and much faster performance than adversarial training on clean, adversarial and naturally perturbed images.
For CIFAR-10 and STL-10, natural perturbed training even improves the accuracy on clean data and reaches state-of-the-art performance.
arXiv Detail & Related papers (2021-03-21T11:47:38Z) - Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z) - Analysis of Random Perturbations for Robust Convolutional Neural Networks [11.325672232682903]
Recent work has extensively shown that randomized perturbations of neural networks can improve robustness to adversarial attacks.
We show that perturbation based defenses offer almost no robustness to adaptive attacks unless these perturbations are observed during training.
Adversarial examples in a close neighborhood of original inputs show an elevated sensitivity to perturbations in first- and second-order analyses.
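To make the setting concrete, here is a generic sketch of a randomized-input defense of the kind analysed in that paper, assuming a hypothetical `model` callable that maps a batch of images to logits; the noise scale and number of samples are illustrative, not the paper's configuration.

```python
# Hedged sketch of a randomized-perturbation defense: Gaussian noise is added
# to the input and predictions are averaged over several noisy copies.
# `model`, sigma and n_samples are assumptions for illustration only.
import numpy as np


def noisy(x, sigma=0.1, rng=None):
    """Add i.i.d. Gaussian noise to the input."""
    rng = rng or np.random.default_rng()
    return x + rng.normal(0.0, sigma, size=x.shape)


def predict_with_random_perturbation(model, x, sigma=0.1, n_samples=8):
    """Average logits over several independently perturbed copies of x."""
    logits = [model(noisy(x, sigma)) for _ in range(n_samples)]
    return np.mean(logits, axis=0)
```

As the paper points out, such a defense offers little protection against adaptive attacks unless the same perturbations are also observed during training.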
arXiv Detail & Related papers (2020-02-08T03:46:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.