Natural Perturbed Training for General Robustness of Neural Network
Classifiers
- URL: http://arxiv.org/abs/2103.11372v1
- Date: Sun, 21 Mar 2021 11:47:38 GMT
- Title: Natural Perturbed Training for General Robustness of Neural Network
Classifiers
- Authors: Sadaf Gulshad and Arnold Smeulders
- Abstract summary: Natural perturbed learning shows better and much faster performance than adversarial training on clean, adversarial, as well as naturally perturbed images.
For CIFAR-10 and STL-10, natural perturbed training even improves the accuracy on clean data and reaches state-of-the-art performance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We focus on the robustness of neural networks for classification. To permit a
fair comparison between methods to achieve robustness, we first introduce a
standard based on the measurement of a classifier's degradation. Then, we
propose natural perturbed training to robustify the network. Natural
perturbations will be encountered in practice: the difference of two images of
the same object may be approximated by an elastic deformation (when they have
slightly different viewing angles), by occlusions (when they hide differently
behind objects), or by saturation, Gaussian noise etc. Training some fraction
of the epochs on random versions of such variations will help the classifier to
learn better. We conduct extensive experiments on six datasets of varying sizes
and granularity. Natural perturbed learning shows better and much faster
performance than adversarial training on clean, adversarial, and naturally
perturbed images. It even improves general robustness to perturbations not seen
during training. For CIFAR-10 and STL-10, natural perturbed training even
improves the accuracy on clean data and reaches state-of-the-art
performance. Ablation studies verify the effectiveness of natural perturbed
training.
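As a concrete illustration of the training scheme described in the abstract (training some fraction of the epochs on randomly perturbed versions of the images), below is a minimal PyTorch/torchvision sketch. The perturbation parameters (noise level, saturation factor, elastic alpha, occlusion size), the per-epoch 50% schedule, and the names natural_perturb, natural_perturbed_training and degradation are illustrative assumptions rather than the authors' released implementation; torchvision >= 0.13 is assumed for ElasticTransform.

import random
import torch
import torch.nn.functional as F
import torchvision.transforms as T
import torchvision.transforms.functional as TF

def natural_perturb(batch):
    # Apply one randomly chosen natural perturbation to each RGB image
    # (pixel values assumed in [0, 1]); the perturbation types follow the
    # abstract, the exact parameters are assumptions.
    out = []
    for img in batch:
        kind = random.choice(["gaussian_noise", "saturation", "elastic", "occlusion"])
        if kind == "gaussian_noise":
            img = (img + 0.05 * torch.randn_like(img)).clamp(0.0, 1.0)
        elif kind == "saturation":
            img = TF.adjust_saturation(img, saturation_factor=2.0)
        elif kind == "elastic":
            img = T.ElasticTransform(alpha=40.0)(img)
        else:  # occlusion approximated by erasing a random rectangle
            img = T.RandomErasing(p=1.0, scale=(0.05, 0.15))(img)
        out.append(img)
    return torch.stack(out)

def natural_perturbed_training(model, loader, optimizer, epochs=100,
                               perturbed_fraction=0.5, device="cpu"):
    # Train some fraction of the epochs on naturally perturbed copies of each
    # batch and the remaining epochs on clean images; the 50% split is an
    # assumption, not necessarily the paper's schedule.
    model.to(device)
    model.train()
    for epoch in range(epochs):
        perturb_this_epoch = random.random() < perturbed_fraction
        for images, labels in loader:
            if perturb_this_epoch:
                images = natural_perturb(images)
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = F.cross_entropy(model(images), labels)
            loss.backward()
            optimizer.step()

def degradation(clean_acc, perturbed_acc):
    # One simple way to quantify a classifier's degradation for comparison:
    # the relative accuracy drop from clean to perturbed test data. The paper
    # introduces its own standard; this is only a stand-in.
    return (clean_acc - perturbed_acc) / clean_acc

With this schedule the perturbed and clean epochs share the same loss and optimizer, so the only change relative to standard training is the input distribution during the perturbed epochs.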
Related papers
- Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders [101.42201747763178]
Unlearnable examples (UEs) seek to maximize testing error by making subtle modifications to training examples that are correctly labeled.
Our work provides a novel disentanglement mechanism to build an efficient pre-training purification method.
arXiv Detail & Related papers (2024-05-02T16:49:25Z)
- Topology-preserving Adversarial Training for Alleviating Natural Accuracy Degradation [27.11004064848789]
Adversarial training has suffered from the natural accuracy degradation problem.
We propose Topology-pReserving Adversarial traINing (TRAIN) to alleviate the problem.
We show TRAIN achieves up to 8.86% improvement in natural accuracy and 6.33% improvement in robust accuracy.
arXiv Detail & Related papers (2023-11-29T13:05:06Z)
- F$^2$AT: Feature-Focusing Adversarial Training via Disentanglement of Natural and Perturbed Patterns [74.03108122774098]
Deep neural networks (DNNs) are vulnerable to adversarial examples crafted by well-designed perturbations.
This could lead to disastrous results in critical applications such as self-driving cars, surveillance security, and medical diagnosis.
We propose Feature-Focusing Adversarial Training (F$^2$AT), which forces the model to focus on the core features of natural patterns.
arXiv Detail & Related papers (2023-10-23T04:31:42Z)
- Robustness-via-Synthesis: Robust Training with Generative Adversarial Perturbations [10.140147080535224]
Adversarial training with first-order attacks has been one of the most effective defenses against adversarial perturbations to this day.
This study presents a robust training algorithm where the adversarial perturbations are automatically synthesized from a random vector using a generator network.
Experimental results show that the proposed approach attains comparable robustness with various gradient-based and generative robust training techniques.
arXiv Detail & Related papers (2021-08-22T13:15:24Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- Adversarial and Natural Perturbations for General Robustness [11.537633174586956]
We evaluate the robustness of neural networks against natural perturbations before and after robustification.
We show that although adversarial training improves the performance of the networks against adversarial perturbations, it leads to a drop in performance on naturally perturbed samples as well as on clean samples.
In contrast, training on natural perturbations such as elastic deformations, occlusions and waves not only improves performance against natural perturbations, but also improves performance against adversarial perturbations.
arXiv Detail & Related papers (2020-10-03T17:53:18Z)
- Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations [71.00754846434744]
We show that imperceptible additive perturbations can significantly alter the disparity map.
We show that, when used for adversarial data augmentation, our perturbations result in trained models that are more robust.
arXiv Detail & Related papers (2020-09-21T19:20:09Z)
- Learning perturbation sets for robust machine learning [97.6757418136662]
We use a conditional generator that defines the perturbation set over a constrained region of the latent space.
We measure the quality of our learned perturbation sets both quantitatively and qualitatively.
We leverage our learned perturbation sets to train models which are empirically and certifiably robust to adversarial image corruptions and adversarial lighting variations.
arXiv Detail & Related papers (2020-07-16T16:39:54Z)
- Seeing eye-to-eye? A comparison of object recognition performance in humans and deep convolutional neural networks under image manipulation [0.0]
This study aims towards a behavioral comparison of visual core object recognition performance between humans and feedforward neural networks.
Analyses of accuracy revealed that humans not only outperform DCNNs in all conditions, but also display significantly greater robustness towards shape and, most notably, color alterations.
arXiv Detail & Related papers (2020-07-13T10:26:30Z)
- Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data [104.69689574851724]
We propose a paradigm shift from perturbation-based adversarial robustness toward model-based robust deep learning.
Our objective is to provide general training algorithms that can be used to train deep neural networks to be robust against natural variation in data.
arXiv Detail & Related papers (2020-05-20T13:46:31Z)