AugRmixAT: A Data Processing and Training Method for Improving Multiple
Robustness and Generalization Performance
- URL: http://arxiv.org/abs/2207.10290v1
- Date: Thu, 21 Jul 2022 04:02:24 GMT
- Title: AugRmixAT: A Data Processing and Training Method for Improving Multiple
Robustness and Generalization Performance
- Authors: Xiaoliang Liu, Furao Shen, Jian Zhao, Changhai Nie
- Abstract summary: Much prior work has sought to improve specific forms of robustness in deep neural network models.
In this paper, we propose a new data processing and training method, called AugRmixAT, which can simultaneously improve the generalization ability and multiple forms of robustness of neural network models.
- Score: 10.245536402327096
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are powerful, but they also have shortcomings such as
their sensitivity to adversarial examples, noise, blur, occlusion, etc.
Moreover, ensuring the reliability and robustness of deep neural network models
is crucial for their application in safety-critical areas. Much prior work has
sought to improve specific forms of robustness. However, we find that one form
of robustness is often improved at the expense of other forms of robustness or
of the generalization ability of the neural network model. In
particular, adversarial training methods significantly hurt the generalization
performance on unperturbed data when improving adversarial robustness. In this
paper, we propose a new data processing and training method, called AugRmixAT,
which can simultaneously improve the generalization ability and multiple forms
of robustness of neural network models. Finally, we validate the effectiveness of
AugRmixAT on the CIFAR-10/100 and Tiny-ImageNet datasets. The experiments
demonstrate that AugRmixAT can improve the model's generalization performance
while enhancing the white-box robustness, black-box robustness, common
corruption robustness, and partial occlusion robustness.
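The abstract does not detail the algorithm, so the following PyTorch-style sketch is only a rough illustration of the two ingredients the name suggests: convexly mixing randomly augmented views (AugMix-style) and adversarial training. Every function here (make_augmented_mix, pgd_attack, augrmixat_step) and the weighting lam are assumptions, not the authors' actual method.

```python
import torch
import torch.nn.functional as F

def make_augmented_mix(x, augment, k=2):
    """Convexly mix k randomly augmented views of a batch x."""
    weights = torch.distributions.Dirichlet(torch.ones(k)).sample()
    return sum(w * augment(x) for w in weights)

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    """Standard PGD: iteratively step along the sign of the input gradient."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)  # project
    return x_adv.detach()

def augrmixat_step(model, x, y, augment, lam=0.5):
    """One training step on a mixed-augmented view and an adversarial view."""
    x_mix = make_augmented_mix(x, augment)
    x_adv = pgd_attack(model, x, y)
    return (lam * F.cross_entropy(model(x_mix), y)
            + (1 - lam) * F.cross_entropy(model(x_adv), y))
```

In practice, augrmixat_step would be called inside an ordinary training loop and the returned loss backpropagated; the paper's point is that training on both kinds of views at once can avoid trading clean accuracy for robustness.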
Related papers
- MOREL: Enhancing Adversarial Robustness through Multi-Objective Representation Learning [1.534667887016089]
Deep neural networks (DNNs) are vulnerable to slight adversarial perturbations.
We show that strong feature representation learning during training can significantly enhance the original model's robustness.
We propose MOREL, a multi-objective feature representation learning approach, encouraging classification models to produce similar features for inputs within the same class, despite perturbations.
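As a rough illustration of the multi-objective idea summarized above (not MOREL's actual losses), one can pair the usual classification objective with a term that pulls together the features a model produces for clean and perturbed versions of the same input:

```python
import torch
import torch.nn.functional as F

def morel_style_loss(features_clean, features_adv,
                     logits_clean, logits_adv, y, beta=1.0):
    # Classification objective on both the clean and the perturbed view.
    ce = F.cross_entropy(logits_clean, y) + F.cross_entropy(logits_adv, y)
    # Alignment objective: features of the same (clean, perturbed) pair,
    # and hence the same class, should be similar (cosine similarity -> 1).
    align = 1 - F.cosine_similarity(features_clean, features_adv, dim=1).mean()
    return ce + beta * align
```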
arXiv Detail & Related papers (2024-10-02T16:05:03Z)
- Data-Driven Lipschitz Continuity: A Cost-Effective Approach to Improve Adversarial Robustness [47.9744734181236]
We explore the concept of Lipschitz continuity to certify the robustness of deep neural networks (DNNs) against adversarial attacks.
We propose a novel algorithm that remaps the input domain into a constrained range, reducing the Lipschitz constant and potentially enhancing robustness.
Our method achieves the best robust accuracy for CIFAR10, CIFAR100, and ImageNet datasets on the RobustBench leaderboard.
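A minimal sketch of the remapping idea summarized above, under the illustrative assumption that the remap is a simple affine squeeze of the pixel range (the paper's actual remapping may differ):

```python
import torch

def remap(x, lo=0.25, hi=0.75):
    """Affinely remap inputs from [0, 1] into the narrower range [lo, hi]."""
    return lo + (hi - lo) * x  # Lipschitz constant of this map is hi - lo < 1

# Composing remap with a model f gives f(remap(x)), whose Lipschitz constant
# is at most (hi - lo) times that of f, since Lip(f o g) <= Lip(f) * Lip(g).
```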
arXiv Detail & Related papers (2024-06-28T03:10:36Z)
- The Effectiveness of Random Forgetting for Robust Generalization [21.163070161951868]
We introduce a novel learning paradigm called "Forget to Mitigate Overfitting" (FOMO).
FOMO alternates between the forgetting phase, which randomly forgets a subset of weights, and the relearning phase, which emphasizes learning generalizable features.
Our experiments show that FOMO alleviates robust overfitting by significantly reducing the gap between the best and last robust test accuracy.
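A minimal sketch of what the forgetting phase could look like, assuming "forgetting" means re-initializing a random fraction of weights; the fraction and the re-initialization scale below are illustrative, not the paper's exact scheme:

```python
import torch

@torch.no_grad()
def forget_weights(model, frac=0.1):
    """Randomly re-initialize a fraction of each weight matrix in place."""
    for p in model.parameters():
        if p.dim() < 2:  # skip biases / norm parameters in this sketch
            continue
        mask = torch.rand_like(p) < frac
        fresh = torch.randn_like(p) * p.std()  # re-init at a similar scale
        p[mask] = fresh[mask]
```

Relearning then amounts to continuing ordinary (adversarial) training after each call to forget_weights.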
arXiv Detail & Related papers (2024-02-18T23:14:40Z)
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted by up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- NoisyMix: Boosting Robustness by Combining Data Augmentations, Stability Training, and Noise Injections [46.745755900939216]
We introduce NoisyMix, a training scheme that combines data augmentations with stability training and noise injections to improve both model robustness and in-domain accuracy.
We demonstrate the benefits of NoisyMix on a range of benchmark datasets, including ImageNet-C, ImageNet-R, and ImageNet-P.
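A rough sketch of the recipe summarized above: augment, inject noise, and add a stability term keeping predictions on clean and noisy views consistent. The exact augmentations and losses in the paper may differ; the noise scale sigma and weight gamma are assumptions.

```python
import torch
import torch.nn.functional as F

def noisymix_style_loss(model, x, y, augment, sigma=0.1, gamma=1.0):
    x_aug = augment(x)                                  # data augmentation
    x_noisy = x_aug + sigma * torch.randn_like(x_aug)   # noise injection
    logits_clean, logits_noisy = model(x_aug), model(x_noisy)
    ce = F.cross_entropy(logits_clean, y)
    # Stability training: KL between predictions on clean and noisy views.
    stability = F.kl_div(F.log_softmax(logits_noisy, dim=1),
                         F.softmax(logits_clean, dim=1),
                         reduction="batchmean")
    return ce + gamma * stability
```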
arXiv Detail & Related papers (2022-02-02T19:53:35Z)
- Improving Corruption and Adversarial Robustness by Enhancing Weak Subnets [91.9346332103637]
We propose a novel robust training method which explicitly identifies and enhances weak subnets during training to improve robustness.
Specifically, we develop a search algorithm to find particularly weak subnets and propose to explicitly strengthen them via knowledge distillation from the full network.
We show that our EWS greatly improves the robustness against corrupted images as well as the accuracy on clean data.
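A compact sketch of the search-then-distill loop described above. Here subnet_sampler is a hypothetical callable that runs a forward pass through a randomly thinned subnet; the candidate count and temperature are assumptions, not EWS's actual settings.

```python
import torch
import torch.nn.functional as F

def ews_style_loss(model, subnet_sampler, x, y, n_candidates=4, tau=4.0):
    full_logits = model(x)
    # Search: among a few sampled subnets, keep the worst-performing one.
    weak_logits, weak_loss = None, float("-inf")
    for _ in range(n_candidates):
        logits = subnet_sampler(model, x)  # forward through a random subnet
        loss = F.cross_entropy(logits, y)
        if loss.item() > weak_loss:
            weak_logits, weak_loss = logits, loss.item()
    # Strengthen: distill the full network's predictions into the weak subnet.
    kd = F.kl_div(F.log_softmax(weak_logits / tau, dim=1),
                  F.softmax(full_logits.detach() / tau, dim=1),
                  reduction="batchmean") * tau * tau
    return F.cross_entropy(full_logits, y) + kd
```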
arXiv Detail & Related papers (2022-01-30T09:36:19Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
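One plausible formalization of such joint perturbations, read off the summary above rather than taken from the paper itself, is a min-max problem over simultaneous input and weight perturbations:

```latex
% Illustrative notation: \epsilon_x and \epsilon_w bound the input and
% weight perturbations respectively; L is the training loss.
\min_{\theta} \; \mathbb{E}_{(x,y)}
\left[
  \max_{\|\delta\| \le \epsilon_x,\; \|\Delta\| \le \epsilon_w}
  \mathcal{L}\bigl(f_{\theta + \Delta}(x + \delta),\, y\bigr)
\right]
```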
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning [90.44219200633286]
We propose a simple yet very effective adversarial fine-tuning approach based on a "slow start, fast decay" learning rate scheduling strategy.
Experimental results show that the proposed adversarial fine-tuning approach outperforms the state-of-the-art methods on CIFAR-10, CIFAR-100 and ImageNet datasets.
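A sketch of a "slow start, fast decay" schedule as described above; the exact shape (linear warmup followed by exponential decay) is an assumption, not necessarily the paper's choice of curve.

```python
def slow_start_fast_decay(step, warmup_steps=1000, base_lr=0.01, decay=0.995):
    """Learning rate that warms up slowly, then decays quickly."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps   # slow start
    return base_lr * decay ** (step - warmup_steps)  # fast decay
```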
arXiv Detail & Related papers (2020-12-25T20:50:15Z)