Improving Corruption and Adversarial Robustness by Enhancing Weak Subnets
- URL: http://arxiv.org/abs/2201.12765v1
- Date: Sun, 30 Jan 2022 09:36:19 GMT
- Title: Improving Corruption and Adversarial Robustness by Enhancing Weak Subnets
- Authors: Yong Guo, David Stutz, Bernt Schiele
- Abstract summary: We propose a novel robust training method which explicitly identifies and enhances weak subnets during training to improve robustness.
Specifically, we develop a search algorithm to find particularly weak subnets and propose to explicitly strengthen them via knowledge distillation from the full network.
We show that our EWS greatly improves the robustness against corrupted images as well as the accuracy on clean data.
- Score: 91.9346332103637
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have achieved great success in many computer vision
tasks. However, deep networks have been shown to be very susceptible to
corrupted or adversarial images, which often result in significant performance
drops. In this paper, we observe that weak subnetwork (subnet) performance is
correlated with a lack of robustness against corruptions and adversarial
attacks. Based on that observation, we propose a novel robust training method
which explicitly identifies and enhances weak subnets (EWS) during training to
improve robustness. Specifically, we develop a search algorithm to find
particularly weak subnets and propose to explicitly strengthen them via
knowledge distillation from the full network. We show that our EWS greatly
improves the robustness against corrupted images as well as the accuracy on
clean data. Being complementary to many state-of-the-art data augmentation
approaches, EWS consistently improves corruption robustness on top of many of
these approaches. Moreover, EWS is also able to boost the adversarial
robustness when combined with popular adversarial training methods.
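The abstract suggests a simple training loop shape: find a weak subnet of the full network, then distill the full network's predictions into it. Below is a minimal PyTorch sketch of that loop; the uniform random channel masking, the toy network, and the `alpha` and `temperature` values are illustrative assumptions, not the paper's actual search algorithm or architecture.
```python
import torch
import torch.nn.functional as F

def sample_subnet_mask(num_channels, keep_ratio=0.8):
    # Uniform random channel keeping; the paper instead *searches* for
    # particularly weak subnets rather than sampling at random.
    return (torch.rand(num_channels) < keep_ratio).float()

class TinyNet(torch.nn.Module):
    # Toy stand-in for the full network used in the paper's experiments.
    def __init__(self, num_classes=10, width=32):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, width, 3, padding=1)
        self.head = torch.nn.Linear(width, num_classes)

    def forward(self, x, channel_mask=None):
        h = F.relu(self.conv(x))
        if channel_mask is not None:           # dropping channels yields a subnet
            h = h * channel_mask.view(1, -1, 1, 1)
        return self.head(h.mean(dim=(2, 3)))   # global average pooling

def ews_step(model, x, y, alpha=1.0, temperature=4.0):
    # Task loss on the full network plus a distillation term that pushes
    # the (weak) subnet's predictions toward the full network's.
    logits_full = model(x)
    mask = sample_subnet_mask(model.conv.out_channels).to(x.device)
    logits_sub = model(x, channel_mask=mask)
    task_loss = F.cross_entropy(logits_full, y)
    kd_loss = F.kl_div(
        F.log_softmax(logits_sub / temperature, dim=1),
        F.softmax(logits_full.detach() / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    return task_loss + alpha * kd_loss
```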
Related papers
- Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly-robust instance reweighted adversarial framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
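The summary does not spell out how the importance weights are computed, but a KL-divergence regularizer toward the uniform distribution is known to give closed-form softmax weights over per-example losses. A sketch under that assumption, with `tau` as an illustrative temperature:
```python
import torch

def kl_regularized_weights(per_example_losses, tau=1.0):
    # Maximizing sum_i w_i * l_i - tau * KL(w || uniform) over the simplex
    # has the closed-form solution w_i proportional to exp(l_i / tau):
    # harder examples get larger weights, tau controls the concentration.
    return torch.softmax(per_example_losses / tau, dim=0)

# Illustrative usage with hypothetical per-example adversarial losses:
losses = torch.tensor([0.2, 1.5, 0.7])
weights = kl_regularized_weights(losses, tau=0.5)
reweighted_loss = (weights.detach() * losses).sum()
```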
arXiv Detail & Related papers (2023-08-01T06:16:18Z)
- Improved Adversarial Training Through Adaptive Instance-wise Loss Smoothing [5.1024659285813785]
Adversarial training has been the most successful defense against adversarial attacks.
We propose a new adversarial training method: Instance-adaptive Smoothness Enhanced Adversarial Training.
Our method achieves state-of-the-art robustness against $\ell_\infty$-norm constrained attacks.
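The summary leaves the instance-wise smoothing rule unspecified; one natural reading is a TRADES-style objective with a per-example weight on the smoothness (KL) term. A hedged sketch, where `instance_weights` is a hypothetical stand-in for the paper's adaptive rule:
```python
import torch
import torch.nn.functional as F

def instance_smoothed_loss(model, x, x_adv, y, instance_weights):
    # TRADES-style objective where the smoothness (KL) term carries a
    # per-example weight; `instance_weights` (shape [batch]) stands in
    # for whatever adaptive rule the paper derives.
    logits_clean = model(x)
    logits_adv = model(x_adv)
    task = F.cross_entropy(logits_clean, y)
    per_example_kl = F.kl_div(
        F.log_softmax(logits_adv, dim=1),
        F.softmax(logits_clean, dim=1),
        reduction="none",
    ).sum(dim=1)
    return task + (instance_weights * per_example_kl).mean()
```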
arXiv Detail & Related papers (2023-03-24T15:41:40Z)
- Rethinking Robust Contrastive Learning from the Adversarial Perspective [2.3333090554192615]
We find significant disparities between adversarial and clean representations in standard-trained networks.
Adversarial training mitigates these disparities and fosters the convergence of representations toward a universal set.
arXiv Detail & Related papers (2023-02-05T22:43:50Z)
- AugRmixAT: A Data Processing and Training Method for Improving Multiple Robustness and Generalization Performance [10.245536402327096]
Much previous work has proposed methods to improve specific types of robustness in deep neural network models.
In this paper, we propose a new data processing and training method, called AugRmixAT, which can simultaneously improve the generalization ability and multiple forms of robustness of neural network models.
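The summary gives no algorithmic detail, so the sketch below only illustrates the general pattern such methods build on: training on clean, augmented, and adversarial views of each batch at once. The FGSM attack and the equal-weight average are placeholders, not AugRmixAT's actual recipe.
```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    # Single-step attack used here only as a placeholder adversary.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def multi_view_objective(model, x_clean, x_aug, y):
    # Average the loss over clean, augmented, and adversarial views of the
    # batch; x_aug would come from an AugMix/RandAugment-style pipeline.
    x_adv = fgsm(model, x_clean, y)
    views = (x_clean, x_aug, x_adv)
    return sum(F.cross_entropy(model(v), y) for v in views) / len(views)
```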
arXiv Detail & Related papers (2022-07-21T04:02:24Z)
- Understanding Robust Learning through the Lens of Representation Similarities [37.66877172364004]
Robustness to adversarial examples has emerged as a desirable property for deep neural networks (DNNs).
In this paper, we aim to understand how the properties of representations learned by robust training differ from those obtained from standard, non-robust training.
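The summary does not name a similarity measure; linear centered kernel alignment (CKA) is a standard choice for comparing representations across robust and non-robust networks, sketched here:
```python
import torch

def linear_cka(X, Y):
    # Linear Centered Kernel Alignment between two representation matrices
    # of shape [n_examples, n_features]; values near 1 indicate similar
    # representational geometry.
    X = X - X.mean(dim=0, keepdim=True)
    Y = Y - Y.mean(dim=0, keepdim=True)
    hsic = (X.t() @ Y).norm() ** 2
    return hsic / ((X.t() @ X).norm() * (Y.t() @ Y).norm())
```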
arXiv Detail & Related papers (2022-06-20T16:06:20Z)
- Sparsity Winning Twice: Better Robust Generalization from More Efficient Training [94.92954973680914]
We introduce two alternatives for sparse adversarial training: (i) static sparsity and (ii) dynamic sparsity.
We find both methods yield a win-win: substantially shrinking the robust generalization gap and alleviating robust overfitting.
Our approaches can be combined with existing regularizers, establishing new state-of-the-art results in adversarial training.
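As a rough illustration of the static variant: fix a pruning mask once (here by weight magnitude, an assumption) and keep it fixed throughout adversarial training, re-applying it after each optimizer step. Dynamic sparsity would instead periodically prune and regrow connections.
```python
import torch

def magnitude_mask(weight, sparsity=0.9):
    # Static sparsity: choose a fixed mask once (here by weight magnitude)
    # before adversarial training begins.
    k = int(weight.numel() * sparsity)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

def reapply_masks(model, masks):
    # Zero out pruned weights after every optimizer step so the
    # subnetwork stays sparse throughout adversarial training.
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])
```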
arXiv Detail & Related papers (2022-02-20T15:52:08Z)
- Rethinking Clustering for Robustness [56.14672993686335]
ClusTR is a clustering-based and adversary-free training framework to learn robust models.
ClusTR outperforms adversarially-trained networks by up to 4% under strong PGD attacks.
arXiv Detail & Related papers (2020-06-13T16:55:51Z)
- Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes [51.31334977346847]
We train networks to form coarse impressions based on the information in higher bit planes, and use the lower bit planes only to refine their prediction.
We demonstrate that, by imposing consistency on the representations learned across differently quantized images, the adversarial robustness of networks improves significantly.
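A minimal sketch of the idea as described: keep only the most significant bit planes of each pixel to form a coarse image, then penalize disagreement between predictions on the coarse and full images. The KL penalty and the 4-plane default are assumptions.
```python
import torch
import torch.nn.functional as F

def keep_high_bit_planes(x, num_planes=4):
    # Keep only the `num_planes` most significant bits of each 8-bit
    # intensity, producing a "coarse impression" of the image.
    ints = (x * 255).round().to(torch.uint8)
    mask = (0xFF << (8 - num_planes)) & 0xFF
    return (ints & mask).float() / 255.0

def bitplane_consistency_loss(model, x):
    # Penalize disagreement between predictions on the coarse
    # (high-bit-plane) image and on the full image.
    p_full = F.softmax(model(x), dim=1)
    log_p_coarse = F.log_softmax(model(keep_high_bit_planes(x)), dim=1)
    return F.kl_div(log_p_coarse, p_full.detach(), reduction="batchmean")
```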
arXiv Detail & Related papers (2020-04-01T09:31:10Z)
- HYDRA: Pruning Adversarially Robust Neural Networks [58.061681100058316]
Deep learning faces two key challenges: lack of robustness against adversarial attacks and large neural network size.
We propose to make pruning techniques aware of the robust training objective and let the training objective guide the search for which connections to prune.
We demonstrate that our approach, titled HYDRA, achieves compressed networks with state-of-the-art benign and robust accuracy, simultaneously.
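A sketch of the core mechanism: each weight gets a learnable importance score, the top-scoring fraction is kept, and a straight-through estimator lets the (adversarial) training loss update the scores. The layer design and straight-through trick here are illustrative, not the paper's exact formulation.
```python
import torch
import torch.nn.functional as F

class ScoredConv(torch.nn.Module):
    # Convolution gated by learnable importance scores: keep the
    # top-scoring weights and let the robust training objective decide
    # which connections survive pruning.
    def __init__(self, in_ch, out_ch, k=3, sparsity=0.9):
        super().__init__()
        self.conv = torch.nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.scores = torch.nn.Parameter(torch.rand_like(self.conv.weight))
        self.sparsity = sparsity

    def forward(self, x):
        n_keep = int(self.scores.numel() * (1 - self.sparsity))
        threshold = self.scores.flatten().topk(n_keep).values[-1]
        hard_mask = (self.scores >= threshold).float()
        # Straight-through: hard mask in the forward pass, identity
        # gradient to the scores in the backward pass.
        mask = hard_mask + self.scores - self.scores.detach()
        return F.conv2d(x, self.conv.weight * mask, self.conv.bias,
                        padding=self.conv.padding)
```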
arXiv Detail & Related papers (2020-02-24T19:54:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.