On Fragile Features and Batch Normalization in Adversarial Training
- URL: http://arxiv.org/abs/2204.12393v1
- Date: Tue, 26 Apr 2022 15:49:33 GMT
- Title: On Fragile Features and Batch Normalization in Adversarial Training
- Authors: Nils Philipp Walter, David Stutz, Bernt Schiele
- Abstract summary: We investigate the role of batch normalization (BN) in adversarial training.
BN is still used in adversarial training, the de facto standard for learning robust features.
Our results indicate that fragile features can be used to learn models with moderate adversarial robustness, while random features cannot.
- Score: 83.25056150489446
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern deep learning architectures utilize batch normalization (BN) to
stabilize training and improve accuracy. It has been shown that the BN layers
alone are surprisingly expressive. In the context of robustness against
adversarial examples, however, BN is argued to increase vulnerability; that is,
BN helps to learn fragile features. Nevertheless, BN is still used in
adversarial training, the de facto standard for learning robust features.
In order to shed light on the role of BN in adversarial training, we
investigate to what extent the expressiveness of BN can be used to robustify
fragile features, in comparison to random features. On CIFAR10, we find that
adversarially fine-tuning just the BN layers can result in non-trivial
adversarial robustness. Adversarially training only the BN layers from scratch,
in contrast, is not able to convey meaningful adversarial robustness. Our
results indicate that fragile features can be used to learn models with
moderate adversarial robustness, while random features cannot.
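The core experiment, adversarial training applied only to the BN parameters of an otherwise frozen network, is straightforward to reproduce. Below is a minimal PyTorch sketch, assuming a pretrained CIFAR10 classifier and an l-infinity PGD attack; the backbone choice, hyperparameters, and helper names are illustrative assumptions, not the authors' exact recipe.

```python
# Minimal sketch: adversarially fine-tune only the BN layers of a
# pretrained network. Assumptions: ResNet-18 backbone, l_inf PGD with
# eps=8/255; none of these are confirmed as the paper's exact setup.
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(num_classes=10)  # stand-in backbone
# model.load_state_dict(...)  # a normally pretrained checkpoint is assumed

# Freeze everything, then make only the BN affine parameters trainable.
for p in model.parameters():
    p.requires_grad = False
bn_params = []
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.weight.requires_grad = True
        m.bias.requires_grad = True
        bn_params += [m.weight, m.bias]

optimizer = torch.optim.SGD(bn_params, lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    """Standard l_inf PGD: random start, signed-gradient steps, projection."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(criterion(model(x_adv), y), x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto eps-ball
            x_adv = x_adv.clamp(0, 1)                 # stay in image range
    return x_adv.detach()

# Toy stand-in for a CIFAR10 DataLoader.
loader = [(torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,)))]

for x, y in loader:
    model.eval()                  # fix BN statistics while crafting the attack
    x_adv = pgd_attack(model, x, y)
    model.train()                 # update BN statistics on adversarial inputs
    optimizer.zero_grad()
    criterion(model(x_adv), y).backward()
    optimizer.step()
```

Only the BN affine parameters (and, implicitly, the running statistics updated in train mode) change here. Per the abstract, this recovers non-trivial robustness only when the frozen backbone carries pretrained (fragile) features; with a randomly initialized backbone, the same procedure does not convey meaningful robustness.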
Related papers
- Unified Batch Normalization: Identifying and Alleviating the Feature Condensation in Batch Normalization and a Unified Framework [55.22949690864962]
Batch Normalization (BN) has become an essential technique in contemporary neural network design.
We propose a two-stage unified framework called Unified Batch Normalization (UBN).
UBN significantly enhances performance across different visual backbones and different vision tasks.
arXiv Detail & Related papers (2023-11-27T16:41:31Z)
- Robust Text Classification: Analyzing Prototype-Based Networks [12.247144383314177]
Prototype-Based Networks (PBNs) have been shown to be robust to noise for computer vision tasks.
We study whether the robustness properties of PBNs transfer to text classification tasks under both targeted and static adversarial attack settings.
We showcase how PBNs' interpretability can help us to understand PBNs' robustness properties.
arXiv Detail & Related papers (2023-11-11T19:34:06Z)
- F$^2$AT: Feature-Focusing Adversarial Training via Disentanglement of Natural and Perturbed Patterns [74.03108122774098]
Deep neural networks (DNNs) are vulnerable to adversarial examples crafted by well-designed perturbations.
This could lead to disastrous results on critical applications such as self-driving cars, surveillance security, and medical diagnosis.
We propose Feature-Focusing Adversarial Training (F$^2$AT), which forces the model to focus on the core features from natural patterns.
arXiv Detail & Related papers (2023-10-23T04:31:42Z)
- Understanding the Failure of Batch Normalization for Transformers in NLP [16.476194435004732]
Batch Normalization (BN) is a technique to accelerate the training of deep neural networks.
BN, however, has failed to gain ground in Natural Language Processing (NLP), which is dominated by Layer Normalization (LN).
Regularized BN (RBN) improves the performance of BN consistently and outperforms or is on par with LN on 17 out of 20 settings.
arXiv Detail & Related papers (2022-10-11T05:18:47Z)
- Enhancing Adversarial Training with Feature Separability [52.39305978984573]
We introduce the new concept of an adversarial training graph (ATG), with which the proposed adversarial training with feature separability (ATFS) boosts intra-class feature similarity and increases inter-class feature variance.
Through comprehensive experiments, we demonstrate that the proposed ATFS framework significantly improves both clean and robust performance.
arXiv Detail & Related papers (2022-05-02T04:04:23Z)
- Diagnosing Batch Normalization in Class Incremental Learning [39.70552266952221]
Batch normalization (BN) standardizes intermediate feature maps and has been widely validated to improve training stability and convergence.
We propose BN Tricks to address the issue by training a better feature extractor while eliminating classification bias.
We show that BN Tricks can bring significant performance gains to all adopted baselines.
arXiv Detail & Related papers (2022-02-16T12:38:43Z)
- "BNN - BN = ?": Training Binary Neural Networks without Batch Normalization [92.23297927690149]
Batch normalization (BN) is a key facilitator and considered essential for state-of-the-art binary neural networks (BNNs).
We extend their framework to training BNNs, and for the first time demonstrate that BN can be completely removed from BNN training and inference regimes.
arXiv Detail & Related papers (2021-04-16T16:46:57Z)
- Batch Normalization Increases Adversarial Vulnerability and Decreases Adversarial Transferability: A Non-Robust Feature Perspective [91.5105021619887]
Batch normalization (BN) has been widely used in modern deep neural networks (DNNs).
BN is observed to increase model accuracy at the cost of adversarial robustness.
It remains unclear whether BN mainly favors learning robust features (RFs) or non-robust features (NRFs).
arXiv Detail & Related papers (2020-10-07T10:24:33Z)
- How Does BN Increase Collapsed Neural Network Filters? [34.886702335022015]
Filter collapse is common in deep neural networks (DNNs) with batch normalization (BN) and rectified linear activation functions (e.g., ReLU, Leaky ReLU).
We propose a simple yet effective approach named post-shifted BN (psBN), which has the same representation ability as BN while being able to automatically make BN parameters trainable again as they saturate during training.
arXiv Detail & Related papers (2020-01-30T09:00:08Z)