Revisiting Batch Normalization for Improving Corruption Robustness
- URL: http://arxiv.org/abs/2010.03630v4
- Date: Thu, 28 Jan 2021 08:35:52 GMT
- Title: Revisiting Batch Normalization for Improving Corruption Robustness
- Authors: Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon
- Abstract summary: We interpret corruption robustness as a domain shift and propose to rectify batch normalization statistics for improving model robustness.
We find that simply estimating and adapting the BN statistics on a few representative samples, without retraining the model, improves corruption robustness by a large margin.
- Score: 85.20742045853738
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of DNNs trained on clean images has been shown to decrease when the test images contain common corruptions. In this work, we interpret corruption robustness as a domain shift and propose to rectify batch normalization (BN) statistics to improve model robustness. This is motivated by perceiving the shift from the clean domain to the corruption domain as a style shift that is represented by the BN statistics. We find that simply estimating and adapting the BN statistics on a few (32, for instance) representative samples, without retraining the model, improves corruption robustness by a large margin on several benchmark datasets with a wide range of model architectures. For example, on ImageNet-C, statistics adaptation improves the top-1 accuracy of ResNet50 from 39.2% to 48.7%. Moreover, we find that the same technique further improves the top-1 accuracy of state-of-the-art robust models from 58.1% to 63.3%.
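Since the method amounts to re-estimating BN statistics at test time, the idea fits in a few lines. Below is a minimal PyTorch sketch, not the authors' released code; the batch of corrupted images is a hypothetical stand-in for, e.g., 32 real ImageNet-C samples.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

def adapt_bn_statistics(model: nn.Module, corrupted_batch: torch.Tensor) -> nn.Module:
    """Re-estimate BN running statistics from corrupted samples; weights stay frozen."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.reset_running_stats()  # discard the clean-domain statistics
            m.momentum = None        # use a cumulative average for re-estimation
    model.train()                    # BN only updates running stats in train mode
    with torch.no_grad():            # no gradients: the model is not retrained
        model(corrupted_batch)
    model.eval()                     # inference now uses the adapted statistics
    return model

model = resnet50(weights="IMAGENET1K_V1")
corrupted_batch = torch.randn(32, 3, 224, 224)  # placeholder for real corrupted images
model = adapt_bn_statistics(model, corrupted_batch)
```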
Related papers
- Improving Out-of-Distribution Data Handling and Corruption Resistance via Modern Hopfield Networks [0.0]
This study explores the potential of Modern Hopfield Networks (MHN) in improving the ability of computer vision models to handle out-of-distribution data.
We suggest integrating MHN into the baseline models to enhance their robustness.
Our research shows that the proposed integration consistently improves model performance on the MNIST-C dataset.
arXiv Detail & Related papers (2024-08-21T03:26:16Z)
- FFT-based Selection and Optimization of Statistics for Robust Recognition of Severely Corrupted Images [19.07004663565609]
This paper presents a novel approach to improving the robustness of any classification model, especially on severely corrupted images.
Our method (FROST) employs high-frequency features to detect the input image's corruption type and selects layer-wise feature normalization statistics accordingly.
FROST provides the state-of-the-art results for different models and datasets, outperforming competitors on ImageNet-C by up to 37.1% relative gain.
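A rough reading of the detection step, sketched below as an assumption rather than the FROST implementation: score an image by the share of spectral energy in the high-frequency band of its FFT, and use that score to select among pre-computed normalization statistics.

```python
import torch

def high_frequency_ratio(image: torch.Tensor, radius: int = 16) -> float:
    """Fraction of spectral energy outside a low-frequency window; (C, H, W) input."""
    gray = image.mean(dim=0)  # collapse channels to a single (H, W) map
    spectrum = torch.fft.fftshift(torch.fft.fft2(gray)).abs()
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    low = spectrum[cy - radius:cy + radius, cx - radius:cx + radius].sum()
    return float((spectrum.sum() - low) / spectrum.sum())

def select_statistics(image: torch.Tensor, stats_banks: dict, threshold: float = 0.5):
    """Pick a (hypothetical) bank of normalization statistics by corruption family."""
    family = "noise_like" if high_frequency_ratio(image) > threshold else "blur_like"
    return stats_banks[family]
```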
arXiv Detail & Related papers (2024-03-21T12:01:54Z)
- Dynamic Batch Norm Statistics Update for Natural Robustness [5.366500153474747]
We propose a unified framework consisting of a corruption-detection model and a BN statistics update.
Our results demonstrate accuracy improvements of about 8% on CIFAR10-C and 4% on ImageNet-C.
arXiv Detail & Related papers (2023-10-31T17:20:30Z)
- Masked Images Are Counterfactual Samples for Robust Fine-tuning [77.82348472169335]
Fine-tuning deep learning models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness.
We propose a novel fine-tuning method, which uses masked images as counterfactual samples to help improve the robustness of the fine-tuned model.
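As a simplified illustration only (the paper guides masking with class-activation information; this hedged sketch masks patches at random and refills them from a second image):

```python
import torch

def masked_counterfactual(image: torch.Tensor, filler: torch.Tensor,
                          patch: int = 32, mask_ratio: float = 0.3) -> torch.Tensor:
    """Randomly replace patches of `image` with patches from `filler`; (C, H, W) inputs."""
    out = image.clone()
    _, h, w = image.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            if torch.rand(1).item() < mask_ratio:
                out[:, y:y + patch, x:x + patch] = filler[:, y:y + patch, x:x + patch]
    return out
```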
arXiv Detail & Related papers (2023-03-06T11:51:28Z)
- Removing Batch Normalization Boosts Adversarial Training [83.08844497295148]
Adversarial training (AT) defends deep neural networks against adversarial attacks.
A major bottleneck is the widely used batch normalization (BN), which struggles to model the different statistics of clean and adversarial training samples in AT.
Our normalizer-free robust training (NoFrost) method extends recent advances in normalizer-free networks to AT.
arXiv Detail & Related papers (2022-07-04T01:39:37Z)
- Benchmarks for Corruption Invariant Person Re-identification [31.919264399996475]
We study corruption-invariant learning on single- and cross-modality datasets, including Market-1501, CUHK03, MSMT17, RegDB, and SYSU-MM01.
Transformer-based models are more robust to corrupted images than CNN-based models.
Cross-dataset generalization improves as corruption robustness increases.
arXiv Detail & Related papers (2021-11-01T12:14:28Z)
- Test-time Batch Statistics Calibration for Covariate Shift [66.7044675981449]
We propose to adapt the deep models to the novel environment during inference.
We present a general formulation, $\alpha$-BN, to calibrate the batch statistics.
We also present a novel loss function to form a unified test-time adaptation framework, Core.
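Assuming (our reading of the abstract, not the paper's released code) that $\alpha$-BN linearly interpolates between the source statistics stored in the model and those of the current test batch, the calibration can be sketched as:

```python
import torch

def alpha_bn_calibrate(mu_source: torch.Tensor, var_source: torch.Tensor,
                       features: torch.Tensor, alpha: float = 0.8):
    """Blend source-domain BN statistics with test-batch statistics.

    features: (N, C, H, W) activations entering a BN layer; alpha weights the source.
    """
    mu_test = features.mean(dim=(0, 2, 3))
    var_test = features.var(dim=(0, 2, 3), unbiased=False)
    mu = alpha * mu_source + (1.0 - alpha) * mu_test
    var = alpha * var_source + (1.0 - alpha) * var_test
    return mu, var
```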
arXiv Detail & Related papers (2021-10-06T08:45:03Z)
- Improving robustness against common corruptions with frequency biased models [112.65717928060195]
Unseen image corruptions can cause a surprisingly large drop in performance.
Image corruption types have different characteristics in the frequency spectrum and would benefit from a targeted type of data augmentation.
We propose a new regularization scheme that minimizes the total variation (TV) of convolution feature-maps to increase high-frequency robustness.
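The regularizer itself is compact; below is a minimal PyTorch sketch of a total-variation penalty on feature maps (our illustration of the stated idea, not the paper's code), added to the task loss with a hypothetical weight `lambda_tv`:

```python
import torch

def feature_tv_loss(features: torch.Tensor) -> torch.Tensor:
    """Mean absolute difference between neighboring activations; (N, C, H, W) input."""
    dh = (features[:, :, 1:, :] - features[:, :, :-1, :]).abs().mean()
    dw = (features[:, :, :, 1:] - features[:, :, :, :-1]).abs().mean()
    return dh + dw

# Hypothetical usage: total_loss = task_loss + lambda_tv * feature_tv_loss(feats)
```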
arXiv Detail & Related papers (2021-03-30T10:44:50Z)
- On Interaction Between Augmentations and Corruptions in Natural Corruption Robustness [78.6626755563546]
Several new data augmentations have been proposed that significantly improve performance on ImageNet-C.
We develop a new measure of the distance between augmentations and corruptions, called the Minimal Sample Distance, and demonstrate a strong correlation between similarity and performance.
We observe a significant degradation in corruption robustness when the test-time corruptions are sampled to be perceptually dissimilar from ImageNet-C.
Our results suggest that corruption error can be reduced by training on perceptually similar augmentations, and that data augmentations may not generalize well beyond the existing benchmark.
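A hedged sketch of such a measure (our reading; the paper's exact definition may differ): embed augmented and corrupted samples with a feature extractor and take the smallest pairwise feature-space distance.

```python
import torch

def minimal_sample_distance(aug_feats: torch.Tensor, corr_feats: torch.Tensor) -> torch.Tensor:
    """Smallest feature-space distance between any augmented and any corrupted sample.

    aug_feats: (Na, D) features of augmented samples; corr_feats: (Nc, D) of corrupted ones.
    """
    return torch.cdist(aug_feats, corr_feats).min()
```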
arXiv Detail & Related papers (2021-02-22T18:58:39Z)
- Improving robustness against common corruptions by covariate shift adaptation [29.27289096759534]
State-of-the-art machine vision models are vulnerable to image corruptions like blurring or compression artefacts.
We argue that popular benchmarks to measure model robustness against common corruptions underestimate model robustness in many (but not all) application scenarios.
arXiv Detail & Related papers (2020-06-30T17:01:37Z)