Improving robustness against common corruptions by covariate shift
adaptation
- URL: http://arxiv.org/abs/2006.16971v2
- Date: Fri, 23 Oct 2020 04:37:23 GMT
- Title: Improving robustness against common corruptions by covariate shift
adaptation
- Authors: Steffen Schneider, Evgenia Rusak, Luisa Eck, Oliver Bringmann, Wieland
Brendel, Matthias Bethge
- Abstract summary: State-of-the-art machine vision models are vulnerable to image corruptions like blurring or compression artefacts.
We argue that popular benchmarks to measure model robustness against common corruptions underestimate model robustness in many (but not all) application scenarios.
- Score: 29.27289096759534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Today's state-of-the-art machine vision models are vulnerable to image
corruptions like blurring or compression artefacts, limiting their performance
in many real-world applications. We here argue that popular benchmarks to
measure model robustness against common corruptions (like ImageNet-C)
underestimate model robustness in many (but not all) application scenarios. The
key insight is that in many scenarios, multiple unlabeled examples of the
corruptions are available and can be used for unsupervised online adaptation.
Replacing the activation statistics estimated by batch normalization on the
training set with the statistics of the corrupted images consistently improves
the robustness across 25 different popular computer vision models. Using the
corrected statistics, ResNet-50 reaches 62.2% mCE on ImageNet-C compared to
76.7% without adaptation. With the more robust DeepAugment+AugMix model, we
improve the current state of the art for a ResNet-50 model from 53.6% mCE to
45.4% mCE. Even adapting to a single sample improves robustness for the
ResNet-50 and AugMix models, and 32 samples are sufficient to improve the
current state of the art for a ResNet-50 architecture. We argue that results
with adapted statistics should be included whenever reporting scores in
corruption benchmarks and other out-of-distribution generalization settings.
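The adaptation step described in the abstract — replacing the batch-norm statistics estimated on the training set with statistics computed from unlabeled corrupted samples — can be sketched as follows. This is a minimal NumPy illustration under our own naming, not the authors' code; the `n_prior` pseudo-count blending source and target statistics is in the spirit of the paper's prior-strength interpolation, with `n_prior = 0` meaning the target statistics fully replace the training ones.

```python
import numpy as np

def adapt_bn_statistics(mu_train, var_train, x_target, n_prior=0):
    """Blend training-set BN statistics with statistics estimated
    from a batch of unlabeled (possibly corrupted) target samples.

    n_prior is a pseudo-count weight on the training statistics;
    with n_prior = 0, the target statistics replace them entirely.
    """
    n = x_target.shape[0]
    mu_t = x_target.mean(axis=0)
    var_t = x_target.var(axis=0)
    w = n_prior / (n_prior + n)  # weight on the source statistics
    mu = w * mu_train + (1 - w) * mu_t
    var = w * var_train + (1 - w) * var_t
    return mu, var

def batch_norm(x, mu, var, gamma=1.0, beta=0.0, eps=1e-5):
    """Standard batch-norm transform with fixed statistics."""
    return gamma * (x - mu) / np.sqrt(var + eps) + beta
```

Normalizing a shifted "corrupted" batch with the adapted statistics recenters it, whereas the stale training statistics leave the covariate shift in place.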
Related papers
- Improving Out-of-Distribution Data Handling and Corruption Resistance via Modern Hopfield Networks [0.0]
This study explores the potential of Modern Hopfield Networks (MHN) in improving the ability of computer vision models to handle out-of-distribution data.
We suggest integrating MHN into the baseline models to enhance their robustness.
Our research shows that the proposed integration consistently improves model performance on the MNIST-C dataset.
arXiv Detail & Related papers (2024-08-21T03:26:16Z)
- Dynamic Pre-training: Towards Efficient and Scalable All-in-One Image Restoration [100.54419875604721]
All-in-one image restoration tackles different types of degradations with a unified model instead of having task-specific, non-generic models for each degradation.
We propose DyNet, a dynamic family of networks designed in an encoder-decoder style for all-in-one image restoration tasks.
Our DyNet can seamlessly switch between its bulkier and lightweight variants, thereby offering flexibility for efficient model deployment.
arXiv Detail & Related papers (2024-04-02T17:58:49Z)
- ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object [78.58860252442045]
We introduce a generative model as a data source for hard images that benchmark deep models' robustness.
We are able to generate images with more diverse backgrounds, textures, and materials than any prior work; we term this benchmark ImageNet-D.
Our work suggests that diffusion models can be an effective source to test vision models.
arXiv Detail & Related papers (2024-03-27T17:23:39Z)
- FFT-based Selection and Optimization of Statistics for Robust Recognition of Severely Corrupted Images [19.07004663565609]
This paper presents a novel approach to improve robustness of any classification model, especially on severely corrupted images.
Our method (FROST) employs high-frequency features to detect input image corruption type, and select layer-wise feature normalization statistics.
FROST provides the state-of-the-art results for different models and datasets, outperforming competitors on ImageNet-C by up to 37.1% relative gain.
arXiv Detail & Related papers (2024-03-21T12:01:54Z)
- Dynamic Batch Norm Statistics Update for Natural Robustness [5.366500153474747]
We propose a unified framework consisting of a corruption-detection model and BN statistics update.
Our results demonstrate accuracy improvements of about 8% on CIFAR10-C and 4% on ImageNet-C.
arXiv Detail & Related papers (2023-10-31T17:20:30Z)
- Uncertainty in AI: Evaluating Deep Neural Networks on Out-of-Distribution Images [0.0]
This paper investigates the uncertainty of various deep neural networks, including ResNet-50, VGG16, DenseNet121, AlexNet, and GoogleNet, when dealing with perturbed data.
While ResNet-50 was the most accurate single model for OOD images, the ensemble performed even better, correctly classifying all images.
arXiv Detail & Related papers (2023-09-04T22:46:59Z)
- Reinforce Data, Multiply Impact: Improved Model Accuracy and Robustness with Dataset Reinforcement [68.44100784364987]
We propose a strategy to reinforce a dataset once, such that any model architecture trained on the reinforced dataset achieves improved accuracy at no additional training cost to users.
We create a reinforced version of the ImageNet training dataset, called ImageNet+, as well as reinforced datasets CIFAR-100+, Flowers-102+, and Food-101+.
Models trained with ImageNet+ are more accurate, robust, and calibrated, and transfer well to downstream tasks.
arXiv Detail & Related papers (2023-03-15T23:10:17Z)
- MEMO: Test Time Robustness via Adaptation and Augmentation [131.28104376280197]
We study the problem of test time robustification, i.e., using the test input to improve model robustness.
Recent prior works have proposed methods for test time adaptation; however, each introduces additional assumptions.
We propose a simple approach that can be used in any test setting where the model is probabilistic and adaptable.
arXiv Detail & Related papers (2021-10-18T17:55:11Z)
- Improving robustness against common corruptions with frequency biased models [112.65717928060195]
Unseen image corruptions can cause a surprisingly large drop in performance.
Image corruption types have different characteristics in the frequency spectrum and would benefit from a targeted type of data augmentation.
We propose a new regularization scheme that minimizes the total variation (TV) of convolution feature-maps to increase high-frequency robustness.
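The quantity regularized in this scheme can be made concrete. The following is a sketch under our own naming, not the authors' implementation: the anisotropic total variation of a convolutional feature map is the sum of absolute differences between spatially adjacent activations, so penalizing it discourages high-frequency feature content.

```python
import numpy as np

def feature_map_tv(fmap):
    """Anisotropic total variation of a (C, H, W) feature map:
    the sum of absolute differences between spatially adjacent
    activations. A flat map has TV 0; rapid oscillation has high TV."""
    dh = np.abs(np.diff(fmap, axis=1)).sum()  # vertical neighbours
    dw = np.abs(np.diff(fmap, axis=2)).sum()  # horizontal neighbours
    return float(dh + dw)
```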
arXiv Detail & Related papers (2021-03-30T10:44:50Z)
- Revisiting Batch Normalization for Improving Corruption Robustness [85.20742045853738]
We interpret corruption robustness as a domain shift and propose to rectify batch normalization statistics for improving model robustness.
We find that simply estimating and adapting the BN statistics on a few representation samples, without retraining the model, improves the corruption robustness by a large margin.
arXiv Detail & Related papers (2020-10-07T19:56:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.