On Interaction Between Augmentations and Corruptions in Natural
Corruption Robustness
- URL: http://arxiv.org/abs/2102.11273v1
- Date: Mon, 22 Feb 2021 18:58:39 GMT
- Title: On Interaction Between Augmentations and Corruptions in Natural
Corruption Robustness
- Authors: Eric Mintun, Alexander Kirillov, and Saining Xie
- Abstract summary: Several new data augmentations have been proposed that significantly improve performance on ImageNet-C.
We develop a feature space for image transforms and, within it, a new measure between augmentations and corruptions, the Minimal Sample Distance, to demonstrate a strong correlation between similarity and performance.
We observe a significant degradation in corruption robustness when the test-time corruptions are sampled to be perceptually dissimilar from ImageNet-C.
Our results suggest that test error can be improved by training on perceptually similar augmentations, and data augmentations may not generalize well beyond the existing benchmark.
- Score: 78.6626755563546
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Invariance to a broad array of image corruptions, such as warping, noise, or
color shifts, is an important aspect of building robust models in computer
vision. Recently, several new data augmentations have been proposed that
significantly improve performance on ImageNet-C, a benchmark of such
corruptions. However, there is still a lack of basic understanding of the
relationship between data augmentations and test-time corruptions. To this end,
we develop a feature space for image transforms, and then use a new measure in
this space between augmentations and corruptions called the Minimal Sample
Distance to demonstrate there is a strong correlation between similarity and
performance. We then investigate recent data augmentations and observe a
significant degradation in corruption robustness when the test-time corruptions
are sampled to be perceptually dissimilar from ImageNet-C in this feature
space. Our results suggest that test error can be improved by training on
perceptually similar augmentations, and data augmentations may not generalize
well beyond the existing benchmark. We hope our results and tools will allow
for more robust progress towards improving robustness to image corruptions.
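To make the Minimal Sample Distance concrete, here is a minimal sketch. It assumes each transform is embedded by averaging a feature extractor's output over a fixed set of base images, following the paper's idea of a feature space for transforms; the helper names (embed_transform, feature_fn, augmentation_sampler) are our own, not the paper's code.

```python
import numpy as np

def embed_transform(transform, base_images, feature_fn):
    """Embed an image transform as the mean feature of transformed base images.

    This mirrors the paper's idea of a feature space for transforms; the
    exact feature extractor used here is an assumption.
    """
    feats = np.stack([feature_fn(transform(x)) for x in base_images])
    return feats.mean(axis=0)

def minimal_sample_distance(augmentation_sampler, corruption, base_images,
                            feature_fn, n_samples=100):
    """MSD: distance from a corruption to the *nearest* sampled augmentation.

    A small MSD means the augmentation distribution can produce at least one
    transform that resembles the corruption, which the paper finds correlates
    with robustness to that corruption.
    """
    c_feat = embed_transform(corruption, base_images, feature_fn)
    dists = []
    for _ in range(n_samples):
        aug = augmentation_sampler()  # draw one concrete transform
        a_feat = embed_transform(aug, base_images, feature_fn)
        dists.append(np.linalg.norm(a_feat - c_feat))
    return min(dists)
```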
Related papers
- Assessing Visually-Continuous Corruption Robustness of Neural Networks
Relative to Human Performance [6.254768374567899]
Neural Networks (NNs) have surpassed human accuracy in image classification on ImageNet.
However, NNs often lack robustness against image corruptions, i.e., corruption robustness.
We propose visually-continuous corruption robustness (VCR) to assess robustness over the wide, continuous range of image changes that correspond to human-perceived quality.
arXiv Detail & Related papers (2024-02-29T18:00:27Z)
- Dynamic Batch Norm Statistics Update for Natural Robustness [5.366500153474747]
We propose a unified framework consisting of a corruption-detection model and BN statistics update.
Our results demonstrate accuracy improvements of about 8% on CIFAR10-C and 4% on ImageNet-C.
arXiv Detail & Related papers (2023-10-31T17:20:30Z)
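A minimal sketch of the BN-update half of such a framework, assuming PyTorch: a hypothetical corruption detector picks which precomputed BN statistics to load. The detector, the stats_bank layout, and all function names are our assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def update_bn_statistics(model, bn_stats):
    """Overwrite each BatchNorm layer's running stats with precomputed ones.

    bn_stats maps layer name -> (running_mean, running_var), e.g. estimated
    offline on data showing the detected corruption type.
    """
    for name, module in model.named_modules():
        if isinstance(module, nn.BatchNorm2d) and name in bn_stats:
            mean, var = bn_stats[name]
            module.running_mean.copy_(mean)
            module.running_var.copy_(var)

@torch.no_grad()
def predict_with_detection(model, detector, stats_bank, images):
    """Route the batch through BN stats matching the detected corruption."""
    # majority vote over the detector's per-image corruption predictions
    corruption_id = detector(images).argmax(dim=1).mode().values.item()
    update_bn_statistics(model, stats_bank[corruption_id])
    return model(images)
```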
- Exploring the Robustness of Human Parsers Towards Common Corruptions [99.89886010550836]
We construct three corruption robustness benchmarks, termed LIP-C, ATR-C, and Pascal-Person-Part-C, to evaluate the risk tolerance of human parsing models.
Inspired by data augmentation strategies, we propose a novel heterogeneous augmentation-enhanced mechanism to bolster robustness under commonly corrupted conditions.
arXiv Detail & Related papers (2023-09-02T13:32:14Z)
- Frequency-Based Vulnerability Analysis of Deep Learning Models against Image Corruptions [48.34142457385199]
We present MUFIA, an algorithm designed to identify the specific types of corruptions that can cause models to fail.
We find that even state-of-the-art models trained to be robust against known common corruptions struggle against the low visibility-based corruptions crafted by MUFIA.
arXiv Detail & Related papers (2023-06-12T15:19:13Z)
- A Survey on the Robustness of Computer Vision Models against Common Corruptions [3.6486148851646063]
Computer vision models are susceptible to changes in input images caused by sensor errors or extreme imaging environments.
These corruptions can significantly hinder the reliability of these models when deployed in real-world scenarios.
We present a comprehensive overview of methods that improve the robustness of computer vision models against common corruptions.
arXiv Detail & Related papers (2023-05-10T10:19:31Z)
- PRIME: A Few Primitives Can Boost Robustness to Common Corruptions [60.119023683371736]
Deep networks have a hard time generalizing to many common corruptions of their data.
We propose PRIME, a general data augmentation scheme that consists of simple families of max-entropy image transformations.
We show that PRIME outperforms the prior art for corruption robustness, while its simplicity and plug-and-play nature enable it to be combined with other methods to further boost their robustness.
arXiv Detail & Related papers (2021-12-27T07:17:51Z)
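A toy sketch of the PRIME recipe under stated assumptions: NumPy float images in [0, 1], two illustrative primitives standing in for the paper's spectral, spatial, and color transforms, and wide parameter ranges to mimic the max-entropy sampling. None of this is the paper's actual code.

```python
import numpy as np

def color_jitter(img, rng):
    # random per-channel affine transform of an (H, W, 3) float image
    scale = rng.uniform(0.6, 1.4, size=(1, 1, 3))
    shift = rng.uniform(-0.2, 0.2, size=(1, 1, 3))
    return np.clip(img * scale + shift, 0.0, 1.0)

def random_filter(img, rng):
    # random 3x3 convolution applied channel-wise (a crude spectral primitive)
    k = rng.normal(0.0, 1.0, size=(3, 3))
    k /= np.abs(k).sum()
    pad = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="reflect")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return np.clip(out, 0.0, 1.0)

def prime_like_augment(img, rng):
    # randomly compose primitives, then mix with the clean image
    aug = img
    for primitive in (color_jitter, random_filter):
        if rng.random() < 0.7:
            aug = primitive(aug, rng)
    w = rng.uniform(0.2, 0.8)
    return w * aug + (1.0 - w) * img
```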
- Improving robustness against common corruptions with frequency biased models [112.65717928060195]
Unseen image corruptions can cause a surprisingly large drop in performance.
Image corruption types have different characteristics in the frequency spectrum and would benefit from a targeted type of data augmentation.
We propose a new regularization scheme that minimizes the total variation (TV) of convolution feature-maps to increase high-frequency robustness.
arXiv Detail & Related papers (2021-03-30T10:44:50Z)
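A minimal sketch of such a total-variation penalty on a convolutional feature map, assuming PyTorch; which layer to regularize and the weight tv_weight are left open as assumptions.

```python
import torch

def feature_tv_loss(feature_map):
    """Total variation of a conv feature map of shape (B, C, H, W).

    Penalizing TV encourages spatially smooth feature maps, which the paper
    argues increases robustness to high-frequency corruptions.
    """
    dh = (feature_map[..., 1:, :] - feature_map[..., :-1, :]).abs().mean()
    dw = (feature_map[..., :, 1:] - feature_map[..., :, :-1]).abs().mean()
    return dh + dw

# Hypothetical training step: task loss plus weighted TV of an intermediate map.
# loss = criterion(logits, labels) + tv_weight * feature_tv_loss(feats)
```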
- Revisiting Batch Normalization for Improving Corruption Robustness [85.20742045853738]
We interpret corruption robustness as a domain shift and propose to rectify batch normalization statistics for improving model robustness.
We find that simply estimating and adapting the BN statistics on a few representation samples, without retraining the model, improves the corruption robustness by a large margin.
arXiv Detail & Related papers (2020-10-07T19:56:47Z)
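A minimal sketch of this test-time adaptation in PyTorch: reset each BatchNorm layer's running statistics and re-estimate them with a single forward pass over a small corrupted batch, leaving the weights untouched. The cumulative-average trick via momentum=None is one way to do the estimation, not necessarily the paper's exact procedure.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def adapt_bn(model, corrupted_batch):
    """Re-estimate BN running stats from a small corrupted batch; no retraining.

    With momentum=None, BatchNorm keeps a cumulative moving average, so one
    forward pass in train mode replaces the clean-data statistics with ones
    estimated on the corrupted samples.
    """
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.reset_running_stats()
            m.momentum = None          # cumulative moving average
    model.train()                      # BN updates its stats in train mode
    model(corrupted_batch)             # e.g. a batch of ~32 corrupted samples
    model.eval()
    return model
```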
- A simple way to make neural networks robust against diverse image corruptions [29.225922892332342]
We show that a simple but properly tuned training with additive Gaussian and Speckle noise generalizes surprisingly well to unseen corruptions.
Adversarial training of the recognition model against uncorrelated worst-case noise leads to an additional increase in performance.
arXiv Detail & Related papers (2020-01-16T20:10:25Z)
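A minimal sketch of that augmentation, assuming NumPy float images in [0, 1]; the noise scales below are illustrative defaults, not the tuned values the paper reports.

```python
import numpy as np

def noise_augment(img, rng, sigma_gauss=0.1, sigma_speckle=0.2):
    """Additive Gaussian or multiplicative speckle noise on a [0, 1] image.

    The paper's point is that this simple augmentation, with properly tuned
    noise levels, already generalizes to many unseen corruptions.
    """
    if rng.random() < 0.5:
        noisy = img + rng.normal(0.0, sigma_gauss, size=img.shape)
    else:
        noisy = img * (1.0 + rng.normal(0.0, sigma_speckle, size=img.shape))
    return np.clip(noisy, 0.0, 1.0)
```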