A simple way to make neural networks robust against diverse image
corruptions
- URL: http://arxiv.org/abs/2001.06057v5
- Date: Wed, 22 Jul 2020 12:25:10 GMT
- Title: A simple way to make neural networks robust against diverse image
corruptions
- Authors: Evgenia Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf,
Oliver Bringmann, Matthias Bethge, Wieland Brendel
- Abstract summary: We show that a simple but properly tuned training with additive Gaussian and Speckle noise generalizes surprisingly well to unseen corruptions.
Adversarial training of the recognition model against uncorrelated worst-case noise leads to an additional increase in performance.
- Score: 29.225922892332342
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The human visual system is remarkably robust against a wide range of
naturally occurring variations and corruptions like rain or snow. In contrast,
the performance of modern image recognition models strongly degrades when
evaluated on previously unseen corruptions. Here, we demonstrate that a simple
but properly tuned training with additive Gaussian and Speckle noise
generalizes surprisingly well to unseen corruptions, easily reaching the
previous state of the art on the corruption benchmark ImageNet-C (with
ResNet50) and on MNIST-C. We build on top of these strong baseline results and
show that an adversarial training of the recognition model against uncorrelated
worst-case noise distributions leads to an additional increase in performance.
This regularization can be combined with previously proposed defense methods
for further improvement.
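As a rough illustration of the training scheme described in the abstract, the sketch below corrupts a random subset of each training batch with additive Gaussian or Speckle noise before the forward pass. This is a minimal PyTorch sketch, not the authors' released code: the noise level sigma, the corruption probability p_noise, the even split between the two noise types, and the clamping to [0, 1] are illustrative assumptions, and the paper stresses that the noise level must be properly tuned.

```python
import random
import torch

def add_gaussian_noise(x, sigma):
    """Additive Gaussian noise: x + n with n ~ N(0, sigma^2)."""
    return (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)

def add_speckle_noise(x, sigma):
    """Speckle (multiplicative) noise: x + x * n with n ~ N(0, sigma^2)."""
    return (x + x * sigma * torch.randn_like(x)).clamp(0.0, 1.0)

def noisy_training_step(model, images, labels, criterion, optimizer,
                        p_noise=0.5, sigma=0.1):
    """One training step in which a random subset of the batch is corrupted
    with either Gaussian or Speckle noise before the forward pass.
    p_noise and sigma are illustrative values, not the paper's tuned settings."""
    noise_fn = add_gaussian_noise if random.random() < 0.5 else add_speckle_noise
    noisy = noise_fn(images, sigma)
    # Per-sample mask deciding which images in the batch get corrupted.
    corrupt = torch.rand(images.size(0), device=images.device) < p_noise
    inputs = torch.where(corrupt.view(-1, 1, 1, 1), noisy, images)

    optimizer.zero_grad()
    loss = criterion(model(inputs), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The adversarial variant mentioned in the abstract would, instead of sampling the noise from a fixed Gaussian, optimize an uncorrelated per-pixel worst-case noise distribution against the current model; the data flow of the training step itself stays the same.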
Related papers
- Dynamic Batch Norm Statistics Update for Natural Robustness [5.366500153474747]
We propose a unified framework consisting of a corruption-detection model and BN statistics update.
Our results demonstrate about 8% and 4% accuracy improvements on CIFAR10-C and ImageNet-C, respectively (see the BN-adaptation sketch after this list).
arXiv Detail & Related papers (2023-10-31T17:20:30Z)
- Evaluating Similitude and Robustness of Deep Image Denoising Models via Adversarial Attack [60.40356882897116]
Deep neural networks (DNNs) have shown superior performance compared to traditional image denoising algorithms.
In this paper, we propose an adversarial attack method named denoising-PGD which can successfully attack all the current deep denoising models.
arXiv Detail & Related papers (2023-06-28T09:30:59Z)
- Frequency-Based Vulnerability Analysis of Deep Learning Models against Image Corruptions [48.34142457385199]
We present MUFIA, an algorithm designed to identify the specific types of corruptions that can cause models to fail.
We find that even state-of-the-art models trained to be robust against known common corruptions struggle against the low visibility-based corruptions crafted by MUFIA.
arXiv Detail & Related papers (2023-06-12T15:19:13Z)
- Soft Diffusion: Score Matching for General Corruptions [84.26037497404195]
We propose a new objective called Soft Score Matching that provably learns the score function for any linear corruption process.
We show that our objective learns the gradient of the likelihood under suitable regularity conditions for the family of corruption processes.
Our method achieves state-of-the-art FID score $1.85$ on CelebA-64, outperforming all previous linear diffusion models.
arXiv Detail & Related papers (2022-09-12T17:45:03Z)
- PRIME: A Few Primitives Can Boost Robustness to Common Corruptions [60.119023683371736]
Deep networks have a hard time generalizing to many common corruptions of their data.
We propose PRIME, a general data augmentation scheme that consists of simple families of max-entropy image transformations.
We show that PRIME outperforms the prior art for corruption robustness, while its simplicity and plug-and-play nature enables it to be combined with other methods to further boost their robustness.
arXiv Detail & Related papers (2021-12-27T07:17:51Z)
- Benchmarks for Corruption Invariant Person Re-identification [31.919264399996475]
We study corruption-invariant learning on single- and cross-modality datasets, including Market-1501, CUHK03, MSMT17, RegDB, and SYSU-MM01.
Transformer-based models are more robust to corrupted images than CNN-based models.
Cross-dataset generalization improves as corruption robustness increases.
arXiv Detail & Related papers (2021-11-01T12:14:28Z)
- Defending Against Image Corruptions Through Adversarial Augmentations [20.276628138912887]
Modern neural networks excel at image classification, yet they remain vulnerable to common image corruptions.
Recent methods that focus on this problem, such as AugMix and DeepAugment, introduce defenses that operate in expectation over a distribution of image corruptions.
We propose AdversarialAugment, a technique that optimizes the parameters of image-to-image models to generate adversarially corrupted augmented images.
arXiv Detail & Related papers (2021-04-02T15:16:39Z)
- Improving robustness against common corruptions with frequency biased models [112.65717928060195]
Unseen image corruptions can cause a surprisingly large drop in performance.
Image corruption types have different characteristics in the frequency spectrum and would benefit from a targeted type of data augmentation.
We propose a new regularization scheme that minimizes the total variation (TV) of convolutional feature maps to increase high-frequency robustness (see the TV-penalty sketch after this list).
arXiv Detail & Related papers (2021-03-30T10:44:50Z)
- On Interaction Between Augmentations and Corruptions in Natural Corruption Robustness [78.6626755563546]
Several new data augmentations have been proposed that significantly improve performance on ImageNet-C.
We develop a new measure between augmentations and corruptions, called the Minimal Sample Distance, and use it to demonstrate a strong correlation between similarity and performance.
We observe a significant degradation in corruption robustness when the test-time corruptions are sampled to be perceptually dissimilar from ImageNet-C.
Our results suggest that test error can be improved by training on perceptually similar augmentations, and data augmentations may not generalize well beyond the existing benchmark.
arXiv Detail & Related papers (2021-02-22T18:58:39Z)
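The "Dynamic Batch Norm Statistics Update for Natural Robustness" entry above refers to updating BatchNorm statistics on corrupted data (the BN-adaptation sketch referenced there). The snippet below is only a generic, hedged sketch of re-estimating BN running statistics on unlabelled corrupted test batches; the paper's full framework additionally uses a corruption-detection model, and the loader name and procedure here are assumptions.

```python
import torch

@torch.no_grad()
def adapt_bn_statistics(model, corrupted_loader, device="cuda"):
    """Re-estimate BatchNorm running mean/variance on (unlabelled) corrupted
    images by running forward passes in train mode, without updating weights."""
    model.train()  # BN layers only update running statistics in train mode
    for images, _ in corrupted_loader:
        model(images.to(device))
    model.eval()
    return model
```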
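Similarly, the "frequency biased models" entry mentions a regularizer that minimizes the total variation (TV) of convolutional feature maps (the TV-penalty sketch referenced there). Below is a minimal sketch assuming an anisotropic TV over a [B, C, H, W] feature map and a hand-picked weight tv_weight; the layers at which the paper applies the penalty and its exact formulation may differ.

```python
import torch

def total_variation(feats):
    """Anisotropic total variation of a [B, C, H, W] feature map:
    mean absolute difference between neighbouring activations."""
    dh = (feats[:, :, 1:, :] - feats[:, :, :-1, :]).abs().mean()
    dw = (feats[:, :, :, 1:] - feats[:, :, :, :-1]).abs().mean()
    return dh + dw

# Hypothetical usage inside a training loop, with `feats` captured from an
# intermediate layer (e.g. via a forward hook) and tv_weight chosen by hand:
# loss = task_loss + tv_weight * total_variation(feats)
```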
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.