Investigating the Corruption Robustness of Image Classifiers with Random
Lp-norm Corruptions
- URL: http://arxiv.org/abs/2305.05400v4
- Date: Wed, 10 Jan 2024 08:38:33 GMT
- Title: Investigating the Corruption Robustness of Image Classifiers with Random
Lp-norm Corruptions
- Authors: Georg Siedel, Weijia Shao, Silvia Vock, Andrey Morozov
- Abstract summary: This study investigates the use of random p-norm corruptions to augment the training and test data of image classifiers.
We find that training data augmentation with a combination of p-norm corruptions significantly improves corruption robustness, even on top of state-of-the-art data augmentation schemes.
- Score: 3.1337872355726084
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robustness is a fundamental property of machine learning classifiers required
to achieve safety and reliability. In the field of adversarial robustness of
image classifiers, robustness is commonly defined as the stability of a model
to all input changes within a p-norm distance. However, in the field of random
corruption robustness, variations observed in the real world are used, while
p-norm corruptions are rarely considered. This study investigates the use of
random p-norm corruptions to augment the training and test data of image
classifiers. We evaluate the model robustness against imperceptible random
p-norm corruptions and propose a novel robustness metric. We empirically
investigate whether robustness transfers across different p-norms and derive
conclusions on which p-norm corruptions a model should be trained and
evaluated. We find that training data augmentation with a combination of p-norm
corruptions significantly improves corruption robustness, even on top of
state-of-the-art data augmentation schemes.
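The core operation described above, augmenting an input with a random perturbation drawn uniformly from an Lp ball of radius eps, can be sketched as follows. This is a minimal NumPy illustration; the function name, the `eps` parameter, and the generalized-Gaussian sampling scheme are our own illustrative choices, not code from the paper:

```python
import numpy as np

def random_lp_corruption(x, eps, p, rng=None):
    """Add a perturbation drawn uniformly from the Lp ball of radius
    eps to the input x (pixel values assumed in [0, 1]).

    For finite p, coordinates are sampled from the generalized Gaussian
    density exp(-|t|^p); normalizing them onto the Lp sphere and scaling
    by a radius r = eps * U^(1/n) yields a uniform sample in the ball.
    """
    rng = np.random.default_rng(rng)
    if np.isinf(p):
        # L-infinity ball: each coordinate uniform in [-eps, eps]
        delta = rng.uniform(-eps, eps, size=x.shape)
    else:
        # |t| with density exp(-|t|^p) satisfies |t|^p ~ Gamma(1/p, 1)
        mag = rng.gamma(shape=1.0 / p, scale=1.0, size=x.shape) ** (1.0 / p)
        d = mag * rng.choice([-1.0, 1.0], size=x.shape)
        # Random radius so the sample is uniform in the ball, not on the sphere
        r = eps * rng.uniform() ** (1.0 / x.size)
        delta = r * d / np.linalg.norm(d.ravel(), ord=p)
    # Keep the corrupted image in the valid pixel range
    return np.clip(x + delta, 0.0, 1.0)
```

In a training loop, such a function would be applied on the fly to each batch, with p (and possibly eps) drawn at random per sample to realize the "combination of p-norm corruptions" the abstract describes.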
Related papers
- Enhancing Infrared Small Target Detection Robustness with Bi-Level Adversarial Framework [61.34862133870934]
We propose a bi-level adversarial framework to promote the robustness of detection in the presence of distinct corruptions.
Our scheme improves IoU by 21.96% across a wide array of corruptions and by 4.97% on the general benchmark.
arXiv Detail & Related papers (2023-09-03T06:35:07Z)
- Frequency-Based Vulnerability Analysis of Deep Learning Models against Image Corruptions [48.34142457385199]
We present MUFIA, an algorithm designed to identify the specific types of corruptions that can cause models to fail.
We find that even state-of-the-art models trained to be robust against known common corruptions struggle against the low visibility-based corruptions crafted by MUFIA.
arXiv Detail & Related papers (2023-06-12T15:19:13Z)
- Robo3D: Towards Robust and Reliable 3D Perception against Corruptions [58.306694836881235]
We present Robo3D, the first comprehensive benchmark heading toward probing the robustness of 3D detectors and segmentors under out-of-distribution scenarios.
We consider eight corruption types stemming from severe weather conditions, external disturbances, and internal sensor failure.
We propose a density-insensitive training framework along with a simple flexible voxelization strategy to enhance the model resiliency.
arXiv Detail & Related papers (2023-03-30T17:59:17Z)
- Utilizing Class Separation Distance for the Evaluation of Corruption Robustness of Machine Learning Classifiers [0.6882042556551611]
We propose a test data augmentation method that uses a robustness distance $\epsilon$ derived from the dataset's minimal class separation distance.
The resulting MSCR metric allows a dataset-specific comparison of different classifiers with respect to their corruption robustness.
Our results indicate that robustness training through simple data augmentation can already slightly improve accuracy.
arXiv Detail & Related papers (2022-06-27T15:56:16Z)
- PRIME: A Few Primitives Can Boost Robustness to Common Corruptions [60.119023683371736]
Deep networks have a hard time generalizing to many common corruptions of their data.
We propose PRIME, a general data augmentation scheme that consists of simple families of max-entropy image transformations.
We show that PRIME outperforms the prior art for corruption robustness, while its simplicity and plug-and-play nature enables it to be combined with other methods to further boost their robustness.
arXiv Detail & Related papers (2021-12-27T07:17:51Z)
- Benchmarks for Corruption Invariant Person Re-identification [31.919264399996475]
We study corruption invariant learning in single- and cross-modality datasets, including Market-1501, CUHK03, MSMT17, RegDB, and SYSU-MM01.
Transformer-based models are more robust to corrupted images than CNN-based models.
Cross-dataset generalization improves as corruption robustness increases.
arXiv Detail & Related papers (2021-11-01T12:14:28Z)
- Using Synthetic Corruptions to Measure Robustness to Natural Distribution Shifts [6.445605125467574]
We propose a methodology to build synthetic corruption benchmarks that make robustness estimations more correlated with robustness to real-world distribution shifts.
Applying the proposed methodology, we build a new benchmark called ImageNet-Syn2Nat to predict image classifier robustness.
arXiv Detail & Related papers (2021-07-26T09:20:49Z)
- Using the Overlapping Score to Improve Corruption Benchmarks [6.445605125467574]
We propose a metric called corruption overlapping score, which can be used to reveal flaws in corruption benchmarks.
We argue that taking into account overlappings between corruptions can help to improve existing benchmarks or build better ones.
arXiv Detail & Related papers (2021-05-26T06:42:54Z)
- Improving robustness against common corruptions with frequency biased models [112.65717928060195]
Unseen image corruptions can cause a surprisingly large drop in performance.
Image corruption types have different characteristics in the frequency spectrum and would benefit from a targeted type of data augmentation.
We propose a new regularization scheme that minimizes the total variation (TV) of convolution feature-maps to increase high-frequency robustness.
arXiv Detail & Related papers (2021-03-30T10:44:50Z)
- On Interaction Between Augmentations and Corruptions in Natural Corruption Robustness [78.6626755563546]
Several new data augmentations have been proposed that significantly improve performance on ImageNet-C.
We develop a new measure of the distance between augmentations and corruptions, the Minimal Sample Distance, and demonstrate a strong correlation between similarity and performance.
We observe a significant degradation in corruption robustness when the test-time corruptions are sampled to be perceptually dissimilar from ImageNet-C.
Our results suggest that test error can be reduced by training on perceptually similar augmentations, and that data augmentations may not generalize well beyond the existing benchmark.
arXiv Detail & Related papers (2021-02-22T18:58:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.