Frequency-Based Vulnerability Analysis of Deep Learning Models against
Image Corruptions
- URL: http://arxiv.org/abs/2306.07178v1
- Date: Mon, 12 Jun 2023 15:19:13 GMT
- Title: Frequency-Based Vulnerability Analysis of Deep Learning Models against
Image Corruptions
- Authors: Harshitha Machiraju, Michael H. Herzog, Pascal Frossard
- Abstract summary: We present MUFIA, an algorithm designed to identify the specific types of corruptions that can cause models to fail.
We find that even state-of-the-art models trained to be robust against known common corruptions struggle against the low visibility-based corruptions crafted by MUFIA.
- Score: 48.34142457385199
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models often face challenges when handling real-world image
corruptions. In response, researchers have developed image corruption datasets
to evaluate the performance of deep neural networks in handling such
corruptions. However, these datasets have a significant limitation: they do not
account for all corruptions encountered in real-life scenarios. To address this
gap, we present MUFIA (Multiplicative Filter Attack), an algorithm designed to
identify the specific types of corruptions that can cause models to fail. Our
algorithm identifies the combination of image frequency components that render
a model susceptible to misclassification while preserving the semantic
similarity to the original image. We find that even state-of-the-art models
trained to be robust against known common corruptions struggle against the low
visibility-based corruptions crafted by MUFIA. This highlights the need for
more comprehensive approaches to enhance model robustness against a wider range
of real-world image corruptions.
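A minimal sketch of the idea described in the abstract, assuming a PyTorch classifier: a multiplicative filter over the image's frequency components is optimized to induce misclassification while a similarity penalty keeps the filtered image close to the original. The function name, optimizer settings, and the use of MSE as a stand-in for the paper's semantic-similarity constraint are all illustrative assumptions, not the authors' implementation.
```python
import torch
import torch.nn.functional as F

def mufia_sketch(model, image, label, steps=100, lr=0.05, lam=1.0):
    """Illustrative sketch of a multiplicative filter attack in the
    frequency domain (not the authors' reference implementation).
    image: (1, C, H, W) float tensor in [0, 1]; label: (1,) long tensor.
    """
    spectrum = torch.fft.fft2(image)                    # per-channel 2D FFT
    filt = torch.ones_like(image, requires_grad=True)   # multiplicative filter, identity init
    opt = torch.optim.Adam([filt], lr=lr)
    for _ in range(steps):
        filtered = torch.fft.ifft2(spectrum * filt).real.clamp(0, 1)
        # maximize the classification loss, penalize drift from the original
        loss = -F.cross_entropy(model(filtered), label) + lam * F.mse_loss(filtered, image)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.fft.ifft2(spectrum * filt.detach()).real.clamp(0, 1)
```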
Related papers
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using counterfactual images generated under language guidance.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
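As a rough illustration of the fine-tuning step in the entry above (the language-guided counterfactual generation is paper-specific, so it is represented here only as a prebuilt dataset), a hedged PyTorch sketch; all names and hyperparameters are assumptions:
```python
import torch
from torch.utils.data import ConcatDataset, DataLoader

def reinforce_with_counterfactuals(model, base_dataset, counterfactual_dataset,
                                   epochs=3, lr=1e-4, batch_size=64):
    """Sketch: fine-tune a classifier on the original data augmented with
    counterfactual images that previously exposed its weaknesses."""
    loader = DataLoader(ConcatDataset([base_dataset, counterfactual_dataset]),
                        batch_size=batch_size, shuffle=True)
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```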
- Exploring the Robustness of Human Parsers Towards Common Corruptions [99.89886010550836]
We construct three corruption robustness benchmarks, termed LIP-C, ATR-C, and Pascal-Person-Part-C, to evaluate the risk tolerance of human parsing models.
Inspired by data augmentation strategies, we propose a novel heterogeneous augmentation-enhanced mechanism to bolster robustness under common corruptions.
arXiv Detail & Related papers (2023-09-02T13:32:14Z)
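The "-C" benchmarks above appear to follow the ImageNet-C recipe of applying common corruptions at graded severities. A minimal sketch using the `imagecorruptions` package (whether the authors used this exact package is an assumption):
```python
from imagecorruptions import corrupt, get_corruption_names

def build_corrupted_benchmark(images):
    """Sketch: apply ImageNet-C-style corruptions at severities 1-5 to
    build a '-C' evaluation set. images: list of HxWx3 uint8 arrays."""
    benchmark = {}
    for name in get_corruption_names():       # e.g. 'gaussian_noise', 'fog', ...
        for severity in range(1, 6):
            benchmark[(name, severity)] = [
                corrupt(img, corruption_name=name, severity=severity)
                for img in images
            ]
    return benchmark
```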
- RobustCLEVR: A Benchmark and Framework for Evaluating Robustness in Object-centric Learning [9.308581290987783]
We present the RobustCLEVR benchmark dataset and evaluation framework.
Our framework takes a novel approach to evaluating robustness by enabling the specification of causal dependencies.
Overall, we find that object-centric methods are not inherently robust to image corruptions.
arXiv Detail & Related papers (2023-08-28T20:52:18Z)
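RobustCLEVR's defining feature is that corruptions are specified with causal dependencies rather than applied independently. A toy, purely hypothetical sketch of what such a dependency might look like (the names and structure are not the RobustCLEVR specification format):
```python
import random

def sample_corruption_params():
    """Toy sketch: corruption parameters drawn from a small causal graph in
    which fog density drives both contrast loss and blur strength.
    Hypothetical structure, not the RobustCLEVR format."""
    fog = random.uniform(0.0, 1.0)                                 # root cause
    contrast = max(0.0, 1.0 - 0.7 * fog + random.gauss(0, 0.05))   # child of fog
    blur_sigma = max(0.0, 2.0 * fog + random.gauss(0, 0.1))        # child of fog
    return {"fog": fog, "contrast": contrast, "blur_sigma": blur_sigma}
```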
- Hierarchical Contrastive Learning for Pattern-Generalizable Image Corruption Detection [40.04083743934034]
We develop a hierarchical contrastive learning framework to detect corrupted regions.
A specialized hierarchical interaction mechanism is designed to propagate contrastive-learning knowledge across different scales.
Our model generalizes well across different corruption patterns.
arXiv Detail & Related papers (2023-08-27T10:03:48Z)
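A hedged sketch of the multi-scale contrastive ingredient in the entry above: an InfoNCE-style loss computed at each level of two feature pyramids and averaged. The paper's hierarchical interaction mechanism is more elaborate; this only illustrates contrasting features across scales.
```python
import torch
import torch.nn.functional as F

def multiscale_contrastive_loss(pyramid_a, pyramid_b, tau=0.07):
    """Sketch: InfoNCE at each scale of two feature pyramids (lists of
    (B, D) pooled features), averaged across scales."""
    total = 0.0
    for fa, fb in zip(pyramid_a, pyramid_b):
        fa, fb = F.normalize(fa, dim=1), F.normalize(fb, dim=1)
        logits = fa @ fb.t() / tau                            # (B, B) similarities
        targets = torch.arange(fa.size(0), device=fa.device)  # matched pairs on diagonal
        total = total + F.cross_entropy(logits, targets)
    return total / len(pyramid_a)
```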
- A Survey on the Robustness of Computer Vision Models against Common Corruptions [3.6486148851646063]
Computer vision models are susceptible to changes in input images caused by sensor errors or extreme imaging environments.
These corruptions can significantly hinder the reliability of these models when deployed in real-world scenarios.
We present a comprehensive overview of methods that improve the robustness of computer vision models against common corruptions.
arXiv Detail & Related papers (2023-05-10T10:19:31Z)
- Soft Diffusion: Score Matching for General Corruptions [84.26037497404195]
We propose a new objective called Soft Score Matching that provably learns the score function for any linear corruption process.
We show that our objective learns the gradient of the likelihood under suitable regularity conditions for the family of corruption processes.
Our method achieves state-of-the-art FID score $1.85$ on CelebA-64, outperforming all previous linear diffusion models.
arXiv Detail & Related papers (2022-09-12T17:45:03Z)
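The exact objective is defined in the paper; the sketch below only captures the stated idea in an x0-prediction form (an assumption): the reconstruction error is filtered through the linear corruption operator C_t, so the model is penalized only on the components the corruption actually observes.
```python
import torch

def soft_score_matching_step(model, x0, corrupt_op, t, sigma_t):
    """Hedged sketch of a Soft-Score-Matching-style training step.
    corrupt_op(x, t) applies the linear corruption C_t to a batch; the
    x0-prediction parameterization is an assumption, not the paper's spec.
    """
    noise = torch.randn_like(x0)
    xt = corrupt_op(x0, t) + sigma_t * noise    # y_t = C_t x_0 + s_t * eps
    x0_hat = model(xt, t)                       # network's estimate of x_0
    residual = corrupt_op(x0_hat - x0, t)       # measure error through C_t
    return (residual ** 2).mean()
```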
- Improving robustness against common corruptions with frequency biased models [112.65717928060195]
Unseen image corruptions can cause a surprisingly large drop in performance.
Image corruption types have distinct characteristics in the frequency spectrum and would benefit from targeted types of data augmentation.
We propose a new regularization scheme that minimizes the total variation (TV) of convolution feature-maps to increase high-frequency robustness.
arXiv Detail & Related papers (2021-03-30T10:44:50Z)
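The TV regularizer itself is simple to state. A minimal sketch of an anisotropic total-variation penalty on a convolutional feature map, added to the task loss (which layers to regularize and the weight beta are choices made in the paper, assumed here):
```python
import torch

def feature_tv(feats):
    """Anisotropic total variation of a (B, C, H, W) feature map: the mean
    absolute difference between neighboring activations."""
    dh = (feats[:, :, 1:, :] - feats[:, :, :-1, :]).abs().mean()
    dw = (feats[:, :, :, 1:] - feats[:, :, :, :-1]).abs().mean()
    return dh + dw

# Sketch of use: total = task_loss + beta * feature_tv(feats)
```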
- On Interaction Between Augmentations and Corruptions in Natural Corruption Robustness [78.6626755563546]
Several new data augmentations have been proposed that significantly improve performance on ImageNet-C.
We develop a new measure of the distance between augmentations and corruptions in a feature space, called the Minimal Sample Distance, and demonstrate a strong correlation between similarity and performance.
We observe a significant degradation in corruption robustness when the test-time corruptions are sampled to be perceptually dissimilar from ImageNet-C.
Our results suggest that test error can be improved by training on perceptually similar augmentations, and data augmentations may not generalize well beyond the existing benchmark.
arXiv Detail & Related papers (2021-02-22T18:58:39Z)
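A hedged reading of the Minimal Sample Distance: the smallest feature-space distance between samples produced by an augmentation and samples of a corruption. The choice of feature extractor and the L2 metric are assumptions, not the paper's exact definition.
```python
import torch

def minimal_sample_distance(aug_feats, corr_feats):
    """Sketch: minimum pairwise L2 distance between augmented samples
    (N, D) and corrupted samples (M, D) in some shared feature space."""
    return torch.cdist(aug_feats, corr_feats).min().item()  # (N, M) -> scalar
```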
- A simple way to make neural networks robust against diverse image corruptions [29.225922892332342]
We show that a simple but properly tuned training with additive Gaussian and Speckle noise generalizes surprisingly well to unseen corruptions.
Adversarial training of the recognition model against uncorrelated worst-case noise leads to an additional increase in performance.
arXiv Detail & Related papers (2020-01-16T20:10:25Z)
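A minimal sketch of the noise augmentation described above; the "properly tuned" part is the paper's point, so the noise levels here are illustrative placeholders, not recommended values.
```python
import torch

def noise_augment(x, sigma_gauss=0.1, sigma_speckle=0.2):
    """Sketch: randomly apply additive Gaussian or multiplicative speckle
    noise to a batch in [0, 1]. Noise levels are illustrative placeholders."""
    if torch.rand(()) < 0.5:
        x = x + sigma_gauss * torch.randn_like(x)            # additive Gaussian
    else:
        x = x * (1.0 + sigma_speckle * torch.randn_like(x))  # speckle (multiplicative)
    return x.clamp(0.0, 1.0)
```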
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.