A Survey on the Robustness of Computer Vision Models against Common
Corruptions
- URL: http://arxiv.org/abs/2305.06024v3
- Date: Mon, 11 Mar 2024 10:51:03 GMT
- Title: A Survey on the Robustness of Computer Vision Models against Common
Corruptions
- Authors: Shunxin Wang, Raymond Veldhuis, Christoph Brune, Nicola Strisciuglio
- Abstract summary: We present a comprehensive overview of methods that improve the robustness of computer vision models against common corruptions.
We release a unified benchmark framework to compare robustness performance on several datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The performance of computer vision models is susceptible to unexpected
changes in input images, known as common corruptions (e.g. noise, blur, or
illumination changes), which can hinder their reliability when models are deployed
in real-world scenarios. These corruptions are not always considered when testing
model generalization and robustness. In this survey, we present a comprehensive
overview of methods that improve the robustness of computer vision models
against common corruptions. We categorize methods into four groups based on the
model part and training method addressed: data augmentation, representation
learning, knowledge distillation, and network components. We also cover
indirect methods for generalization and mitigation of shortcut learning,
potentially useful for corruption robustness. We release a unified benchmark
framework to compare robustness performance on several datasets, and address
the inconsistencies of evaluation in the literature. We provide an experimental
overview of the base corruption robustness of popular vision backbones, and
show that corruption robustness does not necessarily scale with model size. The
very large models (above 100M parameters) gain negligible robustness relative to
their increased computational requirements. To achieve generalizable and robust
computer vision models, we foresee the need to develop new learning strategies
that efficiently exploit limited data and mitigate unwanted or unreliable
learning behaviors.
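The survey releases a unified benchmark framework for comparing corruption robustness across datasets. As an illustration only (not the authors' released framework), an ImageNet-C-style evaluation can be sketched as follows: apply a corruption at several severity levels and report corrupted accuracy relative to clean accuracy. The Gaussian-noise corruption, severity scale, and toy threshold "classifier" below are all illustrative assumptions:

```python
import numpy as np

def gaussian_noise(images, severity):
    """One 'common corruption': additive Gaussian noise, 5 severity levels."""
    sigma = [0.04, 0.08, 0.12, 0.16, 0.20][severity - 1]  # assumed scale
    noisy = images + np.random.default_rng(0).normal(0.0, sigma, images.shape)
    return np.clip(noisy, 0.0, 1.0)

def relative_robustness(model, images, labels, corruption, severities=(1, 2, 3, 4, 5)):
    """Mean accuracy under corruption divided by clean accuracy (higher is better)."""
    clean_acc = np.mean(model(images) == labels)
    corr_accs = [np.mean(model(corruption(images, s)) == labels) for s in severities]
    return float(np.mean(corr_accs) / clean_acc)

# Toy demo: a brightness-threshold "classifier" on random 8x8 images.
rng = np.random.default_rng(42)
images = rng.uniform(0.0, 1.0, size=(64, 8, 8))
labels = (images.mean(axis=(1, 2)) > 0.5).astype(int)
model = lambda x: (x.mean(axis=(1, 2)) > 0.5).astype(int)

score = relative_robustness(model, images, labels, gaussian_noise)
```

A score of 1.0 would mean the corruption causes no accuracy loss; real benchmarks such as ImageNet-C additionally normalize per-corruption error by a baseline model's error.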
Related papers
- Examining the Impact of Optical Aberrations to Image Classification and Object Detection Models [58.98742597810023]
Vision models must be robust to disturbances such as noise or blur.
This paper studies two datasets of blur corruptions, which we denote OpticsBench and LensCorruptions.
Evaluations for image classification and object detection on ImageNet and MSCOCO show that for a variety of different pre-trained models, the performance on OpticsBench and LensCorruptions varies significantly.
arXiv Detail & Related papers (2025-04-25T17:23:47Z) - PoseBench: Benchmarking the Robustness of Pose Estimation Models under Corruptions [57.871692507044344]
Pose estimation aims to accurately identify anatomical keypoints in humans and animals using monocular images.
Current models are typically trained and tested on clean data, potentially overlooking the corruptions encountered during real-world deployment.
We introduce PoseBench, a benchmark designed to evaluate the robustness of pose estimation models against real-world corruption.
arXiv Detail & Related papers (2024-06-20T14:40:17Z) - RobustCLEVR: A Benchmark and Framework for Evaluating Robustness in
Object-centric Learning [9.308581290987783]
We present the RobustCLEVR benchmark dataset and evaluation framework.
Our framework takes a novel approach to evaluating robustness by enabling the specification of causal dependencies.
Overall, we find that object-centric methods are not inherently robust to image corruptions.
arXiv Detail & Related papers (2023-08-28T20:52:18Z) - Robustness and Generalization Performance of Deep Learning Models on
Cyber-Physical Systems: A Comparative Study [71.84852429039881]
The investigation focuses on the models' ability to handle a range of perturbations, such as sensor faults and noise.
We test the generalization and transfer learning capabilities of these models by exposing them to out-of-distribution (OOD) samples.
arXiv Detail & Related papers (2023-06-13T12:43:59Z) - Frequency-Based Vulnerability Analysis of Deep Learning Models against
Image Corruptions [48.34142457385199]
We present MUFIA, an algorithm designed to identify the specific types of corruptions that can cause models to fail.
We find that even state-of-the-art models trained to be robust against known common corruptions struggle against the low visibility-based corruptions crafted by MUFIA.
arXiv Detail & Related papers (2023-06-12T15:19:13Z) - Robo3D: Towards Robust and Reliable 3D Perception against Corruptions [58.306694836881235]
We present Robo3D, the first comprehensive benchmark heading toward probing the robustness of 3D detectors and segmentors under out-of-distribution scenarios.
We consider eight corruption types stemming from severe weather conditions, external disturbances, and internal sensor failure.
We propose a density-insensitive training framework along with a simple flexible voxelization strategy to enhance the model resiliency.
arXiv Detail & Related papers (2023-03-30T17:59:17Z) - A Comprehensive Study on Robustness of Image Classification Models:
Benchmarking and Rethinking [54.89987482509155]
The robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark called ARES-Bench on the image classification task.
By designing the training settings accordingly, we achieve the new state-of-the-art adversarial robustness.
arXiv Detail & Related papers (2023-02-28T04:26:20Z) - Robustness in Deep Learning for Computer Vision: Mind the gap? [13.576376492050185]
We identify, analyze, and summarize current definitions and progress towards non-adversarial robustness in deep learning for computer vision.
We find that this area of research has received disproportionately little attention relative to adversarial machine learning.
arXiv Detail & Related papers (2021-12-01T16:42:38Z) - Benchmarking the Robustness of Spatial-Temporal Models Against
Corruptions [32.821121530785504]
We establish a corruption robustness benchmark, Mini Kinetics-C and Mini SSV2-C, which considers temporal corruptions beyond spatial corruptions in images.
We make the first attempt to conduct an exhaustive study on the corruption robustness of established CNN-based and Transformer-based spatial-temporal models.
arXiv Detail & Related papers (2021-10-13T05:59:39Z) - Improving robustness against common corruptions with frequency biased
models [112.65717928060195]
Unseen image corruptions can cause a surprisingly large drop in performance.
Image corruption types have different characteristics in the frequency spectrum and would benefit from a targeted type of data augmentation.
We propose a new regularization scheme that minimizes the total variation (TV) of convolution feature-maps to increase high-frequency robustness.
arXiv Detail & Related papers (2021-03-30T10:44:50Z)
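The last entry proposes minimizing the total variation (TV) of convolution feature maps to increase high-frequency robustness. A minimal sketch of an anisotropic TV penalty on a batch of feature maps is shown below; this is a generic TV definition for illustration, not the paper's exact regularizer or training setup:

```python
import numpy as np

def total_variation(feature_maps):
    """Anisotropic total variation of feature maps with shape (N, C, H, W):
    the sum of absolute differences between spatially adjacent activations.
    Noisy, high-frequency maps score high; smooth maps score low."""
    dh = np.abs(feature_maps[:, :, 1:, :] - feature_maps[:, :, :-1, :]).sum()
    dw = np.abs(feature_maps[:, :, :, 1:] - feature_maps[:, :, :, :-1]).sum()
    return float(dh + dw)

# A smooth horizontal gradient has far lower TV than a noisy map of the same range.
smooth = np.tile(np.linspace(0.0, 1.0, 16), (1, 1, 16, 1))   # shape (1, 1, 16, 16)
noisy = np.random.default_rng(0).uniform(0.0, 1.0, (1, 1, 16, 16))
assert total_variation(smooth) < total_variation(noisy)
```

During training, such a penalty would be added to the task loss with a weighting coefficient, discouraging the network from relying on high-frequency activation patterns that corruptions easily disrupt.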
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences arising from its use.