Robustness in Deep Learning for Computer Vision: Mind the gap?
- URL: http://arxiv.org/abs/2112.00639v1
- Date: Wed, 1 Dec 2021 16:42:38 GMT
- Title: Robustness in Deep Learning for Computer Vision: Mind the gap?
- Authors: Nathan Drenkow, Numair Sani, Ilya Shpitser, Mathias Unberath
- Abstract summary: We identify, analyze, and summarize current definitions and progress towards non-adversarial robustness in deep learning for computer vision.
We find that this area of research has received disproportionately little attention relative to adversarial machine learning.
- Score: 13.576376492050185
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks for computer vision tasks are deployed in increasingly
safety-critical and socially-impactful applications, motivating the need to
close the gap in model performance under varied, naturally occurring imaging
conditions. Robustness, a term used ambiguously across multiple contexts including
adversarial machine learning, here refers to preserving model performance
under naturally induced image corruptions or alterations.
We perform a systematic review to identify, analyze, and summarize current
definitions and progress towards non-adversarial robustness in deep learning
for computer vision. We find that this area of research has received
disproportionately little attention relative to adversarial machine learning,
yet a significant robustness gap exists that often manifests as performance
degradation comparable in magnitude to that observed under adversarial conditions.
To provide a more transparent definition of robustness across contexts, we
introduce a structural causal model of the data generating process and
interpret non-adversarial robustness as pertaining to a model's behavior on
corrupted images which correspond to low-probability samples from the unaltered
data distribution. We then identify key architecture-, data augmentation-, and
optimization tactics for improving neural network robustness. This causal view
of robustness reveals that common practices in the current literature, both in
regards to robustness tactics and evaluations, correspond to causal concepts,
such as soft interventions resulting in a counterfactually-altered distribution
of imaging conditions. Through our findings and analysis, we offer perspectives
on how future research may mind this evident and significant non-adversarial
robustness gap.
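
As a concrete illustration of the non-adversarial robustness gap the abstract describes, the sketch below compares a classifier's accuracy on clean images with its accuracy on the same images under a simple, naturally plausible corruption (Gaussian sensor noise). This is a minimal, hypothetical example rather than the paper's evaluation protocol: the pretrained ResNet-18, the noise level, and the `path/to/val` dataset path are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's protocol): measure the gap
# between clean accuracy and accuracy under a mild Gaussian-noise corruption.
import torch
import torchvision
from torch.utils.data import DataLoader
from torchvision import transforms

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

def add_gaussian_noise(x, std=0.1):
    """Simulate mild sensor noise on images in [0, 1]; std is an illustrative choice."""
    return torch.clamp(x + std * torch.randn_like(x), 0.0, 1.0)

@torch.no_grad()
def accuracy(model, loader, corrupt=None):
    correct, total = 0, 0
    for images, labels in loader:        # images are in [0, 1]
        if corrupt is not None:
            images = corrupt(images)     # corrupt in pixel space, then normalize
        preds = model(normalize(images)).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

if __name__ == "__main__":
    model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
    # Hypothetical ImageNet-style validation folder; any ImageFolder-compatible
    # dataset whose class order matches the model's output indices will do.
    dataset = torchvision.datasets.ImageFolder(
        "path/to/val",
        transform=transforms.Compose([transforms.Resize(256),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor()]))
    loader = DataLoader(dataset, batch_size=32)
    clean = accuracy(model, loader)
    corrupted = accuracy(model, loader, corrupt=add_gaussian_noise)
    print(f"clean={clean:.3f}  corrupted={corrupted:.3f}  gap={clean - corrupted:.3f}")
```

In this framing, the corrupted images play the role of low-probability samples from the natural imaging distribution, and the accuracy difference is one simple estimate of the robustness gap.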
Related papers
- Towards Evaluating the Robustness of Visual State Space Models [63.14954591606638]
Vision State Space Models (VSSMs) have demonstrated remarkable performance in visual perception tasks.
However, their robustness under natural and adversarial perturbations remains a critical concern.
We present a comprehensive evaluation of VSSMs' robustness under various perturbation scenarios.
arXiv Detail & Related papers (2024-06-13T17:59:44Z) - A Survey of Neural Network Robustness Assessment in Image Recognition [4.581878177334397]
In recent years, there has been significant attention given to the robustness assessment of neural networks.
Deep learning's robustness problem is particularly significant, highlighted by the discovery of adversarial attacks on image classification models.
In this survey, we present a detailed examination of both adversarial robustness (AR) and corruption robustness (CR) in neural network assessment.
arXiv Detail & Related papers (2024-04-12T07:19:16Z) - Understanding Robustness of Visual State Space Models for Image Classification [19.629800707546543]
Visual State Space Model (VMamba) has emerged as a promising architecture, exhibiting remarkable performance in various computer vision tasks.
We investigate its robustness to adversarial attacks, employing both whole-image and patch-specific adversarial attacks.
We explore VMamba's gradients and back-propagation during white-box attacks, uncovering unique vulnerabilities and defensive capabilities.
arXiv Detail & Related papers (2024-03-16T14:23:17Z) - Interpretable Computer Vision Models through Adversarial Training: Unveiling the Robustness-Interpretability Connection [0.0]
Interpretability is as essential as robustness when we deploy the models to the real world.
Standard models, compared to robust ones, are more susceptible to adversarial attacks, and their learned representations are less meaningful to humans.
arXiv Detail & Related papers (2023-07-04T13:51:55Z) - A Survey on the Robustness of Computer Vision Models against Common Corruptions [3.6486148851646063]
Computer vision models are susceptible to changes in input images caused by sensor errors or extreme imaging environments.
These corruptions can significantly hinder the reliability of these models when deployed in real-world scenarios.
We present a comprehensive overview of methods that improve the robustness of computer vision models against common corruptions.
arXiv Detail & Related papers (2023-05-10T10:19:31Z) - A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking [54.89987482509155]
The robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark called ARES-Bench on the image classification task.
By designing the training settings accordingly, we achieve new state-of-the-art adversarial robustness.
arXiv Detail & Related papers (2023-02-28T04:26:20Z) - Robustness and invariance properties of image classifiers [8.970032486260695]
Deep neural networks have achieved impressive results in many image classification tasks.
Deep networks are not robust to a large variety of semantic-preserving image modifications.
The poor robustness of image classifiers to small data distribution shifts raises serious concerns regarding their trustworthiness.
arXiv Detail & Related papers (2022-08-30T11:00:59Z) - Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge limiting the widespread adoption of deep learning has been its fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results using the case of image classification demonstrate the effectiveness and efficacy of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z) - Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z) - Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z) - Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness [97.67477497115163]
We use mode connectivity to study the adversarial robustness of deep neural networks.
Our experiments cover various types of adversarial attacks applied to different network architectures and datasets.
Our results suggest that mode connectivity offers a holistic tool and practical means for evaluating and improving adversarial robustness.
arXiv Detail & Related papers (2020-04-30T19:12:50Z)
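
The main abstract above identifies data augmentation as one family of tactics for closing the robustness gap, and several of the related works study robustness to common corruptions. As a generic, hedged illustration (not a method from any of the papers listed), the sketch below mixes randomly corrupted copies of training batches into an otherwise standard optimization step; in the causal framing of the main paper, such a practice can be read as a soft intervention on the distribution of imaging conditions seen during training. The function names and `p_corrupt` parameter are hypothetical.

```python
# Generic corruption-augmentation sketch (illustrative, not a surveyed method):
# occasionally train on corrupted copies of the batch to encourage robustness.
import random
import torch
import torch.nn.functional as F

def random_corruption(x):
    """Apply one of a few simple, naturally plausible corruptions to images in [0, 1]."""
    choice = random.choice(["noise", "blur", "none"])
    if choice == "noise":
        return torch.clamp(x + 0.05 * torch.randn_like(x), 0.0, 1.0)
    if choice == "blur":
        channels = x.size(1)
        kernel = torch.full((channels, 1, 3, 3), 1.0 / 9.0, device=x.device)
        return F.conv2d(x, kernel, padding=1, groups=channels)  # 3x3 mean blur per channel
    return x

def train_step(model, optimizer, images, labels, p_corrupt=0.5):
    """One optimization step; p_corrupt controls how often a corrupted batch is used."""
    if random.random() < p_corrupt:
        images = random_corruption(images)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

A typical usage would call `train_step` once per batch inside an ordinary training loop, sweeping `p_corrupt` to trade clean accuracy against corruption robustness.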