Non-adversarial Robustness of Deep Learning Methods for Computer Vision
- URL: http://arxiv.org/abs/2305.14986v1
- Date: Wed, 24 May 2023 10:21:31 GMT
- Title: Non-adversarial Robustness of Deep Learning Methods for Computer Vision
- Authors: Gorana Gojić, Vladimir Vincan, Ognjen Kundačina, Dragiša Mišković and Dinu Dragan
- Abstract summary: Non-adversarial robustness, also known as natural robustness, is the ability of deep learning models to maintain performance under distribution shifts caused by natural variations in data.
We present a brief overview of the most recent techniques for improving the robustness of computer vision methods.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Non-adversarial robustness, also known as natural robustness, is a property
of deep learning models that enables them to maintain performance even when
faced with distribution shifts caused by natural variations in data. However,
achieving this property is challenging because it is difficult to predict in
advance the types of distribution shifts that may occur. To address this
challenge, researchers have proposed various approaches, some of which
anticipate potential distribution shifts, while others utilize knowledge about
the shifts that have already occurred to enhance model generalizability. In
this paper, we present a brief overview of the most recent techniques for
improving the robustness of computer vision methods, as well as a summary of
commonly used robustness benchmark datasets for evaluating the model's
performance under data distribution shifts. Finally, we examine the strengths
and limitations of the approaches reviewed and identify general trends in deep
learning robustness improvement for computer vision.
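To make this evaluation setting concrete, below is a minimal sketch of how such robustness benchmarks are typically used: compare a model's accuracy on clean inputs against its accuracy under a natural shift. The ResNet-50 backbone, the Gaussian-blur corruption, and the "val/" folder layout are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: measure non-adversarial robustness as the accuracy gap
# between clean and naturally shifted inputs. Backbone, corruption, and
# dataset path are illustrative assumptions, not details from the paper.
import torch
from torchvision import datasets, models, transforms as T

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model = model.to(device).eval()

normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
clean_tf = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(), normalize])
# Gaussian blur stands in for one natural shift (weather, sensor noise, ...).
shift_tf = T.Compose([T.Resize(256), T.CenterCrop(224),
                      T.GaussianBlur(kernel_size=9, sigma=3.0),
                      T.ToTensor(), normalize])

def accuracy(tf, root="val/"):  # "val/" is a placeholder ImageFolder layout
    loader = torch.utils.data.DataLoader(
        datasets.ImageFolder(root, transform=tf), batch_size=64)
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = model(x.to(device)).argmax(dim=1).cpu()
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total

# The robustness gap: how much performance degrades under the shift.
print(f"accuracy drop under shift: {accuracy(clean_tf) - accuracy(shift_tf):.3f}")
```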
Related papers
- A Survey of Deep Graph Learning under Distribution Shifts: from Graph Out-of-Distribution Generalization to Adaptation [59.14165404728197]
We provide an up-to-date and forward-looking review of deep graph learning under distribution shifts.
Specifically, we cover three primary scenarios: graph OOD generalization, training-time graph OOD adaptation, and test-time graph OOD adaptation.
To provide a better understanding of the literature, we systematically categorize the existing models based on our proposed taxonomy.
arXiv Detail & Related papers (2024-10-25T02:39:56Z)
- Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
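The paper's exact formulation is not reproduced here, but a toy sketch can illustrate the Shapley half of the idea: attribute a performance change under a feature shift to individual features by averaging marginal contributions over feature coalitions. The row-aligned column swap below is a strong simplifying assumption (in the actual method, Optimal Transport supplies the coupling between source and target samples), and model_score is a hypothetical metric callback.

```python
# Toy sketch of Shapley-style attribution of a performance change to
# individual shifted features. Assumes source/target rows are paired, a
# simplification of the paper's Optimal Transport coupling. Exponential in
# the number of features, so suitable only for small d.
import itertools, math
import numpy as np

def shapley_shift_attribution(model_score, X_src, X_tgt, y):
    """model_score(X, y) -> float is any performance metric (hypothetical).
    Swapping column k from X_src to X_tgt 'turns on' the shift in feature k;
    phi[j] is feature j's average marginal effect on the score."""
    d = X_src.shape[1]
    phi = np.zeros(d)
    for j in range(d):
        others = [k for k in range(d) if k != j]
        for r in range(d):
            for S in itertools.combinations(others, r):
                w = (math.factorial(len(S)) * math.factorial(d - len(S) - 1)
                     / math.factorial(d))
                X_base = X_src.copy()
                for k in S:
                    X_base[:, k] = X_tgt[:, k]
                X_with = X_base.copy()
                X_with[:, j] = X_tgt[:, j]
                phi[j] += w * (model_score(X_with, y) - model_score(X_base, y))
    return phi  # sums to score(all features shifted) - score(none shifted)
```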
arXiv Detail & Related papers (2024-08-24T18:28:19Z)
- Benchmarking Low-Shot Robustness to Natural Distribution Shifts [18.84297269860671]
We study robustness to various natural distribution shifts in different low-shot regimes.
We find that no single model is consistently more robust than the others.
Existing interventions can fail to improve robustness on some datasets even if they do so in the full-shot regime.
arXiv Detail & Related papers (2023-04-21T22:09:42Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data can arise from biases in data acquisition rather than from the underlying task.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
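As a rough, hedged illustration of hybrid discriminative-generative training (not the paper's nuisance-extended information bottleneck objective, which adds mutual-information constraints), a shared encoder can feed both a classifier head and a reconstruction decoder; all dimensions and the loss weighting below are placeholder assumptions.

```python
# Hedged sketch of a hybrid discriminative-generative objective: a shared
# encoder feeds a classifier (discriminative) and a decoder (generative).
# The paper's mutual-information constraints are NOT modeled here; all
# dimensions and the loss weight lam are placeholder assumptions.
import torch
import torch.nn as nn

class HybridAE(nn.Module):
    def __init__(self, in_dim=784, z_dim=32, n_classes=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim))
        self.cls = nn.Linear(z_dim, n_classes)

    def forward(self, x):
        z = self.enc(x)
        return self.cls(z), self.dec(z)

model = HybridAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

def train_step(x, y, lam=0.5):
    logits, recon = model(x)
    # joint objective: classification loss + weighted reconstruction loss
    loss = ce(logits, y) + lam * mse(recon, x)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```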
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- A Comprehensive Review of Trends, Applications and Challenges In Out-of-Distribution Detection [0.76146285961466]
A field of study has emerged that focuses on detecting out-of-distribution data and enabling more comprehensive generalization.
As many deep learning-based models have achieved near-perfect results on benchmark datasets, the need to evaluate their reliability and trustworthiness is felt more strongly than ever.
This paper presents a survey that reviews more than 70 papers in the field, presents challenges and directions for future work, and offers a unifying look at various types of data shift and solutions for better generalization.
arXiv Detail & Related papers (2022-09-26T18:13:14Z)
- Causal Reasoning Meets Visual Representation Learning: A Prospective Study [117.08431221482638]
A lack of interpretability, robustness, and out-of-distribution generalization is becoming a key challenge for existing visual models.
Inspired by the strong inference ability of human-level agents, researchers have devoted great effort in recent years to developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussion, and highlight the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z)
- Robustness in Deep Learning for Computer Vision: Mind the gap? [13.576376492050185]
We identify, analyze, and summarize current definitions and progress towards non-adversarial robustness in deep learning for computer vision.
We find that this area of research has received disproportionately little attention relative to adversarial machine learning.
arXiv Detail & Related papers (2021-12-01T16:42:38Z)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
- The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization [64.61630743818024]
We introduce four new real-world distribution shift datasets consisting of changes in image style, image blurriness, geographic location, camera operation, and more.
We find that using larger models and artificial data augmentations can improve robustness on real-world distribution shifts, contrary to claims in prior work.
We also introduce a new data augmentation method which advances the state-of-the-art and outperforms models pretrained with 1000 times more labeled data.
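The paper's own augmentation method is not reproduced here; as a hedged sketch of the augmentation-mixing style evaluated in this line of work, one can blend several randomly composed augmentation chains with the original image. The operation set, chain width and depth, and blend coefficients below are illustrative choices.

```python
# Hedged sketch of augmentation mixing for robustness (in the spirit of
# AugMix); the paper's own new augmentation method is not reproduced here.
import numpy as np
from PIL import Image, ImageEnhance, ImageOps

OPS = [ImageOps.autocontrast, ImageOps.equalize,
       lambda im: ImageEnhance.Brightness(im).enhance(0.6),
       lambda im: im.rotate(15)]

def mix_augment(image, width=3, depth=2, alpha=1.0):
    """Blend `width` randomly composed augmentation chains with the original."""
    ws = np.random.dirichlet([alpha] * width)   # weights across chains
    m = np.random.beta(alpha, alpha)            # original-vs-augmented blend
    base = np.asarray(image, dtype=np.float32)
    mixed = np.zeros_like(base)
    for w in ws:
        aug = image.copy()
        for _ in range(depth):
            aug = np.random.choice(OPS)(aug)    # random op per chain step
        mixed += w * np.asarray(aug, dtype=np.float32)
    out = (1 - m) * base + m * mixed
    return Image.fromarray(out.astype(np.uint8))
```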
arXiv Detail & Related papers (2020-06-29T17:59:10Z)
- Adversarial-based neural networks for affect estimations in the wild [3.3335236123901995]
In this work, we explore the use of latent features through our proposed adversarial-based networks for recognition in the wild.
Specifically, our models operate by aggregating several modalities in the discriminator, which is further conditioned on the latent features extracted by the generator.
Our experiments on the recently released SEWA dataset show progressive improvements in our results.
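A minimal sketch of the setup as described, assuming simple concatenation-based fusion and placeholder feature dimensions; the actual network designs are not reproduced here.

```python
# Minimal sketch, assuming concatenation-based fusion and placeholder
# dimensions: the generator extracts latent features, and the discriminator
# aggregates multiple modalities conditioned on those latents.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, in_dim=128, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, z_dim))

    def forward(self, x):
        return self.net(x)  # extracted latent features

class MultimodalDiscriminator(nn.Module):
    def __init__(self, vis_dim=128, aud_dim=64, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(vis_dim + aud_dim + z_dim, 128),
                                 nn.ReLU(), nn.Linear(128, 1))

    def forward(self, visual, audio, z):
        # aggregate modalities and condition on the generator's latents
        return self.net(torch.cat([visual, audio, z], dim=-1))

gen, disc = Generator(), MultimodalDiscriminator()
visual, audio = torch.randn(8, 128), torch.randn(8, 64)
score = disc(visual, audio, gen(visual))  # discriminator output per sample
```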
arXiv Detail & Related papers (2020-02-03T16:52:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.