Benchmarking Low-Shot Robustness to Natural Distribution Shifts
- URL: http://arxiv.org/abs/2304.11263v2
- Date: Sat, 23 Sep 2023 20:46:19 GMT
- Title: Benchmarking Low-Shot Robustness to Natural Distribution Shifts
- Authors: Aaditya Singh, Kartik Sarangmath, Prithvijit Chattopadhyay, Judy
Hoffman
- Abstract summary: We study robustness to various natural distribution shifts in different low-shot regimes.
No single choice of model is consistently more robust than the others.
Existing interventions can fail to improve robustness on some datasets even if they do so in the full-shot regime.
- Score: 18.84297269860671
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robustness to natural distribution shifts has seen remarkable progress thanks
to recent pre-training strategies combined with better fine-tuning methods.
However, such fine-tuning assumes access to large amounts of labelled data, and
the extent to which the observations hold when the amount of training data is
not as high remains unknown. We address this gap by performing the first
in-depth study of robustness to various natural distribution shifts in
different low-shot regimes: spanning datasets, architectures, pre-trained
initializations, and state-of-the-art robustness interventions. Most
importantly, we find that there is no single model of choice that is often more
robust than others, and existing interventions can fail to improve robustness
on some datasets even if they do so in the full-shot regime. We hope that our
work will motivate the community to focus on this problem of practical
importance.
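The low-shot evaluation protocol described above amounts to: subsample k labeled examples per class, fine-tune on that subset, and compare in-distribution accuracy against accuracy under a natural shift. A minimal sketch in Python; the function names and the gap metric are illustrative assumptions, not taken from the paper:

```python
import random
from collections import defaultdict

def k_shot_subset(examples, k, seed=0):
    """Sample k labeled examples per class to form a low-shot training set.

    `examples` is an iterable of (input, label) pairs.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for x, y in examples:
        by_class[y].append((x, y))
    subset = []
    for y in sorted(by_class):
        items = by_class[y]
        rng.shuffle(items)
        subset.extend(items[:k])
    return subset

def robustness_gap(id_accuracy, ood_accuracy):
    """Accuracy drop under a natural distribution shift (smaller is more robust)."""
    return id_accuracy - ood_accuracy
```

A model fine-tuned on `k_shot_subset(train_data, k)` would then be scored on both the in-distribution test set and a shifted test set, and `robustness_gap` compared across architectures, initializations, and interventions.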
Related papers
- Ask Your Distribution Shift if Pre-Training is Right for You [74.18516460467019]
In practice, fine-tuning a pre-trained model improves robustness significantly in some cases but not at all in others.
We focus on two possible failure modes of models under distribution shift: poor extrapolation and biases in the training data.
Our study suggests that, as a rule of thumb, pre-training can help mitigate poor extrapolation but not dataset biases.
arXiv Detail & Related papers (2024-02-29T23:46:28Z) - Towards Robust Aspect-based Sentiment Analysis through
Non-counterfactual Augmentations [40.71705332298682]
We present an alternative approach that relies on non-counterfactual data augmentation.
Our approach further establishes a new state-of-the-art on the ABSA robustness benchmark and transfers well across domains.
arXiv Detail & Related papers (2023-06-24T13:57:32Z) - Non-adversarial Robustness of Deep Learning Methods for Computer Vision [0.0]
Non-adversarial robustness, also known as natural robustness, is the ability of deep learning models to maintain performance under naturally occurring perturbations and distribution shifts.
We present a brief overview of the most recent techniques for improving the robustness of computer vision methods.
arXiv Detail & Related papers (2023-05-24T10:21:31Z) - Enhancing Multiple Reliability Measures via Nuisance-extended
Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data may instead stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z) - TWINS: A Fine-Tuning Framework for Improved Transferability of
Adversarial Robustness and Generalization [89.54947228958494]
This paper focuses on the fine-tuning of an adversarially pre-trained model in various classification tasks.
We propose a novel statistics-based approach, the Two-WIng NormliSation (TWINS) fine-tuning framework.
TWINS is shown to be effective on a wide range of image classification datasets in terms of both generalization and robustness.
arXiv Detail & Related papers (2023-03-20T14:12:55Z) - On the contribution of pre-trained models to accuracy and utility in
modeling distributed energy resources [0.0]
We evaluate the improvement in predictive accuracy due to pre-trained models, both with and without fine-tuning.
We consider the question of fairness: do pre-trained models create equal improvements for heterogeneous agents, and how does this translate to downstream utility?
arXiv Detail & Related papers (2023-02-22T22:29:40Z) - Adversarial Robustness under Long-Tailed Distribution [93.50792075460336]
Adversarial robustness has attracted extensive studies recently by revealing the vulnerability and intrinsic characteristics of deep networks.
In this work we investigate the adversarial vulnerability as well as defense under long-tailed distributions.
We propose a clean yet effective framework, RoBal, which consists of two dedicated modules: a scale-invariant classifier and data re-balancing.
arXiv Detail & Related papers (2021-04-06T17:53:08Z) - Accurate and Robust Feature Importance Estimation under Distribution
Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z) - The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution
Generalization [64.61630743818024]
We introduce four new real-world distribution shift datasets consisting of changes in image style, image blurriness, geographic location, camera operation, and more.
We find that using larger models and artificial data augmentations can improve robustness on real-world distribution shifts, contrary to claims in prior work.
We also introduce a new data augmentation method which advances the state-of-the-art and outperforms models pretrained with 1000 times more labeled data.
arXiv Detail & Related papers (2020-06-29T17:59:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.