Models Out of Line: A Fourier Lens on Distribution Shift Robustness
- URL: http://arxiv.org/abs/2207.04075v1
- Date: Fri, 8 Jul 2022 18:05:58 GMT
- Title: Models Out of Line: A Fourier Lens on Distribution Shift Robustness
- Authors: Sara Fridovich-Keil, Brian R. Bartoldson, James Diffenderfer, Bhavya
Kailkhura, Peer-Timo Bremer
- Abstract summary: Improving the accuracy of deep neural networks (DNNs) on out-of-distribution (OOD) data is critical to the acceptance of deep learning (DL) in real-world applications.
Recently, some promising approaches have been developed to improve OOD robustness.
There is still no clear understanding of the conditions on OOD data and model properties that are required to observe effective robustness.
- Score: 29.12208822285158
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Improving the accuracy of deep neural networks (DNNs) on out-of-distribution
(OOD) data is critical to the acceptance of deep learning (DL) in real-world
applications. It has been observed that accuracies on in-distribution (ID)
versus OOD data follow a linear trend and models that outperform this baseline
are exceptionally rare (and referred to as "effectively robust"). Recently,
some promising approaches have been developed to improve OOD robustness: model
pruning, data augmentation, and ensembling or zero-shot evaluating large
pretrained models. However, there is still no clear understanding of the
conditions on OOD data and model properties that are required to observe
effective robustness. We approach this issue by conducting a comprehensive
empirical study of diverse approaches that are known to impact OOD robustness
on a broad range of natural and synthetic distribution shifts of CIFAR-10 and
ImageNet. In particular, we view the "effective robustness puzzle" through a
Fourier lens and ask how spectral properties of both models and OOD data
influence the corresponding effective robustness. We find this Fourier lens
offers some insight into why certain robust models, particularly those from the
CLIP family, achieve OOD robustness. However, our analysis also makes clear
that no known metric is consistently the best explanation (or even a strong
explanation) of OOD robustness. Thus, to aid future research into the OOD
puzzle, we address the gap in publicly available models with effective
robustness by introducing a set of pretrained models--RobustNets--with varying
levels of OOD robustness.
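To make the two central quantities concrete, below is a minimal Python/NumPy sketch (illustrative only, not the authors' released code): effective robustness is a model's deviation from the linear ID-vs-OOD accuracy trend fit over baseline models (the fit is done in probit-transformed accuracy space, the usual convention in this literature), and the Fourier lens characterizes a shift by the spectral energy of the difference between clean and corrupted images. All function names and details are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

def effective_robustness(id_acc, ood_acc, baseline_id, baseline_ood):
    """Deviation of a model's OOD accuracy from the linear ID-vs-OOD trend.

    Accuracies are fractions in [0, 1]. The linear fit is computed in
    probit-transformed accuracy space; a positive return value means the
    model sits above the baseline trend line ("effectively robust").
    """
    probit = lambda a: norm.ppf(np.clip(a, 1e-6, 1 - 1e-6))
    slope, intercept = np.polyfit(probit(np.asarray(baseline_id)),
                                  probit(np.asarray(baseline_ood)), deg=1)
    predicted_ood = norm.cdf(slope * probit(id_acc) + intercept)
    return ood_acc - predicted_ood

def shift_spectrum(clean, corrupted):
    """Radially averaged Fourier energy of a shift (corrupted - clean).

    Inputs are HxW grayscale arrays; the output indicates whether the
    distribution shift concentrates in low or high spatial frequencies.
    """
    delta = np.fft.fftshift(np.fft.fft2(corrupted - clean))
    energy = np.abs(delta) ** 2
    h, w = energy.shape
    yy, xx = np.mgrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2).astype(int).ravel()
    return np.bincount(radius, weights=energy.ravel()) / np.bincount(radius)
```

For example, a CLIP-style model whose measured OOD accuracy exceeds the value predicted from its ID accuracy by the fitted line would receive a positive effective_robustness score.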
Related papers
- Can OOD Object Detectors Learn from Foundation Models? [56.03404530594071] (arXiv 2024-09-08)
Out-of-distribution (OOD) object detection is a challenging task due to the absence of open-set OOD data.
Inspired by recent advancements in text-to-image generative models, we study the potential of generative models trained on large-scale open-set data to synthesize OOD samples.
We introduce SyncOOD, a simple data curation method that capitalizes on the capabilities of large foundation models.
- Out-of-Distribution Learning with Human Feedback [26.398598663165636] (arXiv 2024-08-14)
This paper presents a novel framework for OOD learning with human feedback.
Our framework capitalizes on the freely available unlabeled data in the wild.
By exploiting human feedback, we enhance the robustness and reliability of machine learning models.
- Clarifying Myths About the Relationship Between Shape Bias, Accuracy, and Robustness [18.55761892159021] (arXiv 2024-06-07)
Deep learning models can perform well when evaluated on images from the same distribution as the training set.
However, applying small blurs to a model's input images or feeding the model out-of-distribution (OOD) data can significantly reduce its accuracy.
Data augmentation is one of the most widely practiced methods for improving model robustness to OOD data.
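As a generic illustration of the two points above (common practice rather than the cited paper's protocol), the sketch below uses PyTorch/torchvision to mix a mild Gaussian blur into the training augmentations and to measure how much accuracy drops when test images are blurred; all parameter choices are illustrative.

```python
import torch
from torchvision import transforms

# Hypothetical CIFAR-style training pipeline with blur added as augmentation.
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply(
        [transforms.GaussianBlur(kernel_size=3, sigma=(0.1, 1.0))], p=0.3),
    transforms.ToTensor(),
])

@torch.no_grad()
def accuracy_under_blur(model, loader, sigma=1.0, device="cpu"):
    """Accuracy on a loader whose image batches are blurred at test time."""
    blur = transforms.GaussianBlur(kernel_size=5, sigma=sigma)
    model.eval()
    correct = total = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(blur(images)).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total
```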
- Mitigating Overconfidence in Out-of-Distribution Detection by Capturing Extreme Activations [1.8531577178922987] (arXiv 2024-05-21)
"Overconfidence" is an intrinsic property of certain neural network architectures, leading to poor OOD detection.
We measure extreme activation values in the penultimate layer of neural networks and then leverage this proxy of overconfidence to improve on several OOD detection baselines.
Compared to the baselines, our method often yields substantial improvements, with double-digit gains in OOD detection performance.
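The scoring rule is not spelled out in this summary, so the following hedged sketch only captures the general idea: read out the penultimate-layer activations with a forward hook, take each sample's extreme (maximum absolute) activation as an overconfidence proxy, and use it to adjust an energy-style OOD score. The adjustment in adjusted_energy_score is an assumption for illustration, not the paper's method.

```python
import torch

def penultimate_extremes(model, x, penultimate_module):
    """Return logits and the max absolute penultimate activation per sample.

    `penultimate_module` is the submodule feeding the final linear head
    (model-specific; the caller must locate it).
    """
    feats = {}
    handle = penultimate_module.register_forward_hook(
        lambda mod, inp, out: feats.update(z=out.detach()))
    with torch.no_grad():
        logits = model(x)
    handle.remove()
    z = feats["z"].flatten(start_dim=1)      # (batch, num_features)
    extremes = z.abs().max(dim=1).values     # per-sample extreme activation
    return logits, extremes

def adjusted_energy_score(logits, extremes, alpha=1.0):
    """Energy score (higher = more in-distribution), penalized for samples
    with extreme activations -- an illustrative use of the proxy only."""
    return torch.logsumexp(logits, dim=1) - alpha * extremes
```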
- Out-of-distribution Detection with Implicit Outlier Transformation [72.73711947366377] (arXiv 2023-03-09)
Outlier exposure (OE) is powerful in out-of-distribution (OOD) detection.
We propose a novel OE-based approach that makes the model perform well for unseen OOD situations.
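For context on the outlier-exposure baseline this entry builds on, here is a minimal sketch of the standard OE objective (cross-entropy on in-distribution data plus a term pushing the softmax on auxiliary outliers toward uniform); the cited paper's specific outlier transformation is not reproduced here.

```python
import torch
import torch.nn.functional as F

def outlier_exposure_loss(logits_in, labels_in, logits_out, lam=0.5):
    """Classic outlier-exposure objective: supervised loss on in-distribution
    batches plus a uniformity penalty on auxiliary outlier batches (equal, up
    to a constant, to the KL divergence from the uniform distribution)."""
    ce = F.cross_entropy(logits_in, labels_in)
    uniformity = -F.log_softmax(logits_out, dim=1).mean()
    return ce + lam * uniformity
```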
- Pseudo-OOD training for robust language models [78.15712542481859] (arXiv 2022-10-17)
OOD detection is a key component of a reliable machine-learning model for any industry-scale application.
We propose POORE (POsthoc pseudo-Ood REgularization), which generates pseudo-OOD samples using in-distribution (IND) data.
We extensively evaluate our framework on three real-world dialogue systems, achieving new state-of-the-art in OOD detection.
- Harnessing Out-Of-Distribution Examples via Augmenting Content and Style [93.21258201360484] (arXiv 2022-07-07)
Machine learning models are vulnerable to Out-Of-Distribution (OOD) examples.
This paper proposes a HOOD method that can leverage the content and style from each image instance to identify benign and malign OOD data.
Thanks to the proposed novel disentanglement and data augmentation techniques, HOOD can effectively deal with OOD examples in unknown and open environments.
- Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613] (arXiv 2021-06-08)
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper, we propose a novel method that, from first principles, combines a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way, we achieve the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution data, without loss in prediction accuracy, and close to state-of-the-art OOD detection performance for non-manipulated OOD data.
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997] (arXiv 2020-03-21)
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
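A rough sketch of one training step in the spirit described above: craft adversarial perturbations (a few PGD steps) for both the inlier and the outlier batch, then optimize a cross-entropy-plus-uniformity objective on the perturbed inputs. The attack budget, loss weighting, and function names are illustrative assumptions, not ALOE's exact recipe.

```python
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, loss_fn, eps=8 / 255, alpha=2 / 255, steps=5):
    """Gradient-ascent (PGD) perturbation of x that maximizes loss_fn(model(x))."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(x_adv)), x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0.0, 1.0)
    return x_adv

def robust_oe_step(model, optimizer, x_in, y_in, x_out, lam=0.5):
    """One step of outlier-exposure training on adversarially perturbed batches."""
    adv_in = pgd_perturb(model, x_in,
                         lambda logits: F.cross_entropy(logits, y_in))
    # Perturb outliers toward confident (non-uniform) predictions.
    adv_out = pgd_perturb(model, x_out,
                          lambda logits: -F.log_softmax(logits, dim=1).mean())

    optimizer.zero_grad()
    loss = (F.cross_entropy(model(adv_in), y_in)
            - lam * F.log_softmax(model(adv_out), dim=1).mean())
    loss.backward()
    optimizer.step()
    return loss.item()
```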