Robustness to Spurious Correlations Improves Semantic
Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2302.04132v1
- Date: Wed, 8 Feb 2023 15:28:33 GMT
- Title: Robustness to Spurious Correlations Improves Semantic
Out-of-Distribution Detection
- Authors: Lily H. Zhang and Rajesh Ranganath
- Abstract summary: Methods which utilize the outputs or feature representations of predictive models have emerged as promising approaches for out-of-distribution (OOD) detection of image inputs.
We provide a possible explanation for shared-nuisance OOD (SN-OOD) detection failures and propose nuisance-aware OOD detection to address them.
- Score: 24.821151013905865
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Methods which utilize the outputs or feature representations of predictive
models have emerged as promising approaches for out-of-distribution (OOD)
detection of image inputs. However, these methods struggle to detect OOD inputs
that share nuisance values (e.g. background) with in-distribution inputs. The
detection of shared-nuisance out-of-distribution (SN-OOD) inputs is
particularly relevant in real-world applications, as anomalies and
in-distribution inputs tend to be captured in the same settings during
deployment. In this work, we provide a possible explanation for SN-OOD
detection failures and propose nuisance-aware OOD detection to address them.
Nuisance-aware OOD detection substitutes a classifier trained via empirical
risk minimization and cross-entropy loss with one that (1) is trained under a
distribution where the nuisance-label relationship is broken and (2) yields
representations that are independent of the nuisance under this distribution,
both marginally and conditioned on the label. We can train a classifier to
achieve these objectives using Nuisance-Randomized Distillation (NuRD), an
algorithm developed for OOD generalization under spurious correlations. Output-
and feature-based nuisance-aware OOD detection perform substantially better
than their original counterparts, succeeding even when detection based on
domain generalization algorithms fails to improve performance.
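To make the recipe concrete, the following is a minimal sketch of nuisance-aware training and scoring, assuming an observed, discrete nuisance. The reweighting scheme, the HSIC dependence penalty, and the `model.features`/`model.head` split are illustrative stand-ins, not the exact NuRD objective.

```python
# Minimal, self-contained sketch of the nuisance-aware recipe described above;
# NOT the authors' reference implementation. It assumes the nuisance z is
# observed and discrete, breaks the nuisance-label dependence by importance
# reweighting, and penalizes a crude proxy for dependence between the learned
# representation and the nuisance.
import torch
import torch.nn.functional as F

def reweight_to_break_nuisance(y, z, n_classes, n_nuisances):
    # w(y, z) proportional to p(y) p(z) / p(y, z): under the reweighted
    # distribution, the nuisance carries no information about the label.
    # (In practice the counts would come from the full training set,
    # not a single batch.)
    joint = torch.zeros(n_classes, n_nuisances)
    for yi, zi in zip(y, z):
        joint[yi, zi] += 1.0
    joint = joint / joint.sum()
    p_y = joint.sum(1, keepdim=True)
    p_z = joint.sum(0, keepdim=True)
    w = (p_y * p_z) / joint.clamp_min(1e-8)
    return w[y, z]

def hsic_penalty(r, z_onehot):
    # Linear-kernel HSIC between representation r and one-hot nuisance z:
    # a simple differentiable proxy for statistical dependence.
    n = r.size(0)
    h = torch.eye(n) - 1.0 / n
    k, l = r @ r.t(), z_onehot @ z_onehot.t()
    return torch.trace(h @ k @ h @ l) / (n - 1) ** 2

def nuisance_aware_loss(model, x, y, z, n_classes, n_nuisances, lam=1.0):
    r = model.features(x)            # representation (model split is assumed)
    logits = model.head(r)
    w = reweight_to_break_nuisance(y, z, n_classes, n_nuisances)
    ce = (w * F.cross_entropy(logits, y, reduction="none")).mean()
    # Penalize dependence both marginally and conditioned on the label.
    z1h = F.one_hot(z, n_nuisances).float()
    pen = hsic_penalty(r, z1h)
    for c in range(n_classes):
        m = y == c
        if m.sum() > 1:
            pen = pen + hsic_penalty(r[m], z1h[m])
    return ce + lam * pen

def ood_score(model, x):
    # Any output- or feature-based score can be plugged in on top of the
    # nuisance-aware classifier; the energy score is shown (higher = more OOD).
    return -torch.logsumexp(model.head(model.features(x)), dim=-1)
```

With such a classifier in place, the usual output- or feature-based detectors are applied unchanged; only the training distribution and the learned representation differ.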
Related papers
- What If the Input is Expanded in OOD Detection? [77.37433624869857]
Out-of-distribution (OOD) detection aims to identify OOD inputs from unknown classes.
Various scoring functions have been proposed to distinguish them from in-distribution (ID) data.
We introduce a novel perspective: employing different common corruptions in the input space.
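A minimal sketch of the corruption-expansion idea, assuming MSP as the base score and mean aggregation over two standard corruptions (both assumptions for illustration, not the paper's exact procedure):

```python
# Illustrative sketch: expand each input with a few common corruptions and
# aggregate a base OOD score over the expanded set.
import torch
import torch.nn.functional as F

def gaussian_noise(x, sigma=0.05):
    return (x + sigma * torch.randn_like(x)).clamp(0, 1)

def blur(x, k=3):
    # Depthwise box blur over an (N, C, H, W) batch.
    w = torch.ones(x.size(1), 1, k, k) / (k * k)
    return F.conv2d(x, w, padding=k // 2, groups=x.size(1))

def msp_score(model, x):
    # Maximum softmax probability; higher = more in-distribution.
    return F.softmax(model(x), dim=-1).max(dim=-1).values

def expanded_score(model, x, corruptions=(gaussian_noise, blur)):
    scores = [msp_score(model, x)]
    scores += [msp_score(model, c(x)) for c in corruptions]
    return torch.stack(scores).mean(dim=0)
```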
arXiv Detail & Related papers (2024-10-24T06:47:28Z)
- The Best of Both Worlds: On the Dilemma of Out-of-distribution Detection [75.65876949930258]
Out-of-distribution (OOD) detection is essential for model trustworthiness.
We show that the superior OOD detection performance of state-of-the-art methods is achieved by secretly sacrificing the OOD generalization ability.
arXiv Detail & Related papers (2024-10-12T07:02:04Z)
- Rethinking Out-of-Distribution Detection on Imbalanced Data Distribution [38.844580833635725]
We present a training-time regularization technique to mitigate the bias and boost imbalanced OOD detectors across architecture designs.
Our method translates into consistent improvements on the representative CIFAR10-LT, CIFAR100-LT, and ImageNet-LT benchmarks.
arXiv Detail & Related papers (2024-07-23T12:28:59Z)
- LINe: Out-of-Distribution Detection by Leveraging Important Neurons [15.797257361788812]
We introduce a new aspect for analyzing the difference in model outputs between in-distribution data and OOD data.
We propose a novel method, Leveraging Important Neurons (LINe), for post-hoc out-of-distribution detection.
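A hedged sketch of the neuron-selection idea: LINe's importance measure is Shapley-value based, so the mean-activation proxy, the `keep_frac` parameter, and the `model.features`/`model.head` split below are simplifications for illustration.

```python
# Hedged sketch of post-hoc detection that keeps only "important" units
# of the penultimate layer before scoring.
import torch

@torch.no_grad()
def unit_importance(model, id_loader, keep_frac=0.1):
    # Average penultimate activation per unit over ID data as a simple
    # importance proxy; keep the top fraction as a binary mask.
    total, n = None, 0
    for x, _ in id_loader:
        a = model.features(x)
        total = a.sum(0) if total is None else total + a.sum(0)
        n += a.size(0)
    mean = total / n
    k = max(1, int(keep_frac * mean.numel()))
    mask = torch.zeros_like(mean)
    mask[mean.topk(k).indices] = 1.0
    return mask

@torch.no_grad()
def line_style_energy(model, x, mask, clip=1.0):
    # Clip activations and zero out unimportant units before computing
    # the energy score; higher energy = more likely OOD.
    a = model.features(x).clamp(max=clip) * mask
    return -torch.logsumexp(model.head(a), dim=-1)
```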
arXiv Detail & Related papers (2023-03-24T13:49:05Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and can serve as a simple yet strong baseline in this under-developed area.
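A minimal sketch of an energy score with graph propagation in this spirit; the dense adjacency, the step count, and the mixing weight `alpha` are illustrative assumptions:

```python
# Hedged sketch: per-node energy from a GNN's logits, smoothed by
# propagation over the row-normalized adjacency so neighboring nodes
# receive consistent scores.
import torch

@torch.no_grad()
def node_energy(logits):
    # E(v) = -logsumexp of the node's logits; higher = more likely OOD.
    return -torch.logsumexp(logits, dim=-1)

@torch.no_grad()
def propagated_energy(logits, adj, steps=2, alpha=0.5):
    # adj: dense (N, N) adjacency matrix.
    deg = adj.sum(1, keepdim=True).clamp_min(1.0)
    p = adj / deg
    e = node_energy(logits)
    for _ in range(steps):
        e = alpha * e + (1 - alpha) * (p @ e)
    return e
```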
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities [104.02531442035483]
The goal of this paper is to recognize common objectives as well as to identify the implicit scoring functions of different OOD detection methods.
We show that binary discrimination between in- and (different) out-distributions is equivalent to several distinct formulations of the OOD detection problem.
We also show that the confidence loss which is used by Outlier Exposure has an implicit scoring function which differs in a non-trivial fashion from the theoretically optimal scoring function.
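The underlying equivalence can be stated compactly; the following restates the standard Bayes-optimal discrimination fact that such an analysis builds on, not a formula quoted from the paper:

```latex
% Bayes-optimal discrimination between p_in and p_out (equal priors):
\[
  \Pr(\text{in} \mid x)
  = \frac{p_{\text{in}}(x)}{p_{\text{in}}(x) + p_{\text{out}}(x)}
  = \sigma\!\left(\log \frac{p_{\text{in}}(x)}{p_{\text{out}}(x)}\right),
\]
% so thresholding the discriminator is equivalent to thresholding the
% density ratio $p_{\text{in}}(x)/p_{\text{out}}(x)$, the theoretically
% optimal OOD scoring function.
```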
arXiv Detail & Related papers (2022-06-20T16:32:49Z)
- Self-Supervised Anomaly Detection by Self-Distillation and Negative Sampling [1.304892050913381]
We show that self-distillation of the in-distribution training set together with contrasting against negative examples strongly improves OOD detection.
We observe that by leveraging negative samples, which keep the statistics of low-level features while changing the high-level semantics, higher average detection performance is obtained.
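A hedged sketch of this training signal; the patch-shuffling negatives (which keep low-level statistics but break global semantics) and the cosine/hinge losses are illustrative choices, not the paper's exact method:

```python
# Hedged sketch: pull a student's embedding toward a self-distilled
# teacher's embedding on ID inputs, and push it away from embeddings of
# synthetic negatives.
import torch
import torch.nn.functional as F

def shuffle_patches(x, grid=4):
    # Cut each image into a grid of patches and permute them: low-level
    # statistics are preserved, global semantics are destroyed.
    n, c, h, w = x.shape
    ph, pw = h // grid, w // grid
    patches = x.unfold(2, ph, ph).unfold(3, pw, pw)    # n,c,g,g,ph,pw
    patches = patches.reshape(n, c, grid * grid, ph, pw)
    patches = patches[:, :, torch.randperm(grid * grid)]
    patches = patches.reshape(n, c, grid, grid, ph, pw)
    return patches.permute(0, 1, 2, 4, 3, 5).reshape(n, c, h, w)

def distill_contrast_loss(student, teacher, x, margin=0.0):
    t = teacher(x).detach()                            # e.g. an EMA teacher
    pos = F.cosine_similarity(student(x), t)
    neg = F.cosine_similarity(student(shuffle_patches(x)), t)
    # Maximize agreement on ID inputs; keep negatives' similarity low.
    return (1 - pos).mean() + F.relu(neg - margin).mean()
```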
arXiv Detail & Related papers (2022-01-17T12:33:14Z)
- On the Impact of Spurious Correlation for Out-of-distribution Detection [14.186776881154127]
We present a new formalization and model the data shifts by taking into account both the invariant and environmental features.
Our results suggest that the detection performance is severely worsened when the correlation between spurious features and labels is increased in the training set.
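A minimal sketch of such a formalization, using illustrative Gaussian features: the invariant feature tracks the label, while the environmental feature agrees with the label with probability `r` (the spurious-correlation strength). Sweeping `r` toward 1 reproduces the qualitative setup in which detection can be studied as the correlation strengthens.

```python
# Hedged sketch of an invariant-plus-environmental-feature data model;
# all distributions are illustrative, not the paper's exact setup.
import torch

def sample_id(n, r=0.9):
    y = torch.randint(0, 2, (n,))
    inv = torch.randn(n) + (2.0 * y - 1.0)        # invariant: tracks label
    agree = (torch.rand(n) < r).float()           # spurious agrees w.p. r
    env = torch.randn(n) + (2.0 * agree - 1.0) * (2.0 * y - 1.0)
    return torch.stack([inv, env], dim=1), y

def sample_sn_ood(n):
    # Shared-nuisance OOD: the environmental feature looks in-distribution,
    # while the invariant feature carries no known semantic class.
    env = torch.randn(n) + (torch.randint(0, 2, (n,)).float() * 2.0 - 1.0)
    inv = torch.randn(n) * 3.0
    return torch.stack([inv, env], dim=1)
```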
arXiv Detail & Related papers (2021-09-12T23:58:17Z)
- Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613]
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper, we propose a novel method in which, from first principles, we combine a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way we achieve the best of two worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy, and close to state-of-the-art OOD detection performance for non-manipulated OOD data.
arXiv Detail & Related papers (2021-06-08T11:40:49Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
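A hedged sketch of ALOE-style robust training: PGD crafts worst-case inliers against the classification loss and worst-case outliers against a uniform-prediction (outlier-exposure) loss. The step sizes and the loss weighting follow common practice rather than a verified reproduction of the paper.

```python
# Hedged sketch of robust training with adversarially crafted inliers
# and outliers.
import torch
import torch.nn.functional as F

def uniform_kl(logits):
    # Cross-entropy to the uniform distribution (up to a constant):
    # the standard outlier-exposure objective. Returns a per-sample loss.
    return -F.log_softmax(logits, dim=-1).mean(dim=-1)

def pgd(model, x, loss_fn, eps=8 / 255, step=2 / 255, iters=5):
    # Maximize loss_fn within an L-infinity ball of radius eps.
    delta = torch.zeros_like(x)
    for _ in range(iters):
        delta = delta.detach().requires_grad_(True)
        loss = loss_fn(model(x + delta)).mean()
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + step * grad.sign()).clamp(-eps, eps)
    return (x + delta).detach()

def aloe_style_loss(model, x_in, y_in, x_out, lam=0.5):
    x_in_adv = pgd(model, x_in,
                   lambda lg: F.cross_entropy(lg, y_in, reduction="none"))
    x_out_adv = pgd(model, x_out, uniform_kl)
    ce = F.cross_entropy(model(x_in_adv), y_in)
    oe = uniform_kl(model(x_out_adv)).mean()
    return ce + lam * oe
```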
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.