Out-of-Distribution Detection & Applications With Ablated Learned Temperature Energy
- URL: http://arxiv.org/abs/2401.12129v1
- Date: Mon, 22 Jan 2024 17:11:01 GMT
- Title: Out-of-Distribution Detection & Applications With Ablated Learned Temperature Energy
- Authors: Will LeVine, Benjamin Pikus, Jacob Phillips, Berk Norman, Fernando Amat Gil, Sean Hendryx
- Abstract summary: We introduce Ablated Learned Temperature Energy (or "AbeT" for short).
AbeT lowers the False Positive Rate at $95\%$ True Positive Rate (FPR@95) by $35.39\%$ in classification.
We additionally provide empirical insights as to how our model learns to distinguish between In-Distribution (ID) and Out-of-Distribution (OOD) samples.
- Score: 40.02298833349518
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As deep neural networks become adopted in high-stakes domains, it is crucial
to be able to identify when inference inputs are Out-of-Distribution (OOD) so
that users can be alerted of likely drops in performance and calibration
despite high confidence. Among many others, existing methods use the following
two scores to do so without training on any a priori OOD examples: a learned
temperature and an energy score. In this paper we introduce Ablated Learned
Temperature Energy (or "AbeT" for short), a method which combines these prior
methods in novel ways with effective modifications. Due to these contributions,
AbeT lowers the False Positive Rate at $95\%$ True Positive Rate (FPR@95) by
$35.39\%$ in classification (averaged across all ID and OOD datasets measured)
compared to state of the art without training networks in multiple stages or
requiring hyperparameters or test-time backward passes. We additionally provide
empirical insights as to how our model learns to distinguish between
In-Distribution (ID) and OOD samples while only being explicitly trained on ID
samples via exposure to misclassified ID examples at training time. Lastly, we
show the efficacy of our method in identifying predicted bounding boxes and
pixels corresponding to OOD objects in object detection and semantic
segmentation, respectively - with an AUROC increase of $5.15\%$ in object
detection and both a decrease in FPR@95 of $41.48\%$ and an increase in AUPRC
of $34.20\%$ on average in semantic segmentation compared to previous state of
the art.
Related papers
- AdaSCALE: Adaptive Scaling for OOD Detection [0.0]
Out-of-distribution (OOD) detection methods leverage activation shaping to improve the separation between in-distribution (ID) and OOD inputs.
We propose AdaSCALE, an adaptive scaling procedure that dynamically adjusts the percentile threshold based on a sample's estimated OOD likelihood.
Our approach achieves state-of-the-art OOD detection performance, outperforming the latest rival OptFS by 14.94 (near-OOD) and 21.67 (far-OOD) in average FPR@95 on the ImageNet-1k benchmark.
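A minimal sketch of percentile-based activation shaping in this spirit, assuming a simplified ASH/SCALE-style prune-and-scale step and a toy per-sample proxy in place of AdaSCALE's actual OOD-likelihood estimate:

```python
import numpy as np

def shaped_energy_score(feat, W, b, base_pct=90.0, adapt=8.0):
    """Energy score after percentile activation shaping (simplified
    ASH/SCALE-style prune-and-scale), with a per-sample percentile as a
    crude stand-in for AdaSCALE's adaptive threshold.
    feat: (d,) post-ReLU penultimate activations; W: (c, d), b: (c,)."""
    # Toy OOD-likelihood proxy (an assumption, NOT AdaSCALE's estimator):
    # activation mass concentrated in few units -> treated as more ID-like.
    k = max(1, feat.size // 20)
    conc = np.sort(feat)[::-1][:k].sum() / (feat.sum() + 1e-9)
    pct = float(np.clip(base_pct - adapt * conc, 50.0, 99.0))
    thr = np.percentile(feat, pct)
    pruned = np.where(feat >= thr, feat, 0.0)
    shaped = pruned * np.exp(feat.sum() / (pruned.sum() + 1e-9))
    logits = W @ shaped + b
    m = logits.max()
    return -(m + np.log(np.exp(logits - m).sum()))   # energy: lower = more ID
```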
arXiv Detail & Related papers (2025-03-11T04:10:06Z)
- What If the Input is Expanded in OOD Detection? [77.37433624869857]
Out-of-distribution (OOD) detection aims to identify OOD inputs from unknown classes.
Various scoring functions have been proposed to distinguish OOD inputs from in-distribution (ID) data.
We introduce a novel perspective: applying different common corruptions in the input space.
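A minimal sketch of the expanded-input idea, assuming Gaussian noise as the corruption, maximum softmax probability (MSP) as the base score, and mean aggregation (none of which are confirmed as the paper's exact recipe):

```python
import torch
import torch.nn.functional as F

def corruption_expanded_msp(model, x, noise_sigma=0.05, n_copies=4):
    """Score an input together with corrupted copies of itself.
    model: classifier returning logits; x: (B, C, H, W) image batch."""
    with torch.no_grad():
        views = [x] + [x + noise_sigma * torch.randn_like(x)
                       for _ in range(n_copies)]
        msp = [F.softmax(model(v), dim=-1).max(dim=-1).values for v in views]
    return torch.stack(msp).mean(dim=0)   # higher = more ID-like
```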
arXiv Detail & Related papers (2024-10-24T06:47:28Z)
- Margin-bounded Confidence Scores for Out-of-Distribution Detection [2.373572816573706]
We propose a novel method called Margin-bounded Confidence Scores (MaCS) to address the nontrivial OOD detection problem.
MaCS enlarges the disparity between ID and OOD scores, which in turn makes the decision boundary more compact.
Experiments on various benchmark datasets for image classification tasks demonstrate the effectiveness of the proposed method.
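The summary only states the high-level goal of enlarging the ID/OOD score disparity. As a generic illustration (not MaCS's actual bounded-margin objective, and assuming some auxiliary outlier scores are available for training), a hinge-style margin loss looks like this:

```python
import torch
import torch.nn.functional as F

def margin_separation_loss(id_scores, ood_scores, margin=1.0):
    """Generic hinge loss pushing every ID confidence score above every
    OOD score by at least `margin`. Purely illustrative; the paper defines
    MaCS's own objective, which is not reproduced here."""
    gap = id_scores.unsqueeze(1) - ood_scores.unsqueeze(0)  # all ID/OOD pairs
    return F.relu(margin - gap).mean()
```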
arXiv Detail & Related papers (2024-09-22T05:40:25Z)
- Model-free Test Time Adaptation for Out-Of-Distribution Detection [62.49795078366206]
We propose a Non-Parametric Test-Time Adaptation framework for Out-of-Distribution Detection.
The framework utilizes online test samples for model adaptation during testing, enhancing adaptability to changing data distributions.
We demonstrate its effectiveness through comprehensive experiments on multiple OOD detection benchmarks.
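A common way to realize model-free, non-parametric test-time adaptation is an online feature memory bank scored by k-NN distance. The sketch below shows that generic pattern; the insertion rule and hyperparameters are assumptions, not the paper's procedure:

```python
import numpy as np

class OnlineKnnOod:
    """Non-parametric test-time OOD scoring via a growing feature bank."""
    def __init__(self, id_features, k=5, insert_thresh=0.5):
        self.bank = [f / (np.linalg.norm(f) + 1e-9) for f in id_features]
        self.k, self.insert_thresh = k, insert_thresh

    def score(self, feat):
        """Higher = more OOD (distance to the k-th nearest banked feature)."""
        f = feat / (np.linalg.norm(feat) + 1e-9)
        d = np.sort([np.linalg.norm(f - b) for b in self.bank])
        s = d[min(self.k, len(d)) - 1]
        if s < self.insert_thresh:          # confident ID: adapt the bank
            self.bank.append(f)
        return s
```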
arXiv Detail & Related papers (2023-11-28T02:00:47Z)
- Out-of-distribution Object Detection through Bayesian Uncertainty Estimation [10.985423935142832]
We propose a novel, intuitive, and scalable probabilistic object detection method for OOD detection.
Our method is able to distinguish between in-distribution (ID) data and OOD data via weight parameter sampling from proposed Gaussian distributions.
We demonstrate that our Bayesian object detector can achieve satisfactory OOD identification performance by reducing the FPR95 score by up to 8.19% and increasing the AUROC score by up to 13.94% when trained on BDD100k and VOC datasets.
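A minimal sketch of weight-sampling uncertainty, assuming a generic variational linear head with per-weight Gaussians; the detection-specific machinery (proposals, box matching) is omitted:

```python
import torch
import torch.nn.functional as F

class GaussianLinear(torch.nn.Module):
    """Linear layer with a learned Gaussian over its weights (mean + log-std).
    Each forward pass draws one weight sample (reparameterization trick)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.randn(d_out, d_in) * 0.05)
        self.log_sigma = torch.nn.Parameter(torch.full((d_out, d_in), -3.0))

    def forward(self, x):
        w = self.mu + torch.exp(self.log_sigma) * torch.randn_like(self.mu)
        return x @ w.t()

def predictive_uncertainty(layer, feats, n_samples=20):
    """OOD score = spread of softmax outputs across weight samples."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(layer(feats), dim=-1)
                             for _ in range(n_samples)])
    return probs.var(dim=0).sum(dim=-1)      # higher spread = more OOD
```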
arXiv Detail & Related papers (2023-10-29T19:10:52Z)
- HAct: Out-of-Distribution Detection with Neural Net Activation Histograms [7.795929277007233]
We propose HAct, a novel descriptor for OOD detection: probability distributions (approximated by histograms) of the output values of neural network layers in response to incoming data.
We demonstrate that HAct is significantly more accurate than the state of the art in OOD detection on multiple image classification benchmarks.
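A hedged sketch of a HAct-style descriptor: histogram the activations of a chosen layer and compare against an ID reference. The bin count, value range, and L1 distance are assumptions:

```python
import numpy as np

def activation_histogram(acts, bins=64, rng=(0.0, 10.0)):
    """Normalized histogram of a layer's activation values."""
    h, _ = np.histogram(acts.ravel(), bins=bins, range=rng)
    return h / max(h.sum(), 1)

def hact_score(sample_acts, id_reference_hist, bins=64, rng=(0.0, 10.0)):
    """Higher = further from the ID activation distribution."""
    h = activation_histogram(sample_acts, bins, rng)
    return float(np.abs(h - id_reference_hist).sum())      # L1 distance
```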
arXiv Detail & Related papers (2023-09-09T16:22:18Z)
- Diffusion Denoised Smoothing for Certified and Adversarial Robust Out-Of-Distribution Detection [6.247268652296234]
We present a novel approach for certifying the robustness of OOD detection within an $\ell$-norm ball around the input.
We improve current techniques for detecting adversarial attacks on OOD samples, while providing high levels of certified and adversarial robustness on in-distribution samples.
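The certification machinery in this line of work builds on randomized smoothing with a diffusion denoiser. Below is a sketch of the scoring loop only, treating the denoiser and base detector as given callables; the certified bounds themselves are in the paper:

```python
import torch

def smoothed_ood_vote(detector, denoiser, x, sigma=0.25, n=32, thresh=0.0):
    """Randomized-smoothing style OOD decision: fraction of noisy, denoised
    copies of x that the base detector flags as OOD. `detector` returns a
    scalar score per sample (higher = OOD); `denoiser` maps noisy -> clean.
    Both callables are assumptions for illustration."""
    with torch.no_grad():
        votes = []
        for _ in range(n):
            noisy = x + sigma * torch.randn_like(x)
            votes.append((detector(denoiser(noisy)) > thresh).float())
    return torch.stack(votes).mean(dim=0)     # smoothed P(OOD) estimate
```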
arXiv Detail & Related papers (2023-03-27T07:52:58Z)
- Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition [80.07843757970923]
We show that existing OOD detection methods suffer from significant performance degradation when the training set is long-tail distributed.
We propose Partial and Asymmetric Supervised Contrastive Learning (PASCL), which explicitly encourages the model to distinguish between tail-class in-distribution samples and OOD samples.
Our method outperforms the previous state-of-the-art method by $1.29\%$, $1.45\%$, and $0.69\%$ in anomaly detection false positive rate (FPR) and by $3.24\%$, $4.06\%$, and $7.89\%$ in in-distribution classification accuracy.
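PASCL builds on supervised contrastive learning; which pairs get contrasted, and how tail classes are treated asymmetrically, is the paper's contribution and is not reproduced here. For reference, a sketch of the standard supervised contrastive loss it starts from:

```python
import torch
import torch.nn.functional as F

def supcon_loss(feats, labels, temperature=0.1):
    """Standard supervised contrastive loss (Khosla et al., 2020).
    feats: (B, d) embeddings, labels: (B,). PASCL's partial/asymmetric
    variant changes which pairs are contrasted."""
    f = F.normalize(feats, dim=-1)
    sim = f @ f.t() / temperature
    self_mask = torch.eye(len(f), dtype=torch.bool, device=f.device)
    sim = sim.masked_fill(self_mask, float('-inf'))     # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=-1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)     # avoid 0 * (-inf)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    n_pos = pos.sum(dim=-1).clamp_min(1)
    return -((log_prob * pos.float()).sum(dim=-1) / n_pos).mean()
```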
arXiv Detail & Related papers (2022-07-04T01:53:07Z)
- Label Smoothed Embedding Hypothesis for Out-of-Distribution Detection [72.35532598131176]
We propose an unsupervised method to detect OOD samples using a $k$-NN density estimate.
We leverage a recent insight about label smoothing, which we call the Label Smoothed Embedding Hypothesis.
We show that our proposal outperforms many OOD baselines and also provide new finite-sample high-probability statistical results.
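The detector itself is compact: a k-NN density proxy over ID embeddings (extracted, per the hypothesis above, from a label-smoothed model). A minimal sketch, with k and the Euclidean metric as assumptions:

```python
import numpy as np

def knn_ood_score(query_emb, id_embs, k=10):
    """k-NN density proxy: distance from the query embedding to its k-th
    nearest ID embedding (rows of id_embs). Higher = lower local density
    = more likely OOD."""
    d = np.linalg.norm(id_embs - query_emb[None, :], axis=1)
    kth = min(k, len(d)) - 1
    return float(np.partition(d, kth)[kth])
```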
arXiv Detail & Related papers (2021-02-09T21:04:44Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
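The signal this method ultimately thresholds is ensemble disagreement on the test batch. A sketch of that scoring step only; the artificial-labeling regularization that trains members to contradict each other specifically on OOD inputs is not reproduced:

```python
import torch
import torch.nn.functional as F

def ensemble_disagreement(models, x):
    """OOD score = spread of ensemble softmax predictions on a test batch.
    `models` is a list of classifiers returning logits (an assumption)."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(x), dim=-1) for m in models])
    return probs.var(dim=0).sum(dim=-1)       # higher variance = more OOD
```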
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.