ExCeL: Combined Extreme and Collective Logit Information for Enhancing Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2311.14754v1
- Date: Thu, 23 Nov 2023 14:16:03 GMT
- Title: ExCeL: Combined Extreme and Collective Logit Information for Enhancing Out-of-Distribution Detection
- Authors: Naveen Karunanayake, Suranga Seneviratne, Sanjay Chawla
- Abstract summary: ExCeL combines extreme and collective information within the output layer for enhanced accuracy in OOD detection.
We show that ExCeL is consistently among the top five of twenty-one existing post-hoc baselines.
- Score: 9.689089164964484
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models often exhibit overconfidence in predicting
out-of-distribution (OOD) data, underscoring the crucial role of OOD detection
in ensuring reliability in predictions. Among various OOD detection approaches,
post-hoc detectors have gained significant popularity, primarily due to their
ease of use and implementation. However, the effectiveness of most post-hoc OOD
detectors has been constrained because they rely solely on either extreme
information, such as the maximum logit, or collective information (i.e.,
information spanning classes or training samples) embedded within the
output layer. In this paper, we propose ExCeL, which combines both extreme and
collective information within the output layer for enhanced accuracy in OOD
detection. We leverage the logit of the top predicted class as the extreme
information (i.e., the maximum logit), while the collective information is
derived via a novel approach that involves assessing the likelihood of other
classes appearing in subsequent ranks across various training samples. Our idea
is motivated by the observation that, for in-distribution (ID) data, the
ranking of classes beyond the predicted class is more deterministic compared to
that in OOD data. Experiments conducted on CIFAR100 and ImageNet-200 datasets
demonstrate that ExCeL is consistently among the top five performing methods
out of twenty-one existing post-hoc baselines when joint performance on
near-OOD and far-OOD is considered (i.e., in terms of AUROC and FPR95).
Furthermore, ExCeL shows the best overall performance across both datasets,
unlike other baselines, which work best on one dataset but suffer a performance
drop on the other.
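The abstract names the two ingredients (the maximum logit as the extreme term, and the likelihood of other classes appearing at subsequent ranks as the collective term) but not the exact scoring function. The following is a minimal sketch of that idea, assuming a log-likelihood aggregation over the top few ranks and an additive combination; the names fit_rank_profiles and excel_score, the rank cutoff, and the combination rule are all hypothetical, not the paper's definitions.

```python
import numpy as np

def fit_rank_profiles(train_logits, num_classes, eps=1e-8):
    """For each top-1 class c, estimate how often every other class j
    appears at each subsequent rank r across the training set:
    P[c, r, j] ~ Pr(class j sits at rank r | top-1 class is c)."""
    P = np.full((num_classes, num_classes, num_classes), eps)
    for z in train_logits:                    # z: 1-D logit vector
        order = np.argsort(z)[::-1]           # classes by logit, descending
        c = order[0]
        for r, j in enumerate(order[1:], start=1):
            P[c, r, j] += 1.0
    return P / P.sum(axis=2, keepdims=True)   # normalise per (class, rank)

def excel_score(z, P, top_ranks=5):
    """Extreme term (max logit) plus collective term (log-likelihood of
    the observed ranking under the training rank profiles)."""
    order = np.argsort(z)[::-1]
    c = order[0]
    extreme = z[c]                            # maximum logit
    collective = sum(np.log(P[c, r, order[r]])
                     for r in range(1, top_ranks + 1))
    return extreme + collective               # higher => more ID-like
```

The intuition from the abstract carries through: for ID inputs the classes after the top prediction follow a stable ranking, so the collective term stays high; for OOD inputs the ranking is closer to arbitrary and the term drops.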
Related papers
- Out-of-Distribution Learning with Human Feedback [26.398598663165636]
This paper presents a novel framework for OOD learning with human feedback.
Our framework capitalizes on the freely available unlabeled data in the wild.
By exploiting human feedback, we enhance the robustness and reliability of machine learning models.
arXiv Detail & Related papers (2024-08-14T18:49:27Z)
- WeiPer: OOD Detection using Weight Perturbations of Class Projections [11.130659240045544]
We introduce perturbations of the class projections in the final fully connected layer, creating a richer representation of the input (see the sketch after this entry).
We achieve state-of-the-art OOD detection results across multiple benchmarks of the OpenOOD framework.
arXiv Detail & Related papers (2024-05-27T13:38:28Z)
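A minimal sketch of the WeiPer idea, assuming Gaussian perturbations scaled to each class row of the final fully connected layer and aggregation by mean max-softmax; the function name, noise scale, and aggregation rule are assumptions, as the paper defines its own score over the perturbed projections.

```python
import torch
import torch.nn.functional as F

def weiper_style_score(features, W, b, n_perturb=16, noise_scale=0.05):
    """features: (N, D) penultimate activations; W: (C, D) and b: (C,)
    are the weights of the final fully connected layer."""
    scores = []
    for _ in range(n_perturb):
        # Perturb each class projection proportionally to its norm.
        noise = noise_scale * torch.randn_like(W) * W.norm(dim=1, keepdim=True)
        logits = features @ (W + noise).T + b
        scores.append(F.softmax(logits, dim=-1).max(dim=-1).values)
    return torch.stack(scores).mean(dim=0)    # higher => more ID-like
```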
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) expanding the in-distribution class space by introducing multiple abstention classes, and (2) augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data (a sketch of the overlay follows this entry).
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
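Idea (1) amounts to adding extra abstention logits to the output layer; idea (2) is an overlay augmentation, sketched CutMix-style below. Patch size, placement, and keeping the tail-class label are assumptions for illustration.

```python
import torch

def overlay_tail_on_ood(tail_img, ood_img, patch_frac=0.5):
    """Paste a patch of a context-limited tail-class image onto a
    context-rich OOD image; both tensors are (C, H, W)."""
    _, H, W = tail_img.shape
    ph, pw = int(H * patch_frac), int(W * patch_frac)
    top = torch.randint(0, H - ph + 1, (1,)).item()
    left = torch.randint(0, W - pw + 1, (1,)).item()
    out = ood_img.clone()                       # OOD image supplies context
    out[:, top:top + ph, left:left + pw] = \
        tail_img[:, top:top + ph, left:left + pw]
    return out                                  # trained with the tail label
```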
- Learning to Augment Distributions for Out-of-Distribution Detection [49.12437300327712]
Open-world classification systems should discern out-of-distribution (OOD) data whose labels deviate from those of in-distribution (ID) cases.
We propose Distributional-Augmented OOD Learning (DAL) to alleviate the OOD distribution discrepancy.
arXiv Detail & Related papers (2023-11-03T09:19:33Z)
- Diversified Outlier Exposure for Out-of-Distribution Detection via Informative Extrapolation [110.34982764201689]
Out-of-distribution (OOD) detection is important for deploying reliable machine learning models on real-world applications.
Recent advances in outlier exposure have shown promising results on OOD detection via fine-tuning the model with informatively sampled auxiliary outliers.
We propose a novel framework, namely, Diversified Outlier Exposure (DivOE), for effective OOD detection via informative extrapolation based on the given auxiliary outliers.
arXiv Detail & Related papers (2023-10-21T07:16:09Z)
- Continual Evidential Deep Learning for Out-of-Distribution Detection [20.846788009755183]
Uncertainty-based deep learning models have attracted a great deal of interest for their ability to provide accurate and reliable predictions.
Evidential deep learning stands out, achieving remarkable performance in detecting out-of-distribution (OOD) data with a single deterministic neural network.
We propose the integration of an evidential deep learning method into a continual learning framework in order to perform incremental object classification and OOD detection simultaneously (the standard evidential recipe is sketched after this entry).
arXiv Detail & Related papers (2023-09-06T13:36:59Z)
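For reference, the standard evidential deep learning recipe (Sensoy et al., 2018) that such work builds on: non-negative evidence parameterises a Dirichlet over class probabilities, and the Dirichlet's total uncertainty flags likely-OOD inputs. The softplus evidence function is one common choice; the continual-learning integration itself is not sketched here.

```python
import torch
import torch.nn.functional as F

def evidential_uncertainty(logits):
    evidence = F.softplus(logits)           # non-negative class evidence
    alpha = evidence + 1.0                  # Dirichlet concentration
    strength = alpha.sum(dim=-1)            # total Dirichlet strength
    K = logits.shape[-1]
    uncertainty = K / strength              # in (0, 1]; higher => more OOD-like
    probs = alpha / strength.unsqueeze(-1)  # expected class probabilities
    return probs, uncertainty
```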
- Unsupervised Evaluation of Out-of-distribution Detection: A Data-centric Perspective [55.45202687256175]
Out-of-distribution (OOD) detection methods assume that they have test ground truths, i.e., whether individual test samples are in-distribution (IND) or OOD.
In this paper, we are the first to introduce the unsupervised evaluation problem in OOD detection.
We propose three methods to compute Gscore as an unsupervised indicator of OOD detection performance.
arXiv Detail & Related papers (2023-02-16T13:34:35Z)
- Enhancing Out-of-Distribution Detection in Natural Language Understanding via Implicit Layer Ensemble [22.643719584452455]
Out-of-distribution (OOD) detection aims to discern outliers from the intended data distribution.
We propose a novel framework based on contrastive learning that encourages intermediate features to learn layer-specialized representations.
Our approach is significantly more effective than prior methods.
arXiv Detail & Related papers (2022-10-20T06:05:58Z)
- Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities [104.02531442035483]
The goal of this paper is to recognize common objectives as well as to identify the implicit scoring functions of different OOD detection methods.
We show that binary discrimination between in- and (different) out-distributions is equivalent to several distinct formulations of the OOD detection problem.
We also show that the confidence loss which is used by Outlier Exposure has an implicit scoring function which differs in a non-trivial fashion from the theoretically optimal scoring function.
arXiv Detail & Related papers (2022-06-20T16:32:49Z)
- ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining [51.19164318924997]
Adversarial Training with informative Outlier Mining (ATOM) improves the robustness of OOD detection (a sketch of the mining step follows this entry).
ATOM achieves state-of-the-art performance under a broad family of classic and adversarial OOD evaluation tasks.
arXiv Detail & Related papers (2020-06-26T20:58:05Z)
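A hedged sketch of the mining step: rank a large pool of auxiliary outliers by how confidently the current model already rejects them, then keep a slice starting at a quantile q so training focuses on boundary (informative) outliers. The score semantics, q, and slice size are illustrative assumptions rather than ATOM's exact settings.

```python
import numpy as np

def mine_informative_outliers(outlier_scores, q=0.5, keep_frac=0.1):
    """outlier_scores: higher means the model is already confident the
    sample is an outlier. Returns indices of mined (harder) outliers."""
    order = np.argsort(outlier_scores)        # hardest outliers first
    start = int(q * len(order))               # skip the very noisiest slice
    n_keep = max(1, int(keep_frac * len(order)))
    return order[start:start + n_keep]
```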