Augmenting Softmax Information for Selective Classification with
Out-of-Distribution Data
- URL: http://arxiv.org/abs/2207.07506v1
- Date: Fri, 15 Jul 2022 14:39:57 GMT
- Title: Augmenting Softmax Information for Selective Classification with
Out-of-Distribution Data
- Authors: Guoxuan Xia and Christos-Savvas Bouganis
- Abstract summary: We show that existing post-hoc methods perform quite differently compared to when evaluated only on OOD detection.
We propose a novel method for SCOD, Softmax Information Retaining Combination (SIRC), that augments softmax-based confidence scores with feature-agnostic information.
Experiments on a wide variety of ImageNet-scale datasets and convolutional neural network architectures show that SIRC is able to consistently match or outperform the baseline for SCOD.
- Score: 7.221206118679026
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detecting out-of-distribution (OOD) data is a task that is receiving an
increasing amount of research attention in the domain of deep learning for
computer vision. However, the performance of detection methods is generally
evaluated on the task in isolation, rather than also considering potential
downstream tasks in tandem. In this work, we examine selective classification
in the presence of OOD data (SCOD). That is to say, the motivation for
detecting OOD samples is to reject them so their impact on the quality of
predictions is reduced. We show, under this task specification, that existing
post-hoc methods perform quite differently compared to when evaluated only on
OOD detection. This is because it is no longer an issue to conflate
in-distribution (ID) data with OOD data if the ID data is going to be
misclassified. However, the conflation within ID data of correct and incorrect
predictions becomes undesirable. We also propose a novel method for SCOD,
Softmax Information Retaining Combination (SIRC), that augments softmax-based
confidence scores with feature-agnostic information such that their ability to
identify OOD samples is improved without sacrificing separation between correct
and incorrect ID predictions. Experiments on a wide variety of ImageNet-scale
datasets and convolutional neural network architectures show that SIRC is able
to consistently match or outperform the baseline for SCOD, whilst existing OOD
detection methods fail to do so.
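The abstract does not spell out how the softmax score and the secondary signal are combined, but the qualitative requirement is clear: the extra information should pull down the confidence assigned to OOD-like inputs without reordering ID inputs that the softmax already separates well. The sketch below illustrates one such multiplicative combination in Python; the choice of the L1 feature norm as the secondary score, the exact combination function, and the parameters a and b are illustrative assumptions, not the paper's reference implementation.
```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def combined_confidence(logits, features, a, b):
    """Combine S1 = max softmax probability with a secondary score S2.

    (1 - s1) is the softmax "doubt" about the predicted class; it is
    inflated when s2 falls below a (OOD-like inputs) and left almost
    unchanged when s2 is well above a, so the ranking of ID inputs by
    softmax confidence is largely preserved. a and b are assumed to be
    set from ID statistics of S2 (e.g. its mean and standard deviation).
    """
    s1 = softmax(logits).max(axis=-1)      # softmax confidence in (0, 1]
    s2 = np.abs(features).sum(axis=-1)     # illustrative secondary score (L1 feature norm)
    return -(1.0 - s1) * (1.0 + np.exp(-b * (s2 - a)))

# Usage sketch with fake model outputs: rank inputs by the combined score
# and abstain on the lowest-ranked fraction.
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 1000))        # placeholder classifier logits
features = rng.normal(size=(8, 2048))      # placeholder penultimate-layer features
a = 0.8 * features.shape[-1]               # placeholder parameters, not the
b = 1.0 / features.shape[-1]               # paper's fitting procedure
scores = combined_confidence(logits, features, a, b)
accept = scores.argsort()[::-1][:6]        # indices of the 6 most confident inputs
```
Under the SCOD framing above, a combined score like this is judged by what ends up in the rejected fraction: ideally it contains both OOD samples and ID inputs that would have been misclassified, while correctly classified ID inputs are kept.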
Related papers
- Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection [24.557227100200215]
Out-of-distribution (OOD) detection is crucial for deploying reliable machine learning models in open-world applications.
Recent advances in CLIP-based OOD detection have shown promising results via regularizing prompt tuning with OOD features extracted from ID data.
We propose a novel framework, namely, Self-Calibrated Tuning (SCT), to mitigate this problem for effective OOD detection with only the given few-shot ID data.
arXiv Detail & Related papers (2024-11-05T02:29:16Z)
- What If the Input is Expanded in OOD Detection? [77.37433624869857]
Out-of-distribution (OOD) detection aims to identify OOD inputs from unknown classes.
Various scoring functions have been proposed to distinguish OOD inputs from in-distribution (ID) data.
We introduce a novel perspective, i.e., employing different common corruptions on the input space.
arXiv Detail & Related papers (2024-10-24T06:47:28Z)
- Margin-bounded Confidence Scores for Out-of-Distribution Detection [2.373572816573706]
We propose a novel method called Margin-bounded Confidence Scores (MaCS) to address the nontrivial OOD detection problem.
MaCS enlarges the disparity between ID and OOD scores, which in turn makes the decision boundary more compact.
Experiments on various benchmark datasets for image classification tasks demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2024-09-22T05:40:25Z)
- Long-Tailed Out-of-Distribution Detection: Prioritizing Attention to Tail [21.339310734169665]
We introduce a novel Prioritizing Attention to Tail (PATT) method using augmentation instead of reduction.
Our main intuition involves using a mixture of von Mises-Fisher (vMF) distributions to model the ID data and a temperature scaling module to boost the confidence of ID data.
Our method outperforms the current state-of-the-art methods on various benchmarks.
arXiv Detail & Related papers (2024-08-13T09:03:00Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- Model-free Test Time Adaptation for Out-Of-Distribution Detection [62.49795078366206]
We propose a Non-Parametric Test Time Adaptation framework for Out-Of-Distribution Detection (abbr)
abbr utilizes online test samples for model adaptation during testing, enhancing adaptability to changing data distributions.
We demonstrate the effectiveness of abbr through comprehensive experiments on multiple OOD detection benchmarks.
arXiv Detail & Related papers (2023-11-28T02:00:47Z)
- Beyond AUROC & co. for evaluating out-of-distribution detection performance [50.88341818412508]
Given their relevance for safe(r) AI, it is important to examine whether the basis for comparing OOD detection methods is consistent with practical needs.
We propose a new metric - Area Under the Threshold Curve (AUTC), which explicitly penalizes poor separation between ID and OOD samples.
arXiv Detail & Related papers (2023-06-26T12:51:32Z)
- LINe: Out-of-Distribution Detection by Leveraging Important Neurons [15.797257361788812]
We introduce a new aspect for analyzing the difference in model outputs between in-distribution data and OOD data.
We propose a novel method, Leveraging Important Neurons (LINe), for post-hoc Out of distribution detection.
arXiv Detail & Related papers (2023-03-24T13:49:05Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and could serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Training OOD Detectors in their Natural Habitats [31.565635192716712]
Out-of-distribution (OOD) detection is important for machine learning models deployed in the wild.
Recent methods use auxiliary outlier data to regularize the model for improved OOD detection.
We propose a novel framework that leverages wild mixture data -- that naturally consists of both ID and OOD samples.
arXiv Detail & Related papers (2022-02-07T15:38:39Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.