LINe: Out-of-Distribution Detection by Leveraging Important Neurons
- URL: http://arxiv.org/abs/2303.13995v1
- Date: Fri, 24 Mar 2023 13:49:05 GMT
- Title: LINe: Out-of-Distribution Detection by Leveraging Important Neurons
- Authors: Yong Hyun Ahn, Gyeong-Moon Park, Seong Tae Kim
- Abstract summary: We introduce a new aspect for analyzing the difference in model outputs between in-distribution data and OOD data.
We propose a novel method, Leveraging Important Neurons (LINe), for post-hoc out-of-distribution detection.
- Score: 15.797257361788812
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: It is important to quantify the uncertainty of input samples, especially in
mission-critical domains such as autonomous driving and healthcare, where
erroneous predictions on out-of-distribution (OOD) data can have serious
consequences. The OOD detection problem fundamentally arises because a model
cannot express what it is not aware of. Post-hoc OOD detection approaches are
widely explored because they do not require an additional retraining process,
which might degrade the model's performance and increase the training cost. In this
study, from the perspective of neurons in the deep layer of the model
representing high-level features, we introduce a new aspect for analyzing the
difference in model outputs between in-distribution data and OOD data. We
propose a novel method, Leveraging Important Neurons (LINe), for post-hoc
out-of-distribution detection.
Shapley value-based pruning reduces the effect of noisy outputs by selecting
only the high-contribution neurons for predicting specific classes of input data
and masking the rest. Activation clipping fixes all values above a certain
threshold to the same value, allowing LINe to treat all class-specific
features equally and to consider only the difference in the number of
activated features between in-distribution and OOD data.
Comprehensive experiments verify the effectiveness of the proposed method by
outperforming state-of-the-art post-hoc OOD detection methods on CIFAR-10,
CIFAR-100, and ImageNet datasets.
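The two post-hoc operations described in the abstract can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the clipping threshold `clip_thresh` and the boolean `class_mask` are hypothetical stand-ins for the Shapley-value-derived pruning mask and threshold used in the paper.

```python
import numpy as np

def line_score(activations, class_mask, clip_thresh=1.0):
    """Sketch of LINe-style scoring on penultimate-layer activations.

    activations: 1-D array of neuron activations for one input.
    class_mask:  boolean array selecting high-contribution neurons for
                 the predicted class (stand-in for the Shapley
                 value-based pruning mask).
    clip_thresh: activation clipping threshold; values above it are
                 fixed to the threshold so that all class-specific
                 features contribute equally.
    """
    pruned = np.where(class_mask, activations, 0.0)  # mask low-contribution neurons
    clipped = np.minimum(pruned, clip_thresh)        # activation clipping
    # A larger sum indicates a more in-distribution-like input.
    return clipped.sum()

# Toy usage: an ID-like input activates many of the important neurons,
# while an OOD-like input activates few of them.
mask = np.array([True, True, True, False])
id_like = np.array([2.0, 1.5, 0.9, 3.0])
ood_like = np.array([0.1, 0.0, 0.2, 3.0])
print(line_score(id_like, mask) > line_score(ood_like, mask))  # True
```

Note how the unimportant fourth neuron (activation 3.0 in both inputs) is masked out entirely, so it cannot inflate the OOD input's score.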
Related papers
- Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection [24.557227100200215]
Out-of-distribution (OOD) detection is crucial for deploying reliable machine learning models in open-world applications.
Recent advances in CLIP-based OOD detection have shown promising results via regularizing prompt tuning with OOD features extracted from ID data.
We propose a novel framework, namely, Self-Calibrated Tuning (SCT), to mitigate this problem for effective OOD detection with only the given few-shot ID data.
arXiv Detail & Related papers (2024-11-05T02:29:16Z)
- What If the Input is Expanded in OOD Detection? [77.37433624869857]
Out-of-distribution (OOD) detection aims to identify OOD inputs from unknown classes.
Various scoring functions have been proposed to distinguish them from in-distribution (ID) data.
We introduce a novel perspective, i.e., employing different common corruptions on the input space.
arXiv Detail & Related papers (2024-10-24T06:47:28Z)
- Advancing Out-of-Distribution Detection through Data Purification and Dynamic Activation Function Design [12.45245390882047]
We introduce OOD-R (Out-of-Distribution-Rectified), a meticulously curated collection of open-source datasets with enhanced noise reduction properties.
OOD-R incorporates noise filtering technologies to refine the datasets, ensuring a more accurate and reliable evaluation of OOD detection algorithms.
We present ActFun, an innovative method that fine-tunes the model's response to diverse inputs, thereby improving the stability of feature extraction.
arXiv Detail & Related papers (2024-03-06T02:39:22Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
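The second idea, overlaying context-limited tail-class images onto context-rich OOD images, resembles a mixup-style blend and might be sketched as follows. The blending weight `alpha` is a hypothetical parameter for illustration, not a value taken from the paper.

```python
import numpy as np

def overlay_on_ood(tail_image, ood_image, alpha=0.7):
    """Blend a context-limited tail-class image onto a context-rich OOD
    background (a rough, mixup-style sketch of the augmentation).
    Both images are float arrays in [0, 1] with the same shape."""
    return alpha * tail_image + (1.0 - alpha) * ood_image

tail = np.full((4, 4, 3), 0.8)  # stand-in tail-class image
ood = np.full((4, 4, 3), 0.2)   # stand-in OOD background
aug = overlay_on_ood(tail, ood)
print(aug.shape)  # (4, 4, 3)
```

The augmented sample keeps the tail-class content while borrowing background context from the OOD image, which is the intuition behind the augmentation.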
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- Classifier-head Informed Feature Masking and Prototype-based Logit Smoothing for Out-of-Distribution Detection [27.062465089674763]
Out-of-distribution (OOD) detection is essential when deploying neural networks in the real world.
One main challenge is that neural networks often make overconfident predictions on OOD data.
We propose an effective post-hoc OOD detection method based on a new feature masking strategy and a novel logit smoothing strategy.
arXiv Detail & Related papers (2023-10-27T12:42:17Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and could serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z) - Augmenting Softmax Information for Selective Classification with
Out-of-Distribution Data [7.221206118679026]
We show that existing post-hoc methods perform quite differently on selective classification with out-of-distribution data (SCOD) compared to when evaluated only on OOD detection.
We propose a novel method for SCOD, Softmax Information Retaining Combination (SIRC), that augments softmax-based confidence scores with feature-agnostic information.
Experiments on a wide variety of ImageNet-scale datasets and convolutional neural network architectures show that SIRC is able to consistently match or outperform the baseline for SCOD.
arXiv Detail & Related papers (2021-07-01T17:59:07Z)
- On the Practicality of Deterministic Epistemic Uncertainty [106.06571981780591]
Deterministic uncertainty methods (DUMs) achieve strong performance in detecting out-of-distribution data.
It remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications.
arXiv Detail & Related papers (2021-07-01T17:59:07Z)
- Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613]
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper we propose a novel method where from first principles we combine a certifiable OOD detector with a standard classifier into an OOD aware classifier.
In this way, we achieve the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy, and close to state-of-the-art OOD detection performance for non-manipulated OOD data.
arXiv Detail & Related papers (2021-06-08T11:40:49Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.