Block Selection Method for Using Feature Norm in Out-of-distribution
Detection
- URL: http://arxiv.org/abs/2212.02295v1
- Date: Mon, 5 Dec 2022 14:19:21 GMT
- Title: Block Selection Method for Using Feature Norm in Out-of-distribution
Detection
- Authors: Yeonguk Yu, Sungho Shin, Seongju Lee, Changhyun Jun, Kyoobin Lee
- Abstract summary: We propose a framework consisting of FeatureNorm, a norm of the feature map, and NormRatio, a ratio of FeatureNorm for ID and OOD.
In particular, to select the block that provides the largest difference between the FeatureNorm of ID and the FeatureNorm of OOD, we create jigsaw-puzzle images as pseudo OOD from ID training samples and calculate NormRatio.
After the suitable block is selected, OOD detection with FeatureNorm outperforms other OOD detection methods, reducing FPR95 by up to 52.77% on the CIFAR10 benchmark and by up to 48.53% on the ImageNet benchmark.
- Score: 5.486046841722322
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting out-of-distribution (OOD) inputs during the inference stage is
crucial for deploying neural networks in the real world. Previous methods
commonly relied on the output of a network derived from the highly activated
feature map. In this study, we first reveal that the norm of a feature map
obtained from a block other than the last block can be a better indicator for
OOD detection. Motivated by this, we propose a simple framework consisting of
FeatureNorm, a norm of the feature map, and NormRatio, a ratio of FeatureNorm
for ID and OOD, to measure the OOD detection performance of each block. In
particular, to select the block that provides the largest difference between
the FeatureNorm of ID and the FeatureNorm of OOD, we create jigsaw-puzzle
images as pseudo OOD from ID training samples, calculate NormRatio for each
block, and select the block with the largest value. After the suitable block
is selected, OOD detection with FeatureNorm outperforms other OOD detection
methods, reducing FPR95 by up to 52.77% on the CIFAR10 benchmark and by up to
48.53% on the ImageNet benchmark. We demonstrate that our framework generalizes
to various architectures and that block selection is important: it can improve
previous OOD detection methods as well.
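
As a rough illustration of the pipeline described in the abstract, the sketch below shows one way FeatureNorm and NormRatio could be computed and a block selected. It is a minimal sketch under assumptions not confirmed by the paper: FeatureNorm is taken as the L2 norm of a block's output feature map, pseudo-OOD images are jigsaw-shuffled ID training images, and the block with the largest NormRatio (mean ID FeatureNorm divided by mean pseudo-OOD FeatureNorm) is chosen; at test time, the FeatureNorm at the selected block would then serve as the OOD score (larger norm treated as more ID-like). All function and variable names are illustrative.

```python
# Minimal sketch of FeatureNorm / NormRatio block selection (illustrative only).
# Assumptions: FeatureNorm = L2 norm of a block's output feature map; pseudo-OOD
# images are jigsaw-shuffled ID images; the backbone is given as a list of
# sequential blocks; the block with the largest NormRatio is selected.
import torch


def feature_norm(feature_map: torch.Tensor) -> torch.Tensor:
    """L2 norm of a (B, C, H, W) feature map, one scalar per image."""
    return feature_map.flatten(start_dim=1).norm(p=2, dim=1)


def jigsaw_shuffle(images: torch.Tensor, grid: int = 4) -> torch.Tensor:
    """Create pseudo-OOD images by shuffling a grid x grid patch layout."""
    b, c, h, w = images.shape
    ph, pw = h // grid, w // grid
    # Cut into patches: (B, C, grid, grid, ph, pw) -> (B, C, grid*grid, ph, pw)
    patches = (images.unfold(2, ph, ph).unfold(3, pw, pw)
                     .reshape(b, c, grid * grid, ph, pw))
    patches = patches[:, :, torch.randperm(grid * grid)]  # shuffle patch order
    # Stitch the shuffled patches back into full-size images.
    patches = patches.reshape(b, c, grid, grid, ph, pw)
    return patches.permute(0, 1, 2, 4, 3, 5).reshape(b, c, h, w)


@torch.no_grad()
def select_block(blocks, id_images: torch.Tensor) -> int:
    """Return the index of the block with the largest NormRatio
    (mean ID FeatureNorm / mean pseudo-OOD FeatureNorm)."""
    x_id, x_ood = id_images, jigsaw_shuffle(id_images)
    best_idx, best_ratio = 0, float("-inf")
    for idx, block in enumerate(blocks):
        x_id, x_ood = block(x_id), block(x_ood)
        ratio = (feature_norm(x_id).mean() / feature_norm(x_ood).mean()).item()
        if ratio > best_ratio:
            best_idx, best_ratio = idx, ratio
    return best_idx
```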
Related papers
- The Best of Both Worlds: On the Dilemma of Out-of-distribution Detection [75.65876949930258]
Out-of-distribution (OOD) detection is essential for model trustworthiness.
We show that the superior OOD detection performance of state-of-the-art methods is achieved by secretly sacrificing the OOD generalization ability.
arXiv Detail & Related papers (2024-10-12T07:02:04Z) - Look Around and Find Out: OOD Detection with Relative Angles [24.369626931550794]
We propose a novel angle-based metric for OOD detection that is computed relative to the in-distribution structure.
Our method achieves state-of-the-art performance on CIFAR-10 and ImageNet benchmarks, reducing FPR95 by 0.88% and 7.74% respectively.
arXiv Detail & Related papers (2024-10-06T15:36:07Z) - Model-free Test Time Adaptation for Out-Of-Distribution Detection [62.49795078366206]
We propose a Non-Parametric Test Time Adaptation framework for Out-Of-Distribution Detection.
The framework utilizes online test samples for model adaptation during testing, enhancing adaptability to changing data distributions.
We demonstrate the effectiveness of the framework through comprehensive experiments on multiple OOD detection benchmarks.
arXiv Detail & Related papers (2023-11-28T02:00:47Z) - Classifier-head Informed Feature Masking and Prototype-based Logit
Smoothing for Out-of-Distribution Detection [27.062465089674763]
Out-of-distribution (OOD) detection is essential when deploying neural networks in the real world.
One main challenge is that neural networks often make overconfident predictions on OOD data.
We propose an effective post-hoc OOD detection method based on a new feature masking strategy and a novel logit smoothing strategy.
arXiv Detail & Related papers (2023-10-27T12:42:17Z) - From Global to Local: Multi-scale Out-of-distribution Detection [129.37607313927458]
Out-of-distribution (OOD) detection aims to detect "unknown" data whose labels have not been seen during the in-distribution (ID) training process.
Recent progress in representation learning gives rise to distance-based OOD detection.
We propose Multi-scale OOD DEtection (MODE), a first framework leveraging both global visual information and local region details.
arXiv Detail & Related papers (2023-08-20T11:56:25Z) - Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection
Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method utilizes a mask to identify the memorized atypical samples, and then finetunes the model or prunes it with the introduced mask to forget them.
arXiv Detail & Related papers (2023-06-06T14:23:34Z) - Rethinking Out-of-distribution (OOD) Detection: Masked Image Modeling is
All You Need [52.88953913542445]
We find, surprisingly, that simply using reconstruction-based methods can boost OOD detection performance significantly.
We take Masked Image Modeling as a pretext task for our OOD detection framework (MOOD)
arXiv Detail & Related papers (2023-02-06T08:24:41Z) - Boosting Out-of-distribution Detection with Typical Features [22.987563801433595]
Out-of-distribution (OOD) detection is a critical task for ensuring the reliability and safety of deep neural networks in real-world scenarios.
We propose to rectify the feature into its typical set and calculate the OOD score with the typical features to achieve reliable uncertainty estimation.
We evaluate the superiority of our method on both the commonly used benchmark (CIFAR) and the more challenging high-resolution benchmark with large label space (ImageNet)
arXiv Detail & Related papers (2022-10-09T08:44:22Z) - A Simple Test-Time Method for Out-of-Distribution Detection [45.11199798139358]
This paper proposes a simple Test-time Linear Training (ETLT) method for OOD detection.
We find that the probabilities of input images being out-of-distribution are surprisingly linearly correlated to the features extracted by neural networks.
We propose an online variant of the proposed method, which achieves promising performance and is more practical in real-world applications.
arXiv Detail & Related papers (2022-07-17T16:02:58Z) - Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.