Out-Of-Distribution Detection With Subspace Techniques And Probabilistic
Modeling Of Features
- URL: http://arxiv.org/abs/2012.04250v1
- Date: Tue, 8 Dec 2020 07:07:11 GMT
- Title: Out-Of-Distribution Detection With Subspace Techniques And Probabilistic
Modeling Of Features
- Authors: Ibrahima Ndiour, Nilesh Ahuja, Omesh Tickoo
- Abstract summary: This paper presents a principled approach for detecting out-of-distribution (OOD) samples in deep neural networks (DNNs).
Modeling probability distributions on deep features has recently emerged as an effective yet computationally cheap method for detecting OOD samples in DNNs.
We apply linear statistical dimensionality reduction techniques and nonlinear manifold-learning techniques to the high-dimensional features in order to capture the true subspace spanned by the features.
- Score: 7.219077740523682
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a principled approach for detecting out-of-distribution
(OOD) samples in deep neural networks (DNNs). Modeling probability distributions
on deep features has recently emerged as an effective, yet computationally
cheap method to detect OOD samples in DNNs. However, the features produced by a
DNN at any given layer do not fully occupy the corresponding high-dimensional
feature space. We apply linear statistical dimensionality reduction techniques
and nonlinear manifold-learning techniques on the high-dimensional features in
order to capture the true subspace spanned by the features. We hypothesize that
such lower-dimensional feature embeddings can mitigate the curse of
dimensionality, and enhance any feature-based method for more efficient and
effective performance. In the context of uncertainty estimation and OOD, we
show that the log-likelihood score obtained from the distributions learnt on
this lower-dimensional subspace is more discriminative for OOD detection. We
also show that the feature reconstruction error, which is the $L_2$-norm of the
difference between the original feature and the pre-image of its embedding, is
highly effective for OOD detection and in some cases superior to the
log-likelihood scores. The benefits of our approach are demonstrated on image
features by detecting OOD images, using popular DNN architectures on commonly
used image datasets such as CIFAR10, CIFAR100, and SVHN.
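The two scores described in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration (random stand-in features, an arbitrary subspace dimension k, and PCA as the linear reduction), not the authors' exact pipeline: features are projected onto a PCA subspace, a Gaussian fit in that subspace yields a log-likelihood score, and the feature reconstruction error is the $L_2$-norm between a feature and the pre-image of its embedding.

```python
# Minimal sketch of subspace-based OOD scoring (illustrative setup, not the
# authors' exact pipeline). PCA plays the role of the linear dimensionality
# reduction; the feature data and subspace dimension k are placeholders.
import numpy as np

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(500, 64))   # stand-in for in-distribution DNN features
test_feat = rng.normal(size=64)            # stand-in for one test feature

# PCA via SVD on the centered training features.
mean = train_feats.mean(axis=0)
X = train_feats - mean
_, _, Vt = np.linalg.svd(X, full_matrices=False)
k = 16                                     # subspace dimension (arbitrary choice here)
P = Vt[:k].T                               # (64, k) orthonormal basis of the subspace

def embed(f):
    """Project a feature onto the lower-dimensional subspace."""
    return (f - mean) @ P

def pre_image(z):
    """Map an embedding back into the original feature space."""
    return z @ P.T + mean

# Score 1: log-likelihood under a Gaussian fit on the embeddings.
Z = X @ P
mu = Z.mean(axis=0)
cov = np.cov(Z, rowvar=False) + 1e-6 * np.eye(k)   # small ridge for stability
cov_inv = np.linalg.inv(cov)
_, logdet = np.linalg.slogdet(cov)

def log_likelihood(f):
    d = embed(f) - mu
    return -0.5 * (d @ cov_inv @ d + logdet + k * np.log(2 * np.pi))

# Score 2: feature reconstruction error (FRE).
def fre(f):
    """L2 norm of the difference between a feature and its pre-image."""
    return np.linalg.norm(f - pre_image(embed(f)))

# Lower log-likelihood and higher FRE both suggest an OOD sample.
print(log_likelihood(test_feat), fre(test_feat))
```

In practice the features would come from a chosen layer of a trained DNN, and a threshold on either score, calibrated on held-out in-distribution data, yields the OOD decision.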
Related papers
- Dimensionality-induced information loss of outliers in deep neural networks [29.15751143793406]
Out-of-distribution (OOD) detection is a critical issue for systems using a deep neural network (DNN).
We experimentally clarify this issue by investigating the layer dependence of feature representations from multiple perspectives.
We propose a dimensionality-aware OOD detection method based on alignment of features and weights.
arXiv Detail & Related papers (2024-10-29T01:52:46Z)
- Exploiting Diffusion Prior for Out-of-Distribution Detection [11.11093497717038]
Out-of-distribution (OOD) detection is crucial for deploying robust machine learning models.
We present a novel approach for OOD detection that leverages the generative ability of diffusion models and the powerful feature extraction capabilities of CLIP.
arXiv Detail & Related papers (2024-06-16T23:55:25Z)
- GROOD: GRadient-aware Out-Of-Distribution detection in interpolated manifolds [12.727088216619386]
Out-of-distribution samples can pose risks for deep neural networks (DNNs) in real-world deployments.
We introduce GRadient-aware Out-Of-Distribution detection in interpolated manifolds (GROOD), a novel framework that relies on the discriminative power of gradient space.
We show that GROOD surpasses the established robustness of state-of-the-art baselines.
arXiv Detail & Related papers (2023-12-22T04:28:43Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to $17.0\%$ AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
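GNNSafe builds on energy-based OOD scoring derived from classifier logits. As a hedged illustration of that starting point only (the plain logit-based energy score, not the graph-propagated variant the paper proposes):

```python
# Minimal sketch of the basic energy-based OOD score, E(x) = -logsumexp(logits).
# This is a generic illustration with made-up logits, not GNNSafe's
# graph-propagated version.
import numpy as np

def energy_score(logits):
    # Numerically stable negative log-sum-exp of the logits.
    m = logits.max()
    return -(m + np.log(np.exp(logits - m).sum()))

confident = np.array([9.0, 0.1, 0.2])   # peaked logits (in-distribution-like)
uncertain = np.array([0.3, 0.2, 0.1])   # flat logits (more OOD-like)

print(energy_score(confident), energy_score(uncertain))
```

Peaked logits give lower energy, so thresholding the energy separates confident in-distribution predictions from OOD-like flat ones.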
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- DeepDC: Deep Distance Correlation as a Perceptual Image Quality Evaluator [53.57431705309919]
ImageNet pre-trained deep neural networks (DNNs) show notable transferability for building effective image quality assessment (IQA) models.
We develop a novel full-reference IQA (FR-IQA) model based exclusively on pre-trained DNN features.
We conduct comprehensive experiments to demonstrate the superiority of the proposed quality model on five standard IQA datasets.
arXiv Detail & Related papers (2022-11-09T14:57:27Z)
- Watermarking for Out-of-distribution Detection [76.20630986010114]
Out-of-distribution (OOD) detection aims to identify OOD data based on representations extracted from well-trained deep models.
We propose a general methodology named watermarking in this paper.
We learn a unified pattern that is superimposed onto features of original data, and the model's detection capability is largely boosted after watermarking.
arXiv Detail & Related papers (2022-10-27T06:12:32Z)
- A Simple Test-Time Method for Out-of-Distribution Detection [45.11199798139358]
This paper proposes a simple Test-time Linear Training (ETLT) method for OOD detection.
We find that the probabilities of input images being out-of-distribution are surprisingly linearly correlated to the features extracted by neural networks.
We propose an online variant of the proposed method, which achieves promising performance and is more practical in real-world applications.
arXiv Detail & Related papers (2022-07-17T16:02:58Z)
- Subspace Modeling for Fast Out-Of-Distribution and Anomaly Detection [5.672132510411465]
This paper presents a principled approach for detecting anomalous and out-of-distribution (OOD) samples in deep neural networks (DNNs).
We propose the application of linear statistical dimensionality reduction techniques on the semantic features produced by a DNN.
We show that the "feature reconstruction error" (FRE), which is the $\ell_2$-norm of the difference between the original feature in the high-dimensional space and the pre-image of its low-dimensional reduced embedding, is highly effective for OOD and anomaly detection.
arXiv Detail & Related papers (2022-03-20T00:55:20Z)
- Why Normalizing Flows Fail to Detect Out-of-Distribution Data [51.552870594221865]
Normalizing flows fail to distinguish between in- and out-of-distribution data.
We demonstrate that flows learn local pixel correlations and generic image-to-latent-space transformations.
We show that by modifying the architecture of flow coupling layers we can bias the flow towards learning the semantic structure of the target data.
arXiv Detail & Related papers (2020-06-15T17:00:01Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluating on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.