Feature Density Estimation for Out-of-Distribution Detection via Normalizing Flows
- URL: http://arxiv.org/abs/2402.06537v2
- Date: Tue, 30 Apr 2024 03:44:13 GMT
- Title: Feature Density Estimation for Out-of-Distribution Detection via Normalizing Flows
- Authors: Evan D. Cook, Marc-Antoine Lavoie, Steven L. Waslander
- Abstract summary: Out-of-distribution (OOD) detection is a critical task for safe deployment of learning systems in the open world setting.
We present a fully unsupervised approach which requires no exposure to OOD data, avoiding researcher bias in OOD sample selection.
This is a post-hoc method which can be applied to any pretrained model, and involves training a lightweight auxiliary normalizing flow model to perform the out-of-distribution detection via density thresholding.
- Score: 7.91363551513361
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Out-of-distribution (OOD) detection is a critical task for safe deployment of learning systems in the open world setting. In this work, we investigate the use of feature density estimation via normalizing flows for OOD detection and present a fully unsupervised approach which requires no exposure to OOD data, avoiding researcher bias in OOD sample selection. This is a post-hoc method which can be applied to any pretrained model, and involves training a lightweight auxiliary normalizing flow model to perform the out-of-distribution detection via density thresholding. Experiments on OOD detection in image classification show strong results for far-OOD data detection with only a single epoch of flow training, including 98.2% AUROC for ImageNet-1k vs. Textures, which exceeds the state of the art by 7.8%. We additionally explore the connection between the feature space distribution of the pretrained model and the performance of our method. Finally, we provide insights into training pitfalls that have plagued normalizing flows for use in OOD detection.
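The abstract describes a post-hoc pipeline: fit a density model to the features of a pretrained network, then flag test inputs whose log-density falls below a threshold chosen from in-distribution data. The sketch below illustrates only that thresholding idea, with a diagonal Gaussian standing in for the lightweight normalizing flow; the feature data, dimensions, and quantile are illustrative placeholders, not the paper's actual setup.

```python
import numpy as np

def fit_gaussian(features):
    """Fit a diagonal Gaussian to in-distribution feature vectors."""
    mu = features.mean(axis=0)
    var = features.var(axis=0) + 1e-6  # small floor for numerical stability
    return mu, var

def log_density(x, mu, var):
    """Per-sample log-density under the fitted diagonal Gaussian."""
    return -0.5 * (((x - mu) ** 2 / var) + np.log(2 * np.pi * var)).sum(axis=1)

def ood_flags(train_feats, test_feats, quantile=0.05):
    """Flag test samples whose log-density falls below an ID quantile.

    The quantile sets the false-positive rate on in-distribution data:
    with quantile=0.05, about 5% of ID samples fall below the threshold.
    """
    mu, var = fit_gaussian(train_feats)
    threshold = np.quantile(log_density(train_feats, mu, var), quantile)
    scores = log_density(test_feats, mu, var)
    return scores < threshold  # True marks suspected OOD samples

# Synthetic stand-ins for features from a pretrained model:
rng = np.random.default_rng(0)
in_dist = rng.normal(0.0, 1.0, size=(1000, 8))  # "ID" feature vectors
far_ood = rng.normal(6.0, 1.0, size=(100, 8))   # well-separated "far-OOD"
flags = ood_flags(in_dist, far_ood)
print(flags.mean())  # fraction of OOD samples flagged
```

In the paper, the Gaussian would be replaced by a normalizing flow trained on the pretrained model's features, whose exact log-density is what makes the thresholding principled.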
Related papers
- FlowCon: Out-of-Distribution Detection using Flow-Based Contrastive Learning [0.0]
We introduce FlowCon, a new density-based OOD detection technique.
Our main innovation lies in efficiently combining the properties of normalizing flow with supervised contrastive learning.
Empirical evaluation shows the enhanced performance of our method across common vision datasets.
arXiv Detail & Related papers (2024-07-03T20:33:56Z) - Out-of-Distribution Detection with a Single Unconditional Diffusion Model [54.15132801131365]
Out-of-distribution (OOD) detection is a critical task in machine learning that seeks to identify abnormal samples.
Traditionally, unsupervised methods utilize a deep generative model for OOD detection.
This paper explores whether a single model can perform OOD detection across diverse tasks.
arXiv Detail & Related papers (2024-05-20T08:54:03Z) - Model-free Test Time Adaptation for Out-Of-Distribution Detection [62.49795078366206]
We propose a non-parametric test-time adaptation framework for out-of-distribution detection.
The framework utilizes online test samples for model adaptation during testing, enhancing adaptability to changing data distributions.
We demonstrate its effectiveness through comprehensive experiments on multiple OOD detection benchmarks.
arXiv Detail & Related papers (2023-11-28T02:00:47Z) - Topology-Matching Normalizing Flows for Out-of-Distribution Detection in Robot Learning [38.97407602443256]
A powerful approach for Out-of-Distribution (OOD) detection is based on density estimation with Normalizing Flows (NFs).
In this work, we circumvent this topological mismatch using an expressive class-conditional base distribution trained with an information-theoretic objective to match the required topology.
We demonstrate superior results in density estimation and 2D object detection benchmarks in comparison with extensive baselines.
arXiv Detail & Related papers (2023-11-11T05:09:31Z) - Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement [41.650761556671775]
In this paper, we offer insights and analyses of recent state-of-the-art out-of-distribution (OOD) detection methods.
We demonstrate that activation pruning has a detrimental effect on OOD detection, while activation scaling enhances it.
We achieve AUROC improvements of +1.85% for near-OOD and +0.74% for far-OOD datasets on the OpenOOD v1.5 ImageNet-1K benchmark.
arXiv Detail & Related papers (2023-09-30T02:10:54Z) - Using Semantic Information for Defining and Detecting OOD Inputs [3.9577682622066264]
Out-of-distribution (OOD) detection has received some attention recently.
We demonstrate that the current detectors inherit the biases in the training dataset.
This can render the current OOD detectors impermeable to inputs lying outside the training distribution but with the same semantic information.
We perform OOD detection on semantic information extracted from the training data of MNIST and COCO datasets.
arXiv Detail & Related papers (2023-02-21T21:31:20Z) - Rethinking Out-of-distribution (OOD) Detection: Masked Image Modeling is All You Need [52.88953913542445]
We find, surprisingly, that simply using reconstruction-based methods can significantly boost OOD detection performance.
We take Masked Image Modeling as a pretext task for our OOD detection framework (MOOD).
arXiv Detail & Related papers (2023-02-06T08:24:41Z) - InFlow: Robust outlier detection utilizing Normalizing Flows [7.309919829856283]
We show that normalizing flows can reliably detect outliers including adversarial attacks.
Our approach does not require outlier data for training and we showcase the efficiency of our method for OOD detection.
arXiv Detail & Related papers (2021-06-10T08:42:50Z) - Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z) - Why Normalizing Flows Fail to Detect Out-of-Distribution Data [51.552870594221865]
Normalizing flows fail to distinguish between in- and out-of-distribution data.
We demonstrate that flows learn local pixel correlations and generic image-to-latent-space transformations.
We show that by modifying the architecture of flow coupling layers we can bias the flow towards learning the semantic structure of the target data.
arXiv Detail & Related papers (2020-06-15T17:00:01Z) - Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluating on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.