InFlow: Robust outlier detection utilizing Normalizing Flows
- URL: http://arxiv.org/abs/2106.12894v1
- Date: Thu, 10 Jun 2021 08:42:50 GMT
- Title: InFlow: Robust outlier detection utilizing Normalizing Flows
- Authors: Nishant Kumar, Pia Hanfeld, Michael Hecht, Michael Bussmann, Stefan
Gumhold and Nico Hoffmann
- Abstract summary: We show that normalizing flows can reliably detect outliers including adversarial attacks.
Our approach does not require outlier data for training and we showcase the efficiency of our method for OOD detection.
- Score: 7.309919829856283
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Normalizing flows are prominent deep generative models that provide tractable
probability distributions and efficient density estimation. However, they are
well known to fail at detecting Out-of-Distribution (OOD) inputs because they
directly encode the local features of the input representations in their latent
space. In this paper, we solve this overconfidence issue of normalizing flows
by demonstrating that flows, if extended by an attention mechanism, can
reliably detect outliers including adversarial attacks. Our approach does not
require outlier data for training and we showcase the efficiency of our method
for OOD detection by reporting state-of-the-art performance in diverse
experimental settings. Code available at
https://github.com/ComputationalRadiationPhysics/InFlow .
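For context on the setup described above, the sketch below illustrates how a normalizing flow's tractable log-likelihood is commonly turned into an outlier score by thresholding. It is a minimal, hypothetical PyTorch example with a toy affine-coupling flow on feature vectors and an illustrative 5th-percentile cutoff; it does not reproduce InFlow's attention mechanism or the authors' implementation.

```python
# Minimal sketch, not the InFlow implementation: likelihood-thresholding OOD
# detection with a toy affine-coupling normalizing flow. A realistic setup
# would stack many coupling layers (plus InFlow's attention extension) and
# train on image data; everything below is illustrative.
import math
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style coupling layer: rescales and shifts the second half of the
    features conditioned on the first half, with a tractable log-determinant."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                      # keep scales bounded for stability
        z2 = x2 * torch.exp(s) + t
        return torch.cat([x1, z2], dim=1), s.sum(dim=1)   # output, log|det J|

class ToyFlow(nn.Module):
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.dim = dim
        self.layers = nn.ModuleList(AffineCoupling(dim) for _ in range(n_layers))

    def log_prob(self, x):
        z, log_det = x, torch.zeros(x.shape[0])
        for layer in self.layers:
            z, ld = layer(z)
            z = z.flip(dims=[1])               # reverse features so both halves get transformed
            log_det = log_det + ld
        log_pz = -0.5 * (z ** 2).sum(dim=1) - 0.5 * self.dim * math.log(2 * math.pi)
        return log_pz + log_det                # log p(x) via change of variables

# After training the flow on in-distribution data (maximizing log_prob),
# calibrate a threshold on held-out in-distribution likelihoods and flag
# low-likelihood inputs as outliers.
flow = ToyFlow(dim=8)
with torch.no_grad():
    x_in, x_test = torch.randn(256, 8), torch.randn(32, 8)
    tau = torch.quantile(flow.log_prob(x_in), 0.05)      # illustrative 5th-percentile cutoff
    is_outlier = flow.log_prob(x_test) < tau
```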
Related papers
- What If the Input is Expanded in OOD Detection? [77.37433624869857]
Out-of-distribution (OOD) detection aims to identify OOD inputs from unknown classes.
Various scoring functions have been proposed to distinguish OOD inputs from in-distribution (ID) data.
We introduce a novel perspective, i.e., employing different common corruptions on the input space.
arXiv Detail & Related papers (2024-10-24T06:47:28Z)
- Feature Density Estimation for Out-of-Distribution Detection via Normalizing Flows [7.91363551513361]
Out-of-distribution (OOD) detection is a critical task for safe deployment of learning systems in the open world setting.
We present a fully unsupervised approach which requires no exposure to OOD data, avoiding researcher bias in OOD sample selection.
This is a post-hoc method that can be applied to any pretrained model and involves training a lightweight auxiliary normalizing flow model to perform out-of-distribution detection via density thresholding.
arXiv Detail & Related papers (2024-02-09T16:51:01Z)
- Robustness to Spurious Correlations Improves Semantic Out-of-Distribution Detection [24.821151013905865]
Methods which utilize the outputs or feature representations of predictive models have emerged as promising approaches for out-of-distribution (OOD) detection of image inputs.
We provide a possible explanation for SN-OOD detection failures and propose nuisance-aware OOD detection to address them.
arXiv Detail & Related papers (2023-02-08T15:28:33Z)
- Out-of-Distribution Detection with Hilbert-Schmidt Independence Optimization [114.43504951058796]
Outlier detection plays a critical role in AI safety.
Deep neural network classifiers tend to incorrectly assign out-of-distribution (OOD) inputs to in-distribution classes with high confidence.
We propose an alternative probabilistic paradigm that is both practically useful and theoretically viable for the OOD detection tasks.
arXiv Detail & Related papers (2022-09-26T15:59:55Z)
- Positive Difference Distribution for Image Outlier Detection using Normalizing Flows and Contrastive Data [2.9005223064604078]
Likelihoods learned by a generative model, e.g., a normalizing flow trained via standard log-likelihood maximization, perform poorly as outlier scores.
We propose to use an unlabelled auxiliary dataset and a probabilistic outlier score for outlier detection.
We show that this is equivalent to learning the normalized positive difference between the in-distribution and the contrastive feature density.
arXiv Detail & Related papers (2022-08-30T07:00:46Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold, as in the sketch below.
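A minimal numpy sketch of the thresholded-confidence idea summarized above; the max-softmax confidence score and all variable names are illustrative assumptions, and the paper's exact score functions and calibration procedure may differ.

```python
# Sketch of the Average Thresholded Confidence (ATC) idea: pick a threshold on
# the model's confidence so that, on labeled source data, the fraction of
# examples above it matches the source accuracy; then predict target accuracy
# as the fraction of unlabeled target examples whose confidence exceeds it.
import numpy as np

def learn_atc_threshold(source_conf, source_correct):
    """source_conf: per-example confidence scores on labeled source data.
    source_correct: boolean array, whether the model's prediction was correct."""
    source_acc = source_correct.mean()
    # choose t so that the fraction of source confidences above t equals source accuracy
    return np.quantile(source_conf, 1.0 - source_acc)

def predict_target_accuracy(target_conf, threshold):
    """Predicted accuracy = fraction of unlabeled target examples above the threshold."""
    return (target_conf > threshold).mean()

# toy usage with random numbers standing in for real model confidences
rng = np.random.default_rng(0)
src_conf = rng.uniform(0.5, 1.0, size=1000)
src_correct = rng.uniform(size=1000) < src_conf       # roughly calibrated toy model
tgt_conf = rng.uniform(0.4, 1.0, size=1000)
t = learn_atc_threshold(src_conf, src_correct)
print(predict_target_accuracy(tgt_conf, t))
```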
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- FastFlow: Unsupervised Anomaly Detection and Localization via 2D Normalizing Flows [18.062328700407726]
We propose FastFlow as a plug-in module for arbitrary deep feature extractors such as ResNet and vision transformer.
In the training phase, FastFlow learns to transform input visual features into a tractable distribution; at inference, the resulting likelihood is used to recognize anomalies.
Our approach achieves 99.4% AUC in anomaly detection with high inference efficiency.
arXiv Detail & Related papers (2021-11-15T11:15:02Z)
- Efficient remedies for outlier detection with variational autoencoders [8.80692072928023]
Likelihoods computed by deep generative models are a candidate metric for outlier detection with unlabeled data.
We show that a theoretically grounded correction readily ameliorates a key bias in VAE likelihood estimates.
We also show that the variance of likelihoods across an ensemble of VAEs enables robust outlier detection, as illustrated in the sketch below.
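The ensemble-variance idea in the last point can be illustrated with a short, hypothetical numpy sketch; the function name and toy numbers are made up for illustration and this is not the authors' code.

```python
# Illustrative sketch: using disagreement across an ensemble of generative
# models as an outlier score. `loglik_per_model` holds the log-likelihood (or
# ELBO) each independently trained VAE assigns to each input; high variance
# across models flags a likely outlier.
import numpy as np

def ensemble_variance_score(loglik_per_model):
    """loglik_per_model: array of shape (n_models, n_inputs)."""
    return loglik_per_model.var(axis=0)     # per-input variance across the ensemble

# toy usage: 5 models, 3 inputs; the last input gets wildly different likelihoods
logliks = np.array([
    [-102.0, -98.5, -250.0],
    [-101.5, -99.0, -120.0],
    [-103.2, -97.8, -310.0],
    [-100.9, -98.9, -180.0],
    [-102.4, -99.3, -260.0],
])
scores = ensemble_variance_score(logliks)
print(scores)   # in practice, compare against a threshold calibrated on in-distribution data
```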
arXiv Detail & Related papers (2021-08-19T16:00:58Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU, making it compute-efficient and deployable without specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Why Normalizing Flows Fail to Detect Out-of-Distribution Data [51.552870594221865]
Normalizing flows fail to distinguish between in- and out-of-distribution data.
We demonstrate that flows learn local pixel correlations and generic image-to-latent-space transformations.
We show that by modifying the architecture of flow coupling layers we can bias the flow towards learning the semantic structure of the target data.
arXiv Detail & Related papers (2020-06-15T17:00:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.