Why Normalizing Flows Fail to Detect Out-of-Distribution Data
- URL: http://arxiv.org/abs/2006.08545v1
- Date: Mon, 15 Jun 2020 17:00:01 GMT
- Title: Why Normalizing Flows Fail to Detect Out-of-Distribution Data
- Authors: Polina Kirichenko, Pavel Izmailov, Andrew Gordon Wilson
- Abstract summary: Normalizing flows fail to distinguish between in- and out-of-distribution data.
We demonstrate that flows learn local pixel correlations and generic image-to-latent-space transformations.
We show that by modifying the architecture of flow coupling layers we can bias the flow towards learning the semantic structure of the target data.
- Score: 51.552870594221865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detecting out-of-distribution (OOD) data is crucial for robust machine
learning systems. Normalizing flows are flexible deep generative models that
often surprisingly fail to distinguish between in- and out-of-distribution
data: a flow trained on pictures of clothing assigns higher likelihood to
handwritten digits. We investigate why normalizing flows perform poorly for OOD
detection. We demonstrate that flows learn local pixel correlations and generic
image-to-latent-space transformations which are not specific to the target
image dataset. We show that by modifying the architecture of flow coupling
layers we can bias the flow towards learning the semantic structure of the
target data, improving OOD detection. Our investigation reveals that properties
that enable flows to generate high-fidelity images can have a detrimental
effect on OOD detection.
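To make the setup concrete, here is a minimal, hypothetical sketch (not the authors' code) of the standard likelihood-based OOD detection scheme the paper analyzes: a RealNVP-style affine coupling layer and the change-of-variables log-likelihood log p(x) = log p(z) + log|det dz/dx|, with low-likelihood inputs flagged as OOD. It assumes PyTorch; the layer sizes, names, and the untrained toy flow are illustrative only.

```python
# Minimal sketch (illustrative, not the paper's code): an affine coupling layer
# and the likelihood-based OOD score that the paper shows can fail.
import math
import torch
import torch.nn as nn


class AffineCoupling(nn.Module):
    """Split x into (x_a, x_b) and apply an elementwise affine transform to x_b
    whose scale and shift depend only on x_a (RealNVP-style coupling)."""

    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.d = dim // 2
        # Small MLP predicting per-dimension scale and shift for x_b.
        self.net = nn.Sequential(
            nn.Linear(self.d, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.d)),
        )

    def forward(self, x):
        x_a, x_b = x[:, :self.d], x[:, self.d:]
        s, t = self.net(x_a).chunk(2, dim=1)
        s = torch.tanh(s)                      # keep scales numerically stable
        z_b = x_b * torch.exp(s) + t           # invertible elementwise transform
        log_det = s.sum(dim=1)                 # log|det Jacobian| of the coupling
        return torch.cat([x_a, z_b], dim=1), log_det


def log_likelihood(layers, x):
    """Change of variables: log p(x) = log N(z; 0, I) + sum of log-determinants."""
    z, log_det_total = x, torch.zeros(x.shape[0])
    for layer in layers:
        z, log_det = layer(z)
        log_det_total = log_det_total + log_det
    log_prior = -0.5 * (z ** 2).sum(dim=1) - 0.5 * z.shape[1] * math.log(2 * math.pi)
    return log_prior + log_det_total


if __name__ == "__main__":
    # Untrained toy flow on flattened 28x28 images; all data here is random noise
    # standing in for real datasets.
    flow = [AffineCoupling(dim=784) for _ in range(4)]
    x_in = torch.rand(8, 784)    # stand-in for in-distribution images (e.g. clothing)
    x_ood = torch.rand(8, 784)   # stand-in for OOD images (e.g. handwritten digits)
    with torch.no_grad():
        print(log_likelihood(flow, x_in).mean().item())
        print(log_likelihood(flow, x_ood).mean().item())
```

A real flow alternates which coordinates each coupling transforms (via masks or permutations) and is trained by maximizing the log-likelihood above. The paper's point is that this score can come out higher for OOD images (e.g., handwritten digits under a flow trained on clothing photos) because the couplings mostly capture local pixel correlations, and that modifying the coupling-layer architecture can bias the flow toward the semantic structure of the target data.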
Related papers
- Can Your Generative Model Detect Out-of-Distribution Covariate Shift? [2.0144831048903566]
We propose a novel method for detecting Out-of-Distribution (OOD) sensory data using conditional Normalizing Flows (cNFs).
Our results on CIFAR10 vs. CIFAR10-C and ImageNet200 vs. ImageNet200-C demonstrate the effectiveness of the method.
arXiv Detail & Related papers (2024-09-04T19:27:56Z) - Exploiting Diffusion Prior for Out-of-Distribution Detection [11.11093497717038]
Out-of-distribution (OOD) detection is crucial for deploying robust machine learning models.
We present a novel approach for OOD detection that leverages the generative ability of diffusion models and the powerful feature extraction capabilities of CLIP.
arXiv Detail & Related papers (2024-06-16T23:55:25Z) - Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that the heavily degraded regions of detector-friendly underwater images (DFUI) and of underwater images exhibit evident feature distribution gaps.
Our method, with higher speed and fewer parameters, still performs better than transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z) - Masked Images Are Counterfactual Samples for Robust Fine-tuning [77.82348472169335]
Fine-tuning deep learning models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness.
We propose a novel fine-tuning method that uses masked images as counterfactual samples to help improve the robustness of the fine-tuned model.
arXiv Detail & Related papers (2023-03-06T11:51:28Z) - Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z) - Watermarking for Out-of-distribution Detection [76.20630986010114]
Out-of-distribution (OOD) detection aims to identify OOD data based on representations extracted from well-trained deep models.
We propose a general methodology named watermarking in this paper.
We learn a unified pattern that is superimposed onto the features of the original data, and the model's detection capability is largely boosted after watermarking.
arXiv Detail & Related papers (2022-10-27T06:12:32Z) - InFlow: Robust outlier detection utilizing Normalizing Flows [7.309919829856283]
We show that normalizing flows can reliably detect outliers including adversarial attacks.
Our approach does not require outlier data for training and we showcase the efficiency of our method for OOD detection.
arXiv Detail & Related papers (2021-06-10T08:42:50Z) - Out-of-Distribution Detection of Melanoma using Normalizing Flows [0.0]
We focus on exploring the data distribution modelling for Out-of-Distribution (OOD) detection.
Using one of the state-of-the-art NF models, GLOW, we attempt to detect OOD examples in the ISIC dataset.
We propose several ideas for improvement such as controlling frequency components, using different wavelets and using other state-of-the-art NF architectures.
arXiv Detail & Related papers (2021-03-23T16:47:19Z) - Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z) - Same Same But DifferNet: Semi-Supervised Defect Detection with Normalizing Flows [24.734388664558708]
We propose DifferNet: It leverages the descriptiveness of features extracted by convolutional neural networks to estimate their density.
Based on these likelihoods, we develop a scoring function that indicates defects (a generic likelihood-to-score sketch follows this entry).
We demonstrate the superior performance over existing approaches on the challenging and newly proposed MVTec AD and Magnetic Tile Defects datasets.
arXiv Detail & Related papers (2020-08-28T10:49:28Z)
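As a companion to the flow sketch above, the snippet below is a generic, hypothetical illustration (not DifferNet's implementation) of the likelihood-to-score step mentioned in the DifferNet summary: per-sample log-likelihoods from a density model become anomaly scores, and a threshold calibrated on defect-free data flags defects. The percentile threshold and all names are assumptions for illustration.

```python
# Generic sketch (not DifferNet's code): convert per-sample log-likelihoods from
# a density model into anomaly scores and a defect decision. The 99th-percentile
# threshold fitted on defect-free samples is an assumption for illustration.
import numpy as np


def anomaly_scores(log_likelihoods):
    return -np.asarray(log_likelihoods)        # lower likelihood -> higher score


def fit_threshold(defect_free_scores, percentile=99.0):
    return float(np.percentile(defect_free_scores, percentile))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_ll = rng.normal(-100.0, 5.0, size=1000)   # toy likelihoods of defect-free images
    test_ll = np.array([-98.0, -160.0])             # second sample is far less likely
    thr = fit_threshold(anomaly_scores(train_ll))
    print([bool(s > thr) for s in anomaly_scores(test_ll)])
```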
This list is automatically generated from the titles and abstracts of the papers on this site.