Out-of-distribution detection using normalizing flows on the data manifold
- URL: http://arxiv.org/abs/2308.13792v1
- Date: Sat, 26 Aug 2023 07:35:16 GMT
- Title: Out-of-distribution detection using normalizing flows on the data manifold
- Authors: Seyedeh Fatemeh Razavi, Mohammad Mahdi Mehmanchi, Reshad Hosseini,
Mostafa Tavassolipour
- Abstract summary: This study investigates the effect of manifold learning using normalizing flows on out-of-distribution detection.
We show that manifold learning improves the out-of-distribution detection ability of a class of likelihood-based models known as normalizing flows.
- Score: 3.725042082196983
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A common approach for out-of-distribution detection involves estimating an
underlying data distribution, which assigns a lower likelihood value to
out-of-distribution data. Normalizing flows are likelihood-based generative
models providing a tractable density estimation via dimension-preserving
invertible transformations. Conventional normalizing flows are prone to fail in
out-of-distribution detection, because of the well-known curse of
dimensionality problem of the likelihood-based models. According to the
manifold hypothesis, real-world data often lie on a low-dimensional manifold.
This study investigates the effect of manifold learning using normalizing flows
on out-of-distribution detection. We proceed by estimating the density on a
low-dimensional manifold, coupled with measuring the distance from the
manifold, as criteria for out-of-distribution detection. However, individually,
each of them is insufficient for this task. The extensive experimental results
show that manifold learning improves the out-of-distribution detection ability
of a class of likelihood-based models known as normalizing flows. This
improvement is achieved without modifying the model structure or using
auxiliary out-of-distribution data during training.
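The abstract's two criteria, density on the learned manifold plus distance from the manifold, can be illustrated with a toy sketch. This is not the authors' implementation: a fixed orthogonal map stands in for a trained normalizing flow, and all names (encode, decode, MANIFOLD_DIM) are illustrative assumptions.

```python
# Toy sketch of the paper's two OOD criteria. A random orthogonal
# transform plays the role of a trained invertible flow, so that
# encode/decode are exact inverses of each other.
import numpy as np

rng = np.random.default_rng(0)
AMBIENT_DIM, MANIFOLD_DIM = 8, 2

Q, _ = np.linalg.qr(rng.normal(size=(AMBIENT_DIM, AMBIENT_DIM)))

def encode(x):
    return x @ Q      # data x -> latent z

def decode(z):
    return z @ Q.T    # latent z -> data x

def ood_score(x):
    """Combine the abstract's two criteria:
    (1) likelihood of the on-manifold latent coordinates,
    (2) distance of x from the learned manifold."""
    z = encode(x)
    # Project onto the manifold by zeroing the off-manifold coordinates.
    z_on = np.concatenate([z[:MANIFOLD_DIM],
                           np.zeros(AMBIENT_DIM - MANIFOLD_DIM)])
    # (1) negative log-likelihood under a standard normal base density
    nll = 0.5 * np.sum(z[:MANIFOLD_DIM] ** 2)
    # (2) off-manifold (reconstruction) distance
    dist = np.linalg.norm(x - decode(z_on))
    return nll + dist  # higher score -> more likely out-of-distribution

# An on-manifold sample scores lower than a point pushed off the manifold.
on_manifold = decode(np.concatenate([rng.normal(size=MANIFOLD_DIM),
                                     np.zeros(AMBIENT_DIM - MANIFOLD_DIM)]))
off_manifold = on_manifold + 3.0 * Q[:, -1]  # shift along an off-manifold direction
print(ood_score(on_manifold) < ood_score(off_manifold))
```

The sketch mirrors the paper's point that neither criterion alone suffices: the shifted point has the same on-manifold likelihood, and only the distance term separates it.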
Related papers
- GLAD: Towards Better Reconstruction with Global and Local Adaptive Diffusion Models for Unsupervised Anomaly Detection [60.78684630040313]
Diffusion models tend to reconstruct normal counterparts of test images with certain noises added.
From the global perspective, the difficulty of reconstructing images with different anomalies is uneven.
We propose a global and local adaptive diffusion model (abbreviated to GLAD) for unsupervised anomaly detection.
arXiv Detail & Related papers (2024-06-11T17:27:23Z)
- Projection Regret: Reducing Background Bias for Novelty Detection via Diffusion Models [72.07462371883501]
We propose Projection Regret (PR), an efficient novelty detection method that mitigates the bias of non-semantic information.
PR computes the perceptual distance between the test image and its diffusion-based projection to detect abnormality.
Extensive experiments demonstrate that PR outperforms the prior art of generative-model-based novelty detection methods by a significant margin.
arXiv Detail & Related papers (2023-12-05T09:44:47Z)
- Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data [68.62134204367668]
This paper studies score approximation, estimation, and distribution recovery of diffusion models, when data are supported on an unknown low-dimensional linear subspace.
We show that with a properly chosen neural network architecture, the score function can be both accurately approximated and efficiently estimated.
The generated distribution based on the estimated score function captures the data geometric structures and converges to a close vicinity of the data distribution.
arXiv Detail & Related papers (2023-02-14T17:02:35Z)
- Convolutional Filtering on Sampled Manifolds [122.06927400759021]
We show that convolutional filtering on a sampled manifold converges to continuous manifold filtering.
Our findings are further demonstrated empirically on a problem of navigation control.
arXiv Detail & Related papers (2022-11-20T19:09:50Z)
- ManiFlow: Implicitly Representing Manifolds with Normalizing Flows [145.9820993054072]
Normalizing Flows (NFs) are flexible explicit generative models that have been shown to accurately model complex real-world data distributions.
We propose an optimization objective that recovers the most likely point on the manifold given a sample from the perturbed distribution.
Finally, we focus on 3D point clouds for which we utilize the explicit nature of NFs, i.e. surface normals extracted from the gradient of the log-likelihood and the log-likelihood itself.
arXiv Detail & Related papers (2022-08-18T16:07:59Z)
- Joint Manifold Learning and Density Estimation Using Normalizing Flows [4.939777212813711]
We introduce two approaches, namely per-pixel penalized log-likelihood and hierarchical training, to answer the question.
We propose a single-step method for joint manifold learning and density estimation by disentangling the transformed space.
Results validate the superiority of the proposed methods in simultaneous manifold learning and density estimation.
arXiv Detail & Related papers (2022-06-07T13:35:14Z)
- Nonlinear Isometric Manifold Learning for Injective Normalizing Flows [58.720142291102135]
We use isometries to separate manifold learning and density estimation.
We also employ autoencoders to design embeddings with explicit inverses that do not distort the probability distribution.
arXiv Detail & Related papers (2022-03-08T08:57:43Z)
- Resampling Base Distributions of Normalizing Flows [0.0]
We introduce a base distribution for normalizing flows based on learned rejection sampling.
We develop suitable learning algorithms using both maximizing the log-likelihood and the optimization of the reverse Kullback-Leibler divergence.
arXiv Detail & Related papers (2021-10-29T14:44:44Z)
- Tractable Density Estimation on Learned Manifolds with Conformal Embedding Flows [0.0]
Normalizing flows provide tractable density estimation by transforming a simple base distribution into a complex target distribution.
Recent attempts to remedy this have introduced geometric complications that defeat a central benefit of normalizing flows: exact density estimation.
We argue that composing a standard flow with a trainable conformal embedding is the most natural way to model manifold-supported data.
arXiv Detail & Related papers (2021-06-09T18:00:00Z)
- Rectangular Flows for Manifold Learning [38.63646804834534]
Normalizing flows are invertible neural networks with tractable change-of-volume terms.
Data of interest is typically assumed to live in some (often unknown) low-dimensional manifold embedded in high-dimensional ambient space.
We propose two methods to tractably compute the gradient of this term with respect to the parameters of the model.
arXiv Detail & Related papers (2021-06-02T18:30:39Z)
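Several of the manifold-flow entries above (ManiFlow, Conformal Embedding Flows, Rectangular Flows) build on the standard injective change-of-variables formula; as a reference point, for an injective map $g: \mathbb{R}^d \to \mathbb{R}^D$ with Jacobian $J_g$, the density induced on the manifold is

```latex
p_X(x) = p_Z\left(g^{-1}(x)\right)\,
         \left|\det\left(J_g^\top J_g\right)\right|^{-1/2}
```

Conformal embeddings make this term tractable because $J_g^\top J_g = \lambda^2 I$, so the determinant reduces to $\lambda^{2d}$, while Rectangular Flows instead target the gradient of the log-determinant directly.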
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.