Flow Based Models For Manifold Data
- URL: http://arxiv.org/abs/2109.14216v1
- Date: Wed, 29 Sep 2021 06:48:01 GMT
- Title: Flow Based Models For Manifold Data
- Authors: Mingtian Zhang and Yitong Sun and Steven McDonagh and Chen Zhang
- Abstract summary: Flow-based generative models typically define a latent space with dimensionality identical to the observational space.
In many problems, the data do not populate the full ambient space in which they reside, but rather a lower-dimensional manifold.
We propose to learn a manifold prior that affords benefits to both sample generation and representation quality.
- Score: 11.344428134774475
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Flow-based generative models typically define a latent space with
dimensionality identical to the observational space. In many problems, however,
the data do not populate the full ambient data-space in which they natively
reside, instead inhabiting a lower-dimensional manifold. In such scenarios,
flow-based models are unable to represent data structures exactly, as their
density will always have support off the data manifold, potentially resulting
in degradation of model performance. In addition, the requirement for equal
latent and data space dimensionality can unnecessarily increase complexity for
contemporary flow models. Towards addressing these problems, we propose to
learn a manifold prior that affords benefits to both sample generation and
representation quality. An auxiliary benefit of our approach is the ability to
identify the intrinsic dimension of the data distribution.
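To make the dimensionality constraint concrete, below is a minimal, hypothetical sketch of a RealNVP-style affine coupling flow in PyTorch. It is not the paper's implementation; all names and hyperparameters are illustrative assumptions. The structural point is that the bijection forces latent and data dimensionality to match, so the model density has full-dimensional support even when the data lie on a lower-dimensional manifold.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        # A small MLP predicts a scale and shift for the second half
        # of the input, conditioned on the first half.
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(x1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)            # keep scales numerically stable
        z2 = x2 * torch.exp(log_s) + t       # invertible in x2
        log_det = log_s.sum(dim=1)           # log |det Jacobian| of the coupling
        return torch.cat([x1, z2], dim=1), log_det

dim = 2                                      # latent dim == data dim by construction
flow = AffineCoupling(dim)
base = torch.distributions.Normal(torch.zeros(dim), torch.ones(dim))

# Toy data on a 1-D manifold (a circle) embedded in R^2.
theta = torch.rand(256) * 2 * torch.pi
x = torch.stack([theta.cos(), theta.sin()], dim=1)

z, log_det = flow(x)
log_px = base.log_prob(z).sum(dim=1) + log_det
# log_px defines a density with full 2-D support, so it can never
# concentrate exactly on the 1-D circle: the mismatch the paper targets.
```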
Related papers
- Distribution-Aware Data Expansion with Diffusion Models [55.979857976023695]
We propose DistDiff, a training-free data expansion framework based on a distribution-aware diffusion model.
DistDiff consistently enhances accuracy across a diverse range of datasets compared to models trained solely on original data.
arXiv Detail & Related papers (2024-03-11T14:07:53Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and their accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning [112.69497636932955]
Federated learning aims to train models across different clients without the sharing of data for privacy considerations.
We study how data heterogeneity affects the representations of the globally aggregated models.
We propose FedDecorr, a novel method that can effectively mitigate dimensional collapse in federated learning (sketched below).
arXiv Detail & Related papers (2022-10-01T09:04:17Z)
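The summary above does not spell out FedDecorr's objective, so the following is only an assumption-laden sketch of one common way to discourage dimensional collapse: penalizing cross-correlations between feature dimensions within a batch. The function name and weighting are hypothetical.

```python
import torch

def decorrelation_loss(features: torch.Tensor) -> torch.Tensor:
    # features: (batch, dim) representations from the local model.
    z = features - features.mean(dim=0, keepdim=True)
    z = z / (z.std(dim=0, keepdim=True) + 1e-8)        # standardize per dimension
    corr = (z.T @ z) / (z.shape[0] - 1)                # correlation matrix
    off_diag = corr - torch.diag(torch.diagonal(corr))
    # Driving cross-correlations to zero spreads variance across dimensions,
    # counteracting collapse onto a low-dimensional subspace.
    return (off_diag ** 2).sum() / corr.shape[1]

# e.g. local_loss = task_loss + beta * decorrelation_loss(features)
```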
- ManiFlow: Implicitly Representing Manifolds with Normalizing Flows [145.9820993054072]
Normalizing Flows (NFs) are flexible explicit generative models that have been shown to accurately model complex real-world data distributions.
We propose an optimization objective that recovers the most likely point on the manifold given a sample from the perturbed distribution (sketched below).
Finally, we focus on 3D point clouds, for which we utilize the explicit nature of NFs, i.e., surface normals extracted from the gradient of the log-likelihood and the log-likelihood itself.
arXiv Detail & Related papers (2022-08-18T16:07:59Z)
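One plausible reading of ManiFlow's recovery objective is gradient ascent on a trained flow's log-likelihood, starting from the perturbed sample. The sketch below assumes this reading; `log_prob` is a hypothetical callable standing in for the trained model, and the optimizer and step count are arbitrary choices.

```python
import torch

def project_to_manifold(x_noisy, log_prob, steps=200, lr=1e-2):
    # Ascend the model's log-likelihood starting from the perturbed sample;
    # the iterate drifts toward a nearby high-density (on-manifold) point.
    x = x_noisy.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -log_prob(x).sum()
        loss.backward()
        opt.step()
    return x.detach()

# The gradient of log_prob also yields (unnormalized) surface normals near
# the learned manifold, which the paper exploits for 3D point clouds.
```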
- RENs: Relevance Encoding Networks [0.0]
This paper proposes relevance encoding networks (RENs): a novel probabilistic VAE-based framework that uses the automatic relevance determination (ARD) prior in the latent space to learn the data-specific bottleneck dimensionality (sketched below).
We show that the proposed model learns the relevant latent bottleneck dimensionality without compromising the representation and generation quality of the samples.
arXiv Detail & Related papers (2022-05-25T21:53:48Z)
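As a rough illustration of the ARD idea, the sketch below places a zero-mean Gaussian prior with learnable per-dimension variance on a VAE latent space, assuming a diagonal-Gaussian posterior. The module name and pruning heuristic are illustrative, not the RENs formulation.

```python
import torch
import torch.nn as nn

class ARDPriorKL(nn.Module):
    def __init__(self, latent_dim):
        super().__init__()
        # Learnable log-variance of a zero-mean Gaussian prior, per dimension.
        self.log_prior_var = nn.Parameter(torch.zeros(latent_dim))

    def forward(self, mu, logvar):
        # KL( N(mu, exp(logvar)) || N(0, exp(log_prior_var)) ), per dimension.
        kl = 0.5 * (self.log_prior_var - logvar
                    + (logvar.exp() + mu ** 2) / self.log_prior_var.exp()
                    - 1.0)
        return kl.sum(dim=1).mean()

# Used in place of the standard-normal KL term of a VAE. Dimensions whose
# learned prior variance shrinks toward zero are effectively switched off,
# exposing a data-specific bottleneck dimensionality.
kl_term = ARDPriorKL(latent_dim=16)(torch.zeros(8, 16), torch.zeros(8, 16))
```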
- Learning from few examples with nonlinear feature maps [68.8204255655161]
We explore the phenomenon and reveal key relationships between the dimensionality of an AI model's feature space, the non-degeneracy of data distributions, and the model's generalisation capabilities.
The main thrust of our analysis is the influence of nonlinear feature transformations, which map original data into higher- and possibly infinite-dimensional spaces, on the resulting model's generalisation capabilities.
arXiv Detail & Related papers (2022-03-31T10:36:50Z)
- Nonlinear Isometric Manifold Learning for Injective Normalizing Flows [58.720142291102135]
We use isometries to separate manifold learning and density estimation.
We also employ autoencoders to design embeddings with explicit inverses that do not distort the probability distribution (sketched below).
arXiv Detail & Related papers (2022-03-08T08:57:43Z)
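To illustrate the isometry idea: a decoder is a local isometry when its Jacobian has orthonormal columns (J^T J = I), in which case latent densities transfer to the manifold without distortion. The penalty below measures deviation from that condition; it assumes PyTorch 2.x `torch.func` and a decoder mapping a single latent vector to a single data vector, and is only a sketch of one possible regularizer.

```python
import torch
from torch.func import jacrev, vmap

def isometry_penalty(decoder, z):
    # decoder maps a single latent vector (d,) to a data vector (D,);
    # vmap lifts the per-sample Jacobian over the batch: J is (batch, D, d).
    J = vmap(jacrev(decoder))(z)
    d = z.shape[-1]
    JtJ = J.transpose(-1, -2) @ J                       # (batch, d, d)
    I = torch.eye(d, device=z.device, dtype=z.dtype)
    # Zero iff the decoder is a local isometry (columns of J orthonormal).
    return ((JtJ - I) ** 2).sum(dim=(-2, -1)).mean()

# Example with a hypothetical linear "decoder" from R^2 into R^5:
W, _ = torch.linalg.qr(torch.randn(5, 2))               # orthonormal columns
penalty = isometry_penalty(lambda v: W @ v, torch.randn(8, 2))  # ~0
```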
- Normalizing Flows Across Dimensions [10.21537170623373]
We introduce noisy injective flows (NIF), a generalization of normalizing flows that can map across dimensions.
NIFs explicitly map a low-dimensional latent space to a learnable manifold in a high-dimensional data space using injective transformations (sketched below).
Empirically, we demonstrate that a simple application of our method to existing flow architectures can significantly improve sample quality and yield separable data embeddings.
arXiv Detail & Related papers (2020-06-23T14:47:18Z)
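A minimal sketch of an injective, cross-dimensional map in the spirit of NIF: zero-pad the low-dimensional latent and push it through a bijective flow in data space. The noise model NIF places off the manifold is omitted, and `flow` here is any invertible callable, so treat this purely as an illustration.

```python
import torch

def inject(z, flow, data_dim):
    # z: (batch, d) latent samples with d < data_dim. Zero-padding followed
    # by a bijection gives an injective map R^d -> R^D, so samples land on
    # a d-dimensional manifold embedded in the data space.
    pad = torch.zeros(z.shape[0], data_dim - z.shape[1], device=z.device)
    return flow(torch.cat([z, pad], dim=1))

# `flow` stands in for any invertible map on R^D, e.g. stacked couplings.
x = inject(torch.randn(16, 2), flow=lambda u: u * 3.0 + 1.0, data_dim=5)
```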
- Variational Autoencoder with Learned Latent Structure [4.41370484305827]
We introduce the Variational Autoencoder with Learned Latent Structure (VAELLS).
VAELLS incorporates a learnable manifold model into the latent space of a VAE.
We validate our model on examples with known latent structure and also demonstrate its capabilities on a real-world dataset.
arXiv Detail & Related papers (2020-06-18T14:59:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.