Generation of data on discontinuous manifolds via continuous stochastic
non-invertible networks
- URL: http://arxiv.org/abs/2112.09646v1
- Date: Fri, 17 Dec 2021 17:39:59 GMT
- Title: Generation of data on discontinuous manifolds via continuous stochastic
non-invertible networks
- Authors: Mariia Drozdova, Vitaliy Kinakh, Guillaume Quétant, Tobias Golling,
Slava Voloshynovskiy
- Abstract summary: We show how to generate discontinuous distributions using continuous networks.
We derive a link between the cost functions and the information-theoretic formulation.
We apply our approach to synthetic 2D distributions to demonstrate both reconstruction and generation of discontinuous distributions.
- Score: 6.201770337181472
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The generation of discontinuous distributions is a difficult task for most
known frameworks such as generative autoencoders and generative adversarial
networks. Generative non-invertible models are unable to accurately generate
such distributions, require long training, and are often subject to mode
collapse. Variational autoencoders (VAEs), which are based on the idea of
keeping the latent space Gaussian for the sake of simple sampling, allow
accurate reconstruction but suffer significant limitations at the generation
task. In this work, instead of trying to keep the latent space Gaussian, we
use a pre-trained contrastive encoder to obtain a clustered
latent space. Then, for each cluster, representing a unimodal submanifold, we
train a dedicated low-complexity network to generate this submanifold from the
Gaussian distribution. The proposed framework is based on the
information-theoretic formulation of mutual information maximization between
the input data and latent space representation. We derive a link between the
cost functions and the information-theoretic formulation. We apply our approach
to synthetic 2D distributions to demonstrate both reconstruction and generation
of discontinuous distributions using continuous stochastic networks.
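To make the pipeline concrete, below is a minimal sketch of the per-cluster generation idea described in the abstract. The encoder is assumed to be a frozen, pre-trained contrastive network; the k-means clustering, the small MLP generators, and the moment-matching training loss are illustrative stand-ins for the paper's information-theoretic cost, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def make_generator(noise_dim=2, data_dim=2, hidden=64):
    # Low-complexity per-cluster generator: Gaussian noise -> one submanifold.
    return nn.Sequential(nn.Linear(noise_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, data_dim))

def fit_per_cluster_generators(x, encoder, n_clusters=4, noise_dim=2, steps=2000):
    """x: (N, 2) tensor of target samples; encoder: frozen pre-trained contrastive encoder."""
    with torch.no_grad():
        z = encoder(x)                                   # clustered latent space
    labels = torch.as_tensor(KMeans(n_clusters).fit_predict(z.numpy()))
    gens, weights = [], []
    for k in range(n_clusters):
        xk = x[labels == k]                              # one unimodal submanifold
        weights.append(len(xk) / len(x))                 # empirical cluster probability
        g = make_generator(noise_dim, x.shape[1])
        opt = torch.optim.Adam(g.parameters(), lr=1e-3)
        for _ in range(steps):
            eps = torch.randn(len(xk), noise_dim)
            fake = g(eps)
            # Toy moment-matching loss as a stand-in for the paper's
            # information-theoretic (mutual-information based) cost.
            loss = ((fake.mean(0) - xk.mean(0)) ** 2).sum() \
                 + ((fake.std(0) - xk.std(0)) ** 2).sum()
            opt.zero_grad(); loss.backward(); opt.step()
        gens.append(g)
    return gens, torch.tensor(weights)

def sample(gens, weights, n, noise_dim=2):
    ks = torch.multinomial(weights, n, replacement=True)  # draw a submanifold per sample
    return torch.cat([gens[int(k)](torch.randn(1, noise_dim)) for k in ks])
```

Because each generator is continuous but covers only one submanifold, the discontinuity of the generated distribution comes from the discrete choice of cluster at sampling time.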
Related papers
- Bellman Diffusion: Generative Modeling as Learning a Linear Operator in the Distribution Space [72.52365911990935]
We introduce Bellman Diffusion, a novel DGM framework that maintains linearity in MDPs through gradient and scalar field modeling.
Our results show that Bellman Diffusion achieves accurate field estimations and is a capable image generator, converging 1.5x faster than the traditional histogram-based baseline in distributional RL tasks.
arXiv Detail & Related papers (2024-10-02T17:53:23Z) - Constrained Diffusion Models via Dual Training [80.03953599062365]
We develop constrained diffusion models based on desired distributions informed by requirements.
We show that our constrained diffusion models generate new data from a mixture data distribution that achieves the optimal trade-off among objective and constraints.
arXiv Detail & Related papers (2024-08-27T14:25:42Z) - DisCo-Diff: Enhancing Continuous Diffusion Models with Discrete Latents [41.86208391836456]
We propose DisCo-Diff to simplify the task of encoding a complex data distribution into a single continuous Gaussian distribution.
DisCo-Diff does not rely on pre-trained networks, making the framework universally applicable.
We validate DisCo-Diff on toy data, several image synthesis tasks as well as molecular docking, and find that introducing discrete latents consistently improves model performance.
arXiv Detail & Related papers (2024-07-03T17:42:46Z) - Theoretical Insights for Diffusion Guidance: A Case Study for Gaussian
Mixture Models [59.331993845831946]
Diffusion models benefit from instillation of task-specific information into the score function to steer the sample generation towards desired properties.
This paper provides the first theoretical study towards understanding the influence of guidance on diffusion models in the context of Gaussian mixture models.
arXiv Detail & Related papers (2024-03-03T23:15:48Z) - Distributed Markov Chain Monte Carlo Sampling based on the Alternating
Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z) - Fully Embedded Time-Series Generative Adversarial Networks [0.0]
Generative Adversarial Networks (GANs) should produce synthetic data that fits the underlying distribution of the data being modeled.
For real-valued time-series data, this implies the need to capture not only the static distribution of the data but also its full temporal distribution for any potential time horizon.
In FETSGAN, entire sequences are translated directly to the generator's sampling space using a seq2seq-style adversarial autoencoder (AAE).
arXiv Detail & Related papers (2023-08-30T03:14:02Z) - Improving Out-of-Distribution Robustness of Classifiers via Generative
Interpolation [56.620403243640396]
Deep neural networks achieve superior performance for learning from independent and identically distributed (i.i.d.) data.
However, their performance deteriorates significantly when handling out-of-distribution (OoD) data.
We develop a simple yet effective method called Generative Interpolation to fuse generative models trained from multiple domains for synthesizing diverse OoD samples.
arXiv Detail & Related papers (2023-07-23T03:53:53Z) - Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z) - From Points to Functions: Infinite-dimensional Representations in
Diffusion Models [23.916417852496608]
Diffusion-based generative models learn to iteratively transfer unstructured noise to a complex target distribution.
We show that a combination of information content from different time steps gives a strictly better representation for the downstream task.
arXiv Detail & Related papers (2022-10-25T05:30:53Z) - Generative Model without Prior Distribution Matching [26.91643368299913]
The Variational Autoencoder (VAE) and its variations are classic generative models that learn a low-dimensional latent representation to satisfy some prior distribution.
We propose to let the prior match the embedding distribution rather than imposing the latent variables to fit the prior.
arXiv Detail & Related papers (2020-09-23T09:33:24Z) - Learning disconnected manifolds: a no GANs land [15.4867805276559]
Generative Adversarial Networks make use of a unimodal latent distribution transformed by a continuous generator.
We establish a no-free-lunch theorem for disconnected manifold learning, stating an upper bound on the precision of the targeted distribution.
We derive a rejection sampling method based on the norm of the generator's Jacobian and show its efficiency on several generators, including BigGAN (a minimal illustrative sketch follows this list).
arXiv Detail & Related papers (2020-06-08T13:45:22Z)