Trans-Dimensional Generative Modeling via Jump Diffusion Models
- URL: http://arxiv.org/abs/2305.16261v2
- Date: Mon, 30 Oct 2023 10:14:44 GMT
- Title: Trans-Dimensional Generative Modeling via Jump Diffusion Models
- Authors: Andrew Campbell, William Harvey, Christian Weilbach, Valentin De Bortoli, Tom Rainforth, Arnaud Doucet
- Abstract summary: We propose a new class of generative models that naturally handle data of varying dimensionality.
We first define a dimension-destroying forward noising process, before deriving the dimension-creating time-reversed generative process.
Simulating our learned approximation to the time-reversed generative process then provides an effective way of sampling data of varying dimensionality.
- Score: 46.183265841345644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new class of generative models that naturally handle data of
varying dimensionality by jointly modeling the state and dimension of each
datapoint. The generative process is formulated as a jump diffusion process
that makes jumps between different dimensional spaces. We first define a
dimension-destroying forward noising process, before deriving the
dimension-creating time-reversed generative process along with a novel evidence lower
bound training objective for learning to approximate it. Simulating our learned
approximation to the time-reversed generative process then provides an
effective way of sampling data of varying dimensionality by jointly generating
state values and dimensions. We demonstrate our approach on molecular and video
datasets of varying dimensionality, reporting better compatibility with
test-time diffusion guidance imputation tasks and improved interpolation
capabilities versus fixed-dimensional models that generate state values and
dimensions separately.
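As an illustrative sketch only (not the paper's exact construction, and the rates and noising schedule here are invented for demonstration), a dimension-destroying forward process can be simulated by interleaving a diffusive noising step on the surviving state with random jump times at which one coordinate is deleted:

```python
import numpy as np

def forward_noise(x, n_steps=100, dt=0.01, del_rate=0.5, seed=0):
    """Toy dimension-destroying forward process: the state diffuses
    toward noise, and at random jump times one dimension is removed.
    All parameters are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float).copy()
    traj = [x.copy()]
    for _ in range(n_steps):
        # Ornstein-Uhlenbeck-style noising of the current state
        x = x - x * dt + np.sqrt(2.0 * dt) * rng.standard_normal(x.shape)
        # With probability ~ del_rate * dt, jump to a lower-dimensional space
        if x.size > 1 and rng.random() < del_rate * dt:
            x = np.delete(x, rng.integers(x.size))
        traj.append(x.copy())
    return traj

traj = forward_noise(np.ones(8))
dims = [t.size for t in traj]  # dimension is non-increasing over time
```

The learned generative model would simulate the time reversal of such a process, jointly denoising state values and re-creating dimensions at jump times.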
Related papers
- Latent diffusion models for parameterization and data assimilation of facies-based geomodels [0.0]
Diffusion models are trained to generate new geological realizations from input fields characterized by random noise.
Latent diffusion models are shown to provide realizations that are visually consistent with samples from geomodeling software.
arXiv Detail & Related papers (2024-06-21T01:32:03Z)
- Synthetic location trajectory generation using categorical diffusion models [50.809683239937584]
Diffusion models (DPMs) have rapidly evolved to be one of the predominant generative models for the simulation of synthetic data.
We propose using DPMs for the generation of synthetic individual location trajectories (ILTs) which are sequences of variables representing physical locations visited by individuals.
arXiv Detail & Related papers (2024-02-19T15:57:39Z)
- Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
arXiv Detail & Related papers (2024-02-19T02:08:09Z)
- Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimensional modelling.
We show that with these conditions, the generative functional model admits the same symmetry.
arXiv Detail & Related papers (2023-07-11T16:51:38Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator means the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Infinite-Dimensional Diffusion Models [4.342241136871849]
We formulate diffusion-based generative models in infinite dimensions and apply them to the generative modeling of functions.
We show that our formulations are well posed in the infinite-dimensional setting and provide dimension-independent distance bounds from the sample to the target measure.
We also develop guidelines for the design of infinite-dimensional diffusion models.
arXiv Detail & Related papers (2023-02-20T18:00:38Z)
- Data-driven low-dimensional dynamic model of Kolmogorov flow [0.0]
Reduced order models (ROMs) that capture flow dynamics are of interest for decreasing computational costs for simulation.
This work presents a data-driven framework for minimal-dimensional models that effectively capture the dynamics and properties of the flow.
We apply this to Kolmogorov flow in a regime consisting of chaotic and intermittent behavior.
arXiv Detail & Related papers (2022-10-29T23:05:39Z)
- Learning Neural Generative Dynamics for Molecular Conformation Generation [89.03173504444415]
We study how to generate molecule conformations (i.e., 3D structures) from a molecular graph.
We propose a novel probabilistic framework to generate valid and diverse conformations given a molecular graph.
arXiv Detail & Related papers (2021-02-20T03:17:58Z)
- Modeling Continuous Stochastic Processes with Dynamic Normalizing Flows [40.9137348900942]
We propose a novel type of flow driven by a differential deformation of the Wiener process.
As a result, we obtain a rich time series model whose observable process inherits many of the appealing properties of its base process.
arXiv Detail & Related papers (2020-02-24T20:13:43Z)
- VFlow: More Expressive Generative Flows with Variational Data Augmentation [33.431861316434706]
Tractability imposes architectural constraints on generative flows, making them less expressive than other types of generative models.
We tackle this constraint by augmenting the data with some extra dimensions and jointly learning a generative flow for augmented data.
Our approach, VFlow, is a generalization of generative flows and therefore always performs better.
arXiv Detail & Related papers (2020-02-22T18:03:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.