Generative Models as Distributions of Functions
- URL: http://arxiv.org/abs/2102.04776v1
- Date: Tue, 9 Feb 2021 11:47:55 GMT
- Title: Generative Models as Distributions of Functions
- Authors: Emilien Dupont, Yee Whye Teh, Arnaud Doucet
- Abstract summary: Generative models are typically trained on grid-like data such as images.
In this paper, we abandon discretized grids and instead parameterize individual data points by continuous functions.
- Score: 72.2682083758999
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative models are typically trained on grid-like data such as images. As
a result, the size of these models usually scales directly with the underlying
grid resolution. In this paper, we abandon discretized grids and instead
parameterize individual data points by continuous functions. We then build
generative models by learning distributions over such functions. By treating
data points as functions, we can abstract away from the specific type of data
we train on and construct models that scale independently of signal resolution
and dimension. To train our model, we use an adversarial approach with a
discriminator that acts directly on continuous signals. Through experiments on
both images and 3D shapes, we demonstrate that our model can learn rich
distributions of functions independently of data type and resolution.
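The abstract describes two ingredients: each data point is parameterized as a continuous function (for example, a coordinate network mapping pixel locations to RGB values), and an adversarial discriminator acts directly on continuous signals rather than on a fixed grid. The sketch below is a minimal, hypothetical PyTorch illustration of that setup; the module names, the latent-conditioning scheme, and the architecture sizes are assumptions for exposition, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): a data point, e.g. an image,
# is represented as a continuous function f(x, y) -> RGB via a coordinate MLP,
# and a discriminator scores a set of (coordinate, value) pairs, so neither
# network depends on a fixed grid resolution.
import torch
import torch.nn as nn


class CoordinateMLP(nn.Module):
    """Maps 2D coordinates (plus a latent code) to pixel values."""

    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB values in [0, 1]
        )

    def forward(self, coords, z):
        # coords: (batch, num_points, 2), z: (batch, latent_dim)
        z = z.unsqueeze(1).expand(-1, coords.shape[1], -1)
        return self.net(torch.cat([coords, z], dim=-1))


class PointSetDiscriminator(nn.Module):
    """Scores a set of (coordinate, value) pairs; a stand-in for the
    continuous-signal discriminator described in the abstract."""

    def __init__(self, hidden=128):
        super().__init__()
        self.point_net = nn.Sequential(
            nn.Linear(2 + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, 1)

    def forward(self, coords, values):
        feats = self.point_net(torch.cat([coords, values], dim=-1))
        return self.head(feats.mean(dim=1))  # pool over points -> one score


# Toy usage: sample random coordinates, decode them with a random latent code,
# and score the resulting point set. Real training would alternate
# generator/discriminator updates on real and generated (coords, values) sets.
generator = CoordinateMLP()
discriminator = PointSetDiscriminator()
coords = torch.rand(8, 256, 2)        # 256 random coordinates per sample
z = torch.randn(8, 64)                # latent codes
fake_values = generator(coords, z)    # (8, 256, 3)
score = discriminator(coords, fake_values)
print(score.shape)                    # torch.Size([8, 1])
```

Because both networks only ever see sets of (coordinate, value) pairs, the number and placement of sampled coordinates can vary freely, which is what allows the model to scale independently of signal resolution and dimension, as the abstract claims.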
Related papers
- Heat Death of Generative Models in Closed-Loop Learning [63.83608300361159]
We study the learning dynamics of generative models that are retrained on their own generated content in addition to their original training dataset.
We show that, unless a sufficient amount of external data is introduced at each iteration, any non-trivial sampling temperature leads the model to degenerate; a toy numerical illustration of this collapse is sketched after this list.
arXiv Detail & Related papers (2024-04-02T21:51:39Z)
- Diffusion with Forward Models: Solving Stochastic Inverse Problems Without Direct Supervision [76.32860119056964]
We propose a novel class of denoising diffusion probabilistic models that learn to sample from distributions of signals that are never directly observed.
We demonstrate the effectiveness of our method on three challenging computer vision tasks.
arXiv Detail & Related papers (2023-06-20T17:53:00Z)
- T1: Scaling Diffusion Probabilistic Fields to High-Resolution on Unified Visual Modalities [69.16656086708291]
Diffusion Probabilistic Field (DPF) models the distribution of continuous functions defined over metric spaces.
We propose a new model comprising a view-wise sampling algorithm to focus on local structure learning.
The model can be scaled to generate high-resolution data while unifying multiple modalities.
arXiv Detail & Related papers (2023-05-24T03:32:03Z)
- Diffusion Generative Models in Infinite Dimensions [10.15736468214228]
We generalize diffusion generative models to operate directly in function space.
A significant benefit of our function space point of view is that it allows us to explicitly specify the space of functions we are working in.
Our approach allows us to perform both unconditional and conditional generation of function-valued data.
arXiv Detail & Related papers (2022-12-01T21:54:19Z)
- OCD: Learning to Overfit with Conditional Diffusion Models [95.1828574518325]
We present a dynamic model in which the weights are conditioned on an input sample x.
We learn to match those weights that would be obtained by finetuning a base model on x and its label y.
arXiv Detail & Related papers (2022-10-02T09:42:47Z)
- Modelling nonlinear dependencies in the latent space of inverse scattering [1.5990720051907859]
In the inverse scattering framework proposed by Angles and Mallat, a deep neural network is trained to invert the scattering transform applied to an image.
After such a network is trained, it can be used as a generative model given that we can sample from the distribution of principal components of scattering coefficients.
Within this paper, two such models are explored, namely a Variational AutoEncoder and a Generative Adversarial Network.
arXiv Detail & Related papers (2022-03-19T12:07:43Z)
- Sampling from Arbitrary Functions via PSD Models [55.41644538483948]
We take a two-step approach by first modeling the probability distribution and then sampling from that model.
We show that these models can approximate a large class of densities concisely using few evaluations, and present a simple algorithm to effectively sample from these models.
arXiv Detail & Related papers (2021-10-20T12:25:22Z)
- S3VAE: Self-Supervised Sequential VAE for Representation Disentanglement and Data Generation [31.38329747789168]
We propose a sequential variational autoencoder to learn disentangled representations of sequential data under self-supervision.
We exploit readily accessible supervisory signals from the input data itself or from off-the-shelf functional models.
Our model can easily disentangle the representation of an input sequence into static factors and dynamic factors.
arXiv Detail & Related papers (2020-05-23T00:44:38Z)
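As referenced in the closed-loop learning entry above, degeneration under self-training is easy to reproduce in a toy setting. The NumPy sketch below is a hypothetical illustration, not that paper's experiment: a one-dimensional Gaussian model is repeatedly refit on its own samples drawn at temperature tau, and its standard deviation collapses unless a fraction of external (real) data is reintroduced at each iteration.

```python
# Hypothetical toy illustration of closed-loop degeneration (not the paper's
# experiment): a 1D Gaussian "generative model" is repeatedly refit on its own
# samples, drawn at temperature tau < 1; its standard deviation collapses
# unless external data is mixed back in at each iteration.
import numpy as np

rng = np.random.default_rng(0)
external = rng.normal(loc=0.0, scale=1.0, size=10_000)  # the original dataset

def closed_loop(tau, external_fraction, iterations=50, n=10_000):
    mu, sigma = external.mean(), external.std()
    for _ in range(iterations):
        generated = rng.normal(mu, tau * sigma, size=n)   # sample from the model
        n_ext = int(external_fraction * n)                 # fresh real data per round
        data = np.concatenate([generated[: n - n_ext],
                               rng.choice(external, size=n_ext)])
        mu, sigma = data.mean(), data.std()                # refit the model
    return sigma

print("no external data :", closed_loop(tau=0.9, external_fraction=0.0))
print("20% external data:", closed_loop(tau=0.9, external_fraction=0.2))
```

With external_fraction=0.0 the fitted standard deviation shrinks roughly by a factor of tau per iteration, whereas mixing in fresh external data keeps it near a stable equilibrium, in line with the degeneration result summarized above.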