PixelTransformer: Sample Conditioned Signal Generation
- URL: http://arxiv.org/abs/2103.15813v1
- Date: Mon, 29 Mar 2021 17:59:33 GMT
- Title: PixelTransformer: Sample Conditioned Signal Generation
- Authors: Shubham Tulsiani, Abhinav Gupta
- Abstract summary: We propose a generative model that can infer a distribution for the underlying signal conditioned on sparse samples.
In contrast to sequential autoregressive generative models, our model allows conditioning on arbitrary samples and can answer distributional queries for any location.
- Score: 60.764218381636184
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a generative model that can infer a distribution for the
underlying spatial signal conditioned on sparse samples, e.g. plausible images
given a few observed pixels. In contrast to sequential autoregressive
generative models, our model allows conditioning on arbitrary samples and can
answer distributional queries for any location. We empirically validate our
approach across three image datasets and show that we learn to generate diverse
and meaningful samples, with the distribution variance reducing given more
observed pixels. We also show that our approach is applicable beyond images and
can generate other types of spatial outputs, e.g. polynomials, 3D
shapes, and videos.
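The abstract's core interface — conditioning on an arbitrary set of observed (location, value) samples and answering a distributional query at any location — can be sketched as below. This is a minimal, untrained toy with made-up dimensions and random weights, not the paper's actual architecture (which presumably uses a learned Transformer-style encoder over the sample set); only the permutation-invariant set encoding and per-location Gaussian query reflect the described behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (illustrative, not from the paper).
D_POS, D_VAL, D_HID = 2, 3, 16

# Random (untrained) projections standing in for learned encoders.
W_POS = rng.normal(size=(D_POS, D_HID))
W_VAL = rng.normal(size=(D_VAL, D_HID))
W_OUT = rng.normal(size=(2 * D_HID, 2 * D_VAL))  # -> mean and log-variance

def embed_samples(positions, values):
    """Permutation-invariant encoding of an arbitrary set of (position, value) samples."""
    feats = np.tanh(positions @ W_POS + values @ W_VAL)  # (n, D_HID)
    return feats.mean(axis=0)                            # mean-pool: set -> vector

def query_distribution(positions, values, query_pos):
    """Predict a Gaussian over the signal value at an arbitrary query location."""
    ctx = embed_samples(positions, values)
    q = np.tanh(query_pos @ W_POS)
    out = np.concatenate([ctx, q]) @ W_OUT
    mean, log_var = out[:D_VAL], out[D_VAL:]
    return mean, np.exp(log_var)  # exp keeps the variance strictly positive
```

Because the set encoding is pooled, the predicted distribution is the same regardless of the order in which the conditioning samples are given, which is what lets the model condition on arbitrary (rather than sequential) observations.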
Related papers
- Diffusion with Forward Models: Solving Stochastic Inverse Problems Without Direct Supervision [76.32860119056964]
We propose a novel class of denoising diffusion probabilistic models that learn to sample from distributions of signals that are never directly observed.
We demonstrate the effectiveness of our method on three challenging computer vision tasks.
arXiv Detail & Related papers (2023-06-20T17:53:00Z)
- Rethinking Polyp Segmentation from an Out-of-Distribution Perspective [37.1338930936671]
We leverage the ability of masked autoencoders -- self-supervised vision transformers trained on a reconstruction task -- to learn in-distribution representations.
We perform out-of-distribution reconstruction and inference, with feature space standardisation to align the latent distribution of the diverse abnormal samples with the statistics of the healthy samples.
Experimental results on six benchmarks show that our model has excellent segmentation performance and generalises across datasets.
arXiv Detail & Related papers (2023-06-13T14:13:16Z)
- T1: Scaling Diffusion Probabilistic Fields to High-Resolution on Unified Visual Modalities [69.16656086708291]
Diffusion Probabilistic Field (DPF) models the distribution of continuous functions defined over metric spaces.
We propose a new model comprising a view-wise sampling algorithm to focus on local structure learning.
The model can be scaled to generate high-resolution data while unifying multiple modalities.
arXiv Detail & Related papers (2023-05-24T03:32:03Z)
- Learning to Generate 3D Representations of Building Roofs Using Single-View Aerial Imagery [68.3565370706598]
We present a novel pipeline for learning the conditional distribution of a building roof mesh given pixels from an aerial image.
Unlike alternative methods that require multiple images of the same object, our approach enables estimating 3D roof meshes using only a single image for predictions.
arXiv Detail & Related papers (2023-03-20T15:47:05Z)
- Example-Based Sampling with Diffusion Models [7.943023838493658]
Diffusion models for image generation could be appropriate for learning how to generate point sets from examples.
We propose a generic way to produce 2-d point sets imitating existing samplers from observed point sets using a diffusion model.
We demonstrate how the differentiability of our approach can be used to optimize point sets to enforce properties.
arXiv Detail & Related papers (2023-02-10T08:35:17Z)
- Structured Uncertainty in the Observation Space of Variational Autoencoders [20.709989481734794]
In image synthesis, sampling from such distributions produces spatially-incoherent results with uncorrelated pixel noise.
We propose an alternative model for the observation space, encoding spatial dependencies via a low-rank parameterisation.
In contrast to pixel-wise independent distributions, our samples seem to contain semantically meaningful variations from the mean, allowing the prediction of multiple plausible outputs.
arXiv Detail & Related papers (2022-05-25T07:12:50Z)
- Generative Models as Distributions of Functions [72.2682083758999]
Generative models are typically trained on grid-like data such as images.
In this paper, we abandon discretized grids and instead parameterize individual data points by continuous functions.
arXiv Detail & Related papers (2021-02-09T11:47:55Z)
- Oops I Took A Gradient: Scalable Sampling for Discrete Distributions [53.3142984019796]
We show that this approach outperforms generic samplers in a number of difficult settings.
We also demonstrate the use of our improved sampler for training deep energy-based models on high dimensional discrete data.
arXiv Detail & Related papers (2021-02-08T20:08:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.