Anytime Sampling for Autoregressive Models via Ordered Autoencoding
- URL: http://arxiv.org/abs/2102.11495v1
- Date: Tue, 23 Feb 2021 05:13:16 GMT
- Title: Anytime Sampling for Autoregressive Models via Ordered Autoencoding
- Authors: Yilun Xu, Yang Song, Sahaj Garg, Linyuan Gong, Rui Shu, Aditya Grover,
Stefano Ermon
- Abstract summary: Autoregressive models are widely used for tasks such as image and audio generation.
The sampling process of these models does not allow interruptions and cannot adapt to real-time computational resources.
We propose a new family of autoregressive models that enables anytime sampling.
- Score: 88.01906682843618
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autoregressive models are widely used for tasks such as image and audio
generation. The sampling process of these models, however, does not allow
interruptions and cannot adapt to real-time computational resources. This
challenge impedes the deployment of powerful autoregressive models, which
involve a slow sampling process that is sequential in nature and typically
scales linearly with respect to the data dimension. To address this difficulty,
we propose a new family of autoregressive models that enables anytime sampling.
Inspired by Principal Component Analysis, we learn a structured representation
space where dimensions are ordered based on their importance with respect to
reconstruction. Using an autoregressive model in this latent space, we trade
off sample quality for computational efficiency by truncating the generation
process before decoding into the original data space. Experimentally, we
demonstrate in several image and audio generation tasks that sample quality
degrades gracefully as we reduce the computational budget for sampling. The
approach suffers almost no loss in sample quality (measured by FID) using only
60% to 80% of all latent dimensions for image data. Code is available at
https://github.com/Newbeeer/Anytime-Auto-Regressive-Model .
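For intuition, below is a minimal, hypothetical sketch (in PyTorch) of the two ingredients the abstract describes: an autoencoder trained with random code truncation so that earlier latent dimensions carry more information, and an anytime sampler that runs an autoregressive model over the latent code and stops once the sampling budget is exhausted. The layer sizes, the truncation distribution, and the `ar_model` interface are illustrative assumptions rather than the authors' exact architecture; the official implementation is at the repository linked above.

```python
import torch
import torch.nn as nn

class OrderedAutoencoder(nn.Module):
    """Autoencoder trained with random code truncation so that earlier latent
    dimensions matter more (a PCA-like importance ordering). Sizes are
    illustrative, not the paper's architecture."""
    def __init__(self, data_dim=784, latent_dim=64):
        super().__init__()
        self.latent_dim = latent_dim
        self.encoder = nn.Sequential(
            nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))

    def forward(self, x):
        z = self.encoder(x)
        # Keep only the first k latent dimensions for this batch; because k is
        # random, the model learns to pack the most important information into
        # the leading dimensions.
        k = torch.randint(1, self.latent_dim + 1, (1,)).item()
        mask = torch.zeros_like(z)
        mask[:, :k] = 1.0
        return self.decoder(z * mask)

@torch.no_grad()
def anytime_sample(ar_model, decoder, latent_dim, budget):
    """Sample latent dimensions one at a time and stop after `budget` steps;
    the remaining dimensions stay at zero before decoding to data space."""
    z = torch.zeros(1, latent_dim)
    for i in range(min(budget, latent_dim)):
        # `ar_model` is a hypothetical stand-in for an autoregressive model
        # over the latent code: given the already-sampled prefix, it returns a
        # distribution over the next dimension.
        z[:, i] = ar_model(z[:, :i]).sample()
    return decoder(z)
```

Calling `anytime_sample` with a smaller `budget` trades sample quality for speed, which is the graceful degradation the abstract reports.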
Related papers
- Fast constrained sampling in pre-trained diffusion models [77.21486516041391]
Diffusion models have dominated the field of large, generative image models.
We propose an algorithm for fast constrained sampling in large pre-trained diffusion models.
arXiv Detail & Related papers (2024-10-24T14:52:38Z)
- One Step Diffusion via Shortcut Models [109.72495454280627]
We introduce shortcut models, a family of generative models that use a single network and training phase to produce high-quality samples.
Shortcut models condition the network on the current noise level and also on the desired step size, allowing the model to skip ahead in the generation process.
Compared to distillation, shortcut models reduce complexity to a single network and training phase and additionally allow varying step budgets at inference time.
arXiv Detail & Related papers (2024-10-16T13:34:40Z)
- Generalizing to Out-of-Sample Degradations via Model Reprogramming [29.56470202794348]
The Out-of-Sample Restoration (OSR) task aims to develop restoration models capable of handling out-of-sample degradations.
We propose a model reprogramming framework that translates out-of-sample degradations using quantum mechanics and wave functions.
arXiv Detail & Related papers (2024-03-09T11:56:26Z)
- RL-based Stateful Neural Adaptive Sampling and Denoising for Real-Time Path Tracing [1.534667887016089]
Monte Carlo path tracing is a powerful technique for realistic image synthesis, but it suffers from high levels of noise at low sample counts.
We propose a framework with end-to-end training of a sampling importance network, a latent space encoder network, and a denoiser network.
arXiv Detail & Related papers (2023-10-05T12:39:27Z)
- Latent Autoregressive Source Separation [5.871054749661012]
This paper introduces vector-quantized Latent Autoregressive Source Separation, i.e., de-mixing an input signal into its constituent sources, without requiring additional gradient-based optimization or modifications of existing models.
Our separation method relies on the Bayesian formulation in which the autoregressive models are the priors, and a discrete (non-parametric) likelihood function is constructed by performing frequency counts over latent sums of addend tokens.
arXiv Detail & Related papers (2023-01-09T17:32:00Z)
- Megapixel Image Generation with Step-Unrolled Denoising Autoencoders [5.145313322824774]
We propose a combination of techniques to push sample resolutions higher and reduce computational requirements for training and sampling.
These include vector-quantized GAN (VQ-GAN), a vector-quantization (VQ) model capable of high levels of lossy but perceptually insignificant compression; hourglass transformers, a highly scalable self-attention model; and step-unrolled denoising autoencoders (SUNDAE), a non-autoregressive (NAR) text generative model.
Our proposed framework scales to high resolutions ($1024 \times 1024$) and trains quickly.
arXiv Detail & Related papers (2022-06-24T15:47:42Z)
- Improved Autoregressive Modeling with Distribution Smoothing [106.14646411432823]
Autoregressive models excel at image compression, but their sample quality is often lacking.
Inspired by a successful adversarial defense method, we incorporate randomized smoothing into autoregressive generative modeling.
arXiv Detail & Related papers (2021-03-28T09:21:20Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- Instance Selection for GANs [25.196177369030146]
Generative Adversarial Networks (GANs) have seen widespread adoption for generating high-quality synthetic imagery.
However, GANs often produce unrealistic samples that fall outside the data manifold.
We propose a novel approach to improve sample quality: altering the training dataset via instance selection before model training has taken place.
arXiv Detail & Related papers (2020-07-30T06:33:51Z)
- Set Based Stochastic Subsampling [85.5331107565578]
We propose a set-based two-stage end-to-end neural subsampling model that is jointly optimized with an arbitrary downstream task network.
We show that it outperforms the relevant baselines under low subsampling rates on a variety of tasks including image classification, image reconstruction, function reconstruction and few-shot classification.
arXiv Detail & Related papers (2020-06-25T07:36:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.