Improving Sequential Latent Variable Models with Autoregressive Flows
- URL: http://arxiv.org/abs/2010.03172v2
- Date: Tue, 8 Mar 2022 05:32:41 GMT
- Title: Improving Sequential Latent Variable Models with Autoregressive Flows
- Authors: Joseph Marino, Lei Chen, Jiawei He, Stephan Mandt
- Abstract summary: We propose an approach for improving sequence modeling based on autoregressive normalizing flows.
Results are presented on three benchmark video datasets, where autoregressive flow-based dynamics improve log-likelihood performance.
- Score: 30.053464816814348
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an approach for improving sequence modeling based on
autoregressive normalizing flows. Each autoregressive transform, acting across
time, serves as a moving frame of reference, removing temporal correlations,
and simplifying the modeling of higher-level dynamics. This technique provides
a simple, general-purpose method for improving sequence modeling, with
connections to existing and classical techniques. We demonstrate the proposed
approach both with standalone flow-based models and as a component within
sequential latent variable models. Results are presented on three benchmark
video datasets, where autoregressive flow-based dynamics improve log-likelihood
performance over baseline models. Finally, we illustrate the decorrelation and
improved generalization properties of using flow-based dynamics.
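As a rough illustration of the paper's core idea, below is a minimal sketch (not the authors' released code) of an affine autoregressive transform acting across time: each frame is normalized by a shift and scale predicted from the previous frame, and a higher-level model can then be fit on the decorrelated residuals. The module name and the single-frame MLP conditioner are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TemporalAffineFlow(nn.Module):
    """Affine autoregressive flow across time: each frame x_t is normalized
    by a shift and scale predicted from the previous frame, yielding a
    decorrelated 'noise' sequence that is easier to model."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        # Conditioner: predicts shift mu_t and log-scale from x_{t-1}.
        self.cond = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 2 * dim)
        )

    def forward(self, x):
        """x: (batch, time, dim) -> (noise, log_det) for the change of variables."""
        mu, log_s = self.cond(x[:, :-1]).chunk(2, dim=-1)  # condition on x_{t-1}
        noise = (x[:, 1:] - mu) * torch.exp(-log_s)        # y_t = (x_t - mu_t) / s_t
        log_det = -log_s.sum(dim=(1, 2))                   # log |det dy/dx| per sequence
        return noise, log_det

    def inverse(self, noise, x0):
        """Generate a sequence from noise given an initial frame x0."""
        xs = [x0]
        for t in range(noise.shape[1]):
            mu, log_s = self.cond(xs[-1]).chunk(2, dim=-1)
            xs.append(mu + torch.exp(log_s) * noise[:, t])  # x_t = mu_t + s_t * y_t
        return torch.stack(xs[1:], dim=1)
```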
Related papers
- Guided Flows for Generative Modeling and Decision Making [55.42634941614435]
We show that Guided Flows significantly improves sample quality in conditional image generation and zero-shot text-to-speech synthesis.
Notably, we are the first to apply flow models to plan generation in the offline reinforcement learning setting, with a speedup in computation compared to diffusion models.
arXiv Detail & Related papers (2023-11-22T15:07:59Z)
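A minimal sketch of how classifier-free guidance can be applied to a flow-matching velocity field, as in the Guided Flows entry above, using simple Euler integration from noise to data; the velocity network `v`, guidance weight `w`, and step count are placeholder assumptions, not the paper's exact procedure.

```python
import torch

@torch.no_grad()
def sample_guided_flow(v, x, cond, w=1.5, steps=100):
    """Integrate dx/dt = (1 + w) * v(x, t, cond) - w * v(x, t, None)
    from t=0 (noise) to t=1 (data) with a simple Euler scheme."""
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((x.shape[0],), i * dt, device=x.device)
        v_c = v(x, t, cond)                       # conditional velocity
        v_u = v(x, t, None)                       # unconditional velocity
        x = x + dt * ((1 + w) * v_c - w * v_u)    # guided Euler step
    return x
```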
- Diffusion Action Segmentation [63.061058214427085]
We propose a novel framework for action segmentation via denoising diffusion models, which share the same inherent spirit of iterative refinement.
In this framework, action predictions are iteratively generated from random noise with input video features as conditions.
arXiv Detail & Related papers (2023-03-31T10:53:24Z)
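A generic DDPM-style reverse loop illustrating the idea summarized above: action predictions are generated from random noise, conditioning every denoising step on the input video features. The noise schedule and the `eps_model` interface are assumptions for illustration, not the paper's implementation.

```python
import torch

@torch.no_grad()
def denoise_action_sequence(eps_model, video_feats, shape, betas):
    """DDPM-style ancestral sampling: start from Gaussian noise over per-frame
    action scores and iteratively denoise, conditioned on video features."""
    alphas = 1.0 - betas
    alphas_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)                      # pure noise over action scores
    for t in reversed(range(len(betas))):
        eps = eps_model(x, t, video_feats)      # predict the added noise (assumed API)
        coef = (1 - alphas[t]) / (1 - alphas_bar[t]).sqrt()
        mean = (x - coef * eps) / alphas[t].sqrt()
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + betas[t].sqrt() * noise      # one reverse diffusion step
    return x                                    # denoised action predictions
```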
- Enhancing Deep Traffic Forecasting Models with Dynamic Regression [15.31488551912888]
This paper introduces a dynamic regression (DR) framework that enhances existing traffic forecasting models through structured learning of the residual process.
We evaluate the effectiveness of the proposed framework on deep traffic forecasting models using both speed and flow datasets.
arXiv Detail & Related papers (2023-01-17T01:12:44Z)
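One plausible reading of the residual-process idea above, sketched with a first-order autoregressive model fit to a base forecaster's residuals; the AR(1) choice and the function names are illustrative assumptions, not the paper's exact framework.

```python
import numpy as np

def fit_ar1_residuals(y_true, y_base):
    """Fit an AR(1) model to the base model's residuals: r_t = phi * r_{t-1} + e_t."""
    r = y_true - y_base
    phi = np.dot(r[:-1], r[1:]) / np.dot(r[:-1], r[:-1])   # least-squares AR(1) fit
    return phi

def corrected_forecast(y_base_next, phi, last_residual):
    """Adjust the base model's next-step forecast with the predicted residual."""
    return y_base_next + phi * last_residual
```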
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee in model-based RL (MBRL).
Our derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- Parallel and Flexible Sampling from Autoregressive Models via Langevin Dynamics [13.097161185372151]
We propose a sampling procedure that initializes a sequence with white noise and follows a Markov chain defined by Langevin dynamics on the global log-likelihood of the sequence.
We apply these techniques to autoregressive models in the visual and audio domains, with competitive results for audio source separation, super-resolution, and inpainting.
arXiv Detail & Related papers (2021-05-17T21:07:02Z)
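A minimal sketch of the procedure described above: initialize the sequence with white noise and follow a Markov chain of (unadjusted) Langevin dynamics on the global log-likelihood. Step size and iteration count are placeholder assumptions.

```python
import torch

def langevin_sample(log_prob, shape, step=1e-3, n_steps=500):
    """Langevin dynamics on a global sequence log-likelihood:
    x <- x + (step / 2) * grad log p(x) + sqrt(step) * noise."""
    x = torch.randn(shape, requires_grad=True)        # white-noise initialization
    for _ in range(n_steps):
        grad = torch.autograd.grad(log_prob(x).sum(), x)[0]
        with torch.no_grad():
            x += 0.5 * step * grad + step ** 0.5 * torch.randn_like(x)
    return x.detach()
```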
- Autoregressive Dynamics Models for Offline Policy Evaluation and Optimization [60.73540999409032]
We introduce expressive autoregressive dynamics models that generate each dimension of the next state and reward sequentially, conditioned on previously generated dimensions.
We also show that autoregressive dynamics models are useful for offline policy optimization by serving as a way to enrich the replay buffer.
arXiv Detail & Related papers (2021-04-28T16:48:44Z)
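A sketch of the per-dimension sampling described above, assuming Gaussian conditionals and one head per state dimension so that p(s' | s, a) factorizes as a product of p(s'_i | s, a, s'_{<i}); the interface names are illustrative, not the paper's code.

```python
import torch

@torch.no_grad()
def sample_next_state(dim_models, state, action):
    """Sample the next state one dimension at a time, each conditioned on
    (state, action) and on the dimensions generated so far."""
    dims = []
    for model in dim_models:                  # one conditional head per dimension
        ctx = torch.cat([state, action] + dims, dim=-1)
        mu, log_s = model(ctx).chunk(2, dim=-1)            # Gaussian conditional
        dims.append(mu + log_s.exp() * torch.randn_like(mu))
    return torch.cat(dims, dim=-1)            # assembled next state (or reward)
```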
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
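A compressed sketch of one way such a two-stage pipeline could look, using a beta-VAE-style first stage and a simple refinement decoder as a stand-in for the second-stage generative model; this simplifies the paper's second stage considerably, and the encoder/decoder interfaces are assumptions.

```python
import torch

def train_two_stage(enc, dec1, dec2, data, beta=4.0, epochs=10):
    """Stage 1: beta-VAE-style training (heavy KL penalty -> disentangled z,
    low-quality reconstructions). Stage 2: freeze stage 1 and train a
    refinement model dec2 to recover the details stage 1 discards."""
    opt1 = torch.optim.Adam(list(enc.parameters()) + list(dec1.parameters()))
    for _ in range(epochs):                            # ----- stage 1 -----
        for x in data:
            mu, log_var = enc(x)                       # assumed encoder API
            z = mu + (0.5 * log_var).exp() * torch.randn_like(mu)
            kl = 0.5 * (mu ** 2 + log_var.exp() - 1 - log_var).sum(-1).mean()
            loss = ((dec1(z) - x) ** 2).sum(-1).mean() + beta * kl
            opt1.zero_grad(); loss.backward(); opt1.step()
    for p in list(enc.parameters()) + list(dec1.parameters()):
        p.requires_grad_(False)                        # freeze the disentangled stage
    opt2 = torch.optim.Adam(dec2.parameters())
    for _ in range(epochs):                            # ----- stage 2 -----
        for x in data:
            mu, _ = enc(x)
            refined = dec2(dec1(mu))                   # improve the coarse reconstruction
            loss = ((refined - x) ** 2).sum(-1).mean()
            opt2.zero_grad(); loss.backward(); opt2.step()
```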
- Normalizing Flows with Multi-Scale Autoregressive Priors [131.895570212956]
We introduce channel-wise dependencies in the latent space of normalizing flows through multi-scale autoregressive priors (mAR).
Our mAR prior for models with split coupling flow layers (mAR-SCF) can better capture dependencies in complex multimodal data.
We show that mAR-SCF allows for improved image generation quality, with gains in FID and Inception scores compared to state-of-the-art flow-based models.
arXiv Detail & Related papers (2020-04-08T09:07:11Z)
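A toy sketch of an autoregressive prior over channel groups of a flow's latent variable, p(z) = prod_i p(z_i | z_{<i}); the per-group Gaussian MLP heads are an illustrative simplification of the paper's mAR prior, not its architecture.

```python
import math
import torch
import torch.nn as nn

class ChannelAutoregressivePrior(nn.Module):
    """Prior with channel-wise dependencies: split the latent z into channel
    groups and model each group with a Gaussian conditioned on earlier groups."""

    def __init__(self, n_groups, group_dim, hidden=64):
        super().__init__()
        # Head i maps the concatenation of groups < i (a zero vector for i = 0)
        # to the mean and log-scale of group i.
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(max(i, 1) * group_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, 2 * group_dim))
            for i in range(n_groups)
        )

    def log_prob(self, z):
        groups = z.chunk(len(self.heads), dim=-1)
        total = 0.0
        for i, head in enumerate(self.heads):
            ctx = torch.cat(groups[:i], dim=-1) if i > 0 else torch.zeros_like(groups[0])
            mu, log_s = head(ctx).chunk(2, dim=-1)
            total = total + (-0.5 * ((groups[i] - mu) / log_s.exp()) ** 2
                             - log_s - 0.5 * math.log(2 * math.pi)).sum(-1)
        return total                      # per-example log p(z)
```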