Quasi-Autoregressive Residual (QuAR) Flows
- URL: http://arxiv.org/abs/2009.07419v1
- Date: Wed, 16 Sep 2020 01:56:24 GMT
- Title: Quasi-Autoregressive Residual (QuAR) Flows
- Authors: Achintya Gopal
- Abstract summary: We introduce a simplification to residual flows using a Quasi-Autoregressive (QuAR) approach.
Compared to the standard residual flow approach, this simplification retains many of the benefits of residual flows while dramatically reducing the compute time and memory requirements.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Normalizing Flows are a powerful technique for learning and modeling
probability distributions given samples from those distributions. The current
state of the art results are built upon residual flows as these can model a
larger hypothesis space than coupling layers. However, residual flows are
extremely computationally expensive both to train and to use, which limits
their applicability in practice. In this paper, we introduce a simplification
to residual flows using a Quasi-Autoregressive (QuAR) approach. Compared to the
standard residual flow approach, this simplification retains many of the
benefits of residual flows while dramatically reducing the compute time and
memory requirements, thus making flow-based modeling approaches far more
tractable and broadening their potential applicability.
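To make the computational contrast concrete, below is a minimal PyTorch sketch of the quasi-autoregressive idea, under a simplifying assumption: if the residual function g is masked so that output i depends only on inputs 1..i, the Jacobian I + dg/dx is lower triangular, and the log-determinant reduces to a cheap sum of log(1 + diagonal) terms rather than the stochastic power-series estimates standard residual flows require. The class name, masking scheme, and scaling constant are illustrative, not the paper's exact architecture.

```python
# Hypothetical sketch of a quasi-autoregressive residual block, NOT the
# paper's exact architecture. The residual function g is masked so output i
# depends only on inputs 1..i; the Jacobian I + dg/dx is then lower
# triangular and its log-determinant is exact and O(d) per sample.
import torch
import torch.nn as nn

class QuARBlock(nn.Module):
    def __init__(self, dim, scale=0.5):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(dim, dim))
        self.bias = nn.Parameter(torch.zeros(dim))
        # Lower-triangular mask (diagonal included) enforces the
        # quasi-autoregressive dependency structure.
        self.register_buffer("mask", torch.tril(torch.ones(dim, dim)))
        self.scale = scale  # keeps |dg_i/dx_i| small so 1 + diag stays positive

    def forward(self, x):
        w = self.weight * self.mask
        pre = x @ w.T + self.bias
        g = self.scale * torch.tanh(pre)  # bounded residual branch
        # Diagonal of dg/dx: scale * tanh'(pre_i) * w_ii
        diag = self.scale * (1 - torch.tanh(pre) ** 2) * torch.diag(w)
        log_det = torch.log1p(diag).sum(dim=-1)  # exact log |det J|
        return x + g, log_det

block = QuARBlock(dim=4)
y, log_det = block(torch.randn(8, 4))
print(y.shape, log_det.shape)  # torch.Size([8, 4]) torch.Size([8])
```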
Related papers
- Verlet Flows: Exact-Likelihood Integrators for Flow-Based Generative Models [4.9425328004453375]
We present Verlet flows, a class of CNFs on an augmented state-space inspired by symplectic integrators from Hamiltonian dynamics.
Verlet flows provide exact-likelihood generative models which generalize coupled flow architectures from a non-continuous setting while imposing minimal expressivity constraints.
In experiments on toy densities, we demonstrate that the variance of the commonly used Hutchinson trace estimator makes it unsuitable for importance sampling, whereas Verlet flows perform comparably to full autograd trace computations while being significantly faster.
arXiv Detail & Related papers (2024-05-05T03:47:56Z)
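For background on the estimator criticized above: continuous normalizing flows typically approximate the divergence (a Jacobian trace) with Hutchinson's stochastic estimator, tr(A) ~ E[v^T A v] over random probe vectors v. A small numpy illustration of the estimator and its probe-count-dependent variance (background code, not from the paper):

```python
# Hutchinson's trace estimator: tr(A) ~ mean of v^T A v over random
# Rademacher probes v. CNF log-likelihoods use this to avoid forming the
# full Jacobian; its variance is the issue highlighted above.
import numpy as np

rng = np.random.default_rng(0)
d = 50
A = rng.standard_normal((d, d))
exact = np.trace(A)

def hutchinson(A, n_probes, rng):
    v = rng.choice([-1.0, 1.0], size=(n_probes, A.shape[0]))
    return np.einsum("ni,ij,nj->n", v, A, v).mean()

for n in (1, 10, 100, 10_000):
    print(f"{n:6d} probes: {hutchinson(A, n, rng):9.2f}   (exact: {exact:.2f})")
```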
- Guided Flows for Generative Modeling and Decision Making [55.42634941614435]
We show that Guided Flows significantly improves sample quality in conditional image generation and zero-shot text-to-speech synthesis.
Notably, we are the first to apply flow models to plan generation in the offline reinforcement learning setting, achieving a substantial computational speedup compared to diffusion models.
arXiv Detail & Related papers (2023-11-22T15:07:59Z)
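Classifier-free guidance for flow matching, as commonly formulated, combines the conditional and unconditional vector fields at sampling time. A hedged sketch under an assumed interface (the `model(x, t, cond)` signature and the simple Euler integrator are illustrative, not the paper's exact setup):

```python
# Sketch of classifier-free guidance for a flow-matching model. The guided
# velocity extrapolates the conditional field away from the unconditional
# one; `model` is a hypothetical stand-in.
import torch

def guided_velocity(model, x, t, cond, w):
    """v_guided = (1 + w) * v(x, t | cond) - w * v(x, t)."""
    v_cond = model(x, t, cond)    # conditional vector field
    v_uncond = model(x, t, None)  # unconditional (null-conditioned) field
    return (1 + w) * v_cond - w * v_uncond

def sample(model, x0, cond, steps=100, w=1.5):
    """Euler integration of dx/dt = v_guided from t = 0 to t = 1."""
    x, dt = x0, 1.0 / steps
    for i in range(steps):
        t = torch.full((x.shape[0],), i * dt)
        x = x + dt * guided_velocity(model, x, t, cond, w)
    return x
```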
- Amortizing intractable inference in large language models [56.92471123778389]
We use amortized Bayesian inference to sample from intractable posterior distributions.
We empirically demonstrate that this distribution-matching paradigm of LLM fine-tuning can serve as an effective alternative to maximum-likelihood training.
As an important application, we interpret chain-of-thought reasoning as a latent variable modeling problem.
arXiv Detail & Related papers (2023-10-06T16:36:08Z)
- Improving Diffusion Models for Inverse Problems using Manifold Constraints [55.91148172752894]
We show that current solvers throw the sample path off the data manifold, and hence the error accumulates.
To address this, we propose an additional correction term inspired by the manifold constraint.
We show that our method is superior to the previous methods both theoretically and empirically.
arXiv Detail & Related papers (2022-06-02T09:06:10Z)
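A hedged PyTorch sketch of the manifold-constraint idea: after each reverse-diffusion step, take a gradient step on the measurement-consistency loss computed through the denoiser's clean-sample estimate, pulling the iterate back toward the data manifold. `denoiser` and the forward operator `A` are hypothetical stand-ins, and the update is illustrative rather than the paper's exact algorithm:

```python
# Illustrative manifold-constrained correction step for diffusion-based
# inverse-problem solvers; `denoiser(x_t, t)` returns an estimate of the
# clean sample and `A` is the measurement operator (both hypothetical).
import torch

def manifold_correction(x_t, y, denoiser, A, t, step_size=1.0):
    x_t = x_t.detach().requires_grad_(True)
    x0_hat = denoiser(x_t, t)                # posterior-mean estimate of x_0
    loss = ((y - A(x0_hat)) ** 2).sum()      # data-consistency residual
    grad, = torch.autograd.grad(loss, x_t)
    return x_t.detach() - step_size * grad   # correction toward the manifold
```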
- Deep Equilibrium Optical Flow Estimation [80.80992684796566]
Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These RNNs impose large computation and memory overheads, and are not directly trained to model such stable estimation.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
arXiv Detail & Related papers (2022-04-18T17:53:44Z)
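The core mechanism is easy to state: instead of unrolling a fixed number of recurrent updates, solve directly for the fixed point z* = f(z*, x) of the update operator. A toy Python sketch with a simple contraction (the naive solver and toy operator are illustrative; DEQ models use more sophisticated root-finders and implicit differentiation):

```python
# Naive fixed-point solver illustrating the deep-equilibrium idea: iterate
# the update operator until z stops changing and treat z* as the output.
import torch

def fixed_point(f, x, z0, tol=1e-6, max_iter=200):
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if (z_next - z).norm() < tol * (1 + z.norm()):
            return z_next
        z = z_next
    return z

# f(z, x) = 0.5 * tanh(z) + x is a contraction (Lipschitz 0.5), so the
# iteration converges to a unique fixed point.
x = torch.randn(4)
z_star = fixed_point(lambda z, x: 0.5 * torch.tanh(z) + x, x, torch.zeros(4))
print((z_star - (0.5 * torch.tanh(z_star) + x)).norm())  # ~0
```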
- Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
arXiv Detail & Related papers (2022-02-23T06:11:49Z)
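For reference, the traditional estimator such methods improve upon is plain Monte Carlo: draw samples from the flow and count how many land in the region. A numpy sketch (standard-normal draws stand in for flow samples):

```python
# Naive Monte Carlo estimate of P(X in R) for a closed region R. Sample
# efficiency is poor for small regions, which is what the paper targets.
import numpy as np

rng = np.random.default_rng(1)
samples = rng.standard_normal((100_000, 2))  # stand-in for flow samples
# R = [0, 1] x [0, 1]
inside = np.all((samples >= 0.0) & (samples <= 1.0), axis=1)
print("P(X in R) ~", inside.mean())
```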
- Attentive Contractive Flow with Lipschitz-constrained Self-Attention [25.84621883831624]
We introduce a novel approach called Attentive Contractive Flow (ACF).
ACF utilizes a special category of flow-based generative models: contractive flows.
We demonstrate that ACF can be introduced into a variety of state-of-the-art flow models in a plug-and-play manner.
arXiv Detail & Related papers (2021-09-24T18:02:49Z)
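Contractive flows require each residual branch to have a Lipschitz constant below one. For a plain linear map the standard mechanism is spectral normalization, sketched below; ACF's Lipschitz-constrained self-attention builds on the same principle but is considerably more involved, so treat this only as an illustration of the constraint itself:

```python
# Rescale a weight matrix so its spectral norm (largest singular value,
# i.e. the linear map's Lipschitz constant) equals a target below one.
import torch

def spectrally_rescaled(weight, target=0.9):
    sigma = torch.linalg.svdvals(weight)[0]  # largest singular value
    return weight * (target / sigma)

W = torch.randn(64, 64)
print(torch.linalg.svdvals(spectrally_rescaled(W))[0])  # ~0.9
```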
- Universal Approximation of Residual Flows in Maximum Mean Discrepancy [24.493721984271566]
We study residual flows, a class of normalizing flows composed of Lipschitz residual blocks.
We prove residual flows are universal approximators in maximum mean discrepancy.
arXiv Detail & Related papers (2021-03-10T00:16:33Z)
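For readers unfamiliar with the metric, squared maximum mean discrepancy between two sample sets can be estimated with a kernel; a standard (biased, V-statistic) numpy version with an RBF kernel is sketched below as background, not as code from the paper:

```python
# Biased (V-statistic) estimate of squared MMD with an RBF kernel.
import numpy as np

def mmd2(X, Y, bandwidth=1.0):
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
Y_same = rng.standard_normal((500, 2))
Y_shifted = rng.standard_normal((500, 2)) + 0.5
print(mmd2(X, Y_same), mmd2(X, Y_shifted))  # near zero vs. clearly larger
```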
- Towards Recurrent Autoregressive Flow Models [39.25035894474609]
We present Recurrent Autoregressive Flows as a method toward general process modeling with normalizing flows.
The proposed method defines a conditional distribution for each variable in a sequential process by conditioning the parameters of a normalizing flow with recurrent neural connections.
We demonstrate the effectiveness of this class of models through a series of experiments in which models are trained on three complex processes.
arXiv Detail & Related papers (2020-06-17T18:38:36Z)
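A hedged sketch of the general recipe just described: an RNN carries context across time steps, and its hidden state parameterizes a per-step flow for the conditional distribution of the next variable. The GRU cell, affine flow, and dimensions below are illustrative assumptions, not the paper's architecture:

```python
# Illustrative recurrent flow: the hidden state of a GRU parameterizes an
# affine flow for each time step's conditional density (base: N(0, I)).
import torch
import torch.nn as nn

class RecurrentAffineFlow(nn.Module):
    def __init__(self, data_dim, hidden_dim=32):
        super().__init__()
        self.rnn = nn.GRUCell(data_dim, hidden_dim)
        self.to_params = nn.Linear(hidden_dim, 2 * data_dim)

    def log_prob(self, x_seq):
        """x_seq: (batch, time, data_dim)."""
        b, t, _ = x_seq.shape
        h = torch.zeros(b, self.rnn.hidden_size)
        base = torch.distributions.Normal(0.0, 1.0)
        log_p = torch.zeros(b)
        for i in range(t):
            log_s, mu = self.to_params(h).chunk(2, dim=-1)
            z = (x_seq[:, i] - mu) * torch.exp(-log_s)   # inverse affine map
            log_p = log_p + base.log_prob(z).sum(-1) - log_s.sum(-1)
            h = self.rnn(x_seq[:, i], h)                 # carry context forward
        return log_p

flow = RecurrentAffineFlow(data_dim=3)
print(flow.log_prob(torch.randn(5, 4, 3)).shape)  # torch.Size([5])
```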
- Normalizing Flows with Multi-Scale Autoregressive Priors [131.895570212956]
We introduce channel-wise dependencies in their latent space through multi-scale autoregressive priors (mAR).
Our mAR prior for models with split coupling flow layers (mAR-SCF) can better capture dependencies in complex multimodal data.
We show that mAR-SCF allows for improved image generation quality, with gains in FID and Inception scores compared to state-of-the-art flow-based models.
arXiv Detail & Related papers (2020-04-08T09:07:11Z)
- Closing the Dequantization Gap: PixelCNN as a Single-Layer Flow [16.41460104376002]
We introduce subset flows, a class of flows that can transform finite volumes and allow exact computation of likelihoods for discrete data.
We identify ordinal discrete autoregressive models, including WaveNets, PixelCNNs and Transformers, as single-layer flows.
We demonstrate state-of-the-art results on CIFAR-10 for flow models trained with dequantization.
arXiv Detail & Related papers (2020-02-06T22:58:51Z)
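A minimal 1D numpy sketch of the volume-transformation idea (illustrative, not the paper's model): a discrete value k corresponds to the interval [k/256, (k+1)/256), and with a monotone flow onto a uniform base distribution its exact probability is the length of the transformed interval, with no dequantization noise needed:

```python
# Exact discrete likelihood by transforming an interval rather than a
# point. A logistic CDF plays the role of a monotone 1D flow that maps
# [0, 1) data onto a uniform base distribution.
import numpy as np

def flow(x, loc=0.5, scale=0.1):
    return 1.0 / (1.0 + np.exp(-(x - loc) / scale))  # logistic CDF

k = 128                              # an 8-bit value
lo, hi = k / 256.0, (k + 1) / 256.0  # its interval under quantization
p_k = flow(hi) - flow(lo)            # uniform-base volume of the image
print(p_k)
```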
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all of the information above) and is not responsible for any consequences arising from its use.