Sinusoidal Flow: A Fast Invertible Autoregressive Flow
- URL: http://arxiv.org/abs/2110.13344v1
- Date: Tue, 26 Oct 2021 01:20:29 GMT
- Title: Sinusoidal Flow: A Fast Invertible Autoregressive Flow
- Authors: Yumou Wei
- Abstract summary: We propose a new type of normalising flow that inherits the expressive power and triangular Jacobian of fully autoregressive flows.
Experiments show that our Sinusoidal Flow is not only able to model complex distributions, but can also be reliably inverted to generate realistic-looking samples.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Normalising flows offer a flexible way of modelling continuous probability
distributions. We consider expressiveness, fast inversion and exact Jacobian
determinant as three desirable properties a normalising flow should possess.
However, few flow models have been able to strike a good balance among all
these properties. Realising that the integral of a convex sum of squared sinusoidal
functions leads to a bijective residual transformation, we propose Sinusoidal
Flow, a new type of normalising flow that inherits the expressive power and
triangular Jacobian of fully autoregressive flows while being guaranteed by the
Banach fixed-point theorem to remain fast to invert, thereby obviating the need
for the sequential inversion typically required in fully autoregressive flows.
Experiments show that our Sinusoidal Flow is not only able to model complex
distributions, but can also be reliably inverted to generate realistic-looking
samples even with many layers of transformations stacked.
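The core idea can be illustrated in one dimension. The sketch below is a minimal illustration under simplifying assumptions, not the paper's exact layer or notation: the hypothetical parameters (frequencies a_k, phases b_k, convex weights w_k and a scale c < 1) define a residual whose derivative is a scaled convex sum of squared sinusoids, so the forward map is strictly increasing and its inverse can be recovered by a Banach fixed-point iteration instead of sequential inversion.

```python
import numpy as np

# A minimal 1-D sketch (simplified, hypothetical parameterisation): the residual g
# integrates a convex combination of squared sinusoids, scaled by c < 1, so
# 0 <= g'(x) <= c and g is a contraction. The transformation f(x) = x + g(x) is then
# strictly increasing (f'(x) = 1 + g'(x) >= 1), and its inverse can be recovered by
# the Banach fixed-point iteration x <- z - g(x).

a = np.array([1.0, 3.0, 0.5])    # frequencies a_k (hypothetical values)
b = np.array([0.2, -1.0, 0.7])   # phases b_k
w = np.array([0.5, 0.3, 0.2])    # convex weights w_k, sum to 1
c = 0.7                          # contraction factor, must stay below 1

def residual(x):
    """g(x) = c * sum_k w_k * int_0^x sin^2(a_k t + b_k) dt, in closed form."""
    x = np.asarray(x, dtype=float)[..., None]
    integral = x / 2.0 - (np.sin(2.0 * (a * x + b)) - np.sin(2.0 * b)) / (4.0 * a)
    return c * (w * integral).sum(axis=-1)

def forward(x):
    """z = f(x) = x + g(x); f'(x) = 1 + c * sum_k w_k sin^2(a_k x + b_k) > 0."""
    return x + residual(x)

def inverse(z, n_iters=100):
    """Invert f by iterating x <- z - g(x); converges because g is c-Lipschitz, c < 1."""
    x = np.array(z, dtype=float)
    for _ in range(n_iters):
        x = z - residual(x)
    return x

x = np.linspace(-3.0, 3.0, 7)
z = forward(x)
print(np.max(np.abs(x - inverse(z))))   # reconstruction error, close to machine precision
```

Applied dimension-wise in an autoregressive ordering, such a transformation keeps the Jacobian triangular, so the exact log-determinant is simply the sum of the positive diagonal derivatives while inversion remains a cheap fixed-point loop.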
Related papers
- Verlet Flows: Exact-Likelihood Integrators for Flow-Based Generative Models [4.9425328004453375]
We present Verlet flows, a class of continuous normalizing flows (CNFs) on an augmented state space inspired by symplectic integrators from Hamiltonian dynamics.
Verlet flows provide exact-likelihood generative models which generalize coupled flow architectures from a non-continuous setting while imposing minimal expressivity constraints.
In experiments on toy densities, we demonstrate that the variance of the commonly used Hutchinson trace estimator is unsuitable for importance sampling, whereas Verlet flows perform comparably to full autograd trace computations while being significantly faster.
arXiv Detail & Related papers (2024-05-05T03:47:56Z) - Delving into Discrete Normalizing Flows on SO(3) Manifold for Probabilistic Rotation Modeling [30.09829541716024]
We propose a novel normalizing flow on SO(3) manifold.
We show that our rotation normalizing flows significantly outperform the baselines on both unconditional and conditional tasks.
arXiv Detail & Related papers (2023-04-08T06:52:02Z) - Flowformer: Linearizing Transformers with Conservation Flows [77.25101425464773]
Based on flow network theory, we linearize Transformers free from specific inductive biases.
By respectively conserving the incoming flow of sinks for source competition and the outgoing flow of sources for sink allocation, Flow-Attention inherently generates informative attentions.
arXiv Detail & Related papers (2022-02-13T08:44:10Z) - GMFlow: Learning Optical Flow via Global Matching [124.57850500778277]
We propose GMFlow, a framework for learning optical flow estimation.
It consists of three main components: a customized Transformer for feature enhancement, a correlation and softmax layer for global feature matching, and a self-attention layer for flow propagation.
Our new framework outperforms 32-iteration RAFT on the challenging Sintel benchmark.
arXiv Detail & Related papers (2021-11-26T18:59:56Z) - Equivariant Discrete Normalizing Flows [10.867162810786361]
We focus on building equivariant normalizing flows using discrete layers.
We introduce two new equivariant flows: $G$-coupling Flows and $G$-Residual Flows.
Our construction of $G$-Residual Flows is also universal, in the sense that we prove any $G$-equivariant diffeomorphism can be exactly mapped by a $G$-Residual Flow.
arXiv Detail & Related papers (2021-10-16T20:16:00Z) - Generative Flows with Invertible Attentions [135.23766216657745]
We introduce two types of invertible attention mechanisms for generative flow models.
We exploit split-based attention mechanisms to learn the attention weights and input representations on every two splits of flow feature maps.
Our method provides invertible attention modules with tractable Jacobian determinants, enabling their seamless integration at any position in flow-based models.
arXiv Detail & Related papers (2021-06-07T20:43:04Z) - SurVAE Flows: Surjections to Bridge the Gap between VAEs and Flows [78.77808270452974]
SurVAE Flows is a modular framework for composable transformations that encompasses VAEs and normalizing flows.
We show that several recently proposed methods, including dequantization and augmented normalizing flows, can be expressed as SurVAE Flows.
arXiv Detail & Related papers (2020-07-06T13:13:22Z) - The Convolution Exponential and Generalized Sylvester Flows [82.18442368078804]
This paper introduces a new method to build linear flows, by taking the exponential of a linear transformation.
An important insight is that the exponential can be computed implicitly, which allows the use of convolutional layers.
We show that the convolution exponential outperforms other linear transformations in generative flows on CIFAR10.
arXiv Detail & Related papers (2020-06-02T19:43:36Z) - Gaussianization Flows [113.79542218282282]
We propose Gaussianization flows, a new type of normalizing flow model that enables both efficient computation of likelihoods and efficient inversion for sample generation.
Because their expressivity is guaranteed, Gaussianization flows can capture multimodal target distributions without compromising the efficiency of sample generation.
arXiv Detail & Related papers (2020-03-04T08:15:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.