Invertible Generative Modeling using Linear Rational Splines
- URL: http://arxiv.org/abs/2001.05168v4
- Date: Mon, 13 Apr 2020 00:01:14 GMT
- Title: Invertible Generative Modeling using Linear Rational Splines
- Authors: Hadi M. Dolatabadi and Sarah Erfani and Christopher Leckie
- Abstract summary: Normalizing flows attempt to model an arbitrary probability distribution through a set of invertible mappings.
The first flow designs used coupling layer mappings built upon affine transformations.
Invertible piecewise polynomial functions as a replacement for affine transformations have attracted attention.
- Score: 11.510009152620666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Normalizing flows attempt to model an arbitrary probability distribution
through a set of invertible mappings. These transformations are required to
achieve a tractable Jacobian determinant that can be used in high-dimensional
scenarios. The first normalizing flow designs used coupling layer mappings
built upon affine transformations. The significant advantage of such models is
their easy-to-compute inverse. Nevertheless, making use of affine
transformations may limit the expressiveness of such models. Recently,
invertible piecewise polynomial functions as a replacement for affine
transformations have attracted attention. However, these methods require
solving a polynomial equation to calculate their inverse. In this paper, we
explore using linear rational splines as a replacement for affine
transformations used in coupling layers. Besides having a straightforward
inverse, this method gives inference and generation a similar cost and
architecture. Moreover, simulation results demonstrate the competitiveness of this
approach's performance compared to existing methods.
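For concreteness, here is a minimal NumPy sketch of a single monotone linear rational segment with its closed-form inverse and log-determinant. It is a hedged illustration, not the authors' code: the paper's full spline places an extra middle point in each bin and produces the segment parameters (bin edges, heights, weights) from a coupling network.

```python
import numpy as np

# One monotone linear rational segment mapping [x0, x1] -> [y0, y1],
# assuming positive weights w0, w1 (the full spline stitches many such
# segments, with an extra middle point per bin).

def lrs_forward(x, x0, x1, y0, y1, w0, w1):
    phi = (x - x0) / (x1 - x0)                     # position inside the bin, in [0, 1]
    den = w0 * (1.0 - phi) + w1 * phi
    y = (w0 * y0 * (1.0 - phi) + w1 * y1 * phi) / den
    # Closed-form derivative -> log|det J| is a cheap elementwise sum.
    dydx = w0 * w1 * (y1 - y0) / (den ** 2 * (x1 - x0))
    return y, np.log(dydx)

def lrs_inverse(y, x0, x1, y0, y1, w0, w1):
    # The inverse is again linear rational: no polynomial root finding needed.
    phi = w0 * (y - y0) / (w0 * (y - y0) + w1 * (y1 - y))
    return x0 + phi * (x1 - x0)

x = np.linspace(0.0, 1.0, 5)
y, log_det = lrs_forward(x, 0.0, 1.0, 0.0, 2.0, w0=1.0, w1=3.0)
assert np.allclose(lrs_inverse(y, 0.0, 1.0, 0.0, 2.0, 1.0, 3.0), x)
```

Because the inverse is again linear rational, generation costs the same as inference, which is the symmetry the abstract highlights.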
Related papers
- RLE: A Unified Perspective of Data Augmentation for Cross-Spectral Re-identification [59.5042031913258]
Non-linear modality discrepancy mainly comes from diverse linear transformations acting on the surface of different materials.
We propose a Random Linear Enhancement (RLE) strategy, which includes Moderate Random Linear Enhancement (MRLE) and Radical Random Linear Enhancement (RRLE).
The experimental results not only demonstrate the superiority and effectiveness of RLE but also confirm its great potential as a general-purpose data augmentation for cross-spectral re-identification.
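As an illustration only, the following sketch applies one random linear intensity scaling per channel; the factor range and per-channel granularity are assumptions, not the paper's exact MRLE/RRLE operators.

```python
import numpy as np

# Hedged illustration of random linear intensity scaling in the spirit of
# RLE; the range [low, high] is a placeholder assumption.

def random_linear_enhance(img, low=0.5, high=1.5, rng=None):
    rng = rng or np.random.default_rng()
    # One random linear factor per channel mimics the diverse linear
    # transformations that different surface materials induce.
    factors = rng.uniform(low, high, size=(1, 1, img.shape[-1]))
    return np.clip(img * factors, 0.0, 1.0)

img = np.random.rand(64, 32, 3)     # H x W x C image in [0, 1]
aug = random_linear_enhance(img)
```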
arXiv Detail & Related papers (2024-11-02T12:13:37Z)
- Can Looped Transformers Learn to Implement Multi-step Gradient Descent for In-context Learning? [69.4145579827826]
We show a fast flow on the regression loss despite the non-convexity of the underlying loss landscape.
This is the first theoretical analysis for multi-layer Transformer in this setting.
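For reference, the computation under analysis, multi-step gradient descent on an in-context least-squares loss, can be sketched directly in NumPy (the looped Transformer construction itself is not reproduced here):

```python
import numpy as np

# Plain reference for multi-step GD on an in-context regression loss;
# each loop iteration plays the role of one pass of the looped block.

def in_context_gd(X, y, steps, lr):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0])
w_hat = in_context_gd(X, y, steps=500, lr=0.3)
assert np.allclose(w_hat, np.linalg.lstsq(X, y, rcond=None)[0], atol=1e-6)
```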
arXiv Detail & Related papers (2024-10-10T18:29:05Z)
- Unsupervised Representation Learning from Sparse Transformation Analysis [79.94858534887801]
We propose to learn representations from sequence data by factorizing the transformations of the latent variables into sparse components.
Input data are first encoded as distributions of latent activations and subsequently transformed using a probability flow model.
arXiv Detail & Related papers (2024-10-07T23:53:25Z)
- Optimal Matrix-Mimetic Tensor Algebras via Variable Projection [0.0]
Matrix mimeticity arises from interpreting tensors as operators that can be multiplied, factorized, and analyzed analogously to matrices.
We learn optimal linear mappings and corresponding tensor representations without relying on prior knowledge of the data.
We provide original theory of uniqueness of the transformation and convergence analysis of our variable-projection-based algorithm.
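A minimal sketch of the underlying algebra, assuming the familiar transform-based ("star-M") tensor product with a placeholder orthogonal transform; the paper instead learns an optimal transform via variable projection:

```python
import numpy as np

# Matrix-mimetic tensor product under an invertible transform M applied
# along the third mode; M here is a random orthogonal placeholder.

def m_product(A, B, M):
    A_hat = np.einsum('kl,ijl->ijk', M, A)            # transform the tube fibers
    B_hat = np.einsum('kl,ijl->ijk', M, B)
    C_hat = np.einsum('ijk,jmk->imk', A_hat, B_hat)   # slice-wise matrix products
    return np.einsum('kl,ijl->ijk', np.linalg.inv(M), C_hat)

rng = np.random.default_rng(0)
M = np.linalg.qr(rng.standard_normal((5, 5)))[0]      # invertible transform
A = rng.standard_normal((3, 4, 5))
B = rng.standard_normal((4, 2, 5))
C = m_product(A, B, M)                                # shape (3, 2, 5)
```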
arXiv Detail & Related papers (2024-06-11T04:52:23Z)
- Tensor Component Analysis for Interpreting the Latent Space of GANs [41.020230946351816]
This paper addresses the problem of finding interpretable directions in the latent space of pre-trained Generative Adversarial Networks (GANs)
Our scheme allows for both linear edits corresponding to the individual modes of the tensor, and non-linear ones that model the multiplicative interactions between them.
We show experimentally that we can utilise the former to better separate style- from geometry-based transformations, and the latter to generate an extended set of possible transformations.
arXiv Detail & Related papers (2021-11-23T09:14:39Z)
- Generalized Optimization: A First Step Towards Category Theoretic Learning Theory [1.52292571922932]
We generalize several optimization algorithms, including a straightforward generalization of gradient descent and a novel generalization of Newton's method.
We show that the transformation invariances of these algorithms are preserved.
We also show that we can express the change in loss of generalized descent with an inner product-like expression.
arXiv Detail & Related papers (2021-09-20T15:19:06Z)
- Learning Non-linear Wavelet Transformation via Normalizing Flow [0.0]
An invertible transformation can be learned by a designed normalizing flow model.
With a factor-out scheme resembling the wavelet downsampling mechanism, one can train normalizing flow models to factor-out variables corresponding to fast patterns.
An analysis of the learned model in terms of low-pass/high-pass filters is given.
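A minimal sketch of the factor-out scheme, assuming a plain 2x2 squeeze and an even channel split; the learned flow layers between these steps are omitted:

```python
import numpy as np

# A squeeze turns each 2x2 spatial block into channels (mirroring wavelet
# downsampling); half the channels are then factored out as the variables
# carrying "fast patterns".

def squeeze(x):
    B, C, H, W = x.shape
    x = x.reshape(B, C, H // 2, 2, W // 2, 2)
    return x.transpose(0, 1, 3, 5, 2, 4).reshape(B, 4 * C, H // 2, W // 2)

def factor_out(x):
    C = x.shape[1]
    return x[:, : C // 2], x[:, C // 2 :]   # (kept for deeper levels, factored out)

x = np.random.randn(8, 3, 32, 32)
coarse, detail = factor_out(squeeze(x))      # two (8, 6, 16, 16) tensors
```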
arXiv Detail & Related papers (2021-01-27T10:28:51Z)
- Training Invertible Linear Layers through Rank-One Perturbations [0.0]
This work presents a novel approach for training invertible linear layers.
In lieu of directly optimizing the network parameters, we train rank-one perturbations and add them to the actual weight matrices infrequently.
We show how such invertible blocks improve the mixing, and thus the mode separation, of the resulting normalizing flows.
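A minimal sketch of the bookkeeping this relies on, assuming a merge step that folds a trained rank-one perturbation into the weights; the log-determinant update follows from the matrix determinant lemma, det(W + u v^T) = det(W) * (1 + v^T W^{-1} u):

```python
import numpy as np

# Merge a rank-one perturbation u v^T into an invertible weight matrix,
# updating the log-determinant cheaply via the matrix determinant lemma.
# (Training the perturbation and the merge schedule are omitted.)

def merge_rank_one(W, u, v):
    correction = 1.0 + v @ np.linalg.solve(W, u)
    if abs(correction) < 1e-6:                 # update would (nearly) kill invertibility
        raise ValueError("rank-one update rejected")
    log_det = np.linalg.slogdet(W)[1] + np.log(abs(correction))
    return W + np.outer(u, v), log_det

rng = np.random.default_rng(1)
W = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
u, v = 0.1 * rng.standard_normal((2, 4))
W_new, log_det = merge_rank_one(W, u, v)
assert np.isclose(log_det, np.linalg.slogdet(W_new)[1])
```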
arXiv Detail & Related papers (2020-10-14T12:43:47Z)
- SurVAE Flows: Surjections to Bridge the Gap between VAEs and Flows [78.77808270452974]
SurVAE Flows is a modular framework for composable transformations that encompasses VAEs and normalizing flows.
We show that several recently proposed methods, including dequantization and augmented normalizing flows, can be expressed as SurVAE Flows.
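As one concrete instance, dequantization can be sketched as a surjective layer: noise is added in the inference direction and removed by deterministic rounding in the generative direction (a minimal sketch, not the paper's general framework):

```python
import numpy as np

# Dequantization read as a surjective SurVAE layer: a stochastic inverse
# (uniform noise, the simplest choice) paired with a deterministic,
# surjective generative direction (flooring).

def dequantize(x_int, rng=None):              # stochastic inverse direction
    rng = rng or np.random.default_rng()
    return x_int + rng.uniform(size=x_int.shape)

def quantize(z):                              # deterministic, surjective forward
    return np.floor(z).astype(np.int64)

x = np.random.randint(0, 256, size=(4, 4))
z = dequantize(x)
assert (quantize(z) == x).all()               # flooring undoes the added noise
```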
arXiv Detail & Related papers (2020-07-06T13:13:22Z)
- The Convolution Exponential and Generalized Sylvester Flows [82.18442368078804]
This paper introduces a new method to build linear flows, by taking the exponential of a linear transformation.
An important insight is that the exponential can be computed implicitly, which allows the use of convolutional layers.
We show that the convolution exponential outperforms other linear transformations in generative flows on CIFAR10.
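A minimal sketch of the implicit computation, with a dense matrix standing in for the convolution and a heuristic truncation depth:

```python
import numpy as np

# exp(M) x needs only matrix-vector products M^k x, so M can stay implicit
# (e.g. a convolution that is never materialized). As a bonus,
# log|det exp(M)| = trace(M).

def expm_apply(linop, x, terms=16):
    out, term = x.copy(), x.copy()
    for k in range(1, terms):
        term = linop(term) / k          # builds M^k x / k! iteratively
        out += term
    return out

A = 0.3 * np.random.default_rng(0).standard_normal((8, 8))
x = np.random.default_rng(1).standard_normal(8)
y = expm_apply(lambda v: A @ v, x)
x_rec = expm_apply(lambda v: -(A @ v), y)     # the inverse is exp(-M)
assert np.allclose(x_rec, x, atol=1e-6)
```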
arXiv Detail & Related papers (2020-06-02T19:43:36Z)
- Gaussianization Flows [113.79542218282282]
We propose a new type of normalizing flow model that enables both efficient computation of likelihoods and efficient inversion for sample generation.
Because of this guaranteed expressivity, they can capture multimodal target distributions without compromising the efficiency of sample generation.
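A minimal sketch of one Gaussianization step, assuming fixed placeholder mixture parameters and rotation; in the model these are trainable and many such steps are stacked:

```python
import numpy as np
from scipy.special import ndtr, ndtri   # standard normal CDF and its inverse

# One Gaussianization step: push each dimension through a Gaussian-mixture
# CDF and the inverse standard-normal CDF, then mix dimensions with an
# invertible rotation.

def gaussianize_step(x, means, scales, R):
    u = ndtr((x[:, None, :] - means) / scales).mean(axis=1)   # mixture CDF per dim
    z = ndtri(np.clip(u, 1e-7, 1.0 - 1e-7))                   # marginals -> N(0, 1)
    return z @ R.T                                            # rotation

rng = np.random.default_rng(0)
x = rng.standard_normal((256, 2)) ** 3                        # non-Gaussian 2-D data
means = rng.standard_normal((4, 2))                           # K=4 components per dim
scales = np.ones((4, 2))
R = np.linalg.qr(rng.standard_normal((2, 2)))[0]
z = gaussianize_step(x, means, scales, R)
```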
arXiv Detail & Related papers (2020-03-04T08:15:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.