General Invertible Transformations for Flow-based Generative Modeling
- URL: http://arxiv.org/abs/2011.15056v1
- Date: Mon, 30 Nov 2020 17:54:43 GMT
- Title: General Invertible Transformations for Flow-based Generative Modeling
- Authors: Jakub M. Tomczak
- Abstract summary: We show that many well-known invertible transformations in reversible logic and reversible neural networks could be derived from our proposition.
Next, we propose two new coupling layers that are important building blocks of flow-based generative models.
- Score: 11.04121146441257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a new class of invertible transformations. We
indicate that many well-known invertible transformations in reversible logic and
reversible neural networks could be derived from our proposition. Next, we
propose two new coupling layers that are important building blocks of
flow-based generative models. In the preliminary experiments on toy digit data,
we present how these new coupling layers could be used in Integer Discrete
Flows (IDF), and that they achieve better results than standard coupling layers
used in IDF and RealNVP.
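For reference, a coupling layer splits its input, passes one part through unchanged, and transforms the other part using values computed from the first; this keeps the map analytically invertible regardless of how complex the conditioning network is. Below is a minimal NumPy sketch of the two baselines the abstract compares against, RealNVP's affine coupling and IDF's rounded additive coupling; the `net` conditioner, seeds, and dimensions are illustrative stand-ins, not the paper's proposed layers.

```python
import numpy as np

def net(x, out_dim, seed):
    """Stand-in conditioner. It is never inverted, so any function of x
    works; a real model would use a neural network here."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(x.shape[-1], out_dim)) * 0.1
    return x @ W

def affine_coupling(x):
    """RealNVP-style coupling: z1 = x1, z2 = x2 * exp(s(x1)) + t(x1)."""
    x1, x2 = np.split(x, 2, axis=-1)
    s, t = net(x1, x2.shape[-1], 1), net(x1, x2.shape[-1], 2)
    z2 = x2 * np.exp(s) + t
    log_det = s.sum(axis=-1)  # log|det J|, needed for the likelihood
    return np.concatenate([x1, z2], axis=-1), log_det

def affine_coupling_inverse(z):
    z1, z2 = np.split(z, 2, axis=-1)
    s, t = net(z1, z2.shape[-1], 1), net(z1, z2.shape[-1], 2)
    return np.concatenate([z1, (z2 - t) * np.exp(-s)], axis=-1)

def integer_coupling(x):
    """IDF-style additive coupling on integers: z2 = x2 + round(t(x1)).
    Rounding keeps the output in Z^d, the map stays bijective, and the
    Jacobian determinant is 1."""
    x1, x2 = np.split(x, 2, axis=-1)
    t = np.round(net(x1.astype(float), x2.shape[-1], 3))
    return np.concatenate([x1, x2 + t.astype(x2.dtype)], axis=-1)

rng = np.random.default_rng(4)
x = rng.normal(size=(5, 8))
z, _ = affine_coupling(x)
assert np.allclose(affine_coupling_inverse(z), x)  # invertibility check
zi = integer_coupling(rng.integers(0, 256, size=(5, 8)))
```

The unit Jacobian determinant of the rounded additive form is what lets IDF compute exact likelihoods directly on discrete data, which is the setting of the paper's experiments.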
Related papers
- Transformer Neural Autoregressive Flows [48.68932811531102]
Density estimation can be performed using Normalizing Flows (NFs).
We propose a novel solution that exploits transformers to define a new class of neural flows called Transformer Neural Autoregressive Flows (T-NAFs).
arXiv Detail & Related papers (2024-01-03T17:51:16Z)
- WLD-Reg: A Data-dependent Within-layer Diversity Regularizer [98.78384185493624]
Neural networks are composed of multiple layers arranged in a hierarchical structure and jointly trained with gradient-based optimization.
We propose to complement this traditional 'between-layer' feedback with additional 'within-layer' feedback to encourage the diversity of the activations within the same layer.
We present an extensive empirical study confirming that the proposed approach enhances the performance of several state-of-the-art neural network models in multiple tasks.
arXiv Detail & Related papers (2023-01-03T20:57:22Z)
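The summary above does not spell out WLD-Reg's actual regularizer, so the following is only a generic illustration of a within-layer diversity penalty; using the mean squared pairwise cosine similarity between units is an assumption made for the sketch.

```python
import numpy as np

def within_layer_diversity_penalty(acts, eps=1e-8):
    """Illustrative within-layer diversity penalty (not the exact
    WLD-Reg formulation): mean squared pairwise cosine similarity
    between the activation patterns of units in one layer.
    acts: (batch, units) activations of a single layer."""
    a = acts / (np.linalg.norm(acts, axis=0, keepdims=True) + eps)
    sim = a.T @ a                # (units, units) cosine similarities
    n = sim.shape[0]
    off_diag = sim - np.eye(n)   # drop each unit's self-similarity
    return (off_diag ** 2).sum() / (n * (n - 1))

# Added to the task loss as extra 'within-layer' feedback, e.g.:
# loss = task_loss + lam * within_layer_diversity_penalty(hidden_acts)
acts = np.random.default_rng(0).normal(size=(32, 16))
print(within_layer_diversity_penalty(acts))
```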
- Improved Convergence Guarantees for Shallow Neural Networks [91.3755431537592]
We prove convergence of depth-2 neural networks, trained via gradient descent, to a global minimum.
Our model has the following features: regression with a quadratic loss function, a fully connected feedforward architecture, ReLU activations, Gaussian data instances, and adversarial labels.
Our results strongly suggest that, at least in our model, the convergence phenomenon extends well beyond the NTK regime.
arXiv Detail & Related papers (2022-12-05T14:47:52Z)
- Funnels: Exact maximum likelihood with dimensionality reduction [6.201770337181472]
We use the SurVAE framework to construct dimension-reducing surjective flows via a new layer, called the funnel.
We demonstrate its efficacy on a variety of datasets, and show it improves upon or matches the performance of existing flows while having a reduced latent space size.
arXiv Detail & Related papers (2021-12-15T12:20:25Z)
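In SurVAE terms, a funnel-style layer is surjective rather than bijective: the forward pass drops coordinates, and the likelihood keeps a conditional term for what was dropped. The sketch below shows only that bookkeeping, assuming a fixed standard-Gaussian conditional where the actual funnel layer learns p(dropped | kept).

```python
import numpy as np

def funnel_forward(x, keep):
    """Surjective funnel-style layer (illustrative): keep the first
    `keep` coordinates, score the rest under a conditional model.
    Here that model is a fixed standard Gaussian."""
    z, dropped = x[..., :keep], x[..., keep:]
    # log N(dropped; 0, I): the likelihood term for the lost dimensions
    log_q = -0.5 * (dropped ** 2 + np.log(2 * np.pi)).sum(axis=-1)
    return z, log_q

def funnel_inverse(z, full_dim, rng):
    """Stochastic right-inverse: resample the dropped coordinates."""
    dropped = rng.normal(size=z.shape[:-1] + (full_dim - z.shape[-1],))
    return np.concatenate([z, dropped], axis=-1)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 10))
z, log_q = funnel_forward(x, keep=4)   # (4, 4) latent, per-example log q
x_again = funnel_inverse(z, full_dim=10, rng=rng)
```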
- Dual-constrained Deep Semi-Supervised Coupled Factorization Network with Enriched Prior [80.5637175255349]
We propose a new enriched prior based Dual-constrained Deep Semi-Supervised Coupled Factorization Network, called DS2CF-Net.
To extract hidden deep features, DS2CF-Net is modeled as a deep-structure and geometrical-structure-constrained neural network.
Our network can obtain state-of-the-art performance for representation learning and clustering.
arXiv Detail & Related papers (2020-09-08T13:10:21Z)
- SurVAE Flows: Surjections to Bridge the Gap between VAEs and Flows [78.77808270452974]
SurVAE Flows is a modular framework for composable transformations that encompasses VAEs and normalizing flows.
We show that several recently proposed methods, including dequantization and augmented normalizing flows, can be expressed as SurVAE Flows.
arXiv Detail & Related papers (2020-07-06T13:13:22Z)
- Normalizing Flows Across Dimensions [10.21537170623373]
We introduce noisy injective flows (NIF), a generalization of normalizing flows that can go across dimensions.
NIFs explicitly map the latent space to a learnable manifold in a high-dimensional data space using injective transformations.
Empirically, we demonstrate that a simple application of our method to existing flow architectures can significantly improve sample quality and yield separable data embeddings.
arXiv Detail & Related papers (2020-06-23T14:47:18Z)
- Closing the Dequantization Gap: PixelCNN as a Single-Layer Flow [16.41460104376002]
We introduce subset flows, a class of flows that can transform finite volumes and allow exact computation of likelihoods for discrete data.
We identify ordinal discrete autoregressive models, including WaveNets, PixelCNNs and Transformers, as single-layer flows.
We demonstrate state-of-the-art results on CIFAR-10 for flow models trained with dequantization.
arXiv Detail & Related papers (2020-02-06T22:58:51Z)
- Invertible Generative Modeling using Linear Rational Splines [11.510009152620666]
Normalizing flows attempt to model an arbitrary probability distribution through a set of invertible mappings.
The first flow designs used coupling layer mappings built upon affine transformations.
Invertible piecewise functions as a replacement for affine transformations have recently attracted attention.
arXiv Detail & Related papers (2020-01-15T08:05:55Z)
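The linear rational splines themselves take some care to implement, so the sketch below uses a simpler monotone piecewise-linear map on [0, 1] to show why piecewise transforms suit coupling layers: the map is strictly increasing, hence analytically invertible, yet more flexible than a single affine function. The bin parameters here are fixed constants for illustration; in a coupling layer they would be predicted from the untransformed half of the input.

```python
import numpy as np

def piecewise_linear_map(x, widths, heights):
    """Monotone piecewise-linear bijection [0,1] -> [0,1], a simplified
    stand-in for the paper's linear rational splines. `widths` and
    `heights` are positive and each sums to 1, which makes the map
    strictly increasing with an analytic inverse."""
    xk = np.concatenate([[0.0], np.cumsum(widths)])   # input knots
    yk = np.concatenate([[0.0], np.cumsum(heights)])  # output knots
    i = np.clip(np.searchsorted(xk, x, side="right") - 1, 0, len(widths) - 1)
    return yk[i] + (heights[i] / widths[i]) * (x - xk[i])

def piecewise_linear_inverse(y, widths, heights):
    """Exact inverse: swap the roles of the input and output knots."""
    return piecewise_linear_map(y, heights, widths)

widths = np.array([0.2, 0.5, 0.3])   # illustrative bin parameters; in a
heights = np.array([0.4, 0.4, 0.2])  # coupling layer, net(x1) predicts them
x = np.linspace(0.01, 0.99, 7)
y = piecewise_linear_map(x, widths, heights)
assert np.allclose(piecewise_linear_inverse(y, widths, heights), x)
```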
- Learning Likelihoods with Conditional Normalizing Flows [54.60456010771409]
Conditional normalizing flows (CNFs) are efficient in sampling and inference.
We present a study of CNFs in which the mapping from the base density to the output space is conditioned on an input x, to model conditional densities p(y|x).
arXiv Detail & Related papers (2019-11-29T19:17:58Z)
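A minimal sketch of the conditional-flow idea from the entry above: the parameters of the mapping from the base density to the output space are functions of the conditioning input x, so the usual flow machinery yields both log p(y|x) and conditional samples. The `conditioner` function, seeds, and dimensions are illustrative stand-ins.

```python
import numpy as np

def conditioner(x, out_dim, seed):
    """Stand-in for a network mapping the conditioning input x to flow
    parameters; any differentiable function of x would do."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(x.shape[-1], out_dim)) * 0.1
    return x @ W

def conditional_flow_logpdf(y, x):
    """log p(y|x) under one conditional affine flow layer:
    z = (y - t(x)) * exp(-s(x)), standard Gaussian base density."""
    s = conditioner(x, y.shape[-1], seed=0)  # log-scale, depends on x
    t = conditioner(x, y.shape[-1], seed=1)  # shift, depends on x
    z = (y - t) * np.exp(-s)
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi)).sum(axis=-1)
    return log_base - s.sum(axis=-1)         # change-of-variables term

def conditional_flow_sample(x, y_dim, rng):
    """Sampling: draw z from the base and push it through the flow."""
    s = conditioner(x, y_dim, seed=0)
    t = conditioner(x, y_dim, seed=1)
    return rng.normal(size=s.shape) * np.exp(s) + t

x = np.random.default_rng(2).normal(size=(5, 3))
y = conditional_flow_sample(x, y_dim=2, rng=np.random.default_rng(3))
print(conditional_flow_logpdf(y, x))  # one log-density per example
```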
This list is automatically generated from the titles and abstracts of the papers on this site.