Flexible Tails for Normalizing Flows
- URL: http://arxiv.org/abs/2406.16971v1
- Date: Sat, 22 Jun 2024 13:44:01 GMT
- Title: Flexible Tails for Normalizing Flows
- Authors: Tennessee Hickling, Dennis Prangle
- Abstract summary: A popular current solution to representing heavy tails is to use a heavy-tailed base distribution.
We argue this can lead to poor performance due to the difficulty of optimising neural networks, such as normalizing flows, under heavy-tailed inputs.
We propose an alternative: use a Gaussian base distribution and a final transformation layer which can produce heavy tails.
- Score: 0.658372523529902
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Normalizing flows are a flexible class of probability distributions, expressed as transformations of a simple base distribution. A limitation of standard normalizing flows is representing distributions with heavy tails, which arise in applications to both density estimation and variational inference. A popular current solution to this problem is to use a heavy-tailed base distribution. Examples include the tail adaptive flow (TAF) methods of Laszkiewicz et al. (2022). We argue this can lead to poor performance due to the difficulty of optimising neural networks, such as normalizing flows, under heavy-tailed inputs. This problem is demonstrated in our paper. We propose an alternative: use a Gaussian base distribution and a final transformation layer which can produce heavy tails. We call this approach tail transform flow (TTF). Experimental results show this approach outperforms current methods, especially when the target distribution has large dimension or tail weight.
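The idea of a final heavy-tail-producing layer can be illustrated with a minimal sketch (not the paper's exact TTF construction): push Gaussian base samples through the standard normal CDF and then through a Student-t inverse CDF, whose degrees-of-freedom parameter controls tail weight. The parameter name `nu` and the use of SciPy are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)  # samples from the Gaussian base

# Hypothetical final layer: compose the normal CDF with a Student-t
# inverse CDF. Small nu gives heavy tails; in a flow, nu could be learned.
nu = 2.0
x = stats.t.ppf(stats.norm.cdf(z), df=nu)

# Tail comparison: essentially no Gaussian samples exceed |6|,
# while a noticeable fraction of the transformed samples do.
print(np.mean(np.abs(z) > 6), np.mean(np.abs(x) > 6))
```

Because this map is monotone and differentiable, its log-Jacobian is available in closed form, so it can in principle serve as a flow layer; the paper's actual TTF parameterization may differ.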
Related papers
- Tensorizing flows: a tool for variational inference [0.0]
We introduce an extension of normalizing flows in which the Gaussian reference is replaced with a reference distribution constructed via a tensor network.
We show that by combining flows with tensor networks on difficult variational inference tasks, we can improve on the results obtained by using either tool without the other.
arXiv Detail & Related papers (2023-05-03T23:42:22Z)
- GFlowOut: Dropout with Generative Flow Networks [76.59535235717631]
Monte Carlo Dropout has been widely used as a relatively cheap way to perform approximate inference.
Recent works show that the dropout mask can be viewed as a latent variable, which can be inferred with variational inference.
GFlowOut leverages the recently proposed probabilistic framework of Generative Flow Networks (GFlowNets) to learn the posterior distribution over dropout masks.
arXiv Detail & Related papers (2022-10-24T03:00:01Z) - Marginal Tail-Adaptive Normalizing Flows [15.732950126814089]
This paper focuses on improving the ability of normalizing flows to correctly capture the tail behavior.
We prove that the marginal tailedness of an autoregressive flow can be controlled via the tailedness of the marginals of its base distribution.
An empirical analysis shows that the proposed method improves accuracy, especially in the tails of the distribution, and can generate heavy-tailed data.
arXiv Detail & Related papers (2022-06-21T12:34:36Z) - Fat-Tailed Variational Inference with Anisotropic Tail Adaptive Flows [53.32246823168763]
Fat-tailed densities commonly arise as posterior and marginal distributions in robust models and scale mixtures.
We first improve previous theory on tails of Lipschitz flows by quantifying how flow transformations affect the rate of tail decay.
We then develop an alternative theory for tail parameters which is sensitive to tail-anisotropy.
arXiv Detail & Related papers (2022-05-16T18:03:41Z) - Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
arXiv Detail & Related papers (2022-02-23T06:11:49Z) - Resampling Base Distributions of Normalizing Flows [0.0]
We introduce a base distribution for normalizing flows based on learned rejection sampling.
We develop suitable learning algorithms using both maximum likelihood and minimization of the reverse Kullback-Leibler divergence.
arXiv Detail & Related papers (2021-10-29T14:44:44Z) - Distribution Mismatch Correction for Improved Robustness in Deep Neural
Networks [86.42889611784855]
Normalization methods can increase vulnerability to noise and input corruptions.
We propose an unsupervised non-parametric distribution correction method that adapts the activation distribution of each layer.
In our experiments, we empirically show that the proposed method effectively reduces the impact of intense image corruptions.
arXiv Detail & Related papers (2021-10-05T11:36:25Z) - Copula-Based Normalizing Flows [8.894004971395546]
We generalize the base distribution to a more elaborate copula distribution to capture the properties of the target distribution more accurately.
Our results suggest that the improvements are related to an increased local Lipschitz-stability of the learned flow.
arXiv Detail & Related papers (2021-07-15T14:22:28Z) - Gaussianization Flows [113.79542218282282]
We propose a new type of normalizing flow model that enables both efficient computation of likelihoods and efficient inversion for sample generation.
Because these flows are provably expressive, they can capture multimodal target distributions without compromising the efficiency of sample generation.
arXiv Detail & Related papers (2020-03-04T08:15:06Z) - Semi-Supervised Learning with Normalizing Flows [54.376602201489995]
FlowGMM is an end-to-end approach to generative semi-supervised learning with normalizing flows.
We show promising results on a wide range of applications, including AG-News and Yahoo Answers text data.
arXiv Detail & Related papers (2019-12-30T17:36:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.