Copula-Based Normalizing Flows
- URL: http://arxiv.org/abs/2107.07352v1
- Date: Thu, 15 Jul 2021 14:22:28 GMT
- Title: Copula-Based Normalizing Flows
- Authors: Mike Laszkiewicz, Johannes Lederer, Asja Fischer
- Abstract summary: We generalize the base distribution to a more elaborate copula distribution to capture the properties of the target distribution more accurately.
Our results suggest that the improvements are related to an increased local Lipschitz stability of the learned flow.
- Score: 8.894004971395546
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Normalizing flows, which learn a distribution by transforming the data to
samples from a Gaussian base distribution, have proven to be powerful density
approximators. But their expressive power is limited by this choice of the
base distribution. We therefore propose to generalize the base distribution
to a more elaborate copula distribution to capture the properties of the target
distribution more accurately. In a first empirical analysis, we demonstrate
that this replacement can dramatically improve vanilla normalizing flows in
terms of flexibility, stability, and effectiveness for heavy-tailed data. Our
results suggest that the improvements are related to an increased local
Lipschitz stability of the learned flow.
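As a minimal sketch of the idea (not the authors' implementation), the snippet below builds a copula base distribution from a Gaussian copula with heavy-tailed Student-t marginals; the correlation matrix, tail parameter, and function names are illustrative assumptions.
```python
import numpy as np
from scipy import stats

# Minimal sketch (not the paper's code): a copula base distribution built
# from a Gaussian copula with heavy-tailed Student-t marginals. A trained
# flow would map samples from this base to the data space.

rng = np.random.default_rng(0)
d = 2
corr = np.array([[1.0, 0.6], [0.6, 1.0]])  # copula correlation (assumed)
nu = 5.0                                   # Student-t tail parameter (assumed)

def sample_base(n):
    """z ~ copula base: correlated Gaussians -> uniforms -> t marginals."""
    g = rng.multivariate_normal(np.zeros(d), corr, size=n)
    u = stats.norm.cdf(g)               # Gaussian copula
    return stats.t.ppf(u, df=nu)        # heavy-tailed marginals

def log_prob_base(z):
    """log density = log copula density + sum of marginal log densities."""
    g = stats.norm.ppf(stats.t.cdf(z, df=nu))
    sol = np.linalg.solve(np.linalg.cholesky(corr), g.T).T
    log_copula = (-0.5 * np.sum(sol**2, axis=1) + 0.5 * np.sum(g**2, axis=1)
                  - 0.5 * np.log(np.linalg.det(corr)))
    return log_copula + stats.t.logpdf(z, df=nu).sum(axis=1)

z = sample_base(100_000)
print("99.9% quantile of |z_1|:", np.quantile(np.abs(z[:, 0]), 0.999))
print("log prob at origin:", log_prob_base(np.zeros((1, d))))
```
The tail quantile comes out well above the roughly 3.3 a standard Gaussian base would give, which is the extra flexibility the paper exploits for heavy-tailed targets.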
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependencies for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z)
- Uncertainty Quantification via Stable Distribution Propagation [60.065272548502]
We propose a new approach for propagating stable probability distributions through neural networks.
Our method is based on local linearization, which we show to be an optimal approximation in terms of total variation distance for the ReLU non-linearity.
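A hedged sketch of the linearization idea, with the Gaussian (the best-known stable distribution) standing in and all shapes and names assumed:
```python
import numpy as np

# Assumed sketch, not the paper's API: propagate a Gaussian through a
# linear layer exactly, and through ReLU by linearizing at the input mean.

def linear_propagate(mu, var, W, b):
    """Exact moments of W x + b for x ~ N(mu, diag(var))."""
    return W @ mu + b, (W**2) @ var      # diagonal of W diag(var) W^T

def relu_propagate(mu, var):
    """Local linearization of ReLU at the mean: slope is 0 or 1."""
    slope = (mu > 0).astype(float)
    return np.maximum(mu, 0.0), slope**2 * var

rng = np.random.default_rng(1)
W, b = rng.normal(size=(3, 2)), rng.normal(size=3)
mu, var = np.array([0.5, -1.0]), np.array([0.1, 0.2])

m, v = linear_propagate(mu, var, W, b)
m, v = relu_propagate(m, v)
print("propagated mean:", m, "variance:", v)
```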
arXiv Detail & Related papers (2024-02-13T09:40:19Z)
- Tensorizing flows: a tool for variational inference [0.0]
We introduce an extension of normalizing flows in which the Gaussian reference is replaced with a reference distribution constructed via a tensor network.
We show that by combining flows with tensor networks on difficult variational inference tasks, we can improve on the results obtained by using either tool without the other.
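The paper builds its reference with tensor networks; as a loose caricature only, the snippet below uses the simplest tensor-format density, a rank-R sum of products of one-dimensional factors, as a richer reference than a single Gaussian. All parameters are made up.
```python
import numpy as np

# Caricature sketch (assumed): a rank-R product-form reference density,
# p(z) = sum_k w_k * prod_i N(z_i; m_ki, s_ki), replacing the single
# Gaussian reference of a vanilla flow.

rng = np.random.default_rng(2)
R, d = 4, 2
w = np.ones(R) / R                      # component weights (assumed fixed)
m = rng.normal(scale=2.0, size=(R, d))  # factor means
s = np.full((R, d), 0.5)                # factor scales

def sample_reference(n):
    k = rng.choice(R, size=n, p=w)      # pick a rank component
    return rng.normal(m[k], s[k])       # sample each coordinate independently

z = sample_reference(5000)
print("reference sample mean:", z.mean(axis=0))
```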
arXiv Detail & Related papers (2023-05-03T23:42:22Z)
- Flow Away your Differences: Conditional Normalizing Flows as an Improvement to Reweighting [0.0]
We present an alternative to reweighting techniques for modifying distributions to account for a desired change in an underlying conditional distribution.
We employ conditional normalizing flows to learn the full conditional probability distribution.
In our examples, this leads to a statistical precision up to three times greater than using reweighting techniques with identical sample sizes for the source and target distributions.
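To make concrete what the flows replace, here is a sketch of the classic reweighting baseline on a made-up toy setup; the effective-sample-size line shows where reweighting loses statistical precision.
```python
import numpy as np
from scipy import stats

# Illustrative reweighting baseline (assumed setup): source events have
# conditioning variable c ~ N(0, 1), the target has c ~ N(0.5, 1); weights
# are the density ratio w(c) = p_target(c) / p_source(c).

rng = np.random.default_rng(3)
c = rng.normal(0.0, 1.0, size=100_000)            # source sample
w = stats.norm.pdf(c, 0.5, 1) / stats.norm.pdf(c, 0, 1)

x = 2.0 * c + rng.normal(size=c.size)             # observable depending on c
target_mean = np.sum(w * x) / np.sum(w)           # reweighted estimate
print("reweighted mean of x:", target_mean)       # approaches 2 * 0.5 = 1

n_eff = w.sum()**2 / (w**2).sum()                 # Kish effective sample size
print("effective sample size:", int(n_eff))       # < 100k: precision loss
```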
arXiv Detail & Related papers (2023-04-28T16:33:50Z)
- ManiFlow: Implicitly Representing Manifolds with Normalizing Flows [145.9820993054072]
Normalizing Flows (NFs) are flexible explicit generative models that have been shown to accurately model complex real-world data distributions.
We propose an optimization objective that recovers the most likely point on the manifold given a sample from the perturbed distribution.
Finally, we focus on 3D point clouds, for which we exploit the explicit nature of NFs: surface normals extracted from the gradient of the log-likelihood, and the log-likelihood itself.
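The paper's exact objective is not reproduced here; as a hypothetical sketch of such a recovery step, the snippet runs gradient ascent on log p(x) - ||x - y||^2 / (2 sigma^2), with a toy Gaussian log-density standing in for a flow's log-likelihood.
```python
import numpy as np

# Hypothetical sketch: recover a high-likelihood point near a perturbed
# sample y by gradient ascent on log p(x) - ||x - y||^2 / (2 sigma^2).

def grad_log_p(x):          # stand-in for the gradient of flow.log_prob(x)
    return -x               # toy standard-normal log-density

y = np.array([2.0, -1.5])   # noisy observation
sigma2 = 0.25               # assumed noise variance
x = y.copy()
for _ in range(200):        # plain gradient ascent
    x += 0.05 * (grad_log_p(x) + (y - x) / sigma2)
print("recovered point:", x)  # shrinks y toward the density mode
```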
arXiv Detail & Related papers (2022-08-18T16:07:59Z)
- Marginal Tail-Adaptive Normalizing Flows [15.732950126814089]
This paper focuses on improving the ability of normalizing flows to correctly capture the tail behavior.
We prove that the marginal tailedness of an autoregressive flow can be controlled via the tailedness of the marginals of its base distribution.
An empirical analysis shows that the proposed method improves on the accuracy -- especially on the tails of the distribution -- and is able to generate heavy-tailed data.
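A minimal sketch of the mechanism with assumed tail parameters: give each base marginal its own Student-t tail index; an elementwise affine map, standing in for a flow layer, then leaves each marginal's tail behavior unchanged.
```python
import numpy as np
from scipy import stats

# Sketch (parameters assumed): per-dimension Student-t tail indices in the
# base; an elementwise affine "flow" preserves each marginal's tails.

rng = np.random.default_rng(4)
nu = np.array([2.5, 30.0])     # dim 0 heavy-tailed, dim 1 near-Gaussian
z = stats.t.rvs(df=nu, size=(100_000, 2), random_state=rng)

a, b = np.array([2.0, 0.5]), np.array([0.0, 1.0])
x = a * z + b                  # elementwise affine map, tails unchanged

for i in range(2):
    q = np.quantile(np.abs(x[:, i] - b[i]), 0.999)  # crude tail probe
    print(f"dim {i}: 99.9% quantile = {q:.1f}")
```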
arXiv Detail & Related papers (2022-06-21T12:34:36Z)
- Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
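For contrast with the paper's estimator, here is a sketch of the traditional Monte Carlo baseline it improves on, with a toy affine map standing in for a trained flow:
```python
import numpy as np

# Baseline sketch (assumed toy flow): estimate P(X in R) for X = f(Z),
# Z ~ N(0, I), by pushing base samples through the flow and counting hits.

rng = np.random.default_rng(5)

def flow(z):                          # stand-in for a trained flow f
    return z @ np.diag([2.0, 0.5]) + np.array([1.0, 0.0])

z = rng.normal(size=(200_000, 2))
x = flow(z)

# Closed region R = [0, 2] x [-1, 1]
inside = np.all((x >= [0.0, -1.0]) & (x <= [2.0, 1.0]), axis=1)
print("P(X in R) ~=", inside.mean())
```
The paper's point is that this hit-counting estimator needs many samples for small regions; exploiting the flow's diffeomorphic structure improves sample efficiency.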
arXiv Detail & Related papers (2022-02-23T06:11:49Z)
- Resampling Base Distributions of Normalizing Flows [0.0]
We introduce a base distribution for normalizing flows based on learned rejection sampling.
We develop suitable learning algorithms based on both maximizing the log-likelihood and minimizing the reverse Kullback-Leibler divergence.
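A sketch of the sampling mechanism, with a fixed acceptance function standing in for the learned one: draw from a Gaussian proposal and accept a point z with probability a(z) in [0, 1], reshaping the base before the flow sees it.
```python
import numpy as np

# Sketch: rejection-sampled base. The acceptance function here is fixed
# (a sigmoid carving a low-density gap), not learned as in the paper.

rng = np.random.default_rng(6)

def accept_prob(z):                   # stands in for a learned network a(z)
    return 1.0 / (1.0 + np.exp(-4.0 * (np.abs(z[:, 0]) - 1.0)))

def sample_resampled_base(n):
    out = []
    while sum(len(b) for b in out) < n:
        z = rng.normal(size=(4 * n, 2))               # Gaussian proposal
        keep = rng.uniform(size=len(z)) < accept_prob(z)
        out.append(z[keep])
    return np.concatenate(out)[:n]

z = sample_resampled_base(10_000)
print("fraction with |z1| < 1:", np.mean(np.abs(z[:, 0]) < 1))  # suppressed
```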
arXiv Detail & Related papers (2021-10-29T14:44:44Z)
- Gaussianization Flows [113.79542218282282]
We propose a new type of normalizing flow model that enables both efficient computation of likelihoods and efficient inversion for sample generation.
These flows are universal approximators; because of this guaranteed expressivity, they can capture multimodal target distributions without compromising the efficiency of sample generation.
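A rough sketch of one iterative-Gaussianization step under assumed design choices (empirical-CDF marginal Gaussianization followed by a random rotation):
```python
import numpy as np
from scipy import stats

# Sketch of one Gaussianization step (assumed form): make each marginal
# Gaussian via its empirical CDF and the inverse normal CDF, then rotate
# so the next step sees new marginals.

rng = np.random.default_rng(7)
x = rng.gamma(2.0, size=(5000, 2))            # some non-Gaussian data

def gaussianize_step(x, rng):
    d = x.shape[1]
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)
    u = (ranks + 0.5) / len(x)                # empirical CDF in (0, 1)
    g = stats.norm.ppf(u)                     # marginals now ~ N(0, 1)
    q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    return g @ q                              # random rotation

for _ in range(3):
    x = gaussianize_step(x, rng)
print("skewness after 3 steps:", stats.skew(x, axis=0))
```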
arXiv Detail & Related papers (2020-03-04T08:15:06Z)
- Bayesian Deep Learning and a Probabilistic Perspective of Generalization [56.69671152009899]
We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization.
We also propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction.
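A minimal sketch of the marginalization step with made-up member outputs: the ensemble predictive distribution is the average of the members' predictive distributions, p(y|x) ~= (1/M) sum_m p(y|x, theta_m).
```python
import numpy as np

# Sketch: approximate Bayesian marginalization with a deep ensemble.
# Softmax outputs below are fabricated for illustration.

member_probs = np.array([                     # 3 members, 4 classes
    [0.70, 0.20, 0.05, 0.05],
    [0.10, 0.60, 0.20, 0.10],
    [0.40, 0.40, 0.10, 0.10],
])
predictive = member_probs.mean(axis=0)        # average over members
print("ensemble predictive:", predictive)

entropy = -np.sum(predictive * np.log(predictive))
print("predictive entropy:", entropy)         # averaging raises uncertainty
```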
arXiv Detail & Related papers (2020-02-20T15:13:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.