Principled Interpolation in Normalizing Flows
- URL: http://arxiv.org/abs/2010.12059v1
- Date: Thu, 22 Oct 2020 21:02:10 GMT
- Title: Principled Interpolation in Normalizing Flows
- Authors: Samuel G. Fadel and Sebastian Mair and Ricardo da S. Torres and Ulf
Brefeld
- Abstract summary: Generative models based on normalizing flows are very successful in modeling complex data distributions.
straightforward linear interpolations show unexpected side effects, as interpolation paths lie outside the area where samples are observed.
This observation suggests that correcting the norm should generally result in better interpolations, but it is not clear how to correct the norm in an unambiguous way.
- Score: 5.582101184758527
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative models based on normalizing flows are very successful in modeling
complex data distributions using simpler ones. However, straightforward linear
interpolations show unexpected side effects, as interpolation paths lie outside
the area where samples are observed. This is caused by the standard choice of
Gaussian base distributions and can be seen in the norms of the interpolated
samples. This observation suggests that correcting the norm should generally
result in better interpolations, but it is not clear how to correct the norm in
an unambiguous way. In this paper, we solve this issue by enforcing a fixed
norm and, hence, changing the base distribution, to allow for a principled way of
interpolation. Specifically, we use the Dirichlet and von Mises-Fisher base
distributions. Our experimental results show superior performance in terms of
bits per dimension, Fréchet Inception Distance (FID), and Kernel Inception
Distance (KID) scores for interpolation, while maintaining the same generative
performance.
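To make the norm argument concrete, here is a small numpy sketch (our own illustration, not the authors' code): it compares straightforward linear interpolation between two Gaussian latent vectors with a norm-preserving spherical interpolation, the kind of fixed-norm path that a von Mises-Fisher base distribution makes natural. The function names `lerp` and `slerp` are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512  # latent dimensionality; Gaussian samples concentrate near norm sqrt(d) ~ 22.6
z0, z1 = rng.standard_normal(d), rng.standard_normal(d)

def lerp(a, b, t):
    # Straightforward linear interpolation; midpoints leave the Gaussian shell.
    return (1.0 - t) * a + t * b

def slerp(a, b, t):
    # Spherical interpolation; keeps the norm (approximately) fixed when |a| ~ |b|.
    omega = np.arccos(np.clip(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0))
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"t={t:.2f}  |lerp|={np.linalg.norm(lerp(z0, z1, t)):5.1f}"
          f"  |slerp|={np.linalg.norm(slerp(z0, z1, t)):5.1f}")
# The lerp norm dips well below sqrt(d) at t=0.5, i.e. the path leaves the region
# where samples are observed, while slerp stays close to the data shell.
```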
Related papers
- Generalization error of min-norm interpolators in transfer learning [2.7309692684728617]
Min-norm interpolators emerge naturally as implicit regularized limits of modern machine learning algorithms.
In many applications, a limited amount of test data may be available during training, yet the properties of min-norm interpolators in this setting are not well understood.
We establish a novel anisotropic local law to achieve these characterizations.
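As a reference point for the object studied above (a generic sketch, not the paper's transfer-learning analysis): in the overparameterized regime, the min-norm interpolator is the coefficient vector of smallest Euclidean norm that fits the training data exactly, which the pseudoinverse computes directly.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 200                    # overparameterized: more features than samples
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Min-norm interpolator: among all beta with X @ beta = y, the pseudoinverse
# returns the solution of smallest Euclidean norm.
beta = np.linalg.pinv(X) @ y

print("max |residual|:", np.max(np.abs(X @ beta - y)))  # ~0: training data interpolated
print("||beta||:", np.linalg.norm(beta))
```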
arXiv Detail & Related papers (2024-06-20T02:23:28Z)
- Gradient-Based Feature Learning under Structured Data [57.76552698981579]
In the anisotropic setting, the commonly used spherical gradient dynamics may fail to recover the true direction.
We show that appropriate weight normalization that is reminiscent of batch normalization can alleviate this issue.
In particular, under the spiked model with a suitably large spike, the sample complexity of gradient-based training can be made independent of the information exponent.
arXiv Detail & Related papers (2023-09-07T16:55:50Z)
- Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
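For orientation, below is a minimal numpy sketch of vanilla AIS with a fixed geometric schedule; the paper's constant-rate schedule and $\alpha$-divergence formulation are not implemented here, and the 1-D mixture target is a toy of our own choosing.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_p0(x):   # base density: standard normal (normalized)
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def log_f(x):    # unnormalized target: mixture of two Gaussians
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

def log_pi(x, b):  # annealed density pi_b proportional to p0^(1-b) * f^b
    return (1.0 - b) * log_p0(x) + b * log_f(x)

betas = np.linspace(0.0, 1.0, 101)   # fixed (non-adaptive) annealing schedule
n = 2000
x = rng.standard_normal(n)           # exact samples from the base p0
log_w = np.zeros(n)

for b_prev, b in zip(betas[:-1], betas[1:]):
    # AIS weight update: ratio of consecutive annealed densities at the current state.
    log_w += (b - b_prev) * (log_f(x) - log_p0(x))
    # One Metropolis step (symmetric proposal) leaving pi_b invariant.
    prop = x + 0.5 * rng.standard_normal(n)
    accept = np.log(rng.uniform(size=n)) < log_pi(prop, b) - log_pi(x, b)
    x = np.where(accept, prop, x)

# Weighted samples (x, log_w) approximate the target; the weights also estimate log Z.
print("log Z estimate:", np.logaddexp.reduce(log_w) - np.log(n))
```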
arXiv Detail & Related papers (2023-06-27T08:15:28Z)
- The Implicit Bias of Batch Normalization in Linear Models and Two-layer Linear Convolutional Neural Networks [117.93273337740442]
We show that gradient descent converges to a uniform margin classifier on the training data with an $\exp(-\Omega(\log^2 t))$ convergence rate.
We also show that batch normalization has an implicit bias towards a patch-wise uniform margin.
arXiv Detail & Related papers (2023-06-20T16:58:00Z)
- Piecewise Normalizing Flows [0.0]
A mismatch between the topology of the target and the base can result in poor performance.
A number of different works have attempted to modify the topology of the base distribution to better match the target.
We introduce piecewise normalizing flows which divide the target distribution into clusters, with topologies that better match the standard normal base distribution.
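The clustering idea, as we read this summary, can be sketched as follows; per-cluster Gaussians stand in for the per-cluster flows the paper actually trains, and the two-arc dataset and all hyperparameters are our own choices.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# A two-arc target whose topology does not match a single standard normal base.
theta = rng.uniform(0.0, np.pi, 1000)
data = np.concatenate([
    np.stack([np.cos(theta), np.sin(theta)], axis=1),
    np.stack([1.0 - np.cos(theta), -np.sin(theta) + 0.5], axis=1),
]) + 0.05 * rng.standard_normal((2000, 2))

# Partition the target into clusters; each piece is closer to unimodal, so a
# flow with a standard normal base has an easier job per cluster.
k = 2
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)

# Stand-in "flows": a Gaussian fit per cluster (the paper trains a flow per cluster).
pieces = [(data[labels == i].mean(0), np.cov(data[labels == i].T)) for i in range(k)]
weights = np.bincount(labels) / len(data)

# Sampling from the piecewise model: pick a cluster by weight, then sample its model.
idx = rng.choice(k, size=5, p=weights)
print(np.stack([rng.multivariate_normal(*pieces[i]) for i in idx]))
```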
arXiv Detail & Related papers (2023-05-04T15:30:10Z)
- Matching Normalizing Flows and Probability Paths on Manifolds [57.95251557443005]
Continuous Normalizing Flows (CNFs) are generative models that transform a prior distribution to a model distribution by solving an ordinary differential equation (ODE).
We propose to train CNFs by minimizing probability path divergence (PPD), a novel family of divergences between the probability density path generated by the CNF and a target probability density path.
We show that CNFs learned by minimizing PPD achieve state-of-the-art results in likelihoods and sample quality on existing low-dimensional manifold benchmarks.
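As background on the CNF mechanics referenced above (not the PPD objective itself), a toy numpy sketch: a hand-picked linear velocity field is integrated with Euler steps, and the model log-density follows from the instantaneous change-of-variables formula d log p / dt = -tr(dv/dx).

```python
import numpy as np

rng = np.random.default_rng(4)

# Hand-picked linear velocity field v(x, t) = A x (stand-in for a learned network).
A = np.array([[0.2, -0.5],
              [0.5,  0.2]])

def flow(x0, n_steps=100, T=1.0):
    """Euler-integrate dx/dt = A x and track log-density via d(log p)/dt = -tr(A)."""
    dt = T / n_steps
    x, delta_logp = x0.copy(), 0.0
    for _ in range(n_steps):
        x = x + dt * (x @ A.T)
        delta_logp -= dt * np.trace(A)
    return x, delta_logp

z = rng.standard_normal((5, 2))                       # samples from the prior
log_pz = -0.5 * np.sum(z**2, axis=1) - np.log(2 * np.pi)
x, dlogp = flow(z)
log_px = log_pz + dlogp                               # change of variables along the ODE
print(x, log_px)
```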
arXiv Detail & Related papers (2022-07-11T08:50:19Z)
- Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
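For contrast, the kind of "traditional estimator" such a method is compared against can be as simple as naive Monte Carlo over model samples; a sketch with a 2-D Gaussian standing in for a trained flow:

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_model(n):
    # Stand-in for sampling a trained normalizing flow.
    return rng.standard_normal((n, 2))

# Closed axis-aligned region whose probability mass (a CDF-style quantity) we want.
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

x = sample_model(200_000)
inside = np.all((x >= lo) & (x <= hi), axis=1)
print("estimated mass in the box:", inside.mean())  # ~0.466 for a standard 2-D Gaussian
```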
arXiv Detail & Related papers (2022-02-23T06:11:49Z)
- Resampling Base Distributions of Normalizing Flows [0.0]
We introduce a base distribution for normalizing flows based on learned rejection sampling.
We develop suitable learning algorithms using both maximizing the log-likelihood and the optimization of the reverse Kullback-Leibler divergence.
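As we understand the summary, the base has the form q(z) proportional to N(z; 0, I) a(z) for a learned acceptance function a(z) in [0, 1]; the sketch below fixes a(z) by hand instead of learning it, so it only illustrates the sampling side.

```python
import numpy as np

rng = np.random.default_rng(6)

def acceptance(z):
    # Hand-picked stand-in for the learned acceptance function a(z) in [0, 1]:
    # suppresses mass near the origin, yielding a ring-shaped base density.
    return 1.0 - np.exp(-0.5 * np.sum(z**2, axis=1))

def sample_base(n, d=2):
    chunks = []
    while sum(len(c) for c in chunks) < n:
        z = rng.standard_normal((4 * n, d))                  # Gaussian proposals
        keep = rng.uniform(size=len(z)) < acceptance(z)      # rejection step
        chunks.append(z[keep])
    return np.concatenate(chunks)[:n]

z = sample_base(1000)
print("mean squared norm:", np.mean(np.sum(z**2, axis=1)))   # > 2, unlike a plain 2-D Gaussian
```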
arXiv Detail & Related papers (2021-10-29T14:44:44Z)
- Particle Filter Bridge Interpolation [0.0]
We build on a previously introduced method for generating dimension-independent interpolations.
We introduce a discriminator network that accurately identifies areas of high representation density.
The resulting sampling procedure allows for greater variability in paths and stronger drift towards areas of high data density.
arXiv Detail & Related papers (2021-03-27T18:33:00Z)
- New Bounds For Distributed Mean Estimation and Variance Reduction [25.815612182815702]
We consider the problem of distributed mean estimation (DME) in which $n$ machines are each given a local $d$-dimensional vector $x_v \in \mathbb{R}^d$.
We show that our method yields practical improvements for common applications, relative to prior approaches.
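To fix the setting (not the paper's bounds or algorithms): each of the $n$ machines sends a compressed message about its local vector to a server, which estimates the mean. A crude one-bit-per-coordinate baseline of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 8, 16                        # n machines, d-dimensional local vectors
X = rng.standard_normal((n, d))     # x_v for each machine v

# Each machine transmits sign(x_v) plus one scale; the server averages the decoded
# vectors. This is a crude (biased) baseline, not the paper's estimator.
scales = np.linalg.norm(X, axis=1, keepdims=True) / np.sqrt(d)
decoded = np.sign(X) * scales
estimate = decoded.mean(axis=0)

true_mean = X.mean(axis=0)
print("squared error of the quantized estimate:", np.sum((estimate - true_mean) ** 2))
```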
arXiv Detail & Related papers (2020-02-21T13:27:13Z)
- Learning Likelihoods with Conditional Normalizing Flows [54.60456010771409]
Conditional normalizing flows (CNFs) are efficient in sampling and inference.
We present a study of CNFs where the base density to output space mapping is conditioned on an input $x$, to model conditional densities $p(y|x)$.
arXiv Detail & Related papers (2019-11-29T19:17:58Z)
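A minimal illustration of the conditioning pattern described (a single affine transform whose shift and scale are functions of the input $x$, mapping a standard normal base to $p(y|x)$); the conditioner is a hand-picked toy, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(8)

def conditioner(x):
    # Maps the conditioning input x to the parameters (shift, log-scale) of an
    # affine transform acting on the base variable z ~ N(0, 1). Toy choice.
    return 2.0 * x, 0.5 * np.tanh(x)

def sample_y_given_x(x, n):
    shift, log_scale = conditioner(x)
    z = rng.standard_normal(n)              # base sample
    return shift + np.exp(log_scale) * z    # y = f_x(z)

def log_p_y_given_x(y, x):
    shift, log_scale = conditioner(x)
    z = (y - shift) * np.exp(-log_scale)    # inverse transform z = f_x^{-1}(y)
    # Change of variables: log N(z; 0, 1) - log |df_x/dz|
    return -0.5 * z**2 - 0.5 * np.log(2 * np.pi) - log_scale

ys = sample_y_given_x(x=1.0, n=5)
print(ys, log_p_y_given_x(ys, x=1.0))
```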