Principled Interpolation in Normalizing Flows
- URL: http://arxiv.org/abs/2010.12059v2
- Date: Tue, 08 Apr 2025 14:02:40 GMT
- Title: Principled Interpolation in Normalizing Flows
- Authors: Samuel G. Fadel, Sebastian Mair, Ricardo da S. Torres, Ulf Brefeld
- Abstract summary: We show that changing the way of interpolating should generally result in better interpolations, but it is not clear how to do so in an unambiguous way. Specifically, we use the Dirichlet and von Mises-Fisher base distributions on the probability simplex and the hypersphere, respectively.
- Score: 3.9123551183847964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative models based on normalizing flows are very successful in modeling complex data distributions using simpler ones. However, straightforward linear interpolations show unexpected side effects, as interpolation paths lie outside the area where samples are observed. This is caused by the standard choice of Gaussian base distributions and can be seen in the norms of the interpolated samples, which fall outside the data manifold. This observation suggests that changing the way of interpolating should generally result in better interpolations, but it is not clear how to do that in an unambiguous way. In this paper, we solve this issue by enforcing a specific manifold and, hence, changing the base distribution, to allow for a principled way of interpolation. Specifically, we use the Dirichlet and von Mises-Fisher base distributions on the probability simplex and the hypersphere, respectively. Our experimental results show superior performance in terms of bits per dimension, Fr\'echet Inception Distance (FID), and Kernel Inception Distance (KID) scores for interpolation, while maintaining the generative performance.
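The norm argument in the abstract can be illustrated with a minimal numpy sketch (not taken from the paper; dimensionality and variable names are illustrative assumptions): for a high-dimensional Gaussian base, sample norms concentrate around sqrt(d), so the midpoint of a straight line between two latents has a much smaller norm and leaves the typical set, whereas a norm-preserving spherical interpolation does not.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512  # illustrative latent dimensionality

# Two latent codes drawn from a standard Gaussian base distribution.
z0, z1 = rng.standard_normal(d), rng.standard_normal(d)

def lerp(z0, z1, t):
    """Straight-line interpolation; the midpoint norm is roughly sqrt(d/2)."""
    return (1 - t) * z0 + t * z1

def slerp(z0, z1, t):
    """Spherical interpolation; norms stay close to those of the endpoints."""
    cos = np.dot(z0, z1) / (np.linalg.norm(z0) * np.linalg.norm(z1))
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * z0 + np.sin(t * theta) * z1) / np.sin(theta)

for t in (0.0, 0.5, 1.0):
    print(f"t={t:.1f}  |lerp|={np.linalg.norm(lerp(z0, z1, t)):6.2f}"
          f"  |slerp|={np.linalg.norm(slerp(z0, z1, t)):6.2f}")
```

Running the sketch shows the lerp midpoint norm dropping well below sqrt(d) while the slerp norms remain near the endpoint norms; choosing a hyperspherical (von Mises-Fisher) or simplex (Dirichlet) base distribution, as the paper proposes, makes such manifold-respecting interpolation the natural choice rather than a post-hoc correction.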
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependencies for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z) - Generalization error of min-norm interpolators in transfer learning [2.7309692684728617]
Min-norm interpolators emerge naturally as implicit regularized limits of modern machine learning algorithms.
In many applications, a limited amount of test data may be available during training, yet the properties of min-norm interpolators in this setting are not well understood.
We establish a novel anisotropic local law to achieve these characterizations.
arXiv Detail & Related papers (2024-06-20T02:23:28Z) - Metric Flow Matching for Smooth Interpolations on the Data Manifold [40.24392451848883]
Metric Flow Matching (MFM) is a novel simulation-free framework for conditional flow matching.
We propose MFM as a framework for conditional paths that transform a source distribution into a target distribution.
We test MFM on a suite of challenges including LiDAR navigation, unpaired image translation, and modeling cellular dynamics.
arXiv Detail & Related papers (2024-05-23T16:48:06Z) - Gradient-Based Feature Learning under Structured Data [57.76552698981579]
In the anisotropic setting, the commonly used spherical gradient dynamics may fail to recover the true direction.
We show that appropriate weight normalization that is reminiscent of batch normalization can alleviate this issue.
In particular, under the spiked model with a suitably large spike, the sample complexity of gradient-based training can be made independent of the information exponent.
arXiv Detail & Related papers (2023-09-07T16:55:50Z) - Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
arXiv Detail & Related papers (2023-06-27T08:15:28Z) - The Implicit Bias of Batch Normalization in Linear Models and Two-layer Linear Convolutional Neural Networks [117.93273337740442]
We show that gradient descent converges to a uniform margin classifier on the training data with an $\exp(-\Omega(\log^2 t))$ convergence rate.
We also show that batch normalization has an implicit bias towards a patch-wise uniform margin.
arXiv Detail & Related papers (2023-06-20T16:58:00Z) - Piecewise Normalizing Flows [0.0]
A mismatch between the topology of the target and the base can result in a poor performance.
A number of different works have attempted to modify the topology of the base distribution to better match the target.
We introduce piecewise normalizing flows which divide the target distribution into clusters, with topologies that better match the standard normal base distribution.
arXiv Detail & Related papers (2023-05-04T15:30:10Z) - Building Normalizing Flows with Stochastic Interpolants [11.22149158986164]
A simple generative model based on a continuous-time normalizing flow between any pair of base and target distributions is proposed.
The velocity field of this flow is inferred from the probability current of a time-dependent distribution that interpolates between the base and the target in finite time.
arXiv Detail & Related papers (2022-09-30T16:30:31Z) - ManiFlow: Implicitly Representing Manifolds with Normalizing Flows [145.9820993054072]
Normalizing Flows (NFs) are flexible explicit generative models that have been shown to accurately model complex real-world data distributions.
We propose an optimization objective that recovers the most likely point on the manifold given a sample from the perturbed distribution.
Finally, we focus on 3D point clouds for which we utilize the explicit nature of NFs, i.e. surface normals extracted from the gradient of the log-likelihood and the log-likelihood itself.
arXiv Detail & Related papers (2022-08-18T16:07:59Z) - Matching Normalizing Flows and Probability Paths on Manifolds [57.95251557443005]
Continuous Normalizing Flows (CNFs) are generative models that transform a prior distribution to a model distribution by solving an ordinary differential equation (ODE).
We propose to train CNFs by minimizing probability path divergence (PPD), a novel family of divergences between the probability density path generated by the CNF and a target probability density path.
We show that CNFs learned by minimizing PPD achieve state-of-the-art results in likelihoods and sample quality on existing low-dimensional manifold benchmarks.
arXiv Detail & Related papers (2022-07-11T08:50:19Z) - Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
arXiv Detail & Related papers (2022-02-23T06:11:49Z) - Resampling Base Distributions of Normalizing Flows [0.0]
We introduce a base distribution for normalizing flows based on learned rejection sampling.
We develop suitable learning algorithms using both maximizing the log-likelihood and the optimization of the reverse Kullback-Leibler divergence.
arXiv Detail & Related papers (2021-10-29T14:44:44Z) - Particle Filter Bridge Interpolation [0.0]
We build on a previously introduced method for generating dimension-independent interpolation paths.
We introduce a discriminator network that accurately identifies areas of high representation density.
The resulting sampling procedure allows for greater variability in paths and stronger drift towards areas of high data density.
arXiv Detail & Related papers (2021-03-27T18:33:00Z) - New Bounds For Distributed Mean Estimation and Variance Reduction [25.815612182815702]
We consider the problem of distributed mean estimation (DME) in which $n$ machines are each given a local $d$-dimensional vector $x_v \in \mathbb{R}^d$.
We show that our method yields practical improvements for common applications, relative to prior approaches.
arXiv Detail & Related papers (2020-02-21T13:27:13Z) - Learning Likelihoods with Conditional Normalizing Flows [54.60456010771409]
Conditional normalizing flows (CNFs) are efficient in sampling and inference.
We present a study of CNFs where the base density to output space mapping is conditioned on an input x, to model conditional densities p(y|x).
arXiv Detail & Related papers (2019-11-29T19:17:58Z)