Likely Interpolants of Generative Models
- URL: http://arxiv.org/abs/2510.26266v1
- Date: Thu, 30 Oct 2025 08:46:53 GMT
- Title: Likely Interpolants of Generative Models
- Authors: Frederik Möbius Rygaard, Shen Zhu, Yinzhu Jin, Søren Hauberg, Tom Fletcher
- Abstract summary: Interpolation in generative models allows for controlled generation, model inspection, and more. Most generative models lack a principled notion of interpolants without restrictive assumptions on either the model or data dimension. We develop a general scheme that targets likely transition paths compatible with different metrics and probability distributions.
- Score: 12.543430183696648
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interpolation in generative models allows for controlled generation, model inspection, and more. Unfortunately, most generative models lack a principled notion of interpolants without restrictive assumptions on either the model or data dimension. In this paper, we develop a general interpolation scheme that targets likely transition paths compatible with different metrics and probability distributions. We consider interpolants analogous to a geodesic constrained to a suitable data distribution and derive a novel algorithm for computing these curves, which requires no additional training. Theoretically, we show that our method can locally be considered a geodesic under a suitable Riemannian metric. We show quantitatively that our interpolation scheme traverses higher-density regions than baselines across a range of models and datasets.
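The abstract describes interpolants that behave like geodesics constrained to high-density regions, computed without additional training. As a rough illustration only (not the authors' algorithm), the sketch below discretizes a path between two points and optimizes the interior points to trade off discrete path energy against model log-density. Here `log_prob` is a hypothetical differentiable log-density (e.g. from a normalizing flow) and `lam` is an illustrative trade-off weight.

```python
# Minimal sketch of density-constrained interpolation, NOT the paper's exact
# algorithm: discretize a curve between two endpoints and nudge its interior
# points toward short length AND high model density. `log_prob` is a
# hypothetical stand-in for any differentiable model log-density.
import torch

def likely_interpolant(x0, x1, log_prob, n_points=16, n_steps=500,
                       lam=1.0, lr=1e-2):
    """Optimize interior points of a discrete path from x0 to x1 so the
    curve stays short while traversing high-density regions of `log_prob`."""
    ts = torch.linspace(0.0, 1.0, n_points).unsqueeze(1)
    path = (1 - ts) * x0 + ts * x1                  # init: straight line
    interior = path[1:-1].clone().requires_grad_(True)
    opt = torch.optim.Adam([interior], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        full = torch.cat([x0.unsqueeze(0), interior, x1.unsqueeze(0)], dim=0)
        energy = ((full[1:] - full[:-1]) ** 2).sum()  # discrete path energy
        density = log_prob(interior).sum()            # stay in likely regions
        loss = energy - lam * density
        loss.backward()
        opt.step()
    return torch.cat([x0.unsqueeze(0), interior.detach(), x1.unsqueeze(0)], dim=0)
```

Setting `lam=0` recovers the straight line; larger values pull the curve toward likelier regions, which is the qualitative behaviour the abstract targets.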
Related papers
- Neural Local Wasserstein Regression [16.52489456261937]
We study the estimation problem of distribution-on-distribution regression, where both predictors and responses are probability measures. Existing approaches typically rely on a global optimal transport map or tangent-space linearization. We propose a flexible nonparametric framework that models regression through locally defined transport maps in Wasserstein space.
arXiv Detail & Related papers (2025-11-13T21:54:18Z) - Enforcing Latent Euclidean Geometry in Single-Cell VAEs for Manifold Interpolation [79.27003481818413]
We introduce FlatVI, a training framework that regularises the latent manifold of discrete-likelihood variational autoencoders towards Euclidean geometry. By encouraging straight lines in the latent space to approximate geodesics on the decoded single-cell manifold, FlatVI enhances compatibility with downstream approaches.
arXiv Detail & Related papers (2025-07-15T23:08:14Z) - (Deep) Generative Geodesics [57.635187092922976]
We introduce a new Riemannian metric to assess the similarity between any two data points.
Our metric leads to the conceptual definition of generative distances and generative geodesics.
Their approximations are proven to converge to their true values under mild conditions.
arXiv Detail & Related papers (2024-07-15T21:14:02Z) - Learning Divergence Fields for Shift-Robust Graph Representations [73.11818515795761]
In this work, we propose a geometric diffusion model with learnable divergence fields for the challenging generalization problem with interdependent data.
We derive a new learning objective through causal inference, which can guide the model to learn generalizable patterns of interdependence that are insensitive across domains.
arXiv Detail & Related papers (2024-06-07T14:29:21Z) - Metric Flow Matching for Smooth Interpolations on the Data Manifold [40.24392451848883]
Metric Flow Matching (MFM) is a novel simulation-free framework for conditional flow matching.
We propose MFM for learning conditional paths that transform a source distribution into a target distribution.
We test MFM on a suite of challenges including LiDAR navigation, unpaired image translation, and modeling cellular dynamics.
arXiv Detail & Related papers (2024-05-23T16:48:06Z) - VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and their accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z) - A likelihood approach to nonparametric estimation of a singular distribution using deep generative models [4.329951775163721]
We investigate a likelihood approach to nonparametric estimation of a singular distribution using deep generative models.
We prove that a novel and effective solution exists by perturbing the data with instance noise.
We also characterize the class of distributions that can be efficiently estimated via deep generative models.
arXiv Detail & Related papers (2021-05-09T23:13:58Z) - On Linear Interpolation in the Latent Space of Deep Generative Models [0.0]
Smoothness and plausibility of linear interpolations in latent space are associated with the quality of the underlying generative model.
We show that not all such curves are comparable as they can deviate arbitrarily from the shortest curve given by the geodesic.
This deviation is revealed by computing curve lengths with the pull-back metric of the generative model (see the sketch after this list).
arXiv Detail & Related papers (2021-05-08T10:27:07Z) - Probabilistic Circuits for Variational Inference in Discrete Graphical Models [101.28528515775842]
Inference in discrete graphical models with variational methods is difficult.
Many sampling-based methods have been proposed for estimating the Evidence Lower Bound (ELBO).
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum-Product Networks (SPNs).
We show that selective-SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial, the corresponding ELBO can be computed analytically.
arXiv Detail & Related papers (2020-10-22T05:04:38Z) - Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
arXiv Detail & Related papers (2020-07-13T03:27:45Z) - Uniform Interpolation Constrained Geodesic Learning on Data Manifold [28.509561636926414]
Along the learned geodesic, our method can generate high-quality interpolations between two given data samples.
We provide a theoretical analysis of our model and use image translation as an example to demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2020-02-12T07:47:41Z)
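For the "On Linear Interpolation in the Latent Space of Deep Generative Models" entry above: the pull-back length of a latent curve is simply the length of its decoded image in data space, which suggests a chord-sum approximation. This is a minimal sketch under that assumption; `decoder` is a hypothetical latent-to-data map, not any specific model's API.

```python
# Minimal sketch of measuring a latent line's length under the decoder's
# pull-back metric. `decoder` is a hypothetical batched map from latent codes
# to data space; the pull-back length is approximated by summing the chord
# lengths of the decoded curve.
import torch

def pullback_curve_length(z0, z1, decoder, n_points=64):
    """Approximate the length of the straight latent segment z0 -> z1 as
    measured in data space, i.e. under the pull-back metric of `decoder`."""
    ts = torch.linspace(0.0, 1.0, n_points).unsqueeze(1)
    zs = (1 - ts) * z0 + ts * z1          # points along the latent line
    with torch.no_grad():
        xs = decoder(zs)                  # decode the whole line at once
    xs = xs.flatten(start_dim=1)          # treat decoded outputs as vectors
    return (xs[1:] - xs[:-1]).norm(dim=1).sum()   # sum of chord lengths
```

Comparing this length for the straight latent line against the length of a geodesic quantifies the deviation that paper discusses.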
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.