Lipschitz-Guided Design of Interpolation Schedules in Generative Models
- URL: http://arxiv.org/abs/2509.01629v1
- Date: Mon, 01 Sep 2025 17:16:34 GMT
- Title: Lipschitz-Guided Design of Interpolation Schedules in Generative Models
- Authors: Yifan Chen, Eric Vanden-Eijnden, Jiawei Xu
- Abstract summary: We study the design of schedules in the interpolants framework for flow and diffusion-based generative models. We show that while all scalar schedules achieve identical statistical efficiency, their numerical efficiency can differ substantially. This observation motivates focusing on the numerical properties of the resulting drift fields rather than statistical criteria for schedule design.
- Score: 21.63464119819874
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the design of interpolation schedules in the stochastic interpolants framework for flow and diffusion-based generative models. We show that while all scalar interpolation schedules achieve identical statistical efficiency under Kullback-Leibler divergence in path space after optimal diffusion coefficient tuning, their numerical efficiency can differ substantially. This observation motivates focusing on numerical properties of the resulting drift fields rather than statistical criteria for schedule design. We propose averaged squared Lipschitzness minimization as a principled criterion for numerical optimization, providing an alternative to kinetic energy minimization used in optimal transport approaches. A transfer formula is derived that enables conversion between different schedules at inference time without retraining neural networks. For Gaussian distributions, our optimized schedules achieve exponential improvements in Lipschitz constants over standard linear schedules, while for Gaussian mixtures, they reduce mode collapse in few-step sampling. We also validate our approach on high-dimensional invariant distributions from stochastic Allen-Cahn equations and Navier-Stokes equations, demonstrating robust performance improvements across resolutions.
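To make the schedule discussion concrete, below is a minimal NumPy sketch of the standard scalar-interpolant parameterization x_t = α(t)·x0 + β(t)·x1 (with α(0) = β(1) = 1 and α(1) = β(0) = 0), together with the conditional velocity that serves as the usual regression target for the drift. The specific schedules and function names here are illustrative assumptions, not the optimized schedules derived in the paper.

```python
# Minimal sketch of scalar interpolation schedules, assuming the common
# stochastic-interpolants parameterization x_t = alpha(t)*x0 + beta(t)*x1.
# Schedule choices and names below are illustrative, not the paper's optima.
import numpy as np

def linear_schedule(t):
    # standard linear schedule: alpha = 1 - t, beta = t, plus derivatives
    return 1.0 - t, t, -np.ones_like(t), np.ones_like(t)

def trig_schedule(t):
    # a smoother trigonometric alternative with the same endpoint conditions
    a, b = np.cos(0.5 * np.pi * t), np.sin(0.5 * np.pi * t)
    da, db = -0.5 * np.pi * b, 0.5 * np.pi * a
    return a, b, da, db

def interpolant_and_target(x0, x1, t, schedule):
    """Interpolant x_t and conditional velocity dx_t/dt, the usual
    regression target when learning the drift b(x, t) of the flow."""
    a, b, da, db = schedule(t)
    return a * x0 + b * x1, da * x0 + db * x1

# toy usage with one-dimensional Gaussian endpoints
rng = np.random.default_rng(0)
x0 = rng.standard_normal(1000)               # base samples
x1 = 2.0 + 0.5 * rng.standard_normal(1000)   # "data" samples
t = rng.uniform(size=1000)
x_t, v_t = interpolant_and_target(x0, x1, t, trig_schedule)
```

As for the transfer formula mentioned in the abstract: in the simplest case where two schedules differ only by a monotone time change t = φ(s), a learned drift transfers via the generic ODE identity b̃(x, s) = φ'(s) b(x, φ(s)). The paper's formula, which converts between general scalar schedules at inference time without retraining, should be consulted for the full statement.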
Related papers
- Fast Sampling for Flows and Diffusions with Lazy and Point Mass Stochastic Interpolants [5.492889521988414]
We show how to convert a sample path of a stochastic differential equation (SDE) with arbitrary diffusion coefficient under any schedule. We then extend the interpolant framework to admit a larger class of point-mass schedules.
arXiv Detail & Related papers (2026-02-03T17:48:34Z) - Self-Supervised Coarsening of Unstructured Grid with Automatic Differentiation [55.88862563823878]
In this work, we present an original algorithm to coarsen an unstructured grid based on the concepts of differentiable physics. We demonstrate the performance of the algorithm on two PDEs: a linear equation governing slightly compressible fluid flow in porous media, and the wave equation. Our results show that in the considered scenarios, we reduced the number of grid points up to 10 times while preserving the dynamics of the modeled variable at the points of interest.
arXiv Detail & Related papers (2025-07-24T11:02:13Z) - Efficient Diffusion Models for Symmetric Manifolds [25.99200001269046]
We introduce a framework for designing efficient diffusion models on $d$-dimensional symmetric manifolds. The manifold symmetries ensure the diffusion satisfies an "average-case" Lipschitz condition. Our model outperforms prior methods in training speed and improves sample quality on synthetic datasets.
arXiv Detail & Related papers (2025-05-27T18:12:29Z) - On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z) - Momentum Particle Maximum Likelihood [2.4561590439700076]
We propose an analogous dynamical-systems-inspired approach to minimizing the free energy functional.
By discretizing the system, we obtain a practical algorithm for maximum likelihood estimation (MLE) in latent variable models.
The algorithm outperforms existing particle methods in numerical experiments and compares favourably with other MLE algorithms.
arXiv Detail & Related papers (2023-12-12T14:53:18Z) - Distributed Sketching for Randomized Optimization: Exact Characterization, Concentration and Lower Bounds [54.51566432934556]
We consider distributed optimization methods for problems where forming the Hessian is computationally challenging.
We leverage randomized sketches for reducing the problem dimensions as well as preserving privacy and improving straggler resilience in asynchronous distributed systems.
arXiv Detail & Related papers (2022-03-18T05:49:13Z) - Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
arXiv Detail & Related papers (2022-02-23T06:11:49Z) - A Stochastic Newton Algorithm for Distributed Convex Optimization [62.20732134991661]
We analyze a Newton algorithm for homogeneous distributed convex optimization, where each machine can calculate gradients of the same population objective.
We show that our method can reduce the number, and frequency, of required communication rounds compared to existing methods without hurting performance.
arXiv Detail & Related papers (2021-10-07T17:51:10Z) - Sinkhorn Distributionally Robust Optimization [18.46110328123008]
Sinkhorn distance is a variant of the Wasserstein distance based on entropic regularization (a minimal sketch of the underlying iterations appears after this list). We derive a convex programming dual reformulation for general nominal distributions, transport costs, and loss functions.
arXiv Detail & Related papers (2021-09-24T12:40:48Z) - Adaptive Sampling Distributed Stochastic Variance Reduced Gradient for Heterogeneous Distributed Datasets [14.945821529085896]
We study distributed optimization algorithms for minimizing the average of heterogeneous functions, with a focus on communication efficiency.
We propose a novel adaptive sampling of machines specially catered to these settings.
We show that this sampling scheme improves the dependence of the convergence rate from the maximum Lipschitz constant to the average Lipschitz constant across machines.
arXiv Detail & Related papers (2020-02-20T01:55:52Z) - Distributed Averaging Methods for Randomized Second Order Optimization [54.51566432934556]
We consider distributed optimization problems where forming the Hessian is computationally challenging and communication is a bottleneck.
We develop unbiased parameter averaging methods for randomized second order optimization that employ sampling and sketching of the Hessian.
We also extend the framework of second order averaging methods to introduce an unbiased distributed optimization framework for heterogeneous computing systems.
arXiv Detail & Related papers (2020-02-16T09:01:18Z)
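For the Sinkhorn DRO entry above, the underlying entropic-regularized transport distance can be illustrated with the classical Sinkhorn fixed-point iterations. This is a generic sketch of the distance construction for discrete distributions, assuming uniform marginals in the toy example; it is not the paper's dual reformulation, and the function name is illustrative.

```python
# Minimal Sinkhorn sketch: entropic-regularized optimal transport between
# two discrete distributions mu, nu with ground-cost matrix C. Generic
# illustration only; not the dual reformulation derived in the paper.
import numpy as np

def sinkhorn_cost(mu, nu, C, eps=0.1, n_iters=500):
    """Transport cost <P, C> of the entropically regularized plan P,
    computed via Sinkhorn's alternating scaling iterations."""
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iters):
        v = nu / (K.T @ u)            # scale columns to match nu
        u = mu / (K @ v)              # scale rows to match mu
    P = u[:, None] * K * v[None, :]   # regularized transport plan
    return float(np.sum(P * C))

# toy usage: two uniform distributions on five points of the line
x = np.linspace(0.0, 1.0, 5)
y = np.linspace(0.2, 1.2, 5)
C = (x[:, None] - y[None, :]) ** 2    # squared-distance ground cost
mu = nu = np.full(5, 0.2)
print(sinkhorn_cost(mu, nu, C))
```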