Proximal optimal transport divergences
- URL: http://arxiv.org/abs/2505.12097v1
- Date: Sat, 17 May 2025 17:48:11 GMT
- Title: Proximal optimal transport divergences
- Authors: Ricardo Baptista, Panagiota Birmpa, Markos A. Katsoulakis, Luc Rey-Bellet, Benjamin J. Zhang
- Abstract summary: We introduce proximal optimal transport divergence, a novel discrepancy measure that interpolates between information divergences and optimal transport distances via an infimal convolution formulation. We explore its mathematical properties, including smoothness, boundedness, and computational tractability, and establish connections to primal-dual formulations and adversarial learning. Our framework generalizes existing approaches while offering new insights and computational tools for generative modeling, distributional optimization, and gradient-based learning in probability spaces.
- Score: 6.6875717609310765
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce the proximal optimal transport divergence, a novel discrepancy measure that interpolates between information divergences and optimal transport distances via an infimal convolution formulation. This divergence provides a principled foundation for optimal transport proximals and for the proximal optimization methods frequently used in generative modeling. We explore its mathematical properties, including smoothness, boundedness, and computational tractability, and establish connections to primal-dual formulations and adversarial learning. Building on the Benamou-Brenier dynamic formulation of the optimal transport cost, we also establish a dynamic formulation for proximal OT divergences. The resulting dynamic formulation is a first-order mean-field game whose optimality conditions are governed by a pair of nonlinear partial differential equations: a backward Hamilton-Jacobi equation and a forward continuity equation. Our framework generalizes existing approaches while offering new insights and computational tools for generative modeling, distributional optimization, and gradient-based learning in probability spaces.
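For orientation, here is a schematic of the infimal-convolution construction the abstract describes; the notation (divergence D, OT cost C, proximal parameter lambda) is ours and may differ from the paper's:

```latex
% Schematic infimal-convolution form of a proximal OT divergence
% (illustrative notation, not necessarily the paper's):
\[
  D_C^{\lambda}(\mu \,\|\, \nu)
    \;=\; \inf_{\rho} \Big\{ D(\rho \,\|\, \nu)
      \;+\; \tfrac{1}{\lambda}\, C(\mu, \rho) \Big\}.
\]
% In the pure OT case, the dynamic (Benamou-Brenier) optimality system is the
% standard pair of coupled PDEs of the kind the abstract alludes to:
\[
  \partial_t \phi + \tfrac{1}{2}\, \lvert \nabla \phi \rvert^2 = 0
    \quad \text{(backward Hamilton-Jacobi)}, \qquad
  \partial_t \rho + \nabla \cdot ( \rho\, \nabla \phi ) = 0
    \quad \text{(forward continuity)}.
\]
```

In this schematic, a heavy transport penalty (small lambda) pins rho to mu and recovers the information divergence D(mu || nu), while relaxing it lets mass move at OT cost, which is one way such an interpolation can arise.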
Related papers
- Divergence Minimization Preference Optimization for Diffusion Model Alignment [58.651951388346525]
Divergence Minimization Preference Optimization (DMPO) is a principled method for aligning diffusion models by minimizing the reverse KL divergence. Our results show that diffusion models fine-tuned with DMPO can consistently outperform or match existing techniques. DMPO unlocks a robust and elegant pathway for preference alignment, bridging principled theory with practical performance in diffusion models.
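For reference, the reverse KL divergence evaluates the expectation under the model being trained, which is what makes it mode-seeking (the forward direction, with the expectation under the target, is mass-covering); DMPO's exact objective may add preference-specific terms beyond this standard definition:

```latex
\[
  D_{\mathrm{KL}}(q_\theta \,\|\, p)
    = \mathbb{E}_{x \sim q_\theta}\!\left[ \log \frac{q_\theta(x)}{p(x)} \right]
  \quad \text{(reverse)}, \qquad
  D_{\mathrm{KL}}(p \,\|\, q_\theta)
    = \mathbb{E}_{x \sim p}\!\left[ \log \frac{p(x)}{q_\theta(x)} \right]
  \quad \text{(forward)}.
\]
```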
arXiv Detail & Related papers (2025-07-10T07:57:30Z)
- Preference-Guided Diffusion for Multi-Objective Offline Optimization [64.08326521234228]
We propose a preference-guided diffusion model for offline multi-objective optimization. Our guidance is a preference model trained to predict the probability that one design dominates another. Our results highlight the effectiveness of classifier-guided diffusion models in generating diverse and high-quality solutions.
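One standard way to parameterize such a dominance predictor is the Bradley-Terry model with a learned score f; whether the paper uses exactly this form is an assumption on our part:

```latex
\[
  P(x \succ y) \;=\; \sigma\big( f_\phi(x) - f_\phi(y) \big)
    \;=\; \frac{1}{1 + e^{-\left( f_\phi(x) - f_\phi(y) \right)}}.
\]
```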
arXiv Detail & Related papers (2025-03-21T16:49:38Z)
- Bisimulation Metrics are Optimal Transport Distances, and Can be Computed Efficiently [14.262270388108112]
We propose a new framework for formulating optimal transport distances between Markov chains.
We show that calculating optimal transport distances in the full space of joint distributions can be equivalently formulated as solving a linear program.
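As a concrete illustration of the LP reformulation, here is a toy sketch for optimal transport between finite distributions (not the paper's Markov-chain algorithm):

```python
# A minimal sketch: discrete optimal transport between two finite
# distributions, posed as the classical linear program
#   min <C, P>  s.t.  P 1 = mu,  P^T 1 = nu,  P >= 0.
import numpy as np
from scipy.optimize import linprog

def ot_lp(mu, nu, C):
    """Solve discrete OT as an LP; mu (m,), nu (n,), C (m, n) cost matrix."""
    m, n = C.shape
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0   # row sums equal mu
    for j in range(n):
        A_eq[m + j, j::n] = 1.0            # column sums equal nu
    b_eq = np.concatenate([mu, nu])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.fun, res.x.reshape(m, n)    # OT cost and optimal coupling

mu = np.array([0.5, 0.5])
nu = np.array([0.25, 0.75])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
cost, plan = ot_lp(mu, nu, C)
print(cost)  # 0.25 for this toy example
```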
arXiv Detail & Related papers (2024-06-06T13:25:14Z)
- Constrained Synthesis with Projected Diffusion Models [47.56192362295252]
This paper introduces an approach that equips generative diffusion processes with the ability to satisfy and certify compliance with constraints and physical principles.
The proposed method recasts the traditional generative diffusion process as a constrained distribution problem, ensuring adherence to the constraints.
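A minimal, library-free sketch of the projection idea on a toy problem; the `denoise_step` stand-in and the box constraint are our assumptions, not the paper's models:

```python
# Toy illustration: project the iterate onto the feasible set after each
# reverse-diffusion step so every sample satisfies the constraint by construction.
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^d."""
    return np.clip(x, lo, hi)

def denoise_step(x, t, rng):
    """Stand-in for a learned reverse-diffusion step (toy: shrink toward 0 + noise)."""
    return 0.9 * x + 0.1 * rng.standard_normal(x.shape) * t

def projected_sampling(d=4, steps=50, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(d)               # start from pure noise
    for k in range(steps, 0, -1):
        x = denoise_step(x, k / steps, rng)  # ordinary reverse step
        x = project_box(x)                   # project back onto the feasible set
    return x

print(projected_sampling())  # every coordinate lies in [-1, 1] by construction
```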
arXiv Detail & Related papers (2024-02-05T22:18:16Z)
- A Computational Framework for Solving Wasserstein Lagrangian Flows [48.87656245464521]
In general, the optimal density path is unknown, and solving these variational problems can be computationally challenging.
We propose a novel deep-learning-based framework that approaches all of these problems from a unified perspective.
We showcase the versatility of the proposed framework by outperforming previous approaches for single-cell trajectory inference.
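The variational problems in question have the generic Wasserstein Lagrangian flow form below; choosing L(x, v) = (1/2)|v|^2 recovers the Benamou-Brenier formulation of optimal transport:

```latex
\[
  \inf_{(\rho_t,\, v_t)} \int_0^1 \!\!\int L\big(x, v_t(x)\big)\, \rho_t(x)\, dx\, dt
  \quad \text{s.t.} \quad
  \partial_t \rho_t + \nabla \cdot (\rho_t v_t) = 0,
  \quad \rho_0 = \mu,\ \rho_1 = \nu.
\]
```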
arXiv Detail & Related papers (2023-10-16T17:59:54Z)
- Unifying Distributionally Robust Optimization via Optimal Transport Theory [13.19058156672392]
This paper introduces a novel approach that unifies these methods into a single framework based on optimal transport.
Our proposed approach makes it possible for optimal adversarial distributions to simultaneously perturb likelihood and outcomes.
The paper investigates several duality results and presents tractable reformulations that enhance the practical applicability of this unified framework.
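For context, the OT-based distributionally robust template that such unification results build on reads as follows (generic notation):

```latex
\[
  \min_{\theta} \ \sup_{Q \,:\, W_c(Q,\, P_n) \le \delta}
    \ \mathbb{E}_{X \sim Q}\big[ \ell(\theta; X) \big],
\]
```

where P_n is the empirical distribution, W_c an OT discrepancy with ground cost c, and delta the radius of the ambiguity set.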
arXiv Detail & Related papers (2023-08-10T08:17:55Z)
- Comparison of Single- and Multi-Objective Optimization Quality for Evolutionary Equation Discovery [77.34726150561087]
Evolutionary differential equation discovery has proved to be a tool for obtaining equations with fewer a priori assumptions.
The proposed comparison approach is demonstrated on classical model examples: the Burgers equation, the wave equation, and the Korteweg-de Vries equation.
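In standard one-dimensional form, these three benchmark equations are:

```latex
\[
  u_t + u\, u_x = \nu\, u_{xx} \quad \text{(Burgers)}, \qquad
  u_{tt} = c^2\, u_{xx} \quad \text{(wave)}, \qquad
  u_t + 6\, u\, u_x + u_{xxx} = 0 \quad \text{(Korteweg-de Vries)}.
\]
```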
arXiv Detail & Related papers (2023-06-29T15:37:19Z)
- Manifold Interpolating Optimal-Transport Flows for Trajectory Inference [64.94020639760026]
We present a method called Manifold Interpolating Optimal-Transport Flow (MIOFlow).
MIOFlow learns continuous population dynamics from static snapshot samples taken at sporadic timepoints.
We evaluate our method on simulated data with bifurcations and merges, as well as scRNA-seq data from embryoid body differentiation, and acute myeloid leukemia treatment.
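Schematically (our notation, not the paper's), such a snapshot-matching setup learns a flow whose pushforward marginals match the observed populations:

```latex
\[
  \frac{dx}{dt} = f_\theta(x, t), \qquad
  \min_\theta \ \sum_k \mathrm{OT}\big( \rho_{t_k}^{\theta},\ \hat\rho_{t_k} \big),
\]
```

where rho_{t_k}^theta is the population obtained by flowing the initial snapshot to time t_k and hat-rho_{t_k} is the snapshot observed there.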
arXiv Detail & Related papers (2022-06-29T22:19:03Z)
- Optimization on manifolds: A symplectic approach [127.54402681305629]
We propose a dissipative extension of Dirac's theory of constrained Hamiltonian systems as a general framework for solving optimization problems.
Our class of (accelerated) algorithms is not only simple and efficient but also applicable to a broad range of contexts.
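A standard dissipative (conformal) Hamiltonian system of the kind such frameworks build on; with H(q, p) = (1/2)|p|^2 + f(q) and friction gamma > 0, trajectories dissipate energy and settle at critical points of f:

```latex
\[
  \dot q = \frac{\partial H}{\partial p}, \qquad
  \dot p = -\frac{\partial H}{\partial q} - \gamma\, p .
\]
```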
arXiv Detail & Related papers (2021-07-23T13:43:34Z)
- On the Existence of Optimal Transport Gradient for Learning Generative Models [8.602553195689513]
Training of Wasserstein Generative Adversarial Networks (WGAN) relies on the calculation of the gradient of the optimal transport cost.
We first demonstrate that such a gradient may not be defined, which can result in numerical instabilities during gradient-based optimization.
By exploiting the discrete nature of empirical data, we formulate the gradient in a semi-discrete setting and propose an algorithm for the optimization of the generative model parameters.
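The gradient at stake is commonly written via the envelope theorem through the optimal dual (Kantorovich) potential phi; in our notation, for a generator g_theta pushing forward a latent distribution zeta:

```latex
\[
  \nabla_\theta \, \mathrm{OT}_c\big( (g_\theta)_{\#}\zeta,\ \nu \big)
    \;=\; \mathbb{E}_{z \sim \zeta}\Big[
      \big( \partial_\theta g_\theta(z) \big)^{\!\top}
      \nabla \varphi\big( g_\theta(z) \big) \Big].
\]
```

This identity breaks down precisely where phi fails to be differentiable; the semi-discrete setting makes phi explicit as a c-transform over the finitely many data points.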
arXiv Detail & Related papers (2021-02-10T16:28:20Z)
- On the Connection between Dynamical Optimal Transport and Functional Lifting [0.0]
In this work, we investigate a mathematically rigorous formulation based on embedding into the space of measures over a fixed range $\Gamma$.
By modifying the continuity equation, the approach can be extended to models with higher-order regularization.
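In functional lifting, a function u from the domain Omega to the range Gamma is replaced by a measure-valued map, making nonconvex energies linear in the lifted variable (this is the standard construction, independent of the paper's specifics):

```latex
\[
  u : \Omega \to \Gamma
  \quad \longmapsto \quad
  v(x) = \delta_{u(x)} \in \mathcal{P}(\Gamma),
\]
```

and the connection to dynamical optimal transport arises from imposing a continuity-type equation on v in the range variable, which is the modification the summary refers to.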
arXiv Detail & Related papers (2020-07-06T08:53:35Z)