Riemannian Neural Optimal Transport
- URL: http://arxiv.org/abs/2602.03566v1
- Date: Tue, 03 Feb 2026 14:09:35 GMT
- Title: Riemannian Neural Optimal Transport
- Authors: Alessandro Micheli, Yueqi Cao, Anthea Monod, Samir Bhatt
- Abstract summary: Computational optimal transport (OT) offers a principled framework for generative modeling. Neural OT methods, which use neural networks to learn an OT map from data in an amortized way, can be evaluated out of sample after training. Existing approaches are tailored to Euclidean geometry.
- Score: 40.19067516813213
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computational optimal transport (OT) offers a principled framework for generative modeling. Neural OT methods, which use neural networks to learn an OT map (or potential) from data in an amortized way, can be evaluated out of sample after training, but existing approaches are tailored to Euclidean geometry. Extending neural OT to high-dimensional Riemannian manifolds remains an open challenge. In this paper, we prove that any method for OT on manifolds that produces discrete approximations of transport maps necessarily suffers from the curse of dimensionality: achieving a fixed accuracy requires a number of parameters that grows exponentially with the manifold dimension. Motivated by this limitation, we introduce Riemannian Neural OT (RNOT) maps, which are continuous neural-network parameterizations of OT maps on manifolds that avoid discretization and incorporate geometric structure by construction. Under mild regularity assumptions, we prove that RNOT maps approximate Riemannian OT maps with sub-exponential complexity in the dimension. Experiments on synthetic and real datasets demonstrate improved scalability and competitive performance relative to discretization-based baselines.
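The abstract's central idea, a continuous neural parameterization of a transport map that respects the manifold's geometry by construction, can be sketched in a minimal way. The snippet below is an illustrative assumption, not the paper's RNOT architecture: it composes a tiny MLP that outputs tangent vectors with the exponential map of the unit sphere, so that the output T(x) = exp_x(v_theta(x)) lies on the manifold automatically, with no discretization.

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere: move from x along tangent vector v."""
    nv = np.linalg.norm(v, axis=-1, keepdims=True)
    nv = np.where(nv < 1e-12, 1e-12, nv)  # guard against zero tangent vectors
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

class TangentMLP:
    """Tiny MLP whose output is projected onto the tangent space at x.

    A stand-in for a learned vector field; weights are random, not trained.
    """
    def __init__(self, dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.1, size=(hidden, dim))
        self.b2 = np.zeros(dim)

    def __call__(self, x):
        h = np.tanh(x @ self.W1 + self.b1)
        v = h @ self.W2 + self.b2
        # Project onto the tangent space at x: remove the component along x.
        return v - np.sum(v * x, axis=-1, keepdims=True) * x

def transport_map(net, x):
    """T(x) = exp_x(v(x)): manifold-valued by construction."""
    return sphere_exp(x, net(x))
```

Because the tangent projection and exponential map are built into the parameterization, every output is a valid point on the sphere regardless of the network's weights, which is the sense in which such maps "incorporate geometric structure by construction."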
Related papers
- Neural Local Wasserstein Regression [16.52489456261937]
We study the estimation problem of distribution-on-distribution regression, where both predictors and responses are probability measures. Existing approaches typically rely on a global optimal transport map or tangent-space linearization. We propose a flexible nonparametric framework that models regression through locally defined transport maps in Wasserstein space.
arXiv Detail & Related papers (2025-11-13T21:54:18Z) - Iso-Riemannian Optimization on Learned Data Manifolds [6.345340156849189]
We introduce a principled framework for optimization on learned data manifolds using iso-Riemannian geometry. We show that our approach yields interpretable barycentres, improved clustering, and provably efficient solutions to inverse problems. These results establish that optimization under iso-Riemannian geometry can overcome distortions inherent to learned manifold mappings.
arXiv Detail & Related papers (2025-10-23T22:34:55Z) - Geodesic Calculus on Latent Spaces [4.023417156982924]
We develop tools for a discrete Riemannian calculus approximating classical geometric operators. We learn an approximate projection onto the latent manifold by minimizing a denoising objective. We evaluate our approach on various autoencoders trained on synthetic and real data.
arXiv Detail & Related papers (2025-10-10T15:25:03Z) - Geometric Operator Learning with Optimal Transport [77.16909146519227]
We propose integrating optimal transport (OT) into operator learning for partial differential equations (PDEs) on complex geometries. For 3D simulations focused on surfaces, our OT-based neural operator embeds the surface geometry into a 2D parameterized latent space. Experiments with Reynolds-averaged Navier-Stokes equations (RANS) on the ShapeNet-Car and DrivAerNet-Car datasets show that our method achieves better accuracy and also reduces computational expenses.
arXiv Detail & Related papers (2025-07-26T21:28:25Z) - Follow the Energy, Find the Path: Riemannian Metrics from Energy-Based Models [63.331590876872944]
We propose a method for deriving Riemannian metrics directly from pretrained Energy-Based Models. These metrics define spatially varying distances, enabling the computation of geodesics. We show that EBM-derived metrics consistently outperform established baselines.
arXiv Detail & Related papers (2025-05-23T12:18:08Z) - A Statistical Learning Perspective on Semi-dual Adversarial Neural Optimal Transport Solvers [70.37553728026309]
In this paper, we establish upper bounds on the generalization error of an approximate OT map recovered by the minimax quadratic OT solver. While our analysis focuses on the quadratic OT, we believe that similar bounds could be derived for the general OT case, paving a promising direction for future research.
arXiv Detail & Related papers (2025-02-03T12:37:20Z) - Designing Universal Causal Deep Learning Models: The Case of Infinite-Dimensional Dynamical Systems from Stochastic Analysis [7.373617024876726]
Several non-linear operators in analysis depend on a temporal structure which is not leveraged by contemporary neural operators. This paper introduces a deep learning model-design framework that takes suitable infinite-dimensional linear metric spaces as inputs. We show that our framework can uniformly approximate, on compact sets and across arbitrary finite-time horizons, Hölder functions or smooth trace class operators.
arXiv Detail & Related papers (2022-10-24T14:43:03Z) - GeONet: a neural operator for learning the Wasserstein geodesic [13.468026138183623]
We present GeONet, a mesh-invariant deep neural operator network that learns the non-linear mapping from the input pair of initial and terminal distributions to the Wasserstein geodesic connecting the two endpoint distributions.
We demonstrate that GeONet achieves comparable testing accuracy to the standard OT solvers on simulation examples and the MNIST dataset with considerably reduced inference-stage computational cost by orders of magnitude.
arXiv Detail & Related papers (2022-09-28T21:55:40Z) - Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z) - Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
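The min-max game described in the last blurb can be illustrated on a toy saddle-point objective. This is a hedged sketch under stated assumptions: the scalar players below stand in for the paper's neural-network players, and the objective is an arbitrary toy function, not the paper's linear operator equation.

```python
# Simultaneous gradient descent-ascent on a toy saddle objective
#   f(x, y) = x^2/2 + x*y - y^2/2   (unique saddle point at x = y = 0).
# The min player updates x by gradient descent, the max player updates y by
# gradient ascent, mirroring the two-player game in the blurb above.

def gda(x=1.0, y=1.0, lr=0.1, steps=500):
    for _ in range(steps):
        gx = x + y  # df/dx
        gy = x - y  # df/dy
        x, y = x - lr * gx, y + lr * gy  # simultaneous descent / ascent
    return x, y

x_star, y_star = gda()
```

For this objective the simultaneous updates contract toward the saddle point, but on purely bilinear objectives the same scheme cycles or diverges, which is why practical adversarial solvers need carefully chosen step sizes or regularization.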
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.