GradNetOT: Learning Optimal Transport Maps with GradNets
- URL: http://arxiv.org/abs/2507.13191v1
- Date: Thu, 17 Jul 2025 14:59:24 GMT
- Title: GradNetOT: Learning Optimal Transport Maps with GradNets
- Authors: Shreyas Chaudhari, Srinivasa Pranav, José M. F. Moura
- Abstract summary: In [arXiv:2301.10862] and [arXiv:2404.07361], we proposed Monotone Gradient Networks (mGradNets), neural networks that directly parameterize the space of monotone gradient maps. We empirically show that the structural bias of mGradNets facilitates the learning of optimal transport maps and employ our method for a robot swarm control problem.
- Score: 11.930694410868435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Monotone gradient functions play a central role in solving the Monge formulation of the optimal transport problem, which arises in modern applications ranging from fluid dynamics to robot swarm control. When the transport cost is the squared Euclidean distance, Brenier's theorem guarantees that the unique optimal map is the gradient of a convex function, namely a monotone gradient map, and it satisfies a Monge-Amp\`ere equation. In [arXiv:2301.10862] [arXiv:2404.07361], we proposed Monotone Gradient Networks (mGradNets), neural networks that directly parameterize the space of monotone gradient maps. In this work, we leverage mGradNets to directly learn the optimal transport mapping by minimizing a training loss function defined using the Monge-Amp\`ere equation. We empirically show that the structural bias of mGradNets facilitates the learning of optimal transport maps and employ our method for a robot swarm control problem.
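To make the training objective concrete, the sketch below fits a toy monotone gradient map between two 2-D Gaussians by penalizing a log-form Monge-Ampère residual, $\log\det\nabla^2 u(x) + \log p_{\mathrm{target}}(\nabla u(x)) - \log p_{\mathrm{source}}(x)$. It uses a generic hand-rolled input-convex potential as a stand-in for the paper's mGradNet architecture; the architecture, names, and Gaussian setup are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (NOT the paper's mGradNet): a monotone gradient map written as the
# gradient of a small input-convex potential, trained with a log-form Monge-Ampere
# residual on a toy Gaussian-to-Gaussian transport problem.
import torch

torch.manual_seed(0)
d, h = 2, 64

# Potential u(x) = a^T softplus(W x + b) + 0.5 ||x||^2 with a = softplus(c) >= 0,
# so u is convex and T = grad u is a monotone gradient map.
W = (0.5 * torch.randn(h, d)).requires_grad_()
b = torch.zeros(h, requires_grad=True)
c = torch.zeros(h, requires_grad=True)

def transport_map(x):                         # T(x) = grad u(x), batched: (n, d) -> (n, d)
    a = torch.nn.functional.softplus(c)
    s = torch.sigmoid(x @ W.T + b)            # (n, h)
    return (a * s) @ W + x

def hessian_logdet(x):                        # log det of Hess u(x), in closed form
    a = torch.nn.functional.softplus(c)
    s = torch.sigmoid(x @ W.T + b)
    dsig = a * s * (1 - s)                    # diagonal scaling of the rank-h term
    H = torch.einsum('nh,hi,hj->nij', dsig, W, W) + torch.eye(d)
    return torch.logdet(H)                    # (n,)

m = torch.tensor([2.0, -1.0])                 # toy setup: source N(0, I), target N(m, I)

def ma_residual(x):
    # log form of det(Hess u(x)) * p_target(grad u(x)) = p_source(x); constants cancel
    Tx = transport_map(x)
    return hessian_logdet(x) - 0.5 * ((Tx - m) ** 2).sum(-1) + 0.5 * (x ** 2).sum(-1)

opt = torch.optim.Adam([W, b, c], lr=1e-2)
for step in range(500):
    x = torch.randn(256, d)                   # samples from the source density
    loss = (ma_residual(x) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(transport_map(torch.zeros(1, d)))       # the learned map should move 0 toward m
```

Within the class of monotone gradient maps, Brenier's theorem makes the Monge-Ampère solution unique, which is what lets a residual loss of this kind target the optimal map directly; the paper's actual architecture and loss may differ in detail.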
Related papers
- Gradient Networks [11.930694410868435]
We provide a comprehensive GradNet design framework to represent convex gradients.
We show that GradNets can approximate neural gradient functions.
We also show that monotone GradNets provide efficient parameterizations and outperform existing methods.
arXiv Detail & Related papers (2024-04-10T21:36:59Z) - Normalizing flows as approximations of optimal transport maps via linear-control neural ODEs [49.1574468325115]
We consider the problem of recovering the $W_2$-optimal transport map $T$ between absolutely continuous measures $\mu,\nu\in\mathcal{P}(\mathbb{R}^n)$ as the flow of a linear-control neural ODE.
arXiv Detail & Related papers (2023-11-02T17:17:03Z) - Efficient Neural Network Approaches for Conditional Optimal Transport with Applications in Bayesian Inference [1.740133468405535]
We present two neural network approaches that approximate the solutions of conditional optimal transport problems.
arXiv Detail & Related papers (2023-10-25T20:20:09Z) - Learning Gradients of Convex Functions with Monotone Gradient Networks [5.220940151628734]
Gradients of convex functions have critical applications ranging from gradient-based optimization to optimal transport.
Recent works have explored data-driven methods for learning convex objectives, but learning their monotone gradients is seldom studied.
We show that our networks are simpler to train, learn monotone gradient fields more accurately, and use significantly fewer parameters than state-of-the-art methods.
arXiv Detail & Related papers (2023-01-25T23:04:50Z) - Universal Neural Optimal Transport [0.0]
UNOT (Universal Neural Optimal Transport) is a novel framework capable of accurately predicting (entropic) OT distances and plans between discrete measures for a given cost function.
We show that our network can be used as a state-of-the-art initialization for the Sinkhorn algorithm, with speedups of up to $7.4\times$ (a minimal Sinkhorn warm-start sketch appears after this list).
arXiv Detail & Related papers (2022-11-30T21:56:09Z) - Fast $L^2$ optimal mass transport via reduced basis methods for the Monge-Amp$\grave{\rm e}$re equation [0.0]
We propose a machine learning-like method for solving the parameterized Monge-Amp$\grave{\rm e}$re equation.
Several challenging numerical tests demonstrate the accuracy and high efficiency of our method for solving the Monge-Amp$\grave{\rm e}$re equation.
arXiv Detail & Related papers (2021-12-03T12:30:46Z) - Deep Learning Approximation of Diffeomorphisms via Linear-Control Systems [91.3755431537592]
We consider a control system of the form $\dot{x} = \sum_{i=1}^{l} F_i(x)\,u_i$, with linear dependence in the controls.
We use the corresponding flow to approximate the action of a diffeomorphism on a compact ensemble of points (a toy flow-integration sketch appears after this list).
arXiv Detail & Related papers (2021-10-24T08:57:46Z) - Learning Linearized Assignment Flows for Image Labeling [70.540936204654]
We introduce a novel algorithm for estimating optimal parameters of linearized assignment flows for image labeling.
We show how to efficiently evaluate this formula using a Krylov subspace and a low-rank approximation.
arXiv Detail & Related papers (2021-08-02T13:38:09Z) - Self Sparse Generative Adversarial Networks [73.590634413751]
Generative Adversarial Networks (GANs) are unsupervised generative models that learn the data distribution through adversarial training.
We propose a Self Sparse Generative Adversarial Network (Self-Sparse GAN) that reduces the parameter space and alleviates the zero gradient problem.
arXiv Detail & Related papers (2021-01-26T04:49:12Z) - Channel-Directed Gradients for Optimization of Convolutional Neural Networks [50.34913837546743]
We introduce optimization methods for convolutional neural networks that can be used to improve existing gradient-based optimization in terms of generalization error.
We show that defining the gradients along the output channel direction leads to a performance boost, while other directions can be detrimental.
arXiv Detail & Related papers (2020-08-25T00:44:09Z) - A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from the Fokker-Planck equation.
Compared with existing schemes, the Wasserstein gradient flow is a smoother and near-optimal numerical scheme to approximate real data densities.
arXiv Detail & Related papers (2019-10-31T02:26:20Z)
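Referenced from the Universal Neural Optimal Transport entry above: a minimal sketch of the Sinkhorn algorithm for entropic OT between discrete measures, with an optional warm start standing in for a network-predicted dual potential. The function name `sinkhorn`, the `init_g` argument, and the toy histograms are illustrative assumptions, not UNOT's implementation.

```python
# Plain Sinkhorn iterations for entropic OT, with an optional warm-started scaling
# vector; init_g plays the role of a predicted dual potential (hypothetical here).
import numpy as np

def sinkhorn(a, b, C, eps=0.05, iters=500, init_g=None):
    """a: (n,), b: (m,) histograms; C: (n, m) cost matrix."""
    K = np.exp(-C / eps)                      # Gibbs kernel
    v = np.ones_like(b) if init_g is None else np.exp(init_g / eps)
    for _ in range(iters):
        u = a / (K @ v)                       # alternating matrix-scaling updates
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]           # transport plan
    return P, (P * C).sum()                   # plan and transport cost

# toy usage: two random histograms on a 1-D grid with squared-distance cost
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
a = rng.random(50); a /= a.sum()
b = rng.random(50); b /= b.sum()
C = (x[:, None] - x[None, :]) ** 2
P, cost = sinkhorn(a, b, C)
print(cost, P.sum())                          # P.sum() should be close to 1
```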
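And, referenced from the linear-control entry above, a toy sketch that integrates a system of the form $\dot{x} = \sum_i u_i\,F_i(x)$ with forward Euler and applies the resulting flow to an ensemble of points; the vector fields, controls, and step count are made up for illustration.

```python
# Forward-Euler integration of a linear-control system applied to a point ensemble.
import numpy as np

def flow(points, fields, controls, dt=0.01):
    """points: (n, d); fields: list of callables R^d -> R^d; controls: (steps, len(fields))."""
    x = points.copy()
    for u in controls:                                      # piecewise-constant controls
        drift = sum(ui * F(x) for ui, F in zip(u, fields))  # sum_i u_i F_i(x)
        x = x + dt * drift                                  # one Euler step of the flow
    return x

fields = [lambda x: np.ones_like(x),                        # constant translation field
          lambda x: np.stack([-x[:, 1], x[:, 0]], axis=1)]  # rotation field in 2-D
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 2))
controls = np.tile([0.5, 1.0], (200, 1))                    # 200 Euler steps
out = flow(pts, fields, controls)
print(out.mean(axis=0))
```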