Manifold optimization for optimal transport
- URL: http://arxiv.org/abs/2103.00902v1
- Date: Mon, 1 Mar 2021 10:49:19 GMT
- Title: Manifold optimization for optimal transport
- Authors: Bamdev Mishra, N T V Satya Dev, Hiroyuki Kasai, and Pratik Jawanpuria
- Abstract summary: Optimal transport (OT) has recently found widespread interest in machine learning.
We discuss how to approach OT problems within the framework of manifold optimization.
We make available the Manifold optimization-based Optimal Transport, or MOT, repository with Python and Matlab codes useful for solving OT problems.
- Score: 34.88495814664577
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optimal transport (OT) has recently found widespread interest in machine
learning. It allows one to define novel distances between probability measures,
which have shown promise in several applications. In this work, we discuss how
to computationally approach OT problems within the framework of Riemannian
manifold optimization. The basis of this is the manifold of doubly stochastic
matrices (and its generalization). Even though the manifold geometry is not
new, surprisingly, its usefulness for solving OT problems has not been
considered. To this end, we specifically discuss optimization-related
ingredients that allow modeling the OT problem on smooth Riemannian manifolds
by exploiting the geometry of the search space. We also discuss extensions
where we reuse the developed optimization ingredients. We make available the
Manifold optimization-based Optimal Transport, or MOT, repository with codes
useful in solving OT problems in Python and Matlab. The codes are available at
https://github.com/SatyadevNtv/MOT.
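To make the idea concrete, here is a minimal sketch (not the MOT repository's actual API) of optimizing an OT objective over doubly stochastic matrices: a multiplicative, mirror-descent-style step on the entropically regularized cost $\langle C, X \rangle$, followed by Sinkhorn normalization playing the role of a retraction back onto the manifold. The step size, regularization weight, and iteration counts are illustrative assumptions.

```python
import numpy as np

def sinkhorn_project(X, n_iters=100):
    """Alternate row/column normalization (Sinkhorn-Knopp): maps a
    positive matrix back onto the doubly stochastic matrices, playing
    the role of a retraction in this sketch."""
    for _ in range(n_iters):
        X = X / X.sum(axis=1, keepdims=True)
        X = X / X.sum(axis=0, keepdims=True)
    return X

def ot_doubly_stochastic(C, reg=1e-2, step=0.5, n_steps=300):
    """Minimize <C, X> + reg * <X, log X> over doubly stochastic X.
    The entropy term keeps iterates in the interior of the Birkhoff
    polytope, where the manifold geometry is defined."""
    n = C.shape[0]
    X = np.full((n, n), 1.0 / n)              # uniform initial coupling
    for _ in range(n_steps):
        grad = C + reg * (np.log(X) + 1.0)    # Euclidean gradient
        X = X * np.exp(-step * grad)          # multiplicative descent step
        X = sinkhorn_project(X)               # back onto the constraint set
    return X

# toy problem: transport between two 1-D point clouds with squared cost
x, y = np.linspace(0, 1, 8), np.linspace(0, 1, 8) ** 2
C = (x[:, None] - y[None, :]) ** 2
X = ot_doubly_stochastic(C)
```

As reg approaches 0 the minimizer approaches a vertex of the Birkhoff polytope (a permutation); the entropy term keeps iterates in the interior, which is what makes the manifold viewpoint applicable.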
Related papers
- Riemannian coordinate descent algorithms on matrix manifolds [12.05722932030768]
We provide a general framework for developing computationally efficient coordinate descent (CD) algorithms on matrix manifolds.
We propose CD algorithms for various manifolds such as Stiefel, Grassmann, (generalized) hyperbolic, symplectic, and symmetric positive (semi)definite.
We analyze their convergence and complexity, and empirically illustrate their efficacy in several applications.
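As a hedged illustration of the coordinate-descent idea (not the paper's exact algorithm), the sketch below runs CD on the orthogonal group: each "coordinate" is the angle of a Givens rotation in one coordinate plane, so every update stays exactly on the manifold; a crude grid line search stands in for more careful per-coordinate solves.

```python
import numpy as np

def givens(n, i, j, theta):
    """n x n Givens rotation acting in the (i, j) coordinate plane."""
    G = np.eye(n)
    G[i, i] = G[j, j] = np.cos(theta)
    G[i, j] = -np.sin(theta)
    G[j, i] = np.sin(theta)
    return G

def cd_orthogonal(f, X, n_sweeps=10, n_angles=64):
    """Coordinate descent on the orthogonal group: sweep over (i, j)
    planes, pick the best rotation angle in each by grid search.
    Including X itself among the candidates guarantees monotone descent."""
    n = X.shape[0]
    angles = np.linspace(-np.pi, np.pi, n_angles)
    for _ in range(n_sweeps):
        for i in range(n):
            for j in range(i + 1, n):
                cands = [X] + [givens(n, i, j, t) @ X for t in angles]
                X = min(cands, key=f)
    return X

# toy use: nearest orthogonal matrix to A in Frobenius norm
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
X = cd_orthogonal(lambda M: np.linalg.norm(M - A), np.eye(4))
```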
arXiv Detail & Related papers (2024-06-04T11:37:11Z) - Anchor Space Optimal Transport: Accelerating Batch Processing of
Multiple OT Problems [13.846014191157405]
We propose a translated OT problem designated as the anchor space optimal transport (ASOT) problem.
For the proposed ASOT problem, the distributions are mapped into a shared anchor point space, which captures their potential common characteristics.
Based on the proposed ASOT, the Wasserstein distance error to the original OT problem is proven to be bounded by ground cost errors.
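The following sketch is one plausible reading of the anchor-space idea; every ingredient (nearest-anchor assignment, fixed grid anchors, the Sinkhorn solver) is an illustrative assumption rather than the paper's construction. Both point clouds are reduced to histograms over shared anchors, and one small OT problem is solved on the anchor-to-anchor cost.

```python
import numpy as np

def sinkhorn(a, b, M, reg=0.1, n_iters=300):
    """Plain Sinkhorn for entropic OT between histograms a and b."""
    K = np.exp(-M / reg)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

def anchor_space_ot(Xs, Xt, anchors, reg=0.1):
    """Reduce both point clouds to histograms over shared anchors via
    nearest-anchor assignment, then solve one small OT problem on the
    anchor-to-anchor cost (a stand-in for the learned anchor space)."""
    def hist(X):
        d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
        h = np.bincount(d2.argmin(axis=1), minlength=len(anchors)) + 1e-9
        return h / h.sum()
    M = ((anchors[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    return sinkhorn(hist(Xs), hist(Xt), M, reg)

# toy use: two 2-D clouds, shared anchors on a coarse grid
rng = np.random.default_rng(0)
Xs, Xt = rng.normal(0, 1, (200, 2)), rng.normal(1, 1, (200, 2))
anchors = np.stack(np.meshgrid(np.linspace(-2, 3, 5),
                               np.linspace(-2, 3, 5)), -1).reshape(-1, 2)
plan = anchor_space_ot(Xs, Xt, anchors)
```

Because every problem in a batch shares the same anchor cost M, the kernel exp(-M/reg) can be computed once and reused, which is the kind of reuse the batch setting exploits.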
arXiv Detail & Related papers (2023-10-24T18:55:12Z) - Landscape Surrogate: Learning Decision Losses for Mathematical
Optimization Under Partial Information [48.784330281177446]
Recent works in learning-integrated optimization have shown promise in settings where the optimization is only partially observed or where general-purpose optimizers perform poorly without expert tuning.
We propose using a smooth and learnable Landscape Surrogate as a replacement for $f \circ \mathbf{g}$.
This surrogate, learnable by neural networks, can be computed faster than the solver $\mathbf{g}$, provides dense and smooth gradients during training, can generalize to unseen optimization problems, and is efficiently learned via alternating optimization.
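A minimal sketch of the surrogate idea, with a toy piecewise-flat decision loss standing in for the real $f \circ \mathbf{g}$ (names and hyperparameters are illustrative assumptions): an MLP is fit to the true loss near the current iterate, and its smooth gradients are descended instead, in alternation.

```python
import torch
import torch.nn as nn

# hypothetical stand-in for the true decision loss f(g(y)): piecewise
# flat, so its gradients are zero almost everywhere and useless directly
def f_of_g(y):
    return (torch.round(y) - 1.5).pow(2).sum(-1)

surrogate = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt_s = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
y = torch.zeros(2, requires_grad=True)
opt_y = torch.optim.Adam([y], lr=1e-2)

for _ in range(500):                     # alternating optimization
    # (1) fit the surrogate to the true loss near the current iterate
    ys = y.detach() + 0.5 * torch.randn(128, 2)
    fit = (surrogate(ys).squeeze(-1) - f_of_g(ys)).pow(2).mean()
    opt_s.zero_grad(); fit.backward(); opt_s.step()
    # (2) descend the smooth surrogate instead of the flat true loss
    loss = surrogate(y.unsqueeze(0)).squeeze()
    opt_y.zero_grad(); loss.backward(); opt_y.step()
```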
arXiv Detail & Related papers (2023-07-18T04:29:16Z) - Building the Bridge of Schr\"odinger: A Continuous Entropic Optimal
Transport Benchmark [96.06787302688595]
We propose a novel way to create pairs of probability distributions for which the ground truth OT solution is known by construction.
We use these benchmark pairs to test how well existing neural EOT/SB solvers actually compute the EOT solution.
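For intuition, here is the classical unregularized version of such a construction (the paper's benchmark targets entropic OT/Schrödinger bridges, which this sketch does not reproduce): push the source forward through the gradient of a convex potential; by Brenier's theorem that gradient is the quadratic-cost OT map, so the ground truth is known exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2
B = rng.standard_normal((d, d))
A = B @ B.T + np.eye(d)            # symmetric positive definite

# T(x) = A x is the gradient of the convex potential 0.5 * x^T A x;
# by Brenier's theorem it is the quadratic-cost OT map from the source
# to its own pushforward, so the ground truth is known by construction.
X = rng.standard_normal((10_000, d))     # samples from the source
Y = X @ A.T                              # samples from the target T#rho
w2_sq = np.mean(np.sum((Y - X) ** 2, axis=1))  # exact-map transport cost
```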
arXiv Detail & Related papers (2023-06-16T20:03:36Z) - A Survey of Geometric Optimization for Deep Learning: From Euclidean
Space to Riemannian Manifold [7.737713458418288]
Deep Learning (DL) has achieved success in complex Artificial Intelligence (AI) tasks, but it suffers from various notorious problems.
This article presents a comprehensive survey of applying geometric optimization in DL.
It investigates the application of geometric optimization to different DL architectures across various AI tasks, e.g., convolutional neural networks, recurrent neural networks, transfer learning, and optimal transport.
arXiv Detail & Related papers (2023-02-16T10:50:15Z) - On a class of geodesically convex optimization problems solved via
Euclidean MM methods [50.428784381385164]
We show how a class of geodesically convex problems can be written as a difference of Euclidean convex functions, a structure that arises in different types of problems in statistics and machine learning.
Ultimately, we hope these results prove useful to the broader community.
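For context, the Euclidean MM step behind such methods: given $h = u - v$ with $u, v$ convex, linearizing $v$ at the current iterate yields a convex majorizer that is minimized at each step,
$$x_{k+1} = \arg\min_x \; u(x) - \langle \nabla v(x_k), x \rangle,$$
and convexity of $v$ (i.e., $v(x) \ge v(x_k) + \langle \nabla v(x_k), x - x_k \rangle$) guarantees the monotone decrease $h(x_{k+1}) \le h(x_k)$.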
arXiv Detail & Related papers (2022-06-22T23:57:40Z) - Meta Optimal Transport [24.69258558871181]
We study the use of amortized optimization to predict optimal transport maps from the input measures, which we call Meta OT.
This enables repeatedly solving similar OT problems between different measures by leveraging knowledge from past problems to rapidly predict and solve new ones.
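A hedged sketch of the amortization pattern (the architecture, sizes, and warm-start target are assumptions, not the paper's model): a network maps the input measures to an initial dual potential, and a few warm-started log-domain Sinkhorn iterations finish the job; training the predictor (e.g., regressing onto converged potentials) is omitted.

```python
import torch
import torch.nn as nn

n = 16                                   # histogram size (assumed)
meta = nn.Sequential(nn.Linear(2 * n, 128), nn.ReLU(), nn.Linear(128, n))

def sinkhorn_warm(f, a, b, M, reg=0.1, n_iters=10):
    """A few log-domain Sinkhorn iterations, warm-started at the
    predicted dual potential f; returns the transport plan."""
    for _ in range(n_iters):
        g = reg * (torch.log(b) - torch.logsumexp((f[:, None] - M) / reg, dim=0))
        f = reg * (torch.log(a) - torch.logsumexp((g[None, :] - M) / reg, dim=1))
    return torch.exp((f[:, None] + g[None, :] - M) / reg)

a = torch.softmax(torch.randn(n), dim=0)   # input measures of a new problem
b = torch.softmax(torch.randn(n), dim=0)
M = torch.rand(n, n)                       # ground cost
f0 = meta(torch.cat([a, b]))               # amortized warm start
plan = sinkhorn_warm(f0, a, b, M)
```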
arXiv Detail & Related papers (2022-06-10T17:59:07Z) - Geometry-aware Bayesian Optimization in Robotics using Riemannian
Mat\'ern Kernels [64.62221198500467]
We show how to implement geometry-aware kernels for Bayesian optimization.
This technique can be used for control parameter tuning, parametric policy adaptation, and structure design in robotics.
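To illustrate on the simplest possible manifold (the circle; the paper targets robotics-relevant spaces), a Riemannian Matérn kernel can be built from the Laplacian's spectrum rather than by naively plugging geodesic distance into the Euclidean formula, which need not be positive definite. All parameter values below are illustrative.

```python
import numpy as np

def matern_circle(t1, t2, nu=1.5, kappa=1.0, sigma2=1.0, n_terms=64):
    """Riemannian Matérn kernel on the circle via the standard spectral
    construction: Laplacian eigenvalues on S^1 are n^2, and the sin/cos
    eigenfunction pairs combine into cos(n * (t1 - t2))."""
    n = np.arange(n_terms)
    w = (2.0 * nu / kappa**2 + n**2) ** (-(nu + 0.5))
    w[1:] *= 2.0                         # each n >= 1 has a sin/cos pair
    k = np.sum(w * np.cos(n * (t1 - t2)))
    return sigma2 * k / np.sum(w)        # normalized so k(t, t) = sigma2
```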
arXiv Detail & Related papers (2021-11-02T09:47:22Z) - A Survey on Optimal Transport for Machine Learning: Theory and
Applications [1.1279808969568252]
Optimal Transport (OT) theory has seen an increasing amount of attention from the computer science community.
We present a brief introduction and history, survey previous work, and propose directions for future study.
arXiv Detail & Related papers (2021-06-03T16:10:42Z) - Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated information and is not responsible for any consequences of its use.