On the complexity of the optimal transport problem with graph-structured cost
- URL: http://arxiv.org/abs/2110.00627v1
- Date: Fri, 1 Oct 2021 19:29:59 GMT
- Title: On the complexity of the optimal transport problem with graph-structured cost
- Authors: Jiaojiao Fan, Isabel Haasler, Johan Karlsson, Yongxin Chen
- Abstract summary: Multi-marginal optimal transport (MOT) is a generalization of optimal transport to multiple marginals.
The usage of MOT has been largely impeded by its computational complexity which scales exponentially in the number of marginals.
- Score: 9.24979291231758
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-marginal optimal transport (MOT) is a generalization of optimal
transport to multiple marginals. Optimal transport has evolved into an
important tool in many machine learning applications, and its multi-marginal
extension opens up for addressing new challenges in the field of machine
learning. However, the usage of MOT has been largely impeded by its
computational complexity which scales exponentially in the number of marginals.
Fortunately, in many applications, such as barycenter or interpolation
problems, the cost function adheres to structures, which has recently been
exploited for developing efficient computational methods. In this work we
derive computational bounds for these methods. With $m$ marginal distributions
supported on $n$ points, we provide a $\tilde{\mathcal{O}}(d(G) m n^2 \epsilon^{-2})$
bound for an $\epsilon$-accuracy when the problem is associated with a tree with
diameter $d(G)$. For the special case of the Wasserstein barycenter problem,
which corresponds to a star-shaped tree, our bound aligns with the existing
complexity bound for that problem.
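The two-marginal building block underlying these methods can be illustrated with a minimal entropic (Sinkhorn) solver. This is a generic sketch of standard entropic OT between two discrete marginals, not the paper's tree-structured multi-marginal algorithm; all names and parameter values here are illustrative.

```python
import numpy as np

def sinkhorn(C, mu, nu, eps=0.1, iters=500):
    """Entropic OT between two discrete marginals via Sinkhorn iterations.

    C   : (n, n) cost matrix
    mu  : first marginal (sums to 1)
    nu  : second marginal (sums to 1)
    eps : entropic regularization strength
    """
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)               # scale to match second marginal
        u = mu / (K @ v)                 # scale to match first marginal
    return u[:, None] * K * v[None, :]   # transport plan

rng = np.random.default_rng(0)
n = 5
x, y = rng.normal(size=n), rng.normal(size=n)
C = (x[:, None] - y[None, :]) ** 2       # squared-distance cost
mu = np.ones(n) / n
nu = np.ones(n) / n
P = sinkhorn(C, mu, nu)
```

In the multi-marginal setting with tree-structured cost, the analogous scaling updates propagate along the edges of the tree, which is why the bound above depends on the tree diameter $d(G)$ rather than on all $n^m$ entries of the cost tensor.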
Related papers
- Fast and scalable Wasserstein-1 neural optimal transport solver for single-cell perturbation prediction [55.89763969583124]
Optimal transport theory provides a principled framework for constructing such mappings.
We propose a novel optimal transport solver based on Wasserstein-1.
Our experiments demonstrate that the proposed solver can mimic the $W$ OT solvers in finding a unique and monotone map on 2D datasets.
arXiv Detail & Related papers (2024-11-01T14:23:19Z) - Progressive Entropic Optimal Transport Solvers [33.821924561619895]
We propose a new class of EOT solvers (ProgOT) that can estimate both plans and transport maps.
We provide experimental evidence demonstrating that ProgOT is a faster and more robust alternative to standard solvers.
We also prove statistical consistency of our approach for estimating optimal transport maps.
arXiv Detail & Related papers (2024-06-07T16:33:08Z) - Pseudonorm Approachability and Applications to Regret Minimization [73.54127663296906]
We convert high-dimensional $\ell_\infty$-approachability problems to low-dimensional pseudonorm approachability problems.
We develop an algorithmic theory of pseudonorm approachability, analogous to previous work on approachability for $\ell_2$ and other norms.
arXiv Detail & Related papers (2023-02-03T03:19:14Z) - Optimal Gradient Sliding and its Application to Distributed Optimization
Under Similarity [121.83085611327654]
We study structured convex optimization problems with additive objective $r := p + q$, where $r$ is $\mu$-strongly convex.
We propose a method that solves the problem at the master node using communication with the agents and local gradient calls.
The proposed method is much sharper than the $\mathcal{O}(\sqrt{L_q/\mu})$ method.
arXiv Detail & Related papers (2022-05-30T14:28:02Z) - Approximative Algorithms for Multi-Marginal Optimal Transport and
Free-Support Wasserstein Barycenters [0.0]
We present two algorithms to approximate the solution of multi-marginal optimal transport (MOT) with squared Euclidean costs for $N$ discrete probability measures.
They are fast, memory-efficient and easy to implement and can be used with any sparse OT solver as a black box.
arXiv Detail & Related papers (2022-02-02T10:59:54Z) - On Multimarginal Partial Optimal Transport: Equivalent Forms and
Computational Complexity [11.280177531118206]
We study the multi-marginal partial optimal transport (POT) problem between $m$ discrete (unbalanced) measures with at most $n$ supports.
We first prove that the multi-marginal POT problem admits two equivalent forms in terms of the multi-marginal optimal transport problem, via novel extensions of the cost tensor.
We demonstrate that the ApproxMPOT algorithm can approximate the optimal value of the multi-marginal POT problem with a computational complexity upper bound of order $\tilde{\mathcal{O}}(m^3(n+1)^m/\epsilon^2)$.
arXiv Detail & Related papers (2021-08-18T06:46:59Z) - Genetic column generation: Fast computation of high-dimensional
multi-marginal optimal transport problems [0.0]
We use a genetic learning method tailor-made for MMOT, in which the dual state within column generation (CG) plays the role of an "adversary".
On a sequence of benchmark problems with up to 120 grid points and up to 30 marginals, our method always found the exact solution.
arXiv Detail & Related papers (2021-03-23T15:24:50Z) - A Riemannian Block Coordinate Descent Method for Computing the
Projection Robust Wasserstein Distance [36.97843660480747]
The Wasserstein distance has become increasingly important in machine learning and deep learning.
A recently proposed approach to alleviate the curse of dimensionality is to project the sampled data onto a lower-dimensional subspace, and then compute the Wasserstein distance between the projected data.
However, this approach requires to solve a max-min problem over the Stiefel manifold, which is very challenging in practice.
We propose a novel Riemannian block coordinate descent (RBCD) method to solve this problem, based on a novel reformulation of the regularized max-min problem over the Stiefel manifold.
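The projection idea can be conveyed in a few lines. As a hedged illustration (not the paper's RBCD algorithm), the sketch below projects both point clouds onto a single direction, computes the one-dimensional Wasserstein-1 distance in closed form via sorting, and searches over random directions instead of optimizing over the Stiefel manifold; all names here are illustrative.

```python
import numpy as np

def projected_w1(X, Y, direction):
    """1-D Wasserstein-1 distance between equal-size empirical measures
    after projecting both point clouds onto a single direction.
    In 1-D, W1 equals the mean absolute difference of sorted samples."""
    d = direction / np.linalg.norm(direction)
    return np.abs(np.sort(X @ d) - np.sort(Y @ d)).mean()

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
Y = rng.normal(size=(100, 5)) + 2.0      # shifted point cloud

# Crude stand-in for the max over projections: random direction search.
best = max(projected_w1(X, Y, v) for v in rng.normal(size=(50, 5)))
```

The paper instead optimizes a $k$-dimensional projection over the Stiefel manifold with entropic regularization; this sketch only conveys the project-then-compute-OT idea that motivates that formulation.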
arXiv Detail & Related papers (2020-12-09T17:47:56Z) - On Function Approximation in Reinforcement Learning: Optimism in the
Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the statistical complexity of the problem.
Our regret bounds are independent of the size of the state space.
arXiv Detail & Related papers (2020-11-09T18:32:22Z) - Hybrid Stochastic-Deterministic Minibatch Proximal Gradient:
Less-Than-Single-Pass Optimization with Nearly Optimal Generalization [83.80460802169999]
We show that HSDMPG can attain an $\mathcal{O}\big(1/\sqrt{tn}\big)$ error bound, which is at the order of the intrinsic excess error of a learning model.
arXiv Detail & Related papers (2020-09-18T02:18:44Z) - FANOK: Knockoffs in Linear Time [73.5154025911318]
We describe a series of algorithms that efficiently implement Gaussian model-X knockoffs to control the false discovery rate on large scale feature selection problems.
We test our methods on problems with $p$ as large as $500,000$.
arXiv Detail & Related papers (2020-06-15T21:55:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.