Unbalanced Low-rank Optimal Transport Solvers
- URL: http://arxiv.org/abs/2305.19727v1
- Date: Wed, 31 May 2023 10:39:51 GMT
- Title: Unbalanced Low-rank Optimal Transport Solvers
- Authors: Meyer Scetbon, Michal Klein, Giovanni Palla, Marco Cuturi
- Abstract summary: We propose algorithms to implement extensions for the linear OT problem and its Fused-Gromov-Wasserstein generalization.
The goal of this paper is to merge these two strains, to achieve the promise of \textit{both} versatile/scalable unbalanced/low-rank OT solvers.
- Score: 38.79369155558385
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The relevance of optimal transport methods to machine learning has long been
hindered by two salient limitations. First, the $O(n^3)$ computational cost of
standard sample-based solvers (when used on batches of $n$ samples) is
prohibitive. Second, the mass conservation constraint makes OT solvers too
rigid in practice: because they must match \textit{all} points from both
measures, their output can be heavily influenced by outliers. A flurry of
recent works in OT has addressed these computational and modelling limitations,
but has resulted in two separate strains of methods: While the computational
outlook was much improved by entropic regularization, more recent $O(n)$
linear-time \textit{low-rank} solvers hold the promise to scale up OT further.
On the other hand, modelling rigidities have been eased owing to unbalanced
variants of OT, that rely on penalization terms to promote, rather than impose,
mass conservation. The goal of this paper is to merge these two strains, to
achieve the promise of \textit{both} versatile/scalable unbalanced/low-rank OT
solvers. We propose custom algorithms to implement these extensions for the
linear OT problem and its Fused-Gromov-Wasserstein generalization, and
demonstrate their practical relevance to challenging spatial transcriptomics
matching problems.
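Since the abstract contrasts hard mass conservation with penalized marginals, here is a minimal NumPy sketch of an entropic unbalanced Sinkhorn iteration (a standard scaling scheme, not the paper's low-rank solver; all names, data and parameter values are illustrative):

```python
import numpy as np

def unbalanced_sinkhorn(C, a, b, eps=0.5, rho_a=1.0, rho_b=1.0, n_iters=300):
    """Entropic unbalanced OT via Sinkhorn-like scaling iterations.

    Hard marginal constraints are replaced by KL penalties of strength
    rho_a, rho_b; taking rho -> infinity recovers balanced Sinkhorn.
    """
    K = np.exp(-C / eps)                     # Gibbs kernel
    ta = rho_a / (rho_a + eps)               # damping exponents induced
    tb = rho_b / (rho_b + eps)               # by the KL marginal penalties
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iters):
        u = (a / (K @ v)) ** ta
        v = (b / (K.T @ u)) ** tb
    return u[:, None] * K * v[None, :]       # plan P = diag(u) K diag(v)

# Toy example: the outlier in x keeps almost no mass instead of distorting the plan.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 20), [6.0]])   # 21 points, one outlier
y = rng.normal(0.0, 1.0, 20)
C = (x[:, None] - y[None, :]) ** 2
a = np.full(21, 1 / 21)
b = np.full(20, 1 / 20)
P = unbalanced_sinkhorn(C, a, b)
print(P.sum(axis=1)[-1], a[-1])  # transported outlier mass << its input mass
```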
Related papers
- Robust Barycenter Estimation using Semi-Unbalanced Neural Optimal Transport [84.51977664336056]
We propose a novel, scalable approach for estimating the \textit{robust} continuous barycenter.
Our method is framed as a $\min$-$\max$ optimization problem and is adaptable to \textit{general} cost functions.
arXiv Detail & Related papers (2024-10-04T23:27:33Z)
- Unbalanced Optimal Transport meets Sliced-Wasserstein [11.44982599214965]
We propose two new loss functions based on the idea of slicing unbalanced OT, and study their induced topology and statistical properties.
We show that the resulting methodology is modular as it encompasses and extends prior related work.
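To make the slicing idea concrete, here is a minimal sketch of the balanced sliced-Wasserstein estimator that this line of work generalizes; the paper's contribution replaces the 1D balanced step with 1D unbalanced OT, which is not reproduced here (all names and data are illustrative):

```python
import numpy as np

def sliced_wasserstein2(x, y, n_proj=100, seed=0):
    """Monte-Carlo sliced 2-Wasserstein between equal-size point clouds.

    Projects both clouds onto random unit directions and solves each 1D OT
    problem by sorting. The slicing framework summarized above swaps this
    balanced 1D step for a 1D *unbalanced* OT solver.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    theta = rng.normal(size=(n_proj, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # unit directions
    xp, yp = x @ theta.T, y @ theta.T                      # (n, n_proj) projections
    xs, ys = np.sort(xp, axis=0), np.sort(yp, axis=0)      # 1D OT = sorting
    return np.sqrt(np.mean((xs - ys) ** 2))

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=(200, 5))
y = rng.normal(1.0, 1.0, size=(200, 5))
print(sliced_wasserstein2(x, y))  # grows with the mean shift between the clouds
```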
arXiv Detail & Related papers (2023-06-12T15:15:00Z)
- Robust computation of optimal transport by $\beta$-potential regularization [79.24513412588745]
Optimal transport (OT) has become a widely used tool in the machine learning field to measure the discrepancy between probability distributions.
We propose regularizing OT with the $\beta$-potential term associated with the so-called $\beta$-divergence.
We experimentally demonstrate that the transport matrix computed with our algorithm helps estimate a probability distribution robustly even in the presence of outliers.
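For orientation, the $\beta$-divergence alluded to here is commonly written as follows (a standard form; the paper's exact potential term and normalization may differ):

```latex
D_{\beta}(p \,\|\, q) \;=\; \frac{1}{\beta(\beta - 1)} \sum_i
\Bigl( p_i^{\beta} + (\beta - 1)\, q_i^{\beta} - \beta\, p_i\, q_i^{\beta - 1} \Bigr),
\qquad \beta \notin \{0, 1\},
```

with the KL divergence recovered in the limit $\beta \to 1$; values of $\beta > 1$ downweight the influence of outliers.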
arXiv Detail & Related papers (2022-12-26T18:37:28Z)
- Unbalanced Optimal Transport, from Theory to Numerics [0.0]
The main motivation for this review is to explain how unbalanced OT, entropic regularization and Gromov-Wasserstein (GW) can work hand-in-hand to turn OT into efficient geometric loss functions for data sciences.
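Concretely, the entropic unbalanced problem reviewed there is typically written with KL penalties replacing the hard marginal constraints (a standard formulation; notation and normalizations vary across papers):

```latex
\mathrm{UOT}_{\varepsilon}(a, b) \;=\; \min_{P \ge 0}\;
\langle C, P \rangle
\;+\; \rho_1\, \mathrm{KL}(P \mathbf{1} \,\|\, a)
\;+\; \rho_2\, \mathrm{KL}(P^{\top} \mathbf{1} \,\|\, b)
\;+\; \varepsilon\, \mathrm{KL}(P \,\|\, a \otimes b),
```

where balanced entropic OT is recovered as $\rho_1, \rho_2 \to \infty$, and the unregularized problem as $\varepsilon \to 0$.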
arXiv Detail & Related papers (2022-11-16T09:02:52Z)
- Sparsity-Constrained Optimal Transport [27.76137474217754]
Regularized optimal transport is now increasingly used as a loss or as a matching layer in neural networks.
We propose a new approach for OT with explicit cardinality constraints on the transportation plan.
Our method can be thought of as a middle ground between unregularized OT (recovered in the limit case $k=1$) and quadratically-regularized OT (recovered when $k$ is large enough).
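Schematically, the cardinality-constrained problem reads as follows (a simplified form; the paper enforces sparsity through a relaxed dual formulation rather than this hard combinatorial constraint):

```latex
\min_{\substack{P \ge 0,\; P \mathbf{1} = a,\; P^{\top} \mathbf{1} = b}}
\;\langle C, P \rangle
\quad \text{s.t.} \quad \|P_{:, j}\|_0 \le k \;\; \text{for all columns } j,
```

where $\|\cdot\|_0$ counts nonzero entries, so each target point receives mass from at most $k$ sources.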
arXiv Detail & Related papers (2022-09-30T13:39:47Z)
- Kantorovich Strikes Back! Wasserstein GANs are not Optimal Transport? [138.1080446991979]
Wasserstein Generative Adversarial Networks (WGANs) are popular generative models built on the theory of Optimal Transport (OT) and the Kantorovich duality.
Despite the success of WGANs, it is still unclear how well the underlying OT dual solvers approximate the OT cost (Wasserstein-1 distance, $\mathbb{W}_1$) and the OT gradient needed to update the generator.
We construct 1-Lipschitz functions and use them to build ray monotone transport plans. This strategy yields pairs of continuous benchmark distributions with the analytically known OT plan, OT cost and OT gradient.
arXiv Detail & Related papers (2022-06-15T19:07:46Z)
- Low-rank Optimal Transport: Approximation, Statistics and Debiasing [51.50788603386766]
The low-rank optimal transport (LOT) approach advocated in \cite{scetbon2021lowrank} is seen as a legitimate contender to entropic regularization when compared on properties of interest.
We target each of these areas in this paper in order to cement the impact of low-rank approaches in computational OT.
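To see where the linear-time claim comes from, here is a minimal NumPy sketch of evaluating the OT cost of a rank-$r$ plan without materializing any $n \times m$ matrix (the factors below are random placeholders; LOT solvers optimize them under marginal constraints):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, r, d = 1000, 800, 8, 3

# A rank-r coupling is stored as P = Q diag(1/g) R^T; the n x m plan is
# never formed explicitly.
g = np.full(r, 1.0 / r)                      # shared inner marginal
Q = rng.random((n, r)); Q *= g / Q.sum(0)    # columns of Q sum to g
R = rng.random((m, r)); R *= g / R.sum(0)    # columns of R sum to g

x = rng.normal(size=(n, d)); y = rng.normal(size=(m, d))
sx, sy = (x ** 2).sum(1), (y ** 2).sum(1)

# The squared-Euclidean cost C = sx 1^T + 1 sy^T - 2 x y^T is itself
# low-rank, so <C, P> splits into factor-level products:
row_mass = Q @ (R.sum(0) / g)                # P 1, computed in O(n r)
col_mass = R @ (Q.sum(0) / g)                # P^T 1, computed in O(m r)
cross = ((Q.T @ x) * (R.T @ y) / g[:, None]).sum()   # <x y^T, P>
cost = sx @ row_mass + sy @ col_mass - 2 * cross
print(cost)  # O((n + m) r d) work and memory instead of O(n m)
```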
arXiv Detail & Related papers (2022-05-24T20:51:37Z)
- Lazy Lagrangians with Predictions for Online Learning [24.18464455081512]
We consider the general problem of online convex optimization with time-varying additive constraints.
A novel primal-dual algorithm is designed by combining a Follow-The-Regularized-Leader iteration with prediction-adaptive dynamic steps.
Our work extends the FTRL framework for this constrained OCO setting and outperforms the respective state-of-the-art greedy-based solutions.
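This entry concerns constrained online convex optimization rather than OT; for orientation, here is a bare linearized FTRL step with a quadratic regularizer (the paper's primal-dual algorithm adds Lagrange multipliers for the time-varying constraints and prediction-adaptive steps, both omitted in this sketch; all names are illustrative):

```python
import numpy as np

def ftrl_quadratic(grads, eta=0.1, radius=1.0):
    """Linearized Follow-The-Regularized-Leader with an L2 regularizer.

    Plays x = argmin <sum of past gradients, x> + ||x||^2 / (2 eta),
    projected onto an L2 ball of the given radius.
    """
    g_sum = np.zeros_like(grads[0])
    iterates = []
    for g in grads:
        g_sum += g
        x = -eta * g_sum                  # unconstrained FTRL minimizer
        norm = np.linalg.norm(x)
        if norm > radius:                 # Euclidean projection onto the ball
            x *= radius / norm
        iterates.append(x)
    return iterates

rng = np.random.default_rng(3)
grads = [rng.normal(size=2) + np.array([1.0, -1.0]) for _ in range(100)]
print(ftrl_quadratic(grads)[-1])  # drifts against the mean gradient, clipped to the ball
```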
arXiv Detail & Related papers (2022-01-08T21:49:10Z)
- Robust Optimal Transport with Applications in Generative Modeling and Domain Adaptation [120.69747175899421]
Optimal Transport (OT) distances such as Wasserstein have been used in several areas such as GANs and domain adaptation.
We propose a computationally-efficient dual form of the robust OT optimization that is amenable to modern deep learning applications.
Our approach can train state-of-the-art GAN models on noisy datasets corrupted with outlier distributions.
arXiv Detail & Related papers (2020-10-12T17:13:40Z)
- Regularized Optimal Transport is Ground Cost Adversarial [34.81915836064636]
We show that regularization of the optimal transport problem can be interpreted as being adversarial with respect to the ground cost.
This gives access to a robust dissimilarity measure on the ground space, which can in turn be used in other applications.
arXiv Detail & Related papers (2020-02-10T17:28:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.