Scalable Computation of Monge Maps with General Costs
- URL: http://arxiv.org/abs/2106.03812v1
- Date: Mon, 7 Jun 2021 17:23:24 GMT
- Title: Scalable Computation of Monge Maps with General Costs
- Authors: Jiaojiao Fan, Shu Liu, Shaojun Ma, Yongxin Chen, Haomin Zhou
- Abstract summary: The Monge map is the optimal transport map between two probability distributions.
We present a scalable algorithm for computing the Monge map between two probability distributions.
- Score: 12.273462158073302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Monge map refers to the optimal transport map between two probability
distributions and provides a principled approach to transforming one distribution
into another. Despite rapid developments in numerical methods for optimal
transport problems, computing Monge maps remains challenging, especially in
high dimensions. In this paper, we present a scalable algorithm for computing
the Monge map between two probability distributions. Our algorithm is based on
a weak form of the optimal transport problem, so it requires only samples from
the marginals rather than their analytic expressions, and it can accommodate
optimal transport between two distributions of different dimensions. Unlike
most existing sample-based methods for estimating Monge maps, which are
restricted to quadratic costs, our algorithm handles general cost functions.
The performance of our algorithm is demonstrated through a series of
experiments with both synthetic and realistic data.
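The weak (saddle-point) formulation lends itself to a simple alternating training scheme. Below is a minimal PyTorch sketch in the spirit of that max-min recipe, $\max_f \mathbb{E}_{y\sim\nu}[f(y)] + \min_T \mathbb{E}_{x\sim\mu}[c(x,T(x)) - f(T(x))]$; the network sizes, the quadratic cost, the toy Gaussian samplers, and the optimizer settings are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of the weak-dual max-min recipe for Monge map estimation:
#   max_f  E_{y~nu}[f(y)] + min_T E_{x~mu}[ c(x, T(x)) - f(T(x)) ]
# All architecture and training choices below are illustrative assumptions.
import torch
import torch.nn as nn

def mlp(d_in, d_out, width=128):
    return nn.Sequential(nn.Linear(d_in, width), nn.ReLU(),
                         nn.Linear(width, width), nn.ReLU(),
                         nn.Linear(width, d_out))

d = 2
T = mlp(d, d)        # candidate Monge map T: R^d -> R^d
f = mlp(d, 1)        # Kantorovich potential on the target side
opt_T = torch.optim.Adam(T.parameters(), lr=1e-4)
opt_f = torch.optim.Adam(f.parameters(), lr=1e-4)

def cost(x, y):      # any differentiable cost works; quadratic shown here
    return 0.5 * ((x - y) ** 2).sum(dim=1)

def sample_mu(n):    # source samples (placeholder: standard Gaussian)
    return torch.randn(n, d)

def sample_nu(n):    # target samples (placeholder: shifted Gaussian)
    return torch.randn(n, d) + 3.0

for step in range(10000):
    x, y = sample_mu(256), sample_nu(256)
    # inner minimization over the map T (several steps per outer step)
    for _ in range(5):
        tx = T(x)
        loss_T = (cost(x, tx) - f(tx).squeeze(1)).mean()
        opt_T.zero_grad(); loss_T.backward(); opt_T.step()
    # outer maximization over the potential f (ascend E_nu[f] - E_mu[f(T(x))])
    tx = T(x).detach()
    loss_f = -(f(y).mean() - f(tx).mean())
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
```

Taking several inner steps on the map $T$ per update of the potential $f$ is a common heuristic for stabilizing this kind of saddle-point training; any differentiable cost, including one between spaces of different dimensions, can be substituted for `cost`.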
Related papers
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
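For reference, samplers in this family build on the standard consensus-ADMM updates for $\min_x \sum_{i=1}^N f_i(x)$ (our notation; this shows the underlying optimization template, not the paper's sampling scheme):

$$x_i^{k+1} = \arg\min_{x_i}\; f_i(x_i) + \tfrac{\rho}{2}\|x_i - z^k + u_i^k\|^2, \qquad z^{k+1} = \frac{1}{N}\sum_{i=1}^N \bigl(x_i^{k+1} + u_i^k\bigr), \qquad u_i^{k+1} = u_i^k + x_i^{k+1} - z^{k+1},$$

where each worker $i$ holds its local variable $x_i$ and dual $u_i$, and only the consensus variable $z$ is communicated.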
arXiv Detail & Related papers (2024-01-29T02:08:40Z) - Flow-based Distributionally Robust Optimization [23.232731771848883]
We present a framework, called $\texttt{FlowDRO}$, for solving flow-based distributionally robust optimization (DRO) problems with Wasserstein uncertainty sets.
We aim to find the continuous worst-case distribution (also called the least favorable distribution, LFD) and sample from it.
We demonstrate its usage in adversarial learning, distributionally robust hypothesis testing, and a new mechanism for data-driven distribution perturbation differential privacy.
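A worked statement of the Wasserstein DRO problem that such frameworks target (the standard form; the flow parameterization of the worst case is not shown here):

$$\min_{\theta}\; \max_{P:\, \mathcal{W}(P, P_0) \le \rho}\; \mathbb{E}_{x \sim P}\bigl[\ell(\theta; x)\bigr],$$

where $P_0$ is the nominal data distribution, $\rho$ is the radius of the Wasserstein uncertainty set, and the inner maximizer is the least favorable distribution.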
arXiv Detail & Related papers (2023-10-30T03:53:31Z) - Efficient Neural Network Approaches for Conditional Optimal Transport with Applications in Bayesian Inference [1.740133468405535]
We present two neural network approaches that approximate the solutions of static and conditional optimal transport (COT) problems.
We demonstrate both algorithms, comparing them with competing state-of-the-art approaches, using benchmark datasets and simulation-based inverse problems.
arXiv Detail & Related papers (2023-10-25T20:20:09Z) - Generalized Schrödinger Bridge Matching [54.171931505066]
The generalized Schrödinger bridge (GSB) problem setup is prevalent in many scientific areas, both within and beyond machine learning.
We propose Generalized Schrödinger Bridge Matching (GSBM), a new matching algorithm inspired by recent advances.
We show that such a generalization can be cast as solving conditional optimal control, for which variational approximations can be used.
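For reference, the GSB problem augments the classical Schrödinger bridge control objective with a state cost $V$ (a standard formulation; the notation here is ours):

$$\min_{u}\; \mathbb{E}\left[\int_0^1 \tfrac{1}{2}\|u(X_t, t)\|^2 + V(X_t, t)\, dt\right] \quad \text{s.t.} \quad dX_t = u(X_t, t)\,dt + \sigma\, dW_t, \quad X_0 \sim \mu_0,\; X_1 \sim \mu_1.$$

Setting $V \equiv 0$ recovers the classical Schrödinger bridge.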
arXiv Detail & Related papers (2023-10-03T17:42:11Z) - Semi-supervised Learning of Pushforwards For Domain Translation & Adaptation [3.800498098285222]
Given two probability densities on related data spaces, we seek a map pushing one density to the other.
For maps to have utility across a broad application space, the map must be applicable to out-of-sample data points.
We introduce a novel pushforward map learning algorithm that utilizes normalizing flows to parameterize the map.
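To make the parameterization concrete, here is a minimal PyTorch sketch of a single affine-coupling block (RealNVP-style), the kind of invertible building block that normalizing flows stack to represent a pushforward map; it shows only the parameterization, not the paper's semi-supervised training objective.

```python
# A single affine-coupling block: an invertible map usable as a pushforward.
# Illustrative parameterization only; not the paper's training procedure.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, d, width=64):
        super().__init__()
        self.d1 = d // 2                       # dimensions passed through
        self.net = nn.Sequential(nn.Linear(self.d1, width), nn.ReLU(),
                                 nn.Linear(width, 2 * (d - self.d1)))

    def forward(self, x):                      # pushforward direction
        x1, x2 = x[:, :self.d1], x[:, self.d1:]
        s, t = self.net(x1).chunk(2, dim=1)    # scale and shift from x1
        return torch.cat([x1, x2 * torch.exp(s) + t], dim=1)

    def inverse(self, y):                      # exact inverse, by construction
        y1, y2 = y[:, :self.d1], y[:, self.d1:]
        s, t = self.net(y1).chunk(2, dim=1)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=1)

flow = AffineCoupling(d=4)
x = torch.randn(8, 4)
assert torch.allclose(flow.inverse(flow(x)), x, atol=1e-5)  # invertibility
```

Invertibility comes for free from the coupling structure, so the learned pushforward can be applied, and inverted, on out-of-sample points.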
arXiv Detail & Related papers (2023-04-18T00:35:32Z) - Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
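For context, entropic regularization is the classical route to fast OT solvers; a minimal log-domain Sinkhorn iteration (the textbook baseline that such first-order methods refine, not this paper's extragradient algorithm) looks like this:

```python
# Minimal log-domain Sinkhorn iteration for entropy-regularized discrete OT.
# Textbook baseline only; not the paper's extragradient method.
import numpy as np
from scipy.special import logsumexp

def sinkhorn(C, a, b, eps=0.05, iters=500):
    """C: (n, m) cost matrix; a: (n,) and b: (m,) marginals summing to 1."""
    f, g = np.zeros(len(a)), np.zeros(len(b))
    log_a, log_b = np.log(a), np.log(b)
    for _ in range(iters):
        # alternate exact coordinate updates of the entropic dual potentials
        f = -eps * logsumexp(log_b[None, :] + (g[None, :] - C) / eps, axis=1)
        g = -eps * logsumexp(log_a[:, None] + (f[:, None] - C) / eps, axis=0)
    # recover the transport plan from the dual potentials
    P = a[:, None] * b[None, :] * np.exp((f[:, None] + g[None, :] - C) / eps)
    return P, (P * C).sum()                    # plan and its transport cost

# Example: OT between two small point clouds on the real line
x, y = np.sort(np.random.rand(5)), np.sort(np.random.rand(6))
C = (x[:, None] - y[None, :]) ** 2
P, cost = sinkhorn(C, np.full(5, 1 / 5), np.full(6, 1 / 6))
```

Smaller $\varepsilon$ gives a tighter approximation of the unregularized OT cost at the price of slower convergence.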
arXiv Detail & Related papers (2023-01-30T15:46:39Z) - Minimax estimation of discontinuous optimal transport maps: The semi-discrete case [14.333765302506658]
We consider the problem of estimating the optimal transport map between two probability distributions, $P$ and $Q$ in $\mathbb{R}^d$.
We show that an estimator based on entropic optimal transport converges at the minimax-optimal rate $n^{-1/2}$, independent of dimension.
arXiv Detail & Related papers (2023-01-26T18:41:38Z) - Unpaired Image Super-Resolution with Optimal Transport Maps [128.1189695209663]
Real-world image super-resolution (SR) tasks often lack paired datasets, limiting the application of supervised techniques.
We propose an algorithm for unpaired SR which learns an unbiased OT map for the perceptual transport cost.
Our algorithm provides nearly state-of-the-art performance on the large-scale unpaired AIM-19 dataset.
arXiv Detail & Related papers (2022-02-02T16:21:20Z) - Near-optimal estimation of smooth transport maps with kernel sums-of-squares [81.02564078640275]
Under smoothness conditions, the squared Wasserstein distance between two distributions can be efficiently computed with appealing statistical error upper bounds.
The object of interest for applications such as generative modeling is the underlying optimal transport map.
We propose the first tractable algorithm for which the statistical $L^2$ error on the maps nearly matches the existing minimax lower bounds for smooth map estimation.
arXiv Detail & Related papers (2021-12-03T13:45:36Z) - Distributed Averaging Methods for Randomized Second Order Optimization [54.51566432934556]
We consider distributed optimization problems where forming the Hessian is computationally challenging and communication is a bottleneck.
We develop unbiased parameter averaging methods for randomized second order optimization that employ sampling and sketching of the Hessian.
We also extend the framework of second order averaging methods to introduce an unbiased distributed optimization framework for heterogeneous computing systems.
arXiv Detail & Related papers (2020-02-16T09:01:18Z)
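A minimal NumPy sketch of the averaging idea for a least-squares problem follows; each simulated worker solves a Newton system with a Gaussian-sketched Hessian and the driver averages the steps (the paper's unbiasedness corrections are omitted, so this illustrates the setup only).

```python
# Averaging sketched Newton steps for min_x 0.5 * ||A x - b||^2, H = A^T A.
# Each simulated worker sketches the Hessian and solves a local Newton system;
# the driver averages the steps. Debiasing corrections are omitted here.
import numpy as np

rng = np.random.default_rng(0)
n, d, m, q = 2000, 50, 200, 8          # rows, dims, sketch size, workers
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

x = np.zeros(d)
for _ in range(20):
    grad = A.T @ (A @ x - b)           # exact gradient (cheap to communicate)
    steps = []
    for _ in range(q):                 # one sketched Newton step per worker
        S = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sketch
        SA = S @ A
        H_k = SA.T @ SA                # sketched Hessian approximation
        steps.append(np.linalg.solve(H_k, grad))
    x -= np.mean(steps, axis=0)        # average the local Newton steps
print(np.linalg.norm(A.T @ (A @ x - b)))   # gradient norm after 20 iterations
```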
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and accepts no responsibility for any consequences arising from its use.