A new method for determining Wasserstein 1 optimal transport maps from
Kantorovich potentials, with deep learning applications
- URL: http://arxiv.org/abs/2211.00820v1
- Date: Wed, 2 Nov 2022 01:54:09 GMT
- Title: A new method for determining Wasserstein 1 optimal transport maps from
Kantorovich potentials, with deep learning applications
- Authors: Tristan Milne, Étienne Bilocq, Adrian Nachman
- Abstract summary: Wasserstein 1 optimal transport maps provide a natural correspondence between points from two probability distributions.
We present an approach towards computing Wasserstein 1 optimal transport maps that relies only on Kantorovich potentials.
- Score: 3.867363075280544
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Wasserstein 1 optimal transport maps provide a natural correspondence between
points from two probability distributions, $\mu$ and $\nu$, which is useful in
many applications. Available algorithms for computing these maps do not appear
to scale well to high dimensions. In deep learning applications, efficient
algorithms have been developed for approximating solutions of the dual problem,
known as Kantorovich potentials, using neural networks (e.g. [Gulrajani et al.,
2017]). Importantly, such algorithms work well in high dimensions. In this
paper we present an approach towards computing Wasserstein 1 optimal transport
maps that relies only on Kantorovich potentials. In general, a Wasserstein 1
optimal transport map is not unique and is not computable from a potential
alone. Our main result is to prove that if $\mu$ has a density and $\nu$ is
supported on a submanifold of codimension at least 2, an optimal transport map
is unique and can be written explicitly in terms of a potential. These
assumptions are natural in many image processing contexts and other
applications. When the Kantorovich potential is only known approximately, our
result motivates an iterative procedure wherein data is moved in optimal
directions and with the correct average displacement. Since this provides an
approach for transforming one distribution to another, it can be used as a
multipurpose algorithm for various transport problems; we demonstrate through
several proof-of-concept experiments that this algorithm successfully performs
various imaging tasks, such as denoising, generation, translation, and
deblurring, which normally require specialized techniques.
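The iterative procedure can be illustrated with a short sketch. The code below is a minimal illustration under assumed ingredients, not the authors' implementation: the placeholder network `potential` is untrained, and the step size and step count are hypothetical values standing in for the paper's average-displacement rule.

```python
import torch

# Placeholder 1-Lipschitz potential; in practice it would be trained with a
# WGAN-GP-style dual objective (Gulrajani et al., 2017). Untrained here.
potential = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)

def transport_step(x, potential, step_size):
    """Move samples a small step along -grad(potential), the optimal
    direction for Wasserstein 1 transport."""
    x = x.detach().requires_grad_(True)
    grad, = torch.autograd.grad(potential(x).sum(), x)
    # An exact Kantorovich potential has |grad| = 1 a.e.; normalize to be safe.
    direction = grad / (grad.norm(dim=1, keepdim=True) + 1e-12)
    return (x - step_size * direction).detach()

x = torch.randn(256, 2)      # samples from mu
n_steps, eta = 10, 0.05      # hypothetical schedule; the paper ties the
for _ in range(n_steps):     # displacement size to the estimated W1 cost
    x = transport_step(x, potential, eta)
```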
Related papers
- Sequential transport maps using SoS density estimation and $\alpha$-divergences [0.5999777817331317]
Transport-based density estimation methods are receiving growing interest because of their ability to efficiently generate samples from the approximated density.
We provide a new convergence analysis of sequential transport maps based on information-geometric properties of $\alpha$-divergences.
We numerically demonstrate our methods on Bayesian inference problems and unsupervised learning tasks.
arXiv Detail & Related papers (2024-02-27T23:52:58Z)
- Alignment of Density Maps in Wasserstein Distance [8.140400570642438]
We propose an algorithm for aligning three-dimensional objects when represented as density maps, motivated by applications in cryogenic electron microscopy.
The algorithm is based on minimizing the 1-Wasserstein distance between the density maps after a rigid transformation; a toy two-dimensional sketch of this idea follows below.
arXiv Detail & Related papers (2023-05-21T01:13:43Z)
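As a rough, hedged illustration of this idea (not the paper's 3-D density-map algorithm), the toy sketch below grid-searches a planar rotation angle and scores each candidate by the exact 1-Wasserstein cost between equal-size point clouds, computed with a linear assignment; all sizes and the angle grid are arbitrary.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def w1(a, b):
    """Exact W1 between two equal-size uniform point clouds."""
    cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

rng = np.random.default_rng(0)
cloud = rng.normal(size=(100, 2))
target = cloud @ rot(0.7).T                  # target = cloud rotated by 0.7 rad

angles = np.linspace(0.0, np.pi, 60)
best = min(angles, key=lambda t: w1(cloud @ rot(t).T, target))
print(best)                                  # grid angle closest to 0.7
```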
- Semi-supervised Learning of Pushforwards For Domain Translation & Adaptation [3.800498098285222]
Given two probability densities on related data spaces, we seek a map pushing one density to the other.
For maps to have utility in a broad application space, the map must be available to apply on out-of-sample data points.
We introduce a novel pushforward map learning algorithm that utilizes normalizing flows to parameterize the map.
arXiv Detail & Related papers (2023-04-18T00:35:32Z)
- Linearized Wasserstein dimensionality reduction with approximation guarantees [65.16758672591365]
LOT Wassmap is a computationally feasible algorithm to uncover low-dimensional structures in the Wasserstein space.
We show that LOT Wassmap attains correct embeddings and that the quality improves with increased sample size.
We also show how LOT Wassmap significantly reduces the computational cost compared to algorithms that depend on pairwise distance computations; a toy sketch of the underlying linearization idea follows below.
arXiv Detail & Related papers (2023-02-14T22:12:16Z)
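To convey the linearization idea behind LOT-style embeddings (this is not the paper's algorithm, and the measures, grid, and reference are arbitrary choices), the toy sketch below embeds one-dimensional empirical measures through their closed-form OT maps from a common reference and then reduces dimension with PCA.

```python
import numpy as np

rng = np.random.default_rng(0)
# Four 1-D empirical measures; in 1-D the OT map from a reference measure is
# a composition of quantile functions, so linearized OT is closed form.
measures = [rng.normal(loc=m, scale=s, size=500)
            for m, s in [(0, 1), (2, 1), (0, 3), (2, 3)]]

q = np.linspace(0.01, 0.99, 99)              # quantile grid
reference = rng.normal(size=500)
ref_q = np.quantile(reference, q)

# Embed each measure by its OT map from the reference (minus identity),
# evaluated on the quantile grid; L2 distances here approximate W2.
embeddings = np.stack([np.quantile(x, q) - ref_q for x in measures])

# Linear dimensionality reduction (PCA via SVD) on the embedded maps.
centered = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
print(centered @ vt[:2].T)                   # 2-D coordinates per measure
```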
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions is a core primitive that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy; a plain Sinkhorn sketch of the same entropic relaxation follows below.
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
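For context, a standard Sinkhorn iteration for the same entropic relaxation is sketched below. This is the classical baseline, not the paper's extragradient method; the regularization strength, iteration count, and toy data are arbitrary.

```python
import numpy as np

def sinkhorn(mu, nu, cost, eps=0.05, n_iters=500):
    """Standard Sinkhorn scaling for entropy-regularized OT."""
    K = np.exp(-cost / eps)                  # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iters):
        v = nu / (K.T @ u)                   # alternate marginal scalings
        u = mu / (K @ v)
    plan = u[:, None] * K * v[None, :]
    return np.sum(plan * cost), plan

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 2))
y = rng.normal(loc=1.0, size=(60, 2))
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
C /= C.max()                                 # rescale cost for stability
mu, nu = np.full(50, 1 / 50), np.full(60, 1 / 60)
w_eps, plan = sinkhorn(mu, nu, C)
print(w_eps)                                 # regularized OT cost
```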
- Near-optimal estimation of smooth transport maps with kernel sums-of-squares [81.02564078640275]
Under smoothness conditions, the squared Wasserstein distance between two distributions can be efficiently computed with appealing statistical error upper bounds.
The object of interest for applications such as generative modeling is the underlying optimal transport map.
We propose the first tractable algorithm for which the statistical $L^2$ error on the maps nearly matches the existing minimax lower bounds for smooth map estimation.
arXiv Detail & Related papers (2021-12-03T13:45:36Z)
- Generative Modeling with Optimal Transport Maps [83.59805931374197]
Optimal Transport (OT) has become a powerful tool for large-scale generative modeling tasks.
We show that the OT map itself can be used as a generative model, providing comparable performance.
arXiv Detail & Related papers (2021-10-06T18:17:02Z)
- Learning High Dimensional Wasserstein Geodesics [55.086626708837635]
We propose a new formulation and learning strategy for computing the Wasserstein geodesic between two probability distributions in high dimensions.
By applying the method of Lagrange multipliers to the dynamic formulation of the optimal transport (OT) problem, we derive a minimax problem whose saddle point is the Wasserstein geodesic.
We then parametrize the functions with deep neural networks and design a sample-based bidirectional learning algorithm for training; a closed-form one-dimensional illustration follows below.
arXiv Detail & Related papers (2021-02-05T04:25:28Z)
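For intuition only, the sketch below uses the closed-form one-dimensional case, where the Wasserstein 2 geodesic between two measures interpolates their quantile functions (displacement interpolation). It does not reflect the paper's high-dimensional minimax training; the distributions and grid are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(loc=-2.0, size=1000)         # endpoint measure mu_0
x1 = rng.exponential(size=1000)              # endpoint measure mu_1

# In 1-D, the W2 geodesic (McCann interpolation) interpolates quantiles.
q = np.linspace(0.01, 0.99, 99)
q0, q1 = np.quantile(x0, q), np.quantile(x1, q)
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    qt = (1 - t) * q0 + t * q1               # quantiles of mu_t
    print(t, qt.mean())                      # mean drifts from mu_0 to mu_1
```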
- Differentiable Top-k Operator with Optimal Transport [135.36099648554054]
The SOFT top-k operator approximates the output of the top-k operation as the solution of an Entropic Optimal Transport (EOT) problem.
We apply the proposed operator to the k-nearest neighbors and beam search algorithms, and demonstrate improved performance; a minimal sketch of the EOT formulation follows below.
arXiv Detail & Related papers (2020-02-16T04:57:52Z)
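A minimal sketch of the EOT formulation behind a SOFT-style top-k is given below: transport n scores with uniform mass onto the two anchors {0, 1} with masses [(n-k)/n, k/n], and read the smoothed indicator from the plan's second column. The epsilon, iteration count, and example scores are illustrative choices, not the authors' released implementation.

```python
import numpy as np

def soft_topk(scores, k, eps=0.01, n_iters=500):
    """Smoothed top-k indicator via entropic OT between n scores (uniform
    mass) and anchors {0, 1} with masses [(n-k)/n, k/n]. Assumes scores are
    roughly in [0, 1] so the Gibbs kernel stays well scaled."""
    n = len(scores)
    anchors = np.array([0.0, 1.0])
    cost = (scores[:, None] - anchors[None, :]) ** 2
    mu, nu = np.full(n, 1 / n), np.array([(n - k) / n, k / n])
    K = np.exp(-cost / eps)
    u = np.ones(n)
    for _ in range(n_iters):                 # Sinkhorn scaling
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    plan = u[:, None] * K * v[None, :]
    return n * plan[:, 1]                    # soft indicator per score

scores = np.array([0.3, 0.9, 0.1, 0.7, 0.5])
print(soft_topk(scores, k=2))                # near 1 for the two largest
```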
This list is automatically generated from the titles and abstracts of the papers on this site.