Learning Optimal Transport Between two Empirical Distributions with Normalizing Flows
- URL: http://arxiv.org/abs/2207.01246v2
- Date: Tue, 5 Jul 2022 07:29:23 GMT
- Title: Learning Optimal Transport Between two Empirical Distributions with Normalizing Flows
- Authors: Florentin Coeurdoux, Nicolas Dobigeon, Pierre Chainais
- Abstract summary: We propose to leverage the flexibility of neural networks to learn an approximate optimal transport map.
We show that a particular instance of invertible neural networks, namely the normalizing flows, can be used to approximate the solution of this OT problem.
- Score: 12.91637880428221
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optimal transport (OT) provides effective tools for comparing and mapping
probability measures. We propose to leverage the flexibility of neural networks
to learn an approximate optimal transport map. More precisely, we present a new
and original method to address the problem of transporting a finite set of
samples associated with a first underlying unknown distribution towards another
finite set of samples drawn from another unknown distribution. We show that a
particular instance of invertible neural networks, namely the normalizing
flows, can be used to approximate the solution of this OT problem between a
pair of empirical distributions. To this aim, we propose to relax the Monge
formulation of OT by replacing the equality constraint on the push-forward
measure by the minimization of the corresponding Wasserstein distance. The
push-forward operator to be retrieved is then restricted to be a normalizing
flow which is trained by optimizing the resulting cost function. This approach
allows the transport map to be discretized as a composition of functions. Each
of these functions is associated to one sub-flow of the network, whose output
provides intermediate steps of the transport between the original and target
measures. This discretization yields also a set of intermediate barycenters
between the two measures of interest. Experiments conducted on toy examples as
well as a challenging task of unsupervised translation demonstrate the interest
of the proposed method. Finally, some experiments show that the proposed
approach leads to a good approximation of the true OT.
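The relaxed formulation described above can be illustrated with a small toy. The sketch below is a minimal 1-D example, not the authors' implementation: the push-forward equality constraint is replaced by a penalty on the 1-D Wasserstein distance between pushed and target samples, and a composition of invertible affine sub-flows stands in for a real normalizing flow. All names and hyperparameters are illustrative assumptions.

```python
# Toy 1-D sketch of the relaxed Monge objective: transport cost plus a
# Wasserstein penalty replacing the push-forward equality T#mu = nu.
import torch

torch.manual_seed(0)
x = torch.randn(256)               # source samples (mu)
y = 3.0 + 0.5 * torch.randn(256)   # target samples (nu)

# Composition of K invertible affine sub-flows T = T_K o ... o T_1;
# each sub-flow output is one intermediate step of the transport.
K = 4
log_s = torch.zeros(K, requires_grad=True)  # log-scales keep each map invertible
b = torch.zeros(K, requires_grad=True)      # shifts

def push(x):
    steps, z = [], x
    for k in range(K):
        z = torch.exp(log_s[k]) * z + b[k]
        steps.append(z)
    return z, steps

def w2_squared_1d(a, c):
    # In 1-D, squared W2 between equal-size empirical measures is the
    # mean squared difference of the sorted samples.
    return ((torch.sort(a).values - torch.sort(c).values) ** 2).mean()

opt = torch.optim.Adam([log_s, b], lr=0.05)
lam = 10.0  # weight of the relaxed push-forward constraint
for _ in range(600):
    opt.zero_grad()
    Tx, _ = push(x)
    loss = ((Tx - x) ** 2).mean() + lam * w2_squared_1d(Tx, y)  # Monge cost + penalty
    loss.backward()
    opt.step()

Tx, steps = push(x)
print(float(Tx.mean()), float(Tx.std()))  # pushed samples move toward the target statistics
```

The sorted-sample formula keeps the toy free of external OT solvers, and the intermediate `steps` play the role of the sub-flow outputs described above; because the constraint is only penalized, the pushed samples approach, rather than exactly match, the target statistics.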
Related papers
- Dynamical Measure Transport and Neural PDE Solvers for Sampling [77.38204731939273]
We tackle the task of sampling from a probability density by transporting a tractable density function to the target.
We employ physics-informed neural networks (PINNs) to approximate the respective partial differential equations (PDEs) solutions.
PINNs allow for simulation- and discretization-free optimization and can be trained very efficiently.
arXiv Detail & Related papers (2024-07-10T17:39:50Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Diffusion Generative Flow Samplers: Improving learning signals through partial trajectory optimization [87.21285093582446]
Diffusion Generative Flow Samplers (DGFS) is a sampling-based framework where the learning process can be tractably broken down into short partial trajectory segments.
Our method takes inspiration from the theory developed for generative flow networks (GFlowNets).
arXiv Detail & Related papers (2023-10-04T09:39:05Z)
- Arbitrary Distributions Mapping via SyMOT-Flow: A Flow-based Approach Integrating Maximum Mean Discrepancy and Optimal Transport [2.7309692684728617]
We introduce a novel model called SyMOT-Flow that trains an invertible transformation by minimizing the symmetric maximum mean discrepancy between samples from two unknown distributions.
The resulting transformation leads to more stable and accurate sample generation.
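The maximum mean discrepancy that SyMOT-Flow minimizes is a standard two-sample statistic. The sketch below computes the (biased) squared MMD with an RBF kernel between two sample sets; the bandwidth, sample sizes, and distributions are assumptions made for illustration, not details from the paper.

```python
# Illustrative (biased) squared-MMD estimator with an RBF kernel.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(200, 2))  # samples from the first distribution
y = rng.normal(0.5, 1.0, size=(200, 2))  # samples from a shifted distribution

def rbf(a, c, bandwidth=1.0):
    # Pairwise squared distances via broadcasting, then Gaussian kernel.
    d2 = ((a[:, None, :] - c[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2(a, c):
    return rbf(a, a).mean() + rbf(c, c).mean() - 2.0 * rbf(a, c).mean()

print(mmd2(x, y) > mmd2(x, x))  # shifted samples give a strictly larger discrepancy
```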
arXiv Detail & Related papers (2023-08-26T08:39:16Z)
- Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
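Plain AIS, the baseline this entry builds on, can be sketched compactly: anneal from a tractable base density to the target along a geometric path, accumulating importance weights along the way. The densities, schedule, and step counts below are assumptions for a 1-D toy, not the paper's Constant Rate variant.

```python
# Hedged sketch of plain annealed importance sampling (AIS) in 1-D.
import math, random

random.seed(0)

def log_f0(x):  # base: N(0, 1), unnormalized
    return -0.5 * x * x

def log_f1(x):  # target: N(2, 1), unnormalized (same normalizer as f0)
    return -0.5 * (x - 2.0) ** 2

def log_ft(x, beta):  # geometric path f_t = f0^(1-beta) * f1^beta
    return (1.0 - beta) * log_f0(x) + beta * log_f1(x)

betas = [t / 20.0 for t in range(21)]  # annealing schedule beta_0 .. beta_T

n_particles, total = 1000, 0.0
for _ in range(n_particles):
    x = random.gauss(0.0, 1.0)  # exact draw from f0
    log_w = 0.0
    for b_prev, b_next in zip(betas, betas[1:]):
        log_w += log_ft(x, b_next) - log_ft(x, b_prev)  # weight increment
        # one random-walk Metropolis step targeting f_{b_next}
        z = x + random.gauss(0.0, 1.0)
        if math.log(random.random()) < log_ft(z, b_next) - log_ft(x, b_next):
            x = z
    total += math.exp(log_w)

ratio = total / n_particles  # estimates Z1/Z0, which is exactly 1 here
print(round(ratio, 2))
```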
arXiv Detail & Related papers (2023-06-27T08:15:28Z)
- On Sampling with Approximate Transport Maps [22.03230737620495]
Transport maps can ease the sampling of distributions with non-trivial geometries by transforming them into distributions that are easier to handle.
The potential of this approach has risen with the development of Normalizing Flows (NF) which are maps parameterized with deep neural networks trained to push a reference distribution towards a target.
Recently proposed NF-enhanced samplers blend (Markov chain) Monte Carlo methods with either (i) proposal draws from the flow or (ii) a flow-based reparametrization.
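Variant (i) amounts to an independence Metropolis-Hastings sampler whose proposals come from the fitted map. The sketch below is a hedged toy: a fixed Gaussian stands in for the trained flow, and the target and proposal parameters are assumptions for illustration.

```python
# Independence Metropolis-Hastings with a "flow-like" proposal (toy 1-D).
import math, random

random.seed(0)

def log_target(x):    # unnormalized target: N(2, 1)
    return -0.5 * (x - 2.0) ** 2

def log_proposal(x):  # density of the stand-in "flow" proposal: N(1.5, 1.5^2)
    return -0.5 * ((x - 1.5) / 1.5) ** 2 - math.log(1.5)

def sample_proposal():
    return 1.5 + 1.5 * random.gauss(0.0, 1.0)

x, samples = 0.0, []
for _ in range(20000):
    z = sample_proposal()
    # Independence MH ratio: target(z) q(x) / (target(x) q(z))
    log_alpha = log_target(z) + log_proposal(x) - log_target(x) - log_proposal(z)
    if math.log(random.random()) < log_alpha:
        x = z
    samples.append(x)

mean = sum(samples) / len(samples)
print(round(mean, 2))  # close to the target mean of 2.0
```

The closer the proposal (here a surrogate for the flow push-forward) is to the target, the higher the acceptance rate, which is exactly why an approximate transport map makes a useful proposal.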
arXiv Detail & Related papers (2023-02-09T16:52:52Z)
- Relative Entropy-Regularized Optimal Transport on a Graph: a new algorithm and an experimental comparison [0.0]
The present work investigates a new relative entropy-regularized algorithm for solving the optimal transport on a graph problem within the randomized shortest paths formalism.
The main advantage of this new formulation is the fact that it can easily accommodate edge flow capacity constraints.
The resulting optimal routing policy, i.e., the probability distribution of following an edge in each node, is Markovian and is computed by constraining the input and output flows to the prescribed marginal probabilities.
arXiv Detail & Related papers (2021-08-23T08:25:51Z)
- Comparing Probability Distributions with Conditional Transport [63.11403041984197]
We propose conditional transport (CT) as a new divergence and approximate it with the amortized CT (ACT) cost.
ACT amortizes the computation of its conditional transport plans and comes with unbiased sample gradients that are straightforward to compute.
On a wide variety of generative modeling benchmark datasets, substituting the default statistical distance of an existing generative adversarial network with ACT is shown to consistently improve performance.
arXiv Detail & Related papers (2020-12-28T05:14:22Z)
- Spatially Adaptive Inference with Stochastic Feature Sampling and Interpolation [72.40827239394565]
We propose to compute features only at sparsely sampled locations.
We then densely reconstruct the feature map with an efficient procedure.
The presented network is experimentally shown to save substantial computation while maintaining accuracy over a variety of computer vision tasks.
arXiv Detail & Related papers (2020-03-19T15:36:31Z)
- CO-Optimal Transport [19.267807479856575]
Optimal transport (OT) is a powerful tool for finding correspondences and measuring similarity between two distributions.
We propose a novel OT problem, named COOT for CO-Optimal Transport, that simultaneously optimizes two transport maps between both samples and features.
We demonstrate its versatility with two machine learning applications in heterogeneous domain adaptation and co-clustering/data summarization.
arXiv Detail & Related papers (2020-02-10T13:33:15Z)
- Statistical Optimal Transport posed as Learning Kernel Embedding [0.0]
This work takes the novel approach of posing statistical Optimal Transport (OT) as that of learning the transport plan's kernel mean embedding from sample-based estimates of marginal embeddings.
A key result is that, under very mild conditions, $\epsilon$-optimal recovery of the transport plan as well as the Barycentric-projection based transport map is possible with a sample complexity that is completely dimension-free.
arXiv Detail & Related papers (2020-02-08T14:58:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.