Normalizing flows as approximations of optimal transport maps via
linear-control neural ODEs
- URL: http://arxiv.org/abs/2311.01404v2
- Date: Fri, 17 Nov 2023 11:06:52 GMT
- Title: Normalizing flows as approximations of optimal transport maps via
linear-control neural ODEs
- Authors: Alessandro Scagliotti, Sara Farinelli
- Abstract summary: "Normalizing Flows" is related to the task of constructing invertible transport maps between probability measures by means of deep neural networks.
We consider the problem of recovering the $W_2$-optimal transport map $T$ between absolutely continuous measures $\mu,\nu\in\mathcal{P}(\mathbb{R}^n)$ as the flow of a linear-control neural ODE.
- Score: 55.2480439325792
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The term "Normalizing Flows" is related to the task of constructing
invertible transport maps between probability measures by means of deep neural
networks. In this paper, we consider the problem of recovering the
$W_2$-optimal transport map $T$ between absolutely continuous measures
$\mu,\nu\in\mathcal{P}(\mathbb{R}^n)$ as the flow of a linear-control neural
ODE. We first show that, under suitable assumptions on $\mu,\nu$ and on the
controlled vector fields, the optimal transport map is contained in the
$C^0_c$-closure of the flows generated by the system. Assuming that discrete
approximations $\mu_N,\nu_N$ of the original measures $\mu,\nu$ are available,
we use a discrete optimal coupling $\gamma_N$ to define an optimal control
problem. With a $\Gamma$-convergence argument, we prove that its solutions
correspond to flows that approximate the optimal transport map $T$. Finally,
taking advantage of the Pontryagin Maximum Principle, we propose an iterative
numerical scheme for the resolution of the optimal control problem, resulting
in an algorithm for the practical computation of the approximated optimal
transport map.
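The two-stage approach in the abstract — match discrete samples with a $W_2$-optimal coupling, then realize the transport as the flow of a linear-control ODE — can be sketched in a few lines. The basis vector fields, the piecewise-constant controls, and the explicit Euler discretization below are illustrative assumptions, not the paper's actual parametrization or its PMP-based training scheme:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def discrete_optimal_coupling(x, y):
    """Discrete W2-optimal coupling between equal-size empirical measures.

    Solves the assignment problem for the squared Euclidean cost, which
    yields the optimal permutation coupling gamma_N used to define the
    optimal control problem.
    """
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    return cols  # y[cols[i]] is the target matched to x[i]

def linear_control_flow(x0, fields, controls, dt=0.05):
    """Explicit-Euler flow of the linear-control ODE
        x' = sum_k u_k(t) F_k(x),
    with piecewise-constant controls (shape: steps x K) and a list of
    hypothetical basis vector fields F_k.
    """
    x = x0.copy()
    for u_t in controls:
        v = sum(u * F(x) for u, F in zip(u_t, fields))
        x = x + dt * v
    return x

# --- tiny demo with two hypothetical controlled fields -------------------
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 2))              # samples of mu_N
y = x + np.array([3.0, 0.0])             # samples of nu_N (a translation)

match = discrete_optimal_coupling(x, y)

# For a pure translation the optimal map is T(x) = x + b, realized exactly
# by the constant field F_1(x) = e_1 with control u_1 = 3 over unit time.
fields = [lambda z: np.tile([1.0, 0.0], (len(z), 1)),
          lambda z: z]                   # identity field, unused here
controls = np.tile([3.0, 0.0], (20, 1))  # 20 Euler steps, dt = 0.05
x_final = linear_control_flow(x, fields, controls)
```

For a translation the identity matching is the optimal coupling, and the flow of the constant field recovers the map exactly; for general measures the controls would be optimized, as in the paper's iterative scheme.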
Related papers
- Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic
Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound on the cost function or an upper bound on the expected length of the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z) - Computing high-dimensional optimal transport by flow neural networks [22.320632565424745]
This work develops a flow-based model that transports from $P$ to an arbitrary $Q$ where both distributions are only accessible via finite samples.
We propose to learn the dynamic optimal transport between $P$ and $Q$ by training a flow neural network.
The trained optimal transport flow subsequently allows for performing many downstream tasks, including infinitesimal density ratio estimation (DRE) and distribution interpolation in the latent space for generative models.
arXiv Detail & Related papers (2023-05-19T17:48:21Z) - Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions is an algorithmic primitive that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
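The idea of entropic regularization that underlies this line of work can be illustrated with the classical Sinkhorn fixed-point iteration. Note this is a standard textbook scheme shown only to make the regularization concrete; the paper above instead uses an extragradient method:

```python
import numpy as np

def sinkhorn(a, b, cost, reg=0.1, iters=500):
    """Entropy-regularized OT between discrete measures a and b.

    Alternately rescales the rows and columns of the Gibbs kernel
    K = exp(-cost/reg) so that the marginals of the resulting coupling
    match a and b.
    """
    K = np.exp(-cost / reg)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]
    return plan, (plan * cost).sum()  # coupling and its transport cost

# Two point masses at matching locations on each side: the optimal
# coupling concentrates on the diagonal as reg -> 0.
cost = np.array([[0.0, 1.0], [1.0, 0.0]])
a = b = np.array([0.5, 0.5])
plan, ot_cost = sinkhorn(a, b, cost, reg=0.05)
```

Sinkhorn converges linearly but its cost scales quadratically in the support size per iteration, which is exactly the regime the scalable first-order methods above target.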
arXiv Detail & Related papers (2023-01-30T15:46:39Z) - Optimal transport map estimation in general function spaces [17.323588442718926]
We study the problem of estimating a function $T$ given independent samples from a distribution $P$ and from the pushforward distribution $T_\sharp P$.
This setting is motivated by applications in the sciences, where $T$ represents the evolution of a physical system over time.
We present a unified methodology for obtaining rates of estimation of optimal transport maps in general function spaces.
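In one dimension this estimation problem has a closed-form answer that makes the setting concrete: the $W_2$-optimal map is the monotone rearrangement $T = G^{-1} \circ F$, so a plug-in estimator simply matches sorted samples. This is a sketch of that special case only, assuming a synthetic affine ground-truth map; the paper treats general function spaces:

```python
import numpy as np

def estimate_map_1d(x_samples, y_samples):
    """Plug-in estimator of the 1D optimal transport map.

    Matches empirical quantiles: sorts both samples and interpolates
    the resulting monotone pairing.  Returns the estimated map as a
    callable.
    """
    xs = np.sort(x_samples)
    ys = np.sort(y_samples)
    return lambda t: np.interp(t, xs, ys)

rng = np.random.default_rng(1)
x = rng.normal(size=5000)            # samples from P
y = 2.0 * rng.normal(size=5000) + 1  # samples from T_sharp P, T(x) = 2x + 1
T_hat = estimate_map_1d(x, y)
err = abs(T_hat(0.0) - 1.0)          # true map sends 0 to 1
```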
arXiv Detail & Related papers (2022-12-07T15:42:11Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes (MDPs) in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - Supervised Training of Conditional Monge Maps [107.78770597815242]
Optimal transport (OT) theory describes general principles to define and select, among many possible choices, the most efficient way to map a probability measure onto another.
We introduce CondOT, a multi-task approach to estimate a family of OT maps conditioned on a context variable.
We demonstrate the ability of CondOT to infer the effect of an arbitrary combination of genetic or therapeutic perturbations on single cells.
arXiv Detail & Related papers (2022-06-28T19:34:44Z) - Minimax Optimal Quantization of Linear Models: Information-Theoretic
Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper-bounds can be extended for two-layer ReLU neural networks.
arXiv Detail & Related papers (2022-02-23T02:39:04Z) - On Multimarginal Partial Optimal Transport: Equivalent Forms and
Computational Complexity [11.280177531118206]
We study the multi-marginal partial optimal transport (POT) problem between $m$ discrete (unbalanced) measures with at most $n$ supports.
We first prove that we can obtain two equivalent forms of the multimarginal POT problem in terms of the multimarginal optimal transport problem via novel extensions of the cost tensor.
We demonstrate that the ApproxMPOT algorithm can approximate the optimal value of the multimarginal POT problem with a computational complexity upper bound of order $\tilde{\mathcal{O}}(m^3(n+1)^m/\varepsilon^2)$.
arXiv Detail & Related papers (2021-08-18T06:46:59Z) - Adversarial Optimal Transport Through The Convolution Of Kernels With
Evolving Measures [3.1735221946062313]
A novel algorithm is proposed to solve the sample-based optimal transport problem.
The representation of the test function as the Monte Carlo simulation of a distribution makes the algorithm robust to dimensionality.
arXiv Detail & Related papers (2020-06-07T19:42:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.