On the representation and learning of monotone triangular transport maps
- URL: http://arxiv.org/abs/2009.10303v3
- Date: Sat, 24 Feb 2024 23:13:34 GMT
- Title: On the representation and learning of monotone triangular transport maps
- Authors: Ricardo Baptista, Youssef Marzouk, Olivier Zahm
- Abstract summary: We present a framework for representing monotone triangular maps via smooth functions.
We show how this framework can be applied to joint and conditional density estimation.
The framework also applies to likelihood-free inference, with stable generalization performance across a range of sample sizes.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transportation of measure provides a versatile approach for modeling complex
probability distributions, with applications in density estimation, Bayesian
inference, generative modeling, and beyond. Monotone triangular transport
maps, approximations of the Knothe–Rosenblatt (KR) rearrangement, are a
canonical choice for these tasks. Yet
the representation and parameterization of such maps have a significant impact
on their generality and expressiveness, and on properties of the optimization
problem that arises in learning a map from data (e.g., via maximum likelihood
estimation). We present a general framework for representing monotone
triangular maps via invertible transformations of smooth functions. We
establish conditions on the transformation such that the associated
infinite-dimensional minimization problem has no spurious local minima, i.e.,
all local minima are global minima; and we show for target distributions
satisfying certain tail conditions that the unique global minimizer corresponds
to the KR map. Given a sample from the target, we then propose an adaptive
algorithm that estimates a sparse semi-parametric approximation of the
underlying KR map. We demonstrate how this framework can be applied to joint
and conditional density estimation, likelihood-free inference, and structure
learning of directed graphical models, with stable generalization performance
across a range of sample sizes.
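One common way to realize "monotone maps via invertible transformations of smooth functions" is to define each map component as a rectified integral of a smooth function's partial derivative. The sketch below uses a softplus rectifier, midpoint quadrature, and central finite differences; these are illustrative choices and need not match the paper's exact parameterization:

```python
import numpy as np

def softplus(z):
    # numerically stable positive rectifier g(z) > 0
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0)

def monotone_component(f, x_prev, x_k, n_quad=200):
    """Illustrative monotone-in-the-last-variable map component:
        T(x_prev, x_k) = f(x_prev, 0) + int_0^{x_k} g(df/dt(x_prev, t)) dt.
    Since g > 0, T is strictly increasing in x_k for any smooth f.
    Midpoint quadrature; df/dt approximated by central differences."""
    t = np.linspace(0.0, x_k, n_quad, endpoint=False) + 0.5 * x_k / n_quad
    h = 1e-5
    dfdt = (f(x_prev, t + h) - f(x_prev, t - h)) / (2.0 * h)
    return f(x_prev, 0.0) + (x_k / n_quad) * np.sum(softplus(dfdt))

# Any smooth f (not itself monotone) yields a monotone component:
f = lambda xp, t: xp * np.sin(t) + 0.5 * t
a = monotone_component(f, 1.0, 1.0)
b = monotone_component(f, 1.0, 2.0)
assert b > a  # strictly increasing in the last argument
```

Because the rectifier keeps the integrand positive, monotonicity holds by construction, which is what removes the need for constrained optimization when such components are fit to data.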
Related papers
- Learning Gaussian Representation for Eye Fixation Prediction [54.88001757991433]
Existing eye fixation prediction methods perform the mapping from input images to the corresponding dense fixation maps generated from raw fixation points.
We introduce Gaussian Representation for eye fixation modeling.
We design our framework upon some lightweight backbones to achieve real-time fixation prediction.
arXiv Detail & Related papers (2024-03-21T20:28:22Z)
- SIGMA: Scale-Invariant Global Sparse Shape Matching [50.385414715675076]
We propose a novel mixed-integer programming (MIP) formulation for generating precise sparse correspondences for non-rigid shapes.
We show state-of-the-art results for sparse non-rigid matching on several challenging 3D datasets.
arXiv Detail & Related papers (2023-08-16T14:25:30Z)
- A generative flow for conditional sampling via optimal transport [1.0486135378491266]
This work proposes a non-parametric generative model that iteratively maps reference samples to the target.
The model uses block-triangular transport maps, whose components are shown to characterize conditionals of the target distribution.
These maps arise from solving an optimal transport problem with a weighted $L^2$ cost function, thereby extending the data-driven approach in [Trigila and Tabak, 2016] for conditional sampling.
arXiv Detail & Related papers (2023-07-09T05:36:26Z)
- Semi-supervised Learning of Pushforwards For Domain Translation & Adaptation [3.800498098285222]
Given two probability densities on related data spaces, we seek a map pushing one density to the other.
To be useful across a broad range of applications, the map must be applicable to out-of-sample data points.
We introduce a novel pushforward map learning algorithm that utilizes normalizing flows to parameterize the map.
arXiv Detail & Related papers (2023-04-18T00:35:32Z)
- Gaussian process regression and conditional Karhunen-Loève models for data assimilation in inverse problems [68.8204255655161]
We present a model inversion algorithm, CKLEMAP, for data assimilation and parameter estimation in partial differential equation models.
The CKLEMAP method provides better scalability compared to the standard MAP method.
arXiv Detail & Related papers (2023-01-26T18:14:12Z)
- Minimal Neural Atlas: Parameterizing Complex Surfaces with Minimal Charts and Distortion [71.52576837870166]
We present Minimal Neural Atlas, a novel atlas-based explicit neural surface representation.
At its core is a fully learnable parametric domain, given by an implicit probabilistic occupancy field defined on an open square of the parametric space.
Our reconstructions are more accurate in terms of the overall geometry, due to the separation of concerns on topology and geometry.
arXiv Detail & Related papers (2022-07-29T16:55:06Z)
- Near-optimal estimation of smooth transport maps with kernel sums-of-squares [81.02564078640275]
Under smoothness conditions, the squared Wasserstein distance between two distributions can be efficiently computed with appealing statistical error upper bounds.
The object of interest for applications such as generative modeling is the underlying optimal transport map.
We propose the first tractable algorithm for which the statistical $L^2$ error on the maps nearly matches the existing minimax lower bounds for smooth map estimation.
arXiv Detail & Related papers (2021-12-03T13:45:36Z)
- Scalable Computation of Monge Maps with General Costs [12.273462158073302]
The Monge map is the optimal transport map between two probability distributions.
We present a scalable algorithm for computing the Monge map between two probability distributions.
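In one dimension with quadratic cost, the Monge map has the closed form $T = F_{\mathrm{tgt}}^{-1} \circ F_{\mathrm{src}}$, which provides a quick sanity check for any scalable solver. A small empirical sketch (the Gaussian example and sample sizes are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
src = rng.normal(0.0, 1.0, 50_000)   # source: N(0, 1)
tgt = rng.normal(3.0, 2.0, 50_000)   # target: N(3, 2^2)

src_sorted = np.sort(src)

def monge_map(x):
    # T(x) = F_tgt^{-1}(F_src(x)), estimated via the empirical CDF of
    # the source and the empirical quantile function of the target
    u = np.searchsorted(src_sorted, x) / len(src_sorted)
    return np.quantile(tgt, np.clip(u, 0.0, 1.0))

# For these Gaussians the exact map is affine: T(x) = 3 + 2x,
# so monge_map(0.0) should be close to 3 and monge_map(1.0) close to 5.
```

The quantile-composition estimator is specific to one dimension; the scalability challenge the paper addresses arises precisely because no such closed form exists in higher dimensions.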
arXiv Detail & Related papers (2021-06-07T17:23:24Z)
- Making Affine Correspondences Work in Camera Geometry Computation [62.7633180470428]
Local features provide region-to-region rather than point-to-point correspondences.
We propose guidelines for effective use of region-to-region matches in the course of a full model estimation pipeline.
Experiments show that affine solvers can achieve accuracy comparable to point-based solvers at faster run-times.
arXiv Detail & Related papers (2020-07-20T12:07:48Z)
- Conditional Sampling with Monotone GANs: from Generative Models to Likelihood-Free Inference [4.913013713982677]
We present a novel framework for conditional sampling of probability measures, using block triangular transport maps.
We develop the theoretical foundations of block triangular transport in a Banach space setting.
We then introduce a computational approach, called monotone generative adversarial networks, to learn suitable block triangular maps.
arXiv Detail & Related papers (2020-06-11T19:15:43Z)
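The block-triangular construction above makes conditional sampling mechanically simple: fix the conditioning variable and push reference samples through the second map component only. A toy sketch in which that component is written down by hand for a Gaussian target (illustrative only; in practice it is learned from joint samples, e.g. adversarially as in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: X ~ N(0,1) and Y | X = x ~ N(x, 1).  For this target the
# second component of a block-triangular (KR-type) map pushing a
# standard-normal reference z to Y | X = x is known in closed form:
T2 = lambda x, z: x + z

# Conditional sampling: fix the conditioning value x* and push reference
# samples through the second component alone.
x_star = 1.5
z = rng.standard_normal(100_000)
y_cond = T2(x_star, z)
# y_cond are (approximate) samples from Y | X = 1.5, i.e. N(1.5, 1)
```

The same recipe applies once T2 is a learned network: conditioning never requires density evaluation or MCMC, only forward passes through the second block.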
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.