Generative modeling of time-dependent densities via optimal transport
and projection pursuit
- URL: http://arxiv.org/abs/2304.09663v2
- Date: Thu, 12 Oct 2023 15:52:37 GMT
- Title: Generative modeling of time-dependent densities via optimal transport
and projection pursuit
- Authors: Jonah Botvinick-Greenhouse, Yunan Yang, Romit Maulik
- Abstract summary: We propose a cheap alternative to popular deep learning algorithms for temporal modeling.
Our method is highly competitive compared with state-of-the-art solvers.
- Score: 3.069335774032178
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motivated by the computational difficulties incurred by popular deep learning
algorithms for the generative modeling of temporal densities, we propose a
cheap alternative which requires minimal hyperparameter tuning and scales
favorably to high dimensional problems. In particular, we use a
projection-based optimal transport solver [Meng et al., 2019] to join
successive samples and subsequently use transport splines [Chewi et al., 2020]
to interpolate the evolving density. When the sampling frequency is
sufficiently high, the optimal maps are close to the identity and are thus
computationally efficient to compute. Moreover, the training process is highly
parallelizable as all optimal maps are independent and can thus be learned
simultaneously. Finally, the approach is based solely on numerical linear
algebra rather than minimizing a nonconvex objective function, allowing us to
easily analyze and control the algorithm. We present several numerical
experiments on both synthetic and real-world datasets to demonstrate the
efficiency of our method. In particular, these experiments show that the
proposed approach is highly competitive compared with state-of-the-art
normalizing flows conditioned on time across a wide range of dimensionalities.
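The core idea — join successive samples with a projection-based optimal transport map and interpolate between snapshots — can be sketched in a few lines. The sketch below is a simplified projection-pursuit illustration (repeatedly matching the two point clouds along random 1-D directions, where 1-D OT reduces to sorting); it is not the exact solver of [Meng et al., 2019], and the function name and parameters are illustrative.

```python
import numpy as np

def sliced_ot_map(source, target, n_iters=200, step=1.0, seed=0):
    """Approximate an OT map by projection pursuit: repeatedly match
    the two point clouds along random 1-D projections.

    Simplified illustration only; not the authors' exact algorithm.
    """
    rng = np.random.default_rng(seed)
    x = source.copy()
    d = source.shape[1]
    for _ in range(n_iters):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)
        # 1-D optimal transport along `theta` is just sorting.
        px, pt = x @ theta, target @ theta
        ix, it = np.argsort(px), np.argsort(pt)
        disp = np.zeros_like(px)
        disp[ix] = pt[it] - px[ix]          # matched 1-D displacements
        x += step * disp[:, None] * theta   # move along the projection
    return x  # x[i] approximates the image of source[i] under the OT map
```

When consecutive snapshots are close (high sampling frequency), the map is near the identity and few iterations suffice; interpolating along the matched trajectories, e.g. linearly via `(1 - s) * source + s * mapped`, is the degree-1 case of the transport-spline interpolation the abstract refers to.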
Related papers
- Optimal Transportation by Orthogonal Coupling Dynamics [0.0]
We propose a novel framework to address the Monge-Kantorovich problem based on a projection type gradient descent scheme.
The micro-dynamics is built on the notion of conditional expectation, and its connection with opinion dynamics is explored.
We demonstrate that the devised dynamics recovers random maps with favourable computational performance.
arXiv Detail & Related papers (2024-10-10T15:53:48Z)
- Dynamical Measure Transport and Neural PDE Solvers for Sampling [77.38204731939273]
We tackle the task of sampling from a probability density as transporting a tractable density function to the target.
We employ physics-informed neural networks (PINNs) to approximate the respective partial differential equations (PDEs) solutions.
PINNs allow for simulation- and discretization-free optimization and can be trained very efficiently.
arXiv Detail & Related papers (2024-07-10T17:39:50Z)
- Accelerating Diffusion Sampling with Optimized Time Steps [69.21208434350567]
Diffusion probabilistic models (DPMs) have shown remarkable performance in high-resolution image synthesis.
Their sampling efficiency still leaves much to be desired, however, due to the typically large number of sampling steps.
Recent advancements in high-order numerical ODE solvers for DPMs have enabled the generation of high-quality images with much fewer sampling steps.
arXiv Detail & Related papers (2024-02-27T10:13:30Z)
- Efficient Neural Network Approaches for Conditional Optimal Transport with Applications in Bayesian Inference [1.740133468405535]
We present two neural network approaches that approximate the solutions of static and conditional optimal transport (COT) problems.
We demonstrate both algorithms, comparing them with competing state-of-the-art approaches, using benchmark datasets and simulation-based inverse problems.
arXiv Detail & Related papers (2023-10-25T20:20:09Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithm that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
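For context, the classical baseline for entropy-regularized optimal transport is the Sinkhorn iteration, which alternately rescales a Gibbs kernel to match the two marginals. The sketch below shows that baseline, not the extragradient method of the paper above; the function name and parameters are illustrative.

```python
import numpy as np

def sinkhorn(C, mu, nu, eps=0.05, n_iters=500):
    """Entropy-regularized OT via Sinkhorn iterations.

    Standard baseline shown for context; the paper above develops a
    different (extragradient-based) method.
    C: (n, m) cost matrix; mu, nu: marginal weights summing to 1.
    """
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iters):
        v = nu / (K.T @ u)               # match the column marginal
        u = mu / (K @ v)                 # match the row marginal
    P = u[:, None] * K * v[None, :]      # transport plan
    return P, np.sum(P * C)              # plan and transport cost
```

Smaller `eps` yields plans closer to unregularized optimal transport but slows convergence and risks numerical underflow in the kernel, which is one motivation for the accelerated first-order methods studied in the paper.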
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
- Versatile Single-Loop Method for Gradient Estimator: First and Second Order Optimality, and its Application to Federated Learning [45.78238792836363]
We present a single-loop algorithm named SLEDGE (Single-Loop-E Gradient Estimator) for periodic convergence.
Unlike existing methods, SLEDGE has the advantage of versatility: it is (i) second-order optimal, (ii) fast-converging in the PL region, and (iii) of smaller complexity under low heterogeneity of data.
arXiv Detail & Related papers (2022-09-01T11:05:26Z)
- An adjoint-free algorithm for conditional nonlinear optimal perturbations (CNOPs) via sampling [5.758073912084367]
We propose a sampling algorithm based on state-of-the-art statistical machine learning techniques to obtain conditional nonlinear optimal perturbations (CNOPs)
The sampling approach reduces gradient computations to evaluations of the objective function (zeroth-order information).
We demonstrate the obtained CNOPs through their spatial patterns, objective values, computation times, and nonlinear error growth.
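The idea of replacing adjoint gradients with sampled objective values can be illustrated with a generic Gaussian-smoothing zeroth-order estimator; the paper's exact sampling scheme may differ, and the function name and parameters are assumptions.

```python
import numpy as np

def zeroth_order_grad(f, x, sigma=0.01, n_samples=64, seed=0):
    """Gradient estimate from objective values only (no adjoint model).

    Generic Gaussian-smoothing estimator: average central differences
    of f along random directions. Illustrative, not the paper's scheme.
    """
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.normal(size=x.shape)
        # central-difference estimate of the slope along direction u
        g += (f(x + sigma * u) - f(x - sigma * u)) / (2 * sigma) * u
    return g / n_samples
```

Each sample costs two forward evaluations of the model, so no adjoint code is needed; the variance of the estimate shrinks as `n_samples` grows.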
arXiv Detail & Related papers (2022-08-01T16:07:22Z)
- Near-optimal estimation of smooth transport maps with kernel sums-of-squares [81.02564078640275]
Under smoothness conditions, the squared Wasserstein distance between two distributions can be computed efficiently with appealing statistical error upper bounds.
The object of interest for applications such as generative modeling, however, is the underlying optimal transport map.
We propose the first tractable algorithm for which the statistical $L^2$ error on the maps nearly matches the existing minimax lower bounds for smooth map estimation.
arXiv Detail & Related papers (2021-12-03T13:45:36Z)
- Efficient Discretizations of Optimal Transport [16.996068297291057]
We propose an algorithm for calculating discretizations with a given number of points for marginal distributions.
We prove bounds for our approximation and demonstrate performance on a wide range of problems.
arXiv Detail & Related papers (2021-02-16T04:31:52Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
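The sketch-and-solve idea behind such methods is simple: compress the tall data matrix with a random embedding, then solve the small regularized problem. The sketch below uses a plain Gaussian embedding of fixed size; the adaptive sketch-size selection and the SRHT variant studied in the paper are not shown, and the function name and parameters are illustrative.

```python
import numpy as np

def sketched_ridge(A, b, lam, m, seed=0):
    """Approximate L2-regularized least squares via a Gaussian sketch.

    Compress the n rows of A to m << n with a random embedding S, then
    solve the small problem. Illustrative sketch-and-solve baseline.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    S = rng.normal(size=(m, n)) / np.sqrt(m)   # Gaussian embedding
    SA, Sb = S @ A, S @ b
    # Solve the sketched normal equations (SA^T SA + lam I) x = SA^T Sb
    return np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)
```

Replacing the Gaussian matrix with an SRHT trades the `O(mnd)` sketching cost for roughly `O(nd log n)`, which is the main practical motivation for the second embedding the paper considers.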
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.