Using Linearized Optimal Transport to Predict the Evolution of Stochastic Particle Systems
- URL: http://arxiv.org/abs/2408.01857v1
- Date: Sat, 3 Aug 2024 20:00:36 GMT
- Title: Using Linearized Optimal Transport to Predict the Evolution of Stochastic Particle Systems
- Authors: Nicholas Karris, Evangelos A. Nikitopoulos, Ioannis Kevrekidis, Seungjoon Lee, Alexander Cloninger
- Abstract summary: We develop an algorithm to approximate the time evolution of a probability measure without explicitly learning an operator that governs the evolution.
A particular application of interest is discrete measures $\mu_t^N$ that arise from particle systems.
- Score: 42.49693678817552
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We develop an algorithm to approximate the time evolution of a probability measure without explicitly learning an operator that governs the evolution. A particular application of interest is discrete measures $\mu_t^N$ that arise from particle systems. In many such situations, the individual particles move chaotically on short time scales, making it difficult to learn the dynamics of a governing operator, but the bulk distribution $\mu_t^N$ approximates an absolutely continuous measure $\mu_t$ that evolves ``smoothly.'' If $\mu_t$ is known on some time interval, then linearized optimal transport theory provides an Euler-like scheme for approximating the evolution of $\mu_t$ using its ``tangent vector field'' (represented as a time-dependent vector field on $\mathbb R^d$), which can be computed as a limit of optimal transport maps. We propose an analog of this Euler approximation to predict the evolution of the discrete measure $\mu_t^N$ (without knowing $\mu_t$). To approximate the analogous tangent vector field, we use a finite difference over a time step that sits between the two time scales of the system -- long enough for the large-$N$ evolution ($\mu_t$) to emerge but short enough to satisfactorily approximate the derivative object used in the Euler scheme. By allowing the limiting behavior to emerge, the optimal transport maps closely approximate the vector field describing the bulk distribution's smooth evolution instead of the individual particles' more chaotic movements. We demonstrate the efficacy of this approach with two illustrative examples, Gaussian diffusion and a cell chemotaxis model, and show that our method succeeds in predicting the bulk behavior over relatively large steps.
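For concreteness, the scheme in the abstract can be sketched in a few lines. The sketch below assumes equal-size snapshots with uniform weights and squared Euclidean cost, so that the discrete optimal transport map reduces to an assignment problem; all names are illustrative, and this is not the authors' implementation.

```python
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def lot_euler_step(X_t, X_th, h, dt):
    """Predict particle positions at time t + dt from snapshots X_t (time t)
    and X_th (time t + h): estimate the tangent vector field as (T - id) / h,
    where T is the optimal transport map between the two snapshots, then take
    an Euler step along that field."""
    # OT between two equal-size uniform discrete measures under squared
    # Euclidean cost reduces to an optimal assignment (a permutation).
    cost = cdist(X_t, X_th, metric="sqeuclidean")
    _, cols = linear_sum_assignment(cost)
    T = X_th[cols]              # T(x_i): where the mass at x_i is sent
    v = (T - X_t) / h           # finite-difference tangent vector field
    return X_t + dt * v         # Euler step along the field
```

As the abstract emphasizes, the finite-difference step h should sit between the system's two time scales: long enough for the bulk evolution to emerge from the particles' chaotic motion, yet short enough that the difference quotient approximates the tangent vector field; the prediction step dt can then be taken comparatively large.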
Related papers
- Non-asymptotic bounds for forward processes in denoising diffusions: Ornstein-Uhlenbeck is hard to beat [49.1574468325115]
This paper presents explicit non-asymptotic bounds on the forward diffusion error in total variation (TV).
We parametrise multi-modal data distributions in terms of the distance $R$ to their furthest modes and consider forward diffusions with additive and multiplicative noise.
arXiv Detail & Related papers (2024-08-25T10:28:31Z)
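The forward process in the entry above is the Ornstein-Uhlenbeck noising SDE standard in denoising diffusions. As a point of reference only (not the paper's code), a minimal Euler-Maruyama simulation of the additive-noise case:

```python
import numpy as np

def ou_forward(x0, T=5.0, n_steps=500, rng=None):
    """Euler-Maruyama simulation of the OU forward SDE dX = -X dt + sqrt(2) dW;
    the law of X_t contracts toward the standard Gaussian as t grows."""
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        x += -x * dt + np.sqrt(2 * dt) * rng.standard_normal(x.shape)
    return x
```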
- Semi-Discrete Optimal Transport: Nearly Minimax Estimation With Stochastic Gradient Descent and Adaptive Entropic Regularization [38.67914746910537]
We prove an $\mathcal{O}(t^{-1})$ lower bound rate for the OT map, using the similarity between Laguerre cells estimation and density support estimation.
To nearly achieve the desired fast rate, we design an entropic regularization scheme decreasing with the number of samples.
arXiv Detail & Related papers (2024-05-23T11:46:03Z)
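A sketch of the kind of stochastic semi-dual ascent the entry above refers to, with an entropic regularization that decreases over iterations. The decay schedule and all names are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def semidiscrete_sgd(sample_mu, Y, nu, n_iters=10_000, lr=1.0, rng=None):
    """Stochastic ascent on the entropic semi-dual of OT(mu, nu), where mu is
    accessed through samples and nu = sum_j nu_j * delta_{y_j} is discrete.
    The regularization eps decays with the iteration count."""
    rng = np.random.default_rng(rng)
    g = np.zeros(len(Y))                    # dual potential on the support of nu
    for k in range(1, n_iters + 1):
        eps = k ** -0.5                     # illustrative decreasing schedule
        x = sample_mu(rng)                  # one fresh sample from mu
        c = np.sum((Y - x) ** 2, axis=1)    # squared Euclidean cost c(x, y_j)
        logits = (g - c) / eps + np.log(nu)
        chi = np.exp(logits - logits.max())
        chi /= chi.sum()                    # soft assignment of x to the y_j
        g += (lr / np.sqrt(k)) * (nu - chi) # unbiased dual gradient step
    return g
```

For instance, one can take `sample_mu = lambda rng: rng.standard_normal(2)` and uniform weights `nu` over a point cloud `Y`.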
- Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
arXiv Detail & Related papers (2024-02-15T08:51:49Z)
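Gaussian PSD models extend the classical setting in which filtering is exactly closed form, namely the linear-Gaussian model solved by the Kalman filter. For reference, the scalar version of that classical baseline (not the paper's method):

```python
def kalman_1d(ys, a, q, c, r, m0=0.0, p0=1.0):
    """Closed-form filtering for the linear-Gaussian model
    x_t = a x_{t-1} + N(0, q),   y_t = c x_t + N(0, r)."""
    m, p, means = m0, p0, []
    for y in ys:
        m, p = a * m, a * a * p + q                  # predict
        k = p * c / (c * c * p + r)                  # Kalman gain
        m, p = m + k * (y - c * m), (1 - k * c) * p  # update
        means.append(m)
    return means
```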
"Normalizing Flows" is related to the task of constructing invertible transport maps between probability measures by means of deep neural networks.
We consider the problem of recovering the $Wamma$-optimal transport map $T$ between absolutely continuous measures $mu,nuinmathcalP(mathbbRn)$ as the flow of a linear-control neural ODE.
arXiv Detail & Related papers (2023-11-02T17:17:03Z)
- Projected Langevin dynamics and a gradient flow for entropic optimal transport [0.8057006406834466]
We introduce analogous diffusion dynamics that sample from an entropy-regularized optimal transport coupling.
By studying the induced Wasserstein geometry of the submanifold $\Pi(\mu,\nu)$, we argue that the SDE can be viewed as a Wasserstein gradient flow on this space of couplings.
arXiv Detail & Related papers (2023-09-15T17:55:56Z)
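The diffusion dynamics in the entry above build on the Langevin SDE. A minimal unadjusted Langevin sampler for a generic log-density follows; the paper's actual contribution, the projection onto the coupling space $\Pi(\mu,\nu)$, is not reproduced here:

```python
import numpy as np

def ula(grad_log_p, x0, step=1e-2, n_steps=5_000, rng=None):
    """Unadjusted Langevin algorithm: x <- x + step * grad log p(x) + noise.
    Samples approximately from the density p for small step sizes."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        x += step * grad_log_p(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x
```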
- Adaptive Student's t-distribution with method of moments moving estimator for nonstationary time series [0.8702432681310399]
We focus on the recently proposed philosophy of the moving estimator.
Here $F_t=\sum_{\tau<t}(1-\eta)^{t-\tau}\ln(\rho_\theta(x_\tau))$ is the moving log-likelihood, evolving in time.
The Student's t-distribution, popular especially in economic applications, is here applied to log-returns of DJIA companies.
arXiv Detail & Related papers (2023-04-06T13:37:27Z)
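A sketch of the moving-estimator idea in the entry above: exponentially weighted moments with weights $(1-\eta)^{t-\tau}$, matching the $F_t$ formula, combined with the standard method-of-moments identities for Student's t (variance $\sigma^2\nu/(\nu-2)$, excess kurtosis $6/(\nu-4)$). Names and numerical guards are illustrative:

```python
import numpy as np

def t_moving_mom(xs, eta=0.05):
    """Exponential moving moments with weights (1-eta)^(t-tau), plus a
    method-of-moments Student's t fit at each step; returns a list of
    (mu, sigma, nu) per observation.  Assumes excess kurtosis > 0."""
    m1 = m2 = m4 = 0.0
    w = 0.0                                     # running weight normalizer
    out = []
    for x in xs:
        w = (1 - eta) * w + eta
        m1 = (1 - eta) * m1 + eta * x
        mu = m1 / w                             # moving location estimate
        m2 = (1 - eta) * m2 + eta * (x - mu) ** 2
        m4 = (1 - eta) * m4 + eta * (x - mu) ** 4
        var = m2 / w
        kurt = (m4 / w) / max(var, 1e-12) ** 2
        excess = max(kurt - 3.0, 1e-6)
        nu = 4.0 + 6.0 / excess                 # MoM: excess kurtosis = 6/(nu-4)
        sigma = np.sqrt(var * (nu - 2.0) / nu)  # MoM: var = sigma^2 nu/(nu-2)
        out.append((mu, sigma, nu))
    return out
```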
- Robust computation of optimal transport by $\beta$-potential regularization [79.24513412588745]
Optimal transport (OT) has become a widely used tool in the machine learning field to measure the discrepancy between probability distributions.
We propose regularizing OT with the $\beta$-potential term associated with the so-called $\beta$-divergence.
We experimentally demonstrate that the transport matrix computed with our algorithm helps estimate a probability distribution robustly even in the presence of outliers.
arXiv Detail & Related papers (2022-12-26T18:37:28Z)
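The entry above modifies the entropic regularization that underlies the standard Sinkhorn solver; for reference, that unmodified baseline (the $\beta$-potential variant itself is not reproduced here):

```python
import numpy as np

def sinkhorn(a, b, M, reg=0.1, n_iters=200):
    """Standard entropic OT: returns the coupling P minimizing
    <P, M> + reg * KL(P | a b^T) subject to marginals a and b."""
    K = np.exp(-M / reg)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)       # alternate scaling to match marginal b
        u = a / (K @ v)         # ... then marginal a
    return u[:, None] * K * v[None, :]
```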
- Variational Transport: A Convergent Particle-Based Algorithm for Distributional Optimization [106.70006655990176]
A distributional optimization problem arises widely in machine learning and statistics.
We propose a novel particle-based algorithm, dubbed variational transport, which approximately performs Wasserstein gradient descent.
We prove that when the objective function satisfies a functional version of the Polyak-Łojasiewicz (PL) condition (Polyak, 1963) together with smoothness conditions, variational transport converges linearly.
arXiv Detail & Related papers (2020-12-21T18:33:13Z)
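The core move in variational transport is pushing particles along an estimated Wasserstein gradient. A toy version for the potential functional $F(\mu)=\int V\,\mathrm{d}\mu$, whose Wasserstein gradient is simply $\nabla V$ (the paper estimates the gradient variationally from samples; this simplification is illustrative):

```python
import numpy as np

def wasserstein_gd(grad_V, X0, step=0.1, n_steps=100):
    """Particle-based Wasserstein gradient descent on F(mu) = E_mu[V]:
    each particle moves along -grad V, transporting the empirical measure."""
    X = np.asarray(X0, dtype=float).copy()
    for _ in range(n_steps):
        X -= step * grad_V(X)
    return X

# Toy usage: minimize E[|x|^2 / 2]; all particles contract toward the origin.
X = wasserstein_gd(lambda X: X, np.random.randn(200, 2))
```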
- Adversarial Optimal Transport Through The Convolution Of Kernels With Evolving Measures [3.1735221946062313]
A novel algorithm is proposed to solve the sample-based optimal transport problem.
The representation of the test function as the Monte Carlo simulation of a distribution makes the algorithm robust to dimensionality.
arXiv Detail & Related papers (2020-06-07T19:42:50Z)