Using Linearized Optimal Transport to Predict the Evolution of Stochastic Particle Systems
- URL: http://arxiv.org/abs/2408.01857v3
- Date: Sat, 15 Feb 2025 20:54:14 GMT
- Title: Using Linearized Optimal Transport to Predict the Evolution of Stochastic Particle Systems
- Authors: Nicholas Karris, Evangelos A. Nikitopoulos, Ioannis G. Kevrekidis, Seungjoon Lee, Alexander Cloninger
- Abstract summary: We use linearized optimal transport theory to prove that the measure-valued algorithm is first-order accurate when the measure evolves ``smoothly.'' We specifically demonstrate the efficacy of our approach by showing that our algorithm vastly reduces the number of micro-scale steps needed to correctly approximate long-term behavior.
- Score: 42.49693678817552
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We develop an Euler-type method to predict the evolution of a time-dependent probability measure without explicitly learning an operator that governs its evolution. We use linearized optimal transport theory to prove that the measure-valued analog of Euler's method is first-order accurate when the measure evolves ``smoothly.'' In applications of interest, however, the measure is an empirical distribution of a system of stochastic particles whose behavior is only accessible through an agent-based micro-scale simulation. In such cases, this empirical measure does not evolve smoothly because the individual particles move chaotically on short time scales. However, we can still perform our Euler-type method, and when the particles' collective distribution approximates a measure that \emph{does} evolve smoothly, we observe that the algorithm still accurately predicts this collective behavior over relatively large Euler steps. We specifically demonstrate the efficacy of our approach by showing that our algorithm vastly reduces the number of micro-scale steps needed to correctly approximate long-term behavior in two illustrative examples, reflected Brownian motion and a model of bacterial chemotaxis.
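To make the method concrete, here is a minimal sketch of one measure-valued Euler step (not the authors' implementation; the micro-scale simulator `micro_step` and the step sizes are placeholders). For equal-size, uniformly weighted point clouds, the optimal transport map is a permutation, which a linear assignment solver recovers exactly:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ot_map(x, y):
    """Monge map between equal-size uniform empirical measures: for
    squared-distance cost it is a permutation, found by linear assignment."""
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    T = np.empty_like(x)
    T[rows] = y[cols]
    return T  # T[i] is the OT image of particle x[i]

def measure_euler_step(x, micro_step, dt, big_dt):
    """One measure-valued Euler step: run the expensive micro-scale
    simulation for a short time dt, read off a velocity field from the
    OT pairing, then extrapolate over the much larger step big_dt."""
    y = micro_step(x, dt)               # agent-based micro-scale simulation
    T = ot_map(x, y)                    # OT pairing of old and new particles
    return x + (big_dt / dt) * (T - x)  # Euler extrapolation along displacements
```

Because `big_dt` can be taken much larger than `dt` when the collective distribution evolves smoothly, each macro step replaces many micro-scale steps, which is the source of the reported speed-up.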
Related papers
- Inferring Parameter Distributions in Heterogeneous Motile Particle Ensembles: A Likelihood Approach for Second Order Langevin Models [0.8274836883472768]
Inference methods are required to understand and predict motion patterns from time-discrete trajectory data provided by experiments.
We propose a new method to approximate the likelihood for non-linear second-order Langevin models.
We thereby pave the way for the systematic, data-driven inference of dynamical models for actively driven entities.
arXiv Detail & Related papers (2024-11-13T15:27:02Z) - Non-asymptotic bounds for forward processes in denoising diffusions: Ornstein-Uhlenbeck is hard to beat [49.1574468325115]
This paper presents explicit non-asymptotic bounds on the forward diffusion error in total variation (TV).
We parametrise multi-modal data distributions in terms of the distance $R$ to their furthest modes and consider forward diffusions with additive and multiplicative noise.
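For orientation, the Ornstein-Uhlenbeck forward process in its standard normalization, $dX_t = -X_t\,dt + \sqrt{2}\,dW_t$, has the explicit solution $X_t \overset{d}{=} e^{-t}X_0 + \sqrt{1-e^{-2t}}\,Z$ with $Z \sim \mathcal{N}(0,I)$, which is presumably why explicit TV bounds are tractable for the OU choice.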
arXiv Detail & Related papers (2024-08-25T10:28:31Z) - Bayesian Circular Regression with von Mises Quasi-Processes [57.88921637944379]
In this work we explore a family of expressive and interpretable distributions over circle-valued random functions.
For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Gibbs sampling.
We present experiments applying this model to the prediction of wind directions and the percentage of the running gait cycle as a function of joint angles.
arXiv Detail & Related papers (2024-06-19T01:57:21Z) - Semi-Discrete Optimal Transport: Nearly Minimax Estimation With Stochastic Gradient Descent and Adaptive Entropic Regularization [38.67914746910537]
We prove an $\mathcal{O}(t^{-1})$ lower bound rate for the OT map, using the similarity between Laguerre cell estimation and density support estimation.
To nearly achieve the desired fast rate, we design an entropic regularization scheme decreasing with the number of samples.
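As an illustration of the general approach (a sketch, not the paper's exact scheme: the sampler `sample_mu`, the fixed `eps`, and the $1/\sqrt{t}$ step size are assumptions, whereas the paper decreases the regularization with the sample count), stochastic gradient ascent on the entropic semi-dual looks like:

```python
import numpy as np

def semidual_sgd(sample_mu, y, b, eps, n_iters=10_000, lr0=1.0, rng=None):
    """Stochastic gradient ascent on the entropic semi-dual of
    semi-discrete OT. sample_mu(rng) draws one sample from the
    continuous source; (y, b) are the discrete target's support
    points and weights."""
    rng = rng or np.random.default_rng(0)
    v = np.zeros(len(b))
    for t in range(1, n_iters + 1):
        x = sample_mu(rng)
        c = ((y - x) ** 2).sum(axis=1)          # squared-distance cost c(x, y_j)
        logits = (v - c) / eps + np.log(b)
        logits -= logits.max()                  # stabilized softmax
        chi = np.exp(logits); chi /= chi.sum()  # conditional plan pi(j | x)
        v += (lr0 / np.sqrt(t)) * (b - chi)     # unbiased stochastic gradient step
    return v  # dual potentials; they induce the (soft) assignment map
```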
arXiv Detail & Related papers (2024-05-23T11:46:03Z) - Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
arXiv Detail & Related papers (2024-02-15T08:51:49Z) - Normalizing flows as approximations of optimal transport maps via linear-control neural ODEs [49.1574468325115]
"Normalizing Flows" is related to the task of constructing invertible transport maps between probability measures by means of deep neural networks.
We consider the problem of recovering the $W_2$-optimal transport map $T$ between absolutely continuous measures $\mu,\nu\in\mathcal{P}(\mathbb{R}^n)$ as the flow of a linear-control neural ODE.
arXiv Detail & Related papers (2023-11-02T17:17:03Z) - Projected Langevin dynamics and a gradient flow for entropic optimal
transport [0.8057006406834466]
We introduce analogous diffusion dynamics that sample from an entropy-regularized optimal transport coupling.
By studying the induced Wasserstein geometry of the submanifold $\Pi(\mu,\nu)$, we argue that the SDE can be viewed as a Wasserstein gradient flow on this space of couplings.
arXiv Detail & Related papers (2023-09-15T17:55:56Z) - Adaptive Student's t-distribution with method of moments moving
estimator for nonstationary time series [0.8702432681310399]
We focus on the recently proposed philosophy of the moving estimator, which maximizes the moving log-likelihood $F_t=\sum_{\tau<t}(1-\eta)^{t-\tau}\ln(\rho_\theta(x_\tau))$, evolving in time.
The Student's t-distribution, popular especially in economic applications, is here applied to log-returns of DJIA companies.
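As a sketch of how such a moving estimator can run online (schematic, not the paper's exact scheme; `eta` and the initialization are placeholders), one can maintain exponentially weighted moments and convert them to Student's t parameters by method of moments:

```python
import numpy as np

def moving_t_estimates(x, eta=0.05):
    """Exponential moving moments (weights (1-eta)^(t-tau), as in the
    moving log-likelihood above), converted to Student's t parameters:
    nu from excess kurtosis, scale from variance."""
    m1 = m2 = m4 = None
    params = []
    for xt in x:
        if m1 is None:                    # crude initialization from first point
            m1, m2, m4 = xt, 1.0, 3.0
        d = xt - m1
        m1 += eta * d                     # moving mean
        m2 = (1 - eta) * m2 + eta * d**2  # moving 2nd central moment
        m4 = (1 - eta) * m4 + eta * d**4  # moving 4th central moment
        kurt_excess = max(m4 / m2**2 - 3.0, 1e-3)
        nu = 4.0 + 6.0 / kurt_excess      # from excess kurtosis 6/(nu-4)
        sigma2 = m2 * (nu - 2.0) / nu     # from Var = sigma^2 * nu/(nu-2)
        params.append((m1, sigma2, nu))
    return params
```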
arXiv Detail & Related papers (2023-04-06T13:37:27Z) - Interacting Particle Langevin Algorithm for Maximum Marginal Likelihood Estimation [2.365116842280503]
We develop a class of interacting particle systems for implementing a maximum marginal likelihood estimation procedure.
In particular, we prove that the parameter marginal of the stationary measure of this diffusion has the form of a Gibbs measure.
Using a particular rescaling, we then prove geometric ergodicity of this system and bound the discretisation error in a manner that is uniform in time and does not increase with the number of particles.
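Schematically, such an interacting particle Langevin system can be discretized by Euler-Maruyama as below (a sketch under assumed gradient callables and step size, not the paper's exact scheme); note the $1/N$ scaling of the parameter noise, which concentrates the parameter around the maximizer as $N$ grows:

```python
import numpy as np

def ipla(grad_theta, grad_x, theta0, x0, gamma=1e-2, n_steps=5000, rng=None):
    """Euler-Maruyama sketch of an interacting particle Langevin system
    for maximum marginal likelihood estimation.
    grad_theta(theta, x): per-particle gradients w.r.t. theta, shape (N, dim_theta)
    grad_x(theta, x):     gradients w.r.t. the latent particles, shape (N, dim_x)"""
    rng = rng or np.random.default_rng(0)
    theta, x = np.asarray(theta0, float), np.asarray(x0, float)
    N = x.shape[0]
    for _ in range(n_steps):
        # parameter update: gradient averaged over particles + O(1/N) noise
        theta = theta + gamma * grad_theta(theta, x).mean(axis=0) \
                + np.sqrt(2 * gamma / N) * rng.standard_normal(theta.shape)
        # latent particles: independent Langevin updates at the current theta
        x = x + gamma * grad_x(theta, x) \
            + np.sqrt(2 * gamma) * rng.standard_normal(x.shape)
    return theta, x
```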
arXiv Detail & Related papers (2023-03-23T16:50:08Z) - Robust computation of optimal transport by $\beta$-potential
regularization [79.24513412588745]
Optimal transport (OT) has become a widely used tool in the machine learning field to measure the discrepancy between probability distributions.
We propose regularizing OT with the $\beta$-potential term associated with the so-called $\beta$-divergence.
We experimentally demonstrate that the transport matrix computed with our algorithm helps estimate a probability distribution robustly even in the presence of outliers.
arXiv Detail & Related papers (2022-12-26T18:37:28Z) - Scalable and adaptive variational Bayes methods for Hawkes processes [4.580983642743026]
We propose a novel sparsity-inducing procedure, and derive an adaptive mean-field variational algorithm for the popular sigmoid Hawkes processes.
Our algorithm is parallelisable and therefore computationally efficient in high-dimensional settings.
arXiv Detail & Related papers (2022-12-01T05:35:32Z) - Measurement-based deterministic imaginary time evolution [2.2653383133675966]
We introduce a method to perform imaginary time evolution in a controllable quantum system using measurements and conditional unitary operations.
We show that the algorithm converges only below a specified energy threshold and the complexity is estimated for some specific problem instances.
arXiv Detail & Related papers (2022-02-18T09:42:46Z) - Variational Transport: A Convergent Particle-BasedAlgorithm for Distributional Optimization [106.70006655990176]
Distributional optimization problems arise widely in machine learning and statistics.
We propose a novel particle-based algorithm, dubbed variational transport, which approximately performs Wasserstein gradient descent.
We prove that when the objective function satisfies a functional version of the Polyak-Łojasiewicz (PL) condition (Polyak, 1963) together with smoothness conditions, variational transport converges linearly.
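Schematically, the functional PL condition asks that $\frac{1}{2}\|\mathrm{grad}\,F(\mu)\|_{L^2(\mu)}^2 \ge \lambda\,(F(\mu)-\inf F)$ for some $\lambda>0$, mirroring the Euclidean inequality $\frac{1}{2}\|\nabla f(x)\|^2 \ge \lambda\,(f(x)-f^*)$; as in the Euclidean case, such a condition yields linear convergence without requiring convexity.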
arXiv Detail & Related papers (2020-12-21T18:33:13Z) - The Variational Method of Moments [65.91730154730905]
The conditional moment problem is a powerful formulation for describing structural causal parameters in terms of observables.
Motivated by a variational minimax reformulation of optimally weighted generalized method of moments (OWGMM), we define a very general class of estimators for the conditional moment problem.
We provide algorithms for valid statistical inference based on the same kind of variational reformulations.
arXiv Detail & Related papers (2020-12-17T07:21:06Z) - Pathwise Conditioning of Gaussian Processes [72.61885354624604]
Conventional approaches for simulating Gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations.
This distribution-centric characterization leads to generative strategies that scale cubically in the size of the desired random vector.
We show how this pathwise interpretation of conditioning gives rise to a general family of approximations that lend themselves to efficiently sampling Gaussian process posteriors.
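Concretely, the pathwise view is Matheron's rule: a posterior sample is a prior sample plus a data-driven correction. A minimal dense-kernel sketch follows (the RBF kernel, noise level, and helper names are illustrative; the paper's point is that the prior draw can instead be approximated cheaply, e.g. with random features):

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel matrix between point sets a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def pathwise_posterior_sample(x_train, y, x_test, noise=1e-2, rng=None):
    """Matheron's rule: draw one joint prior sample over train and test
    inputs, then correct it by the (noisy) residual at the data."""
    rng = rng or np.random.default_rng(0)
    n = len(x_train)
    X = np.vstack([x_train, x_test])
    K = rbf(X, X) + 1e-9 * np.eye(len(X))  # jitter for numerical stability
    f = np.linalg.cholesky(K) @ rng.standard_normal(len(X))  # joint prior draw
    f_train, f_test = f[:n], f[n:]
    eps = np.sqrt(noise) * rng.standard_normal(n)            # simulated obs noise
    Kxx = rbf(x_train, x_train) + noise * np.eye(n)
    Ksx = rbf(x_test, x_train)
    # posterior sample = prior sample + pathwise update toward the data
    return f_test + Ksx @ np.linalg.solve(Kxx, y - f_train - eps)
```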
arXiv Detail & Related papers (2020-11-08T17:09:37Z) - Adversarial Optimal Transport Through The Convolution Of Kernels With
Evolving Measures [3.1735221946062313]
A novel algorithm is proposed to solve the sample-based optimal transport problem.
Representing the test function as a Monte Carlo simulation of a distribution makes the algorithm robust to dimensionality.
arXiv Detail & Related papers (2020-06-07T19:42:50Z) - Path Sample-Analytic Gradient Estimators for Stochastic Binary Networks [78.76880041670904]
In neural networks with binary activations and/or binary weights, training by gradient descent is complicated.
We propose a new method for this estimation problem combining sampling and analytic approximation steps.
We experimentally show higher accuracy in gradient estimation and demonstrate a more stable and better performing training in deep convolutional models.
arXiv Detail & Related papers (2020-06-04T21:51:21Z) - Efficiently Sampling Functions from Gaussian Process Posteriors [76.94808614373609]
We propose an easy-to-use and general-purpose approach for fast posterior sampling.
We demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
arXiv Detail & Related papers (2020-02-21T14:03:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.