Learning interaction kernels in stochastic systems of interacting
particles from multiple trajectories
- URL: http://arxiv.org/abs/2007.15174v1
- Date: Thu, 30 Jul 2020 01:28:06 GMT
- Title: Learning interaction kernels in stochastic systems of interacting
particles from multiple trajectories
- Authors: Fei Lu, Mauro Maggioni, Sui Tang
- Abstract summary: We consider systems of interacting particles or agents, with dynamics determined by an interaction kernel.
We introduce a nonparametric inference approach to this inverse problem, based on a regularized maximum likelihood estimator.
We show that a coercivity condition enables us to control the condition number of this problem and prove the consistency of our estimator.
- Score: 13.3638879601361
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider stochastic systems of interacting particles or agents, with
dynamics determined by an interaction kernel which only depends on pairwise
distances. We study the problem of inferring this interaction kernel from
observations of the positions of the particles, in either continuous or
discrete time, along multiple independent trajectories. We introduce a
nonparametric inference approach to this inverse problem, based on a
regularized maximum likelihood estimator constrained to suitable hypothesis
spaces adaptive to data. We show that a coercivity condition enables us to
control the condition number of this problem and prove the consistency of our
estimator, and that in fact it converges at a near-optimal learning rate, equal
to the min-max rate of $1$-dimensional non-parametric regression. In
particular, this rate is independent of the dimension of the state space, which
is typically very high. We also analyze the discretization error in the case
of discrete-time observations, showing that it is of order $1/2$ in the time
gap between observations. When the time gap is large, this term dominates the
sampling error and the approximation error, preventing convergence of the
estimator. Finally, we exhibit an efficient parallel algorithm to construct the
estimator from data, and we demonstrate the effectiveness of our algorithm with
numerical tests on prototype systems including stochastic opinion dynamics and
a Lennard-Jones model.
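As a concrete illustration of the setting, the sketch below (not the authors' implementation; the kernel, basis, and all parameter values are illustrative assumptions) simulates a first-order stochastic system dX_i = (1/N) sum_j phi(|X_j - X_i|)(X_j - X_i) dt + sigma dW_i with an opinion-dynamics-style influence kernel phi, and then recovers phi by regularized least squares on a data-adaptive hypothesis space of piecewise-linear basis functions, the discrete-time analogue of the regularized maximum likelihood estimator described above.

# Minimal sketch, not the authors' code: simulate a first-order stochastic
# interacting-particle system and recover its radial interaction kernel by
# regularized least squares on a piecewise-linear hypothesis space.
# The kernel, basis, and all numerical parameters below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def true_kernel(r):
    # Illustrative opinion-dynamics-style influence function (assumed).
    return 1.0 / (1.0 + r ** 2)

def simulate(N=20, d=2, M=40, T=0.5, dt=2e-3, sigma=0.1):
    # Euler-Maruyama simulation of M independent trajectories of
    #   dX_i = (1/N) * sum_j phi(|X_j - X_i|) (X_j - X_i) dt + sigma dW_i.
    steps = int(round(T / dt))
    trajs = np.empty((M, steps + 1, N, d))
    for m in range(M):
        X = rng.normal(size=(N, d))
        trajs[m, 0] = X
        for k in range(steps):
            diff = X[None, :, :] - X[:, None, :]       # diff[i, j] = X_j - X_i
            r = np.linalg.norm(diff, axis=-1)          # pairwise distances
            drift = (true_kernel(r)[..., None] * diff).mean(axis=1)
            X = X + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=(N, d))
            trajs[m, k + 1] = X
    return trajs, dt

def hat_basis(r, knots):
    # Piecewise-linear "hat" functions on equispaced knots, evaluated at r.
    h = knots[1] - knots[0]
    B = np.zeros(r.shape + (len(knots),))
    for p, c in enumerate(knots):
        B[..., p] = np.clip(1.0 - np.abs(r - c) / h, 0.0, None)
    return B

def estimate_kernel(trajs, dt, n_basis=16, r_max=4.0, reg=1e-6):
    # Regularized least squares for the basis coefficients of the kernel:
    # the discrete-time analogue of the (drift) maximum likelihood estimator.
    M, steps1, N, d = trajs.shape
    knots = np.linspace(0.0, r_max, n_basis)
    A = np.zeros((n_basis, n_basis))
    b = np.zeros(n_basis)
    for m in range(M):
        for k in range(steps1 - 1):
            X, dX = trajs[m, k], trajs[m, k + 1] - trajs[m, k]
            diff = X[None, :, :] - X[:, None, :]
            r = np.linalg.norm(diff, axis=-1)
            B = hat_basis(r, knots)                    # shape (N, N, n_basis)
            # Regression features: G[i, :, p] = (1/N) sum_j B_p(r_ij) (X_j - X_i)
            G = np.einsum('ijp,ijd->idp', B, diff) / N
            A += np.einsum('idp,idq->pq', G, G) * dt   # normal-equation matrix
            b += np.einsum('idp,id->p', G, dX)         # right-hand side
    coeffs = np.linalg.solve(A + reg * np.eye(n_basis), b)
    return knots, coeffs

trajs, dt = simulate()
knots, coeffs = estimate_kernel(trajs, dt)
r_grid = np.linspace(0.2, 3.0, 200)
phi_hat = hat_basis(r_grid, knots) @ coeffs
err = np.max(np.abs(phi_hat - true_kernel(r_grid)))
print(f"sup-norm discrepancy of the estimated kernel on [0.2, 3.0]: {err:.3f}")

Note that the regression above is one-dimensional in the pairwise distance r, which is why the learning rate quoted in the abstract is independent of the (typically high) dimension of the state space; with discrete-time data, the Euler increments contribute the additional discretization error of order $1/2$ in the time gap discussed above.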
Related papers
- Convergence of Score-Based Discrete Diffusion Models: A Discrete-Time Analysis [56.442307356162864]
We study the theoretical aspects of score-based discrete diffusion models under the Continuous Time Markov Chain (CTMC) framework.
We introduce a discrete-time sampling algorithm in the general state space $[S]^d$ that utilizes score estimators at predefined time points.
Our convergence analysis employs a Girsanov-based method and establishes key properties of the discrete score function.
arXiv Detail & Related papers (2024-10-03T09:07:13Z)
- Kinetic Interacting Particle Langevin Monte Carlo [0.0]
This paper introduces and analyses interacting underdamped Langevin algorithms, for statistical inference in latent variable models.
We propose a diffusion process that evolves jointly in the space of parameters and latent variables.
We provide two explicit discretisations of this diffusion as practical algorithms to estimate parameters of statistical models.
arXiv Detail & Related papers (2024-07-08T09:52:46Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Interacting Particle Langevin Algorithm for Maximum Marginal Likelihood
Estimation [2.53740603524637]
We develop a class of interacting particle systems for implementing a maximum marginal likelihood estimation procedure.
In particular, we prove that the parameter marginal of the stationary measure of this diffusion has the form of a Gibbs measure.
Using a particular rescaling, we then prove geometric ergodicity of this system and bound the discretisation error in a manner that is uniform in time and does not increase with the number of particles.
arXiv Detail & Related papers (2023-03-23T16:50:08Z)
- Particle-Based Score Estimation for State Space Model Learning in
Autonomous Driving [62.053071723903834]
Multi-object state estimation is a fundamental problem for robotic applications.
We consider learning maximum-likelihood parameters using particle methods.
We apply our method to real data collected from autonomous vehicles.
arXiv Detail & Related papers (2022-12-14T01:21:05Z)
- Statistical Efficiency of Score Matching: The View from Isoperimetry [96.65637602827942]
We show a tight connection between statistical efficiency of score matching and the isoperimetric properties of the distribution being estimated.
We formalize these results both in the sample regime and in the finite regime.
arXiv Detail & Related papers (2022-10-03T06:09:01Z)
- Data-driven discovery of interacting particle systems using Gaussian
processes [3.0938904602244346]
We study the data-driven discovery of distance-based interaction laws in second-order interacting particle systems.
We propose a learning approach that models the latent interaction kernel functions as Gaussian processes.
Numerical results on systems that exhibit different collective behaviors demonstrate efficient learning of our approach from scarce noisy trajectory data.
arXiv Detail & Related papers (2021-06-04T22:00:53Z)
- The Connection between Discrete- and Continuous-Time Descriptions of
Gaussian Continuous Processes [60.35125735474386]
We show that discretizations yielding consistent estimators have the property of invariance under coarse-graining.
This result explains why combining differencing schemes for derivative reconstruction with local-in-time inference approaches does not work for time-series analysis of second- or higher-order differential equations.
arXiv Detail & Related papers (2021-01-16T17:11:02Z)
- Learning interaction kernels in mean-field equations of 1st-order
systems of interacting particles [1.776746672434207]
We introduce a nonparametric algorithm to learn interaction kernels of mean-field equations for 1st-order systems of interacting particles.
By least squares with regularization, the algorithm learns the kernel efficiently on data-adaptive hypothesis spaces.
arXiv Detail & Related papers (2020-10-29T15:37:17Z)
- Generative Ensemble Regression: Learning Particle Dynamics from
Observations of Ensembles with Physics-Informed Deep Generative Models [27.623119767592385]
We propose a new method for inferring the governing stochastic ordinary differential equations (SODEs) by observing particle ensembles at discrete and sparse time instants.
Particle coordinates at a single time instant, possibly noisy or truncated, are recorded in each snapshot but are unpaired across the snapshots.
By training a physics-informed generative model that generates "fake" sample paths, we aim to fit the observed particle ensemble distributions with a curve in the probability measure space.
arXiv Detail & Related papers (2020-08-05T03:06:40Z)
- Machine learning for causal inference: on the use of cross-fit
estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators for the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
arXiv Detail & Related papers (2020-04-21T23:09:55Z)