Efficient Online Learning in Interacting Particle Systems
- URL: http://arxiv.org/abs/2602.20875v1
- Date: Tue, 24 Feb 2026 13:19:07 GMT
- Title: Efficient Online Learning in Interacting Particle Systems
- Authors: Louis Sharrock, Nikolas Kantas, Grigorios A. Pavliotis
- Abstract summary: We introduce a new method for online parameter estimation in interacting particle systems. We rigorously establish convergence of our method to the stationary points of the log-likelihood of the interacting particle system. Our numerical results corroborate our theoretical results, and also suggest that our estimator is effective even in cases where the assumptions required for our theoretical analysis do not hold.
- Score: 6.894787079804484
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a new method for online parameter estimation in stochastic interacting particle systems, based on continuous observation of a small number of particles from the system. Our method recursively updates the model parameters using a stochastic approximation of the gradient of the asymptotic log likelihood, which is computed using the continuous stream of observations. Under suitable assumptions, we rigorously establish convergence of our method to the stationary points of the asymptotic log-likelihood of the interacting particle system. We consider asymptotics both in the limit as the time horizon $t\rightarrow\infty$, for a fixed and finite number of particles, and in the joint limit as the number of particles $N\rightarrow\infty$ and the time horizon $t\rightarrow\infty$. Under additional assumptions on the asymptotic log-likelihood, we also establish an $\mathrm{L}^2$ convergence rate and a central limit theorem. Finally, we present several numerical examples of practical interest, including a model for systemic risk, a model of interacting FitzHugh--Nagumo neurons, and a Cucker--Smale flocking model. Our numerical results corroborate our theoretical results, and also suggest that our estimator is effective even in cases where the assumptions required for our theoretical analysis do not hold.
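For concreteness, here is a minimal sketch of the kind of recursive estimator the abstract describes: particles follow a mean-field SDE, and the parameter is updated online by stochastic gradient ascent on the Girsanov log-likelihood of the observed increments. The linear drift $b(\theta, x, \mu) = -\theta(x - \bar{x})$, the step-size schedule, and all constants are illustrative assumptions rather than the paper's model, and the sketch observes all $N$ particles, whereas the method only requires a small subset.
```python
import numpy as np

# Euler-Maruyama sketch of online gradient ascent on the log-likelihood of
# an interacting particle system. The linear mean-field drift
# b(theta, x, mu) = -theta * (x - mean(x)) and all constants are assumed
# for illustration; they are not the paper's model.

rng = np.random.default_rng(0)
N, dt, T = 50, 1e-2, 200.0           # particles, step size, time horizon
sigma = 0.5                          # known diffusion coefficient (assumed)
theta_true, theta = 1.5, 0.2         # data-generating and initial parameter

x = rng.normal(size=N)               # particle states
for k in range(int(T / dt)):
    mean = x.mean()                  # empirical measure enters via its mean
    dW = rng.normal(scale=np.sqrt(dt), size=N)
    dx = -theta_true * (x - mean) * dt + sigma * dW   # observed increments

    # Stochastic gradient of the Girsanov log-likelihood increment:
    # d(log L) = sigma^{-2} * (db/dtheta) * (dx - b(theta) dt)
    b = -theta * (x - mean)
    db_dtheta = -(x - mean)
    grad = np.sum(db_dtheta * (dx - b * dt)) / sigma**2

    gamma = 1.0 / (1.0 + 0.01 * k) ** 0.75            # decaying step size
    theta += gamma * grad / N                          # recursive update
    x += dx

print(f"estimated theta ~ {theta:.3f} (true value {theta_true})")
```
With the decaying step size, $\theta$ settles near a stationary point of the asymptotic log-likelihood, mirroring the fixed-$N$, $t\rightarrow\infty$ regime of the analysis.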
Related papers
- Generative Modeling with Continuous Flows: Sample Complexity of Flow Matching [60.37045080890305]
We provide the first analysis of the sample complexity for flow-matching based generative models. We decompose the velocity field estimation error into neural-network approximation error, statistical error due to the finite sample size, and optimization error due to the finite number of optimization steps for estimating the velocity field.
arXiv Detail & Related papers (2025-12-01T05:14:25Z)
- A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparameterized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by the magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
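For orientation, here is a bare-bones finite-dimensional gradient descent-ascent loop on a toy saddle objective; the paper analyzes the mean-field limit of such dynamics for two-layer networks, and the objective and step size below are illustrative assumptions.
```python
# Generic gradient descent-ascent (GDA) on the toy saddle objective
# f(u, v) = u*v + 0.1*u**2 - 0.1*v**2; minimize over u, maximize over v.
# The objective and the step size are illustrative assumptions.

u, v = 1.0, -1.0
eta = 1e-2                      # shared step size (one-timescale GDA)
for _ in range(10_000):
    grad_u = v + 0.2 * u        # df/du
    grad_v = u - 0.2 * v        # df/dv
    u -= eta * grad_u           # descent step on u
    v += eta * grad_v           # ascent step on v
print(u, v)                     # spirals into the saddle point (0, 0)
```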
arXiv Detail & Related papers (2024-04-18T16:46:08Z)
- Convergence analysis of controlled particle systems arising in deep learning: from finite to infinite sample size [1.4325734372991794]
We study the limiting behavior of the associated sampled optimal control problems as the sample size grows to infinity. The limiting objects can be identified with suitable functions defined on the Wasserstein space of Borel probability measures.
arXiv Detail & Related papers (2024-04-08T04:22:55Z)
- Interacting Particle Langevin Algorithm for Maximum Marginal Likelihood Estimation [2.365116842280503]
We develop a class of interacting particle systems for implementing a maximum marginal likelihood estimation procedure. In particular, we prove that the parameter marginal of the stationary measure of this diffusion has the form of a Gibbs measure. Using a particular rescaling, we then prove geometric ergodicity of this system and bound the discretisation error in a manner that is uniform in time and does not increase with the number of particles.
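A minimal sketch of an interacting particle Langevin system of this type, discretized by Euler-Maruyama, is given below; the toy latent-variable model $y_j \sim N(\theta + z_j, 1)$, $z_j \sim N(0, 1)$ and all constants are illustrative assumptions rather than the paper's setting.
```python
import numpy as np

# Interacting particle Langevin sketch for maximum marginal likelihood:
# N particles carry latent vectors Z_i, and the parameter theta follows a
# Langevin step driven by the particle-averaged gradient. The toy model
# y_j ~ N(theta + z_j, 1), z_j ~ N(0, 1) is an illustrative assumption.

rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, scale=np.sqrt(2.0), size=100)  # marginally y ~ N(theta*, 2)

N, gamma, n_iter = 64, 5e-3, 4000        # particles, step size, iterations
theta = 0.0
Z = rng.normal(size=(N, y.size))         # one latent vector per particle

for _ in range(n_iter):
    resid = y[None, :] - theta - Z                     # (N, n_data)
    # Langevin step for the latents: grad_z log p(theta, z) = resid - z
    Z += gamma * (resid - Z) + np.sqrt(2 * gamma) * rng.normal(size=Z.shape)
    # Parameter step averages grad_theta log p over the particles, with
    # noise scaled by 1/N so theta concentrates near the MMLE
    grad_theta = resid.sum(axis=1).mean()
    theta += gamma * grad_theta + np.sqrt(2 * gamma / N) * rng.normal()

print(f"estimated theta ~ {theta:.3f} (MMLE ~ {y.mean():.3f})")
```
The $1/N$ scaling of the parameter noise is what makes the stationary $\theta$-marginal concentrate near the maximum marginal likelihood estimator as the number of particles grows.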
arXiv Detail & Related papers (2023-03-23T16:50:08Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers. We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles. Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
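As context, a probabilistic representation of this kind is of Feynman-Kac type; the toy sketch below estimates the solution of the 1-D heat equation by averaging an ensemble of Brownian particles, the sort of target an unsupervised neural solver can regress against. The equation and constants are illustrative assumptions, not the paper's training pipeline.
```python
import numpy as np

# Feynman-Kac toy example: for the 1-D heat equation u_t = 0.5 * u_xx with
# initial data u0, the solution satisfies u(t, x) = E[u0(x + W_t)], where
# W_t is Brownian motion. The equation and constants are assumptions.

rng = np.random.default_rng(2)

def u0(x):
    return np.sin(x)                     # initial condition

def mc_solution(t, x, n_paths=100_000):
    # Average the initial condition over Brownian endpoints at time t
    w = rng.normal(scale=np.sqrt(t), size=n_paths)
    return u0(x + w).mean()

t, x = 0.5, 1.0
exact = np.exp(-0.5 * t) * np.sin(x)     # closed-form solution
print(mc_solution(t, x), exact)          # Monte Carlo target vs. truth
```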
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z)
- A blob method for inhomogeneous diffusion with applications to multi-agent control and sampling [0.6562256987706128]
We develop a deterministic particle method for the weighted porous medium equation (WPME) and prove its convergence on bounded time intervals.
Our method has natural applications to multi-agent coverage algorithms and sampling probability measures.
arXiv Detail & Related papers (2022-02-25T19:49:05Z)
- Nonconvex Stochastic Scaled-Gradient Descent and Generalized Eigenvector Problems [98.34292831923335]
Motivated by the problem of online correlation analysis, we propose the Stochastic Scaled-Gradient Descent (SSD) algorithm.
We bring these ideas together in an application to online correlation analysis, deriving for the first time an optimal one-time-scale algorithm with an explicit rate of local convergence to normality.
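The sketch below runs a generic one-timescale stochastic iteration for the top generalized eigenvector of a pair $(A, B)$ from streaming samples, ascending the Rayleigh quotient; it is a stand-in for, not a reproduction of, the paper's SSD recursion, and the matrices, step sizes, and moment estimates are assumptions.
```python
import numpy as np

# Generic one-timescale stochastic ascent of the Rayleigh quotient
# v'Av / v'Bv from streaming samples, for the top generalized eigenvector
# of (A, B). This is a stand-in illustration, not the paper's SSD
# recursion; matrices and step sizes are assumptions.

rng = np.random.default_rng(3)
d = 5
C = rng.normal(size=(d, d)); A = C @ C.T                # A = E[x x^T]
D = rng.normal(size=(d, d)); B = D @ D.T + d * np.eye(d)
Lb = np.linalg.cholesky(B)                              # B = E[y y^T]

v = rng.normal(size=d)
a_ema = b_ema = 1.0                                     # running moment estimates
gamma, beta = 1e-3, 0.01
for _ in range(100_000):
    x = C @ rng.normal(size=d)                          # sample with E[x x^T] = A
    y = Lb @ rng.normal(size=d)                         # sample with E[y y^T] = B
    a_ema = (1 - beta) * a_ema + beta * (x @ v) ** 2
    b_ema = (1 - beta) * b_ema + beta * (y @ v) ** 2
    rho = a_ema / b_ema                                 # Rayleigh-quotient estimate
    v += gamma * ((x @ v) * x - rho * (y @ v) * y)      # stochastic ascent direction
    v /= np.linalg.norm(v)                              # keep the iterate bounded

top = np.linalg.eigvals(np.linalg.solve(B, A)).real.max()
print(v @ A @ v / (v @ B @ v), top)                     # should approximately match
```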
arXiv Detail & Related papers (2021-12-29T18:46:52Z)
- Divergence Frontiers for Generative Models: Sample Complexity, Quantization Level, and Frontier Integral [58.434753643798224]
Divergence frontiers have been proposed as an evaluation framework for generative models.
We establish non-asymptotic bounds on the sample complexity of the plug-in estimator of divergence frontiers.
We also augment the divergence frontier framework by investigating the statistical performance of smoothed distribution estimators.
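A plug-in computation of a divergence frontier between two samples might look like the sketch below: quantize both samples onto a shared set of bins, then trace $(\mathrm{KL}(P\|R), \mathrm{KL}(Q\|R))$ along mixtures $R = \lambda P + (1-\lambda) Q$. The quantile binning, add-one smoothing, and toy Gaussians are illustrative assumptions.
```python
import numpy as np

# Plug-in divergence frontier sketch: quantize two samples into k shared
# bins and trace (KL(P||R), KL(Q||R)) along mixtures R = lam*P + (1-lam)*Q.
# Quantile binning and the toy Gaussians are illustrative assumptions.

rng = np.random.default_rng(5)
p_samples = rng.normal(0.0, 1.0, 5000)        # "real" distribution
q_samples = rng.normal(0.5, 1.2, 5000)        # "model" distribution

k = 20                                        # quantization level
edges = np.quantile(np.concatenate([p_samples, q_samples]),
                    np.linspace(0, 1, k + 1))
P, _ = np.histogram(p_samples, bins=edges); P = (P + 1) / (P + 1).sum()
Q, _ = np.histogram(q_samples, bins=edges); Q = (Q + 1) / (Q + 1).sum()

def kl(a, b):
    return float(np.sum(a * np.log(a / b)))

frontier = [(kl(P, lam * P + (1 - lam) * Q), kl(Q, lam * P + (1 - lam) * Q))
            for lam in np.linspace(0.01, 0.99, 25)]
print(frontier[0], frontier[-1])              # endpoints of the frontier curve
```
The add-one smoothing of the bin weights plays the role of the smoothed distribution estimators mentioned in the summary.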
arXiv Detail & Related papers (2021-06-15T06:26:25Z)
- Learning interaction kernels in stochastic systems of interacting particles from multiple trajectories [13.3638879601361]
We consider systems of interacting particles or agents, with dynamics determined by an interaction kernel.
We introduce a nonparametric inference approach to this inverse problem, based on a regularized maximum likelihood estimator.
We show that a coercivity condition enables us to control the condition number of this problem and prove the consistency of our estimator.
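In the same spirit, a least-squares sketch of this inverse problem is given below: expand the kernel $\phi$ in a piecewise-constant basis and regress observed particle increments on the drifts each basis element induces, with a Tikhonov term standing in for the paper's regularized MLE. The model, basis, and constants are assumptions.
```python
import numpy as np

# Least-squares sketch of nonparametric interaction-kernel estimation from
# particle trajectories: expand phi in a piecewise-constant basis and
# regress increments on basis-induced drifts. A Tikhonov term stands in
# for the paper's regularized MLE; model and constants are assumptions.

rng = np.random.default_rng(4)
N, dt, steps, sigma = 20, 1e-2, 500, 0.1
phi_true = lambda r: np.exp(-r)               # data-generating kernel (assumed)

def pairwise(x):
    diff = x[None, :] - x[:, None]            # diff[i, j] = x_j - x_i
    return diff, np.abs(diff)

# simulate one 1-D trajectory: dx_i = (1/N) sum_j phi(r_ij) diff_ij dt + noise
X = [rng.normal(size=N)]
for _ in range(steps):
    diff, r = pairwise(X[-1])
    drift = (phi_true(r) * diff).mean(axis=1)
    X.append(X[-1] + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=N))
X = np.asarray(X)

# regress increments on the drift each basis bin would induce
M, r_max = 16, 4.0
edges = np.linspace(0.0, r_max, M + 1)
rows, targets = [], []
for k in range(steps):
    diff, r = pairwise(X[k])
    feats = np.stack(
        [(((edges[m] <= r) & (r < edges[m + 1])) * diff).mean(axis=1)
         for m in range(M)], axis=1)          # (N, M) design block
    rows.append(feats)
    targets.append((X[k + 1] - X[k]) / dt)
A = np.concatenate(rows)
b = np.concatenate(targets)
lam = 1e-3                                    # Tikhonov regularization
coef = np.linalg.solve(A.T @ A + lam * np.eye(M), A.T @ b)
print(coef[:4])                               # estimated phi on the first bins
print(phi_true(0.5 * (edges[:4] + edges[1:5])))  # bin-midpoint truth
```
Bins that the trajectories rarely visit give nearly zero design columns, which is exactly the ill-conditioning the coercivity condition controls; the ridge term compensates for it here.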
arXiv Detail & Related papers (2020-07-30T01:28:06Z)
- Quantitative Propagation of Chaos for SGD in Wide Neural Networks [39.35545193410871]
In this paper, we investigate the limiting behavior of a continuous-time counterpart of stochastic gradient descent (SGD).
We show 'propagation of chaos' for the particle system defined by this continuous-time dynamics under different scenarios.
We identify two regimes under which different mean-field limits are obtained, one of them corresponding to an implicitly regularized version of the minimization problem at hand.
arXiv Detail & Related papers (2020-07-13T12:55:21Z)