Learning interaction kernels in mean-field equations of 1st-order
systems of interacting particles
- URL: http://arxiv.org/abs/2010.15694v1
- Date: Thu, 29 Oct 2020 15:37:17 GMT
- Title: Learning interaction kernels in mean-field equations of 1st-order
systems of interacting particles
- Authors: Quanjun Lang, Fei Lu
- Abstract summary: We introduce a nonparametric algorithm to learn interaction kernels of mean-field equations for 1st-order systems of interacting particles.
By least squares with regularization, the algorithm learns the kernel on data-adaptive hypothesis spaces efficiently.
- Score: 1.776746672434207
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a nonparametric algorithm to learn interaction kernels of
mean-field equations for 1st-order systems of interacting particles. The data
consist of discrete space-time observations of the solution. By least squares
with regularization, the algorithm learns the kernel on data-adaptive
hypothesis spaces efficiently. A key ingredient is a probabilistic error
functional derived from the likelihood of the mean-field equation's diffusion
process. The estimator converges, in a reproducing kernel Hilbert space and an
L2 space under an identifiability condition, at a rate optimal in the sense
that it equals the numerical integrator's order. We demonstrate our algorithm
on three typical examples: the opinion dynamics with a piecewise linear kernel,
the granular media model with a quadratic kernel, and the aggregation-diffusion
with a repulsive-attractive kernel.
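A rough, hypothetical sketch of the kind of estimator the abstract describes: regularized least squares over a hypothesis space spanned by piecewise-linear basis functions. The 1D model, the hat-function basis, and the synthetic data below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def hat_basis(r, knots):
    """Piecewise-linear 'hat' basis functions evaluated at distances r."""
    h = knots[1] - knots[0]
    return np.clip(1.0 - np.abs(r[:, None] - knots[None, :]) / h, 0.0, None)

def fit_kernel(X, V, knots, lam=1e-8):
    """Regularized least squares for phi in the 1D first-order model
    dx_i/dt = (1/N) sum_j phi(|x_j - x_i|) * sign(x_j - x_i).
    X, V: (M, N) arrays of N particle positions/velocities at M times."""
    M, N = X.shape
    rows, rhs = [], []
    for m in range(M):
        for i in range(N):
            d = X[m] - X[m, i]                   # displacements x_j - x_i
            B = hat_basis(np.abs(d), knots)      # (N, K) basis values
            rows.append((np.sign(d)[:, None] * B).mean(axis=0))
            rhs.append(V[m, i])
    A, b = np.array(rows), np.array(rhs)
    # Tikhonov-regularized normal equations
    return np.linalg.solve(A.T @ A + lam * np.eye(len(knots)), A.T @ b)

# synthetic check with the true kernel phi(r) = r (linear attraction)
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 20))
V = np.stack([(x - x[:, None]).mean(axis=1) for x in X])  # exact velocities
knots = np.linspace(0.0, 2.0, 21)
c = fit_kernel(X, V, knots)  # c[k] approximates phi(knots[k]) = knots[k]
```

The coefficients `c` recover the kernel on the knots; in practice the number and placement of knots would be chosen adaptively from the observed distance distribution, and the velocities would be approximated from discrete-time observations.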
Related papers
- Inferring Kernel $ε$-Machines: Discovering Structure in Complex Systems [49.1574468325115]
We introduce causal diffusion components that encode the kernel causal-state estimates as a set of coordinates in a reduced dimension space.
We show how each component extracts predictive features from data and demonstrate their application on four examples.
arXiv Detail & Related papers (2024-10-01T21:14:06Z)
- Non-separable Covariance Kernels for Spatiotemporal Gaussian Processes based on a Hybrid Spectral Method and the Harmonic Oscillator [0.0]
We present a hybrid spectral approach for generating covariance kernels based on physical arguments.
We derive explicit relations for the covariance kernels in the three oscillator regimes (underdamping, critical damping, overdamping) and investigate their properties.
arXiv Detail & Related papers (2023-02-19T14:12:48Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Learning "best" kernels from data in Gaussian process regression. With application to aerodynamics [0.4588028371034406]
We introduce algorithms to select/design kernels in Gaussian process regression/kriging surrogate modeling techniques.
A first class of algorithms is kernel flow, which was introduced in a context of classification in machine learning.
A second class of algorithms is called spectral kernel ridge regression, and aims at selecting a "best" kernel such that the norm of the function to be approximated is minimal.
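A schematic stand-in for the norm-minimizing kernel selection described above: fit by generic kernel ridge regression and score each candidate kernel by the RKHS norm of its fit. The Gaussian kernel family and toy data are assumptions for illustration, not the paper's spectral algorithm.

```python
import numpy as np

def rbf_kernel(X, Y, ell):
    """Gaussian kernel matrix k(x, y) = exp(-(x - y)^2 / (2 ell^2))."""
    return np.exp(-0.5 * (X[:, None] - Y[None, :]) ** 2 / ell**2)

def krr_fit(X, y, ell, lam):
    """Kernel ridge regression coefficients alpha = (K + lam I)^{-1} y."""
    K = rbf_kernel(X, X, ell)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def rkhs_norm_sq(X, y, ell, lam):
    """Squared RKHS norm alpha^T K alpha of the fitted function."""
    K = rbf_kernel(X, X, ell)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return float(alpha @ K @ alpha)

X = np.linspace(0.0, 1.0, 30)
y = np.sin(2.0 * np.pi * X)

# score each candidate length scale by the norm of its fit; a "best"
# kernel in the sense above is one for which the target has small norm
scales = [0.05, 0.1, 0.2, 0.5]
norms = {ell: rkhs_norm_sq(X, y, ell, 1e-6) for ell in scales}
best = min(norms, key=norms.get)

alpha = krr_fit(X, y, best, 1e-6)
y_hat = rbf_kernel(X, X, best) @ alpha
```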
arXiv Detail & Related papers (2022-06-03T07:50:54Z)
- On the Benefits of Large Learning Rates for Kernel Methods [110.03020563291788]
We show that the benefit of large learning rates can be precisely characterized in the context of kernel methods.
We consider the minimization of a quadratic objective in a separable Hilbert space, and show that with early stopping, the choice of learning rate influences the spectral decomposition of the obtained solution.
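The claim above can be illustrated on a small quadratic (a toy sketch; the 3x3 Hessian and step sizes are illustrative assumptions): with a fixed iteration budget, a larger stable step size drives down the error along small-eigenvalue directions much faster, so early stopping at the same step count yields solutions with different spectral content.

```python
import numpy as np

rng = np.random.default_rng(1)
eigvals = np.array([10.0, 1.0, 0.1])          # spread spectrum
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthonormal eigenvectors
H = Q @ np.diag(eigvals) @ Q.T                 # SPD Hessian
w_star = rng.normal(size=3)                    # minimizer
b = H @ w_star

def gradient_descent(lr, steps):
    """Minimize f(w) = 0.5 w^T H w - b^T w from w = 0."""
    w = np.zeros(3)
    for _ in range(steps):
        w -= lr * (H @ w - b)
    return w

# the error along eigendirection k contracts by |1 - lr * eigvals[k]| per
# step, so with early stopping the learning rate shapes the solution
err_small_lr = np.abs(Q.T @ (gradient_descent(0.01, 50) - w_star))
err_large_lr = np.abs(Q.T @ (gradient_descent(0.19, 50) - w_star))  # lr < 2/10, stable
```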
arXiv Detail & Related papers (2022-02-28T13:01:04Z)
- Neural Fields as Learnable Kernels for 3D Reconstruction [101.54431372685018]
We present a novel method for reconstructing implicit 3D shapes based on a learned kernel ridge regression.
Our technique achieves state-of-the-art results when reconstructing 3D objects and large scenes from sparse oriented points.
arXiv Detail & Related papers (2021-11-26T18:59:04Z)
- Data-driven discovery of interacting particle systems using Gaussian processes [3.0938904602244346]
We study the data-driven discovery of distance-based interaction laws in second-order interacting particle systems.
We propose a learning approach that models the latent interaction kernel functions as Gaussian processes.
Numerical results on systems that exhibit different collective behaviors demonstrate efficient learning of our approach from scarce noisy trajectory data.
arXiv Detail & Related papers (2021-06-04T22:00:53Z)
- Learning interaction kernels in stochastic systems of interacting particles from multiple trajectories [13.3638879601361]
We consider systems of interacting particles or agents, with dynamics determined by an interaction kernel.
We introduce a nonparametric inference approach to this inverse problem, based on a regularized maximum likelihood estimator.
We show that a coercivity condition enables us to control the condition number of this problem and prove the consistency of our estimator.
arXiv Detail & Related papers (2020-07-30T01:28:06Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suitable for neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.