Learning Theory for Inferring Interaction Kernels in Second-Order
Interacting Agent Systems
- URL: http://arxiv.org/abs/2010.03729v1
- Date: Thu, 8 Oct 2020 02:07:53 GMT
- Title: Learning Theory for Inferring Interaction Kernels in Second-Order
Interacting Agent Systems
- Authors: Jason Miller, Sui Tang, Ming Zhong, Mauro Maggioni
- Abstract summary: We develop a complete learning theory which establishes strong consistency and optimal nonparametric min-max rates of convergence for the estimators.
The numerical algorithm presented to build the estimators is parallelizable, performs well on high-dimensional problems, and is demonstrated on complex dynamical systems.
- Score: 17.623937769189364
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling the complex interactions of systems of particles or agents is a
fundamental scientific and mathematical problem that is studied in diverse
fields, ranging from physics and biology, to economics and machine learning. In
this work, we describe a very general second-order, heterogeneous,
multivariable, interacting agent model, with an environment, that encompasses a
wide variety of known systems. We describe an inference framework that uses
nonparametric regression and approximation theory based techniques to
efficiently derive estimators of the interaction kernels which drive these
dynamical systems. We develop a complete learning theory which establishes
strong consistency and optimal nonparametric min-max rates of convergence for
the estimators, as well as provably accurate predicted trajectories. The
estimators exploit the structure of the equations in order to overcome the
curse of dimensionality and we describe a fundamental coercivity condition on
the inverse problem which ensures that the kernels can be learned and relates
to the minimal singular value of the learning matrix. The numerical algorithm
presented to build the estimators is parallelizable, performs well on
high-dimensional problems, and is demonstrated on complex dynamical systems.
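To make the setting concrete, below is a schematic single-type instance of such a second-order system together with the trajectory-based least-squares functional from which kernel estimators of this kind are typically obtained. This is an illustrative simplification: the paper's full model additionally allows heterogeneous agent types and environment variables, so this should not be read as the authors' exact formulation.

```latex
% Schematic single-type instance (illustration; the paper's model is more general):
% N agents with positions x_i(t) in R^d, masses m_i, a non-collective force F, and
% radially symmetric energy- and alignment-type kernels \phi^E, \phi^A.
m_i \ddot{x}_i = F(x_i, \dot{x}_i)
  + \frac{1}{N}\sum_{j=1}^{N}\Big[
      \phi^{E}\big(\|x_j - x_i\|\big)\,(x_j - x_i)
    + \phi^{A}\big(\|x_j - x_i\|\big)\,(\dot{x}_j - \dot{x}_i)\Big],
  \qquad i = 1,\dots,N.

% The unknown kernels enter linearly, so from M trajectories observed at times
% t_1,\dots,t_L an estimator is a minimizer of an empirical least-squares
% functional over a finite-dimensional hypothesis space \mathcal{H}, where
% rhs_\varphi denotes the right-hand side above with the trial kernels \varphi:
\widehat{\phi} \in \operatorname*{arg\,min}_{\varphi \in \mathcal{H}}
  \frac{1}{ML}\sum_{m=1}^{M}\sum_{l=1}^{L}\sum_{i=1}^{N}
  \big\| m_i \ddot{x}_i^{(m)}(t_l)
       - \mathrm{rhs}_{\varphi}\big(x^{(m)}(t_l),\dot{x}^{(m)}(t_l)\big) \big\|^2.
% This reduces to a linear least-squares problem; the coercivity condition
% mentioned above controls the smallest singular value of its learning matrix.
```

A minimal numerical sketch of the same idea for a first-order simplification follows; the bin-based hypothesis space, simulation parameters, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): recover an energy-type kernel phi for the
# first-order simplification  dx_i/dt = (1/N) * sum_j phi(|x_j - x_i|)(x_j - x_i)
# by linear least squares over a piecewise-constant hypothesis space.
import numpy as np

rng = np.random.default_rng(0)
N, d, L, dt = 8, 2, 200, 0.01            # agents, dimension, time steps, step size
phi_true = lambda r: np.exp(-r)          # ground-truth kernel (illustration only)

# Simulate trajectories with forward Euler, recording positions and velocities.
X = rng.normal(size=(N, d))
positions, velocities = [], []
for _ in range(L):
    diff = X[None, :, :] - X[:, None, :]             # diff[i, j] = x_j - x_i
    r = np.linalg.norm(diff, axis=-1)                 # pairwise distances
    V = (phi_true(r)[:, :, None] * diff).mean(axis=1)
    positions.append(X.copy())
    velocities.append(V.copy())
    X = X + dt * V

# Hypothesis space: indicators of K distance bins on [0, R]. The kernel enters the
# dynamics linearly, so each basis function contributes one column of the learning
# matrix A; coercivity corresponds to A being well conditioned.
K, R = 20, 4.0
edges = np.linspace(0.0, R, K + 1)
rows, targets = [], []
for Xt, Vt in zip(positions, velocities):
    diff = Xt[None, :, :] - Xt[:, None, :]
    r = np.linalg.norm(diff, axis=-1)
    bins = np.clip(np.digitize(r, edges) - 1, 0, K - 1)
    feats = np.zeros((N, d, K))
    for k in range(K):
        feats[:, :, k] = (((bins == k)[:, :, None]) * diff).mean(axis=1)
    rows.append(feats.reshape(N * d, K))
    targets.append(Vt.reshape(N * d))
A, b = np.vstack(rows), np.concatenate(targets)

coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)       # estimated kernel value per bin
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.round(coeffs[:5], 3))                        # compare with the true kernel
print(np.round(phi_true(centers[:5]), 3))
```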
Related papers
- Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for $L^2$, $L^\infty$, and risk metrics, with learning rates adaptive to coefficients' regularity.
Our method is available as an open-source Python library.
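A generic sketch of the underlying idea is given below, for an uncontrolled scalar SDE with a Nadaraya-Watson regression on increments; the paper's estimator handles controlled, multidimensional equations and comes with the quoted finite-sample guarantees, none of which this toy code reproduces.

```python
# Minimal sketch (generic, not the paper's method): estimate drift b(x) and
# diffusion sigma(x) of a scalar SDE  dX = b(X) dt + sigma(X) dW  from one
# discretely observed path via kernel regression on increments:
#   b(x)       ~ E[ X_{t+dt} - X_t     | X_t = x ] / dt
#   sigma(x)^2 ~ E[ (X_{t+dt} - X_t)^2 | X_t = x ] / dt
import numpy as np

rng = np.random.default_rng(1)
dt, n = 1e-3, 100_000
b_true = lambda x: -x                       # Ornstein-Uhlenbeck drift (illustration)
s_true = lambda x: 0.5 + 0.2 * np.tanh(x)   # state-dependent diffusion (illustration)

X = np.empty(n)
X[0] = 0.0
dW = rng.normal(scale=np.sqrt(dt), size=n - 1)
for t in range(n - 1):                      # Euler-Maruyama simulation
    X[t + 1] = X[t] + b_true(X[t]) * dt + s_true(X[t]) * dW[t]

dX = np.diff(X)
grid = np.linspace(-1.0, 1.0, 5)
w = np.exp(-0.5 * ((X[:-1, None] - grid[None, :]) / 0.05) ** 2)
w /= w.sum(axis=0)                          # Nadaraya-Watson weights per grid point
drift_hat = (w * dX[:, None]).sum(axis=0) / dt
sigma_hat = np.sqrt((w * dX[:, None] ** 2).sum(axis=0) / dt)
print(np.round(drift_hat, 2), np.round(b_true(grid), 2))
print(np.round(sigma_hat, 2), np.round(s_true(grid), 2))
```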
arXiv Detail & Related papers (2024-11-04T11:09:58Z) - Collective Relational Inference for learning heterogeneous interactions [8.215734914005845]
We propose a novel probabilistic method for relational inference, which possesses two distinctive characteristics compared to existing methods.
We evaluate the proposed methodology across several benchmark datasets and demonstrate that it outperforms existing methods in accurately inferring interaction types.
Overall, the proposed model is data-efficient and generalizable to large systems when trained on smaller ones.
arXiv Detail & Related papers (2023-04-30T19:45:04Z) - Identifiability and Asymptotics in Learning Homogeneous Linear ODE Systems from Discrete Observations [114.17826109037048]
Ordinary Differential Equations (ODEs) have recently gained a lot of attention in machine learning.
However, theoretical aspects, e.g., identifiability and properties of statistical estimation, are still obscure.
This paper derives a sufficient condition for the identifiability of homogeneous linear ODE systems from a sequence of equally-spaced error-free observations sampled from a single trajectory.
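As a worked illustration of this discrete-observation setting (our own summary, not an excerpt from the paper):

```latex
% For \dot{x}(t) = A x(t) observed without error at times k\Delta along one trajectory,
x(k\Delta) = e^{k\Delta A}\, x(0) = B^{k} x(0), \qquad B := e^{\Delta A},
% so the data determine at most B (and do determine it when the Krylov vectors
% x(0), Bx(0), \dots, B^{n-1}x(0) span \mathbb{R}^n). Recovering A additionally
% requires the matrix logarithm of B to be unique, e.g. when the eigenvalues of A
% have imaginary parts in (-\pi/\Delta, \pi/\Delta); a sufficient identifiability
% condition must rule out the ambiguities arising otherwise.
```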
arXiv Detail & Related papers (2022-10-12T06:46:38Z) - Learning Interaction Variables and Kernels from Observations of
Agent-Based Systems [14.240266845551488]
We propose a learning technique that, given observations of states and velocities along trajectories of agents, yields both the variables upon which the interaction kernel depends and the interaction kernel itself.
This yields an effective dimension reduction which avoids the curse of dimensionality from the high-dimensional observation data.
We demonstrate the learning capability of our method on a variety of first-order interacting systems.
arXiv Detail & Related papers (2022-08-04T16:31:01Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Fractal Structure and Generalization Properties of Stochastic
Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded based on the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - Data-driven discovery of interacting particle systems using Gaussian
processes [3.0938904602244346]
We study the data-driven discovery of distance-based interaction laws in second-order interacting particle systems.
We propose a learning approach that models the latent interaction kernel functions as Gaussian processes.
Numerical results on systems that exhibit different collective behaviors demonstrate that our approach learns efficiently from scarce, noisy trajectory data.
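A minimal sketch of placing a Gaussian-process prior on an interaction kernel is shown below, assuming (unlike the paper, where the kernel is latent and observed only through the trajectories) that noisy point evaluations of the kernel are available; all names and hyperparameters are illustrative.

```python
# Minimal sketch: GP regression for an interaction kernel phi(r), assuming direct
# noisy samples of phi. (In the paper the kernel is latent and inferred through
# the dynamics; this only illustrates the GP prior placed on the kernel.)
import numpy as np

rng = np.random.default_rng(2)
phi_true = lambda r: np.exp(-r) * np.cos(2 * r)      # illustration only
r_train = rng.uniform(0.1, 3.0, size=30)
y = phi_true(r_train) + 0.05 * rng.normal(size=30)   # scarce noisy samples

def rbf(a, b, ell=0.5, sf=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

K = rbf(r_train, r_train) + 0.05**2 * np.eye(r_train.size)   # noisy-data covariance
r_test = np.linspace(0.1, 3.0, 5)
Ks = rbf(r_test, r_train)
mean = Ks @ np.linalg.solve(K, y)                             # posterior mean of phi
cov = rbf(r_test, r_test) - Ks @ np.linalg.solve(K, Ks.T)
print(np.round(mean, 2), np.round(phi_true(r_test), 2))
print(np.round(np.sqrt(np.clip(np.diag(cov), 0.0, None)), 3)) # posterior std
```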
arXiv Detail & Related papers (2021-06-04T22:00:53Z) - Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantees.
arXiv Detail & Related papers (2021-03-05T04:42:32Z) - Modern Koopman Theory for Dynamical Systems [2.5889588665122725]
We provide an overview of modern Koopman operator theory, describing recent theoretical and algorithmic developments.
We also discuss key advances and challenges in the rapidly growing field of machine learning.
arXiv Detail & Related papers (2021-02-24T06:18:16Z) - Learning Interaction Kernels for Agent Systems on Riemannian Manifolds [9.588842746998486]
We generalize the theory and algorithms introduced in [1] for the Euclidean setting to agent systems on Riemannian manifolds.
We show that our estimators converge at a rate that is independent of the dimension of the manifold.
We demonstrate highly accurate performance of the learning algorithm on three classical first-order interacting systems.
arXiv Detail & Related papers (2021-01-30T22:15:50Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.