On the coercivity condition in the learning of interacting particle
systems
- URL: http://arxiv.org/abs/2011.10480v2
- Date: Thu, 21 Oct 2021 05:51:24 GMT
- Title: On the coercivity condition in the learning of interacting particle
systems
- Authors: Zhongyang Li and Fei Lu
- Abstract summary: The coercivity condition is equivalent to the strict positive definiteness of an integral kernel arising in the learning problem.
We show that for a class of interaction functions for which the system is ergodic, the integral kernel is strictly positive definite, and hence the coercivity condition holds.
- Score: 7.089219223012485
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the learning of systems of interacting particles or agents, the
coercivity condition ensures identifiability of the interaction functions,
providing the foundation for learning by nonparametric regression. The
coercivity condition is equivalent to the strict positive definiteness of an
integral kernel arising in the learning problem. We show that for a class of
interaction functions for which the system is ergodic, the integral kernel is
strictly positive definite, and hence the coercivity condition holds.
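For context, here is a sketch of the setting, following the standard formulation in this line of work on learning interaction kernels; the notation is illustrative and may differ from the paper's. For a first-order system of $N$ particles,
\[ \dot{X}_i = \frac{1}{N} \sum_{j=1}^{N} \phi(\|X_j - X_i\|)\,(X_j - X_i), \qquad i = 1, \dots, N, \]
the coercivity condition asks for a constant $c_{\mathcal{H}} > 0$ such that, for every $\phi$ in the hypothesis space $\mathcal{H}$,
\[ \mathbb{E}\Big[ \Big\| \frac{1}{N} \sum_{j=2}^{N} \phi(\|X_j - X_1\|)\,(X_j - X_1) \Big\|^2 \Big] \;\ge\; c_{\mathcal{H}}\, \|\phi(\cdot)\,\cdot\|_{L^2(\rho)}^2, \]
where $\rho$ is the pairwise-distance distribution induced by the dynamics. Strict positive definiteness of the associated integral kernel rules out a nonzero $\phi$ for which the left-hand side vanishes, which is precisely what identifiability of $\phi$ requires.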
Related papers
- Evolution of multi-qubit correlations driven by mutual interactions [49.1574468325115]
We analyze the evolution of the correlation tensor elements of quantum systems composed of spin-$\frac{1}{2}$ particles.
We show how a strong external field can act as a stabilizing factor with respect to certain correlation characteristics.
arXiv Detail & Related papers (2025-07-01T11:45:08Z) - On the continuity and smoothness of the value function in reinforcement learning and optimal control [1.534667887016089]
We show that the value function is always Hölder continuous under relatively weak assumptions on the underlying system.
We also show that non-differentiable value functions can be made differentiable by slightly "disturbing" the system.
arXiv Detail & Related papers (2024-03-21T14:39:28Z) - Learning Collective Behaviors from Observation [13.278752237440022]
We present a comprehensive examination of learning methodologies employed for the structural identification of dynamical systems.
Our approach not only ensures theoretical convergence guarantees but also exhibits computational efficiency when handling high-dimensional observational data.
arXiv Detail & Related papers (2023-11-01T22:02:08Z) - Denoising and Extension of Response Functions in the Time Domain [48.52478746418526]
Response functions describe how a quantum system responds to an external perturbation.
In equilibrium and steady-state systems, they correspond to a positive spectral function in the frequency domain.
arXiv Detail & Related papers (2023-09-05T20:26:03Z) - Entanglement statistics of randomly interacting spins [62.997667081978825]
Entanglement depends on the underlying topology of the interaction among the qubits.
We investigate the entanglement in the ground state of systems comprising two and three qubits with random interactions.
arXiv Detail & Related papers (2023-07-18T23:58:32Z) - Identifiability and Asymptotics in Learning Homogeneous Linear ODE Systems from Discrete Observations [114.17826109037048]
Ordinary Differential Equations (ODEs) have recently gained a lot of attention in machine learning.
However, theoretical aspects such as identifiability and the properties of statistical estimation remain obscure.
This paper derives a sufficient condition for the identifiability of homogeneous linear ODE systems from a sequence of equally-spaced error-free observations sampled from a single trajectory.
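A minimal numerical illustration of the identifiability question (an assumption-laden sketch, not the paper's algorithm): with equally spaced, error-free samples, the one-step map is $e^{\Delta t A}$, so recovering $A$ amounts to taking a matrix logarithm, which succeeds only when that logarithm is unique. For simplicity the sketch below uses many initial states rather than a single trajectory as in the paper.

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)
A = np.array([[0.0, -1.0], [1.0, -0.5]])   # hypothetical ground-truth generator
dt = 0.1
M = expm(dt * A)                           # exact one-step transition map

X = rng.standard_normal((2, 50))           # columns: initial states
Y = M @ X                                  # error-free one-step-ahead observations
M_hat = Y @ X.T @ np.linalg.inv(X @ X.T)   # least-squares estimate of M
A_hat = logm(M_hat).real / dt              # principal branch; drop round-off imaginary parts
print(np.allclose(A, A_hat, atol=1e-6))    # True: A is identifiable here
```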
arXiv Detail & Related papers (2022-10-12T06:46:38Z) - Learning Interaction Variables and Kernels from Observations of
Agent-Based Systems [14.240266845551488]
We propose a learning technique that, given observations of states and velocities along trajectories of agents, yields both the variables upon which the interaction kernel depends and the interaction kernel itself.
This yields an effective dimension reduction which avoids the curse of dimensionality from the high-dimensional observation data.
We demonstrate the learning capability of our method on a variety of first-order interacting systems.
arXiv Detail & Related papers (2022-08-04T16:31:01Z) - Structure-Preserving Learning Using Gaussian Processes and Variational
Integrators [62.31425348954686]
We propose combining a variational integrator for the nominal dynamics of a mechanical system with Gaussian process regression for learning the residual dynamics.
We extend our approach to systems with known kinematic constraints and provide formal bounds on the prediction uncertainty.
arXiv Detail & Related papers (2021-12-10T11:09:29Z) - Data-driven discovery of interacting particle systems using Gaussian
processes [3.0938904602244346]
We study the data-driven discovery of distance-based interaction laws in second-order interacting particle systems.
We propose a learning approach that models the latent interaction kernel functions as Gaussian processes.
Numerical results on systems that exhibit different collective behaviors demonstrate that our approach learns efficiently from scarce, noisy trajectory data.
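A toy sketch of the modeling idea (assumptions: scikit-learn's GP regressor and pointwise samples of a hypothetical scalar kernel; the paper instead places the GP prior on the latent kernel inside the dynamics and conditions on trajectory data):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def phi_true(r):
    return r * np.exp(-r)                  # hypothetical interaction law

r = rng.uniform(0.1, 3.0, size=80)
y = phi_true(r) + 0.01 * rng.standard_normal(80)    # scarce, noisy samples

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4)
gp.fit(r.reshape(-1, 1), y)
r_grid = np.linspace(0.1, 3.0, 50).reshape(-1, 1)
phi_hat, std = gp.predict(r_grid, return_std=True)  # posterior mean and uncertainty
```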
arXiv Detail & Related papers (2021-06-04T22:00:53Z) - Learning Theory for Inferring Interaction Kernels in Second-Order
Interacting Agent Systems [17.623937769189364]
We develop a complete learning theory which establishes strong consistency and optimal nonparametric minimax rates of convergence for the estimators.
The numerical algorithm presented to build the estimators is parallelizable, performs well on high-dimensional problems, and is demonstrated on complex dynamical systems.
arXiv Detail & Related papers (2020-10-08T02:07:53Z) - Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory [110.99247009159726]
Temporal-difference and Q-learning play a key role in deep reinforcement learning, where they are empowered by expressive nonlinear function approximators such as neural networks.
In particular, temporal-difference learning converges when the function approximator is linear in a feature representation, which is fixed throughout learning, and possibly diverges otherwise.
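For reference, the linear-function-approximation setting mentioned above, in its textbook TD(0) form (a sketch under that assumption, not the paper's mean-field analysis):

```python
import numpy as np

def td0_update(w, phi_s, phi_s_next, reward, alpha=0.1, gamma=0.99):
    """One linear TD(0) step: V(s) = w . phi(s) with fixed features phi."""
    td_error = reward + gamma * np.dot(w, phi_s_next) - np.dot(w, phi_s)
    return w + alpha * td_error * phi_s
```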
arXiv Detail & Related papers (2020-06-08T17:25:22Z) - On dissipative symplectic integration with applications to
gradient-based optimization [77.34726150561087]
We propose a geometric framework in which discretizations can be realized systematically.
We show that a generalization of symplectic integrators to nonconservative and, in particular, dissipative Hamiltonian systems is able to preserve rates of convergence up to a controlled error.
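A minimal sketch of one well-known scheme of this kind, a conformal symplectic splitting for the dissipative Hamiltonian system $\dot{q} = p$, $\dot{p} = -\nabla f(q) - \gamma p$ (an illustrative assumption about the class of integrators covered, not the paper's exact method):

```python
import numpy as np

def conformal_symplectic_step(q, p, grad_f, h=0.05, gamma=1.0):
    p = np.exp(-gamma * h) * p    # exact flow of the dissipative part
    p = p - h * grad_f(q)         # symplectic-Euler kick from the potential
    q = q + h * p                 # drift
    return q, p

# Usage: minimizing f(q) = 0.5 * ||q||^2; the iterates behave like heavy-ball
# momentum and converge to the minimizer q = 0.
q, p = np.array([2.0, -1.0]), np.zeros(2)
for _ in range(500):
    q, p = conformal_symplectic_step(q, p, grad_f=lambda q: q)
```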
arXiv Detail & Related papers (2020-04-15T00:36:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.