Interacting Particle Systems on Networks: joint inference of the network
and the interaction kernel
- URL: http://arxiv.org/abs/2402.08412v1
- Date: Tue, 13 Feb 2024 12:29:38 GMT
- Title: Interacting Particle Systems on Networks: joint inference of the network
and the interaction kernel
- Authors: Quanjun Lang, Xiong Wang, Fei Lu and Mauro Maggioni
- Abstract summary: We jointly infer the weight matrix of the network and the
interaction kernel, which determine, respectively, which agents interact and
the rules of those interactions. We investigate two algorithms: one based on
alternating least squares (ALS), the other a new algorithm named operator
regression with alternating least squares (ORALS). Both algorithms are
scalable, and we establish coercivity conditions guaranteeing identifiability
and well-posedness.
- Score: 8.535430501710712
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling multi-agent systems on networks is a fundamental challenge in a wide
variety of disciplines. We jointly infer the weight matrix of the network and
the interaction kernel, which determine respectively which agents interact with
which others and the rules of such interactions from data consisting of
multiple trajectories. The estimator we propose leads naturally to a non-convex
optimization problem, and we investigate two approaches for its solution: one
is based on the alternating least squares (ALS) algorithm; another is based on
a new algorithm named operator regression with alternating least squares
(ORALS). Both algorithms are scalable to large ensembles of data trajectories.
We establish coercivity conditions guaranteeing identifiability and
well-posedness. The ALS algorithm appears statistically efficient and robust
even in the small data regime but lacks performance and convergence guarantees.
The ORALS estimator is consistent and asymptotically normal under a coercivity
condition. We conduct several numerical experiments ranging from Kuramoto
particle systems on networks to opinion dynamics in leader-follower models.
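The bilinear structure described in the abstract — the drift is linear in the weight matrix when the kernel is fixed, and linear in the kernel coefficients when the weights are fixed — is what makes alternating least squares natural here. The following is a minimal, generic ALS sketch for a Kuramoto-type system on a network; it is not the authors' implementation, and the basis, dimensions, and initialization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not the paper's code): N agents on a network evolve by
#   dx_i/dt = sum_j A[i, j] * phi(x_j - x_i),
# with the kernel expanded in an assumed trigonometric basis,
#   phi(r) = c[0]*sin(r) + c[1]*sin(2r)   (a Kuramoto-type interaction).
N, K, T, dt, M = 5, 2, 40, 0.02, 10  # agents, basis size, steps, step size, trajectories

def basis(r):
    return np.stack([np.sin(r), np.sin(2 * r)], axis=-1)  # shape (..., K)

A_true = rng.uniform(0.2, 1.0, (N, N))
np.fill_diagonal(A_true, 0.0)
c_true = np.array([1.0, 0.3])

def drift(x, A, c):
    diff = x[None, :] - x[:, None]            # diff[i, j] = x_j - x_i
    return (A * (basis(diff) @ c)).sum(axis=1)

def simulate():
    x = rng.uniform(-np.pi, np.pi, size=N)
    xs, dxs = np.empty((T, N)), np.empty((T, N))
    for t in range(T):
        xs[t] = x
        dxs[t] = drift(x, A_true, c_true)     # noiseless derivatives, for simplicity
        x = x + dt * dxs[t]                   # forward Euler step
    return xs, dxs

data = [simulate() for _ in range(M)]

# Alternating least squares: with c fixed the model is linear in each row of A,
# and with A fixed it is linear in c, so each half-step is an ordinary lstsq.
A = np.ones((N, N)); np.fill_diagonal(A, 0.0)
c = np.array([1.0, 0.0])
for _ in range(100):
    for i in range(N):                        # (1) fix c, regress row A[i, :]
        G = np.vstack([basis(xs - xs[:, [i]]) @ c for xs, _ in data])
        y = np.concatenate([dxs[:, i] for _, dxs in data])
        A[i], *_ = np.linalg.lstsq(G, y, rcond=None)
    np.fill_diagonal(A, 0.0)
    B = np.vstack([np.einsum('j,tjk->tk', A[i], basis(xs - xs[:, [i]]))
                   for xs, _ in data for i in range(N)])
    y = np.concatenate([dxs[:, i] for _, dxs in data for i in range(N)])
    c, *_ = np.linalg.lstsq(B, y, rcond=None)  # (2) fix A, regress kernel coeffs

def max_residual(A, c):
    """Worst-case mismatch between the fitted model's drift and the data."""
    return max(np.abs(np.vstack([drift(x, A, c) for x in xs]) - dxs).max()
               for xs, dxs in data)
```

Note that (A, c) is only identifiable up to the joint rescaling (A/s, s·c), so recovered factors should be compared through the reproduced dynamics or after normalization; the coercivity conditions established in the paper concern when the joint estimation problem is well-posed at all.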
Related papers
- Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for L^2, L^∞, and risk metrics, with learning rates adaptive to the coefficients' regularity.
Our method is available as an open-source Python library.
arXiv Detail & Related papers (2024-11-04T11:09:58Z)
- LoRA-Ensemble: Efficient Uncertainty Modelling for Self-attention Networks [52.46420522934253]
We introduce LoRA-Ensemble, a parameter-efficient deep ensemble method for self-attention networks.
By employing a single pre-trained self-attention network with weights shared across all members, we train member-specific low-rank matrices for the attention projections.
Our method exhibits superior calibration compared to explicit ensembles and achieves similar or better accuracy across various prediction tasks and datasets.
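As a concrete illustration of the mechanism described above (a hedged sketch, not the LoRA-Ensemble implementation; the dimensions and query-only adaptation are illustrative assumptions): every ensemble member shares the frozen pre-trained projection weights and owns only a pair of low-rank factors.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r, n_members = 16, 2, 4   # hidden dim, low-rank rank, ensemble size (all illustrative)

# One shared, frozen attention projection (here: a query projection) ...
W_q = rng.normal(size=(d, d)) / np.sqrt(d)

# ... plus member-specific low-rank factors: 2*d*r parameters per member
# instead of d*d. B is initialized to zero, as in standard LoRA, so every
# member starts identical to the shared backbone and diversifies in training.
members = [(np.zeros((d, r)), 0.01 * rng.normal(size=(r, d)))
           for _ in range(n_members)]

def member_query(x, m):
    B, A = members[m]
    return x @ (W_q + B @ A).T    # effective member projection: W_q + B A

x = rng.normal(size=(3, d))                        # a toy batch of 3 tokens
preds = np.stack([member_query(x, m) for m in range(n_members)])
ensemble = preds.mean(axis=0)                      # ensemble prediction
```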
arXiv Detail & Related papers (2024-05-23T11:10:32Z)
- Physics-Informed Generator-Encoder Adversarial Networks with Latent Space Matching for Stochastic Differential Equations [14.999611448900822]
We propose a new class of physics-informed neural networks to address the challenges posed by forward, inverse, and mixed problems in differential equations.
Our model consists of two key components: the generator and the encoder, both updated alternately by gradient descent.
In contrast to previous approaches, we employ an indirect matching that operates within the lower-dimensional latent feature space.
arXiv Detail & Related papers (2023-11-03T04:29:49Z)
- Continuous Time Analysis of Dynamic Matching in Heterogeneous Networks [0.0]
We introduce a novel approach to modeling dynamic matching by establishing ordinary differential equation (ODE) models.
We study two algorithms, which prioritize the matching of compatible hard-to-match agents over easy-to-match agents in heterogeneous networks.
Our results show the trade-off between the conflicting goals of matching agents quickly and optimally, offering insights into the design of real-world dynamic matching systems.
arXiv Detail & Related papers (2023-02-20T04:45:13Z)
- Finding Nontrivial Minimum Fixed Points in Discrete Dynamical Systems [29.7237944669855]
We formulate a novel optimization problem of finding a nontrivial fixed point of the system with the minimum number of affected nodes.
To cope with this computational intractability, we identify several special cases for which the problem can be solved efficiently.
For solving the problem on larger networks, we propose a general framework along with greedy selection methods.
arXiv Detail & Related papers (2023-01-06T14:46:01Z)
- Verification of Neural-Network Control Systems by Integrating Taylor Models and Zonotopes [0.0]
We study the verification problem for closed-loop dynamical systems with neural-network controllers (NNCS).
We present an algorithm to chain approaches based on Taylor models and zonotopes, yielding a precise reachability algorithm for NNCS.
arXiv Detail & Related papers (2021-12-16T20:46:39Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose a new approach for the regularization of neural networks by the local Rademacher complexity called LocalDrop.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) has been developed based on the proposed upper bound of the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization at large scale, where the predictive model is a deep neural network.
Our algorithm requires a much smaller number of communication rounds in theory.
Experiments on several datasets demonstrate the effectiveness of our algorithm and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.