Joint Learning of Network Topology and Opinion Dynamics Based on Bandit
Algorithms
- URL: http://arxiv.org/abs/2306.15695v1
- Date: Sun, 25 Jun 2023 21:53:13 GMT
- Title: Joint Learning of Network Topology and Opinion Dynamics Based on Bandit
Algorithms
- Authors: Yu Xing, Xudong Sun, Karl H. Johansson
- Abstract summary: We study joint learning of network topology and a mixed opinion dynamics.
We propose a learning algorithm based on multi-armed bandit algorithms to address the problem.
- Score: 1.6912877206492036
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study joint learning of network topology and a mixed opinion dynamics, in
which agents may have different update rules. Such a model captures the
diversity of real individual interactions. We propose a learning algorithm
based on multi-armed bandit algorithms to address the problem. The goal of the
algorithm is to find each agent's update rule from several candidate rules and
to learn the underlying network. At each iteration, the algorithm assumes that
each agent follows one of the candidate update rules and then modifies network estimates to
reduce validation error. Numerical experiments show that the proposed algorithm
improves initial estimates of the network and update rules, decreases
prediction error, and performs better than other methods such as sparse linear
regression and Gaussian process regression.
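As an illustration of the bandit idea (this is not the authors' algorithm: the ring network, the two candidate update rules, and the epsilon-greedy scheme below are all invented for the sketch), one can run an independent epsilon-greedy bandit per agent, rewarding a candidate rule by its one-step prediction error on an observed trajectory:

```python
import random

random.seed(0)

N = 5                                       # number of agents
RULES = ["average", "stubborn"]             # hypothetical candidate update rules
true_rule = [random.choice([0, 1]) for _ in range(N)]

def neighbours(i):
    # hypothetical ring network: each agent listens to itself and two neighbours
    return [(i - 1) % N, i, (i + 1) % N]

def step(x, rules):
    """One synchronous opinion update under a given rule assignment."""
    new = []
    for i in range(N):
        if RULES[rules[i]] == "average":    # DeGroot-style averaging
            nb = neighbours(i)
            new.append(sum(x[j] for j in nb) / len(nb))
        else:                               # stubborn: keep own opinion
            new.append(x[i])
    return new

# observed trajectory generated by the true dynamics
traj = [[random.random() for _ in range(N)]]
for _ in range(30):
    traj.append(step(traj[-1], true_rule))

# one epsilon-greedy bandit per agent over the two candidate rules
value = [[0.0, 0.0] for _ in range(N)]
count = [[0, 0] for _ in range(N)]
eps = 0.2
for _ in range(400):
    k = random.randrange(len(traj) - 1)     # sample a validation time step
    guess = []
    for i in range(N):
        if random.random() < eps:
            guess.append(random.randrange(2))
        else:
            guess.append(max((0, 1), key=lambda a: value[i][a]))
    pred = step(traj[k], guess)
    for i in range(N):
        # reward = negative squared one-step prediction error for agent i
        r = -(pred[i] - traj[k + 1][i]) ** 2
        a = guess[i]
        count[i][a] += 1
        value[i][a] += (r - value[i][a]) / count[i][a]

learned = [max((0, 1), key=lambda a: value[i][a]) for i in range(N)]
```

The per-agent decomposition works because predictions always start from observed states, so agent i's reward depends only on agent i's guessed rule.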
Related papers
- Deep Equilibrium Algorithmic Reasoning [18.651333116786084]
We study neurally solving algorithms from a different perspective.
Since the algorithm's solution is often an equilibrium, it is possible to find the solution directly by solving an equilibrium equation.
Our approach requires no information on the ground-truth number of steps of the algorithm, at both training and test time.
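The equilibrium idea can be sketched with plain fixed-point iteration (the map `f` below is a made-up contraction standing in for one step of the learned reasoner, not the paper's network):

```python
import math

def f(z, x):
    # made-up contraction map standing in for one reasoning step
    # (|df/dz| <= 0.5, so fixed-point iteration is guaranteed to converge)
    return 0.5 * math.cos(z) + x

def solve_equilibrium(x, tol=1e-10, max_iter=1000):
    """Find z* with z* = f(z*, x) directly, with no ground-truth step count."""
    z = 0.0
    for _ in range(max_iter):
        z_next = f(z, x)
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return z

z_star = solve_equilibrium(0.3)
residual = abs(z_star - f(z_star, 0.3))
```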
arXiv Detail & Related papers (2024-10-19T10:40:55Z)
- Discrete Neural Algorithmic Reasoning [18.497863598167257]
We propose to force neural reasoners to maintain the execution trajectory as a combination of finite predefined states.
When trained with supervision on the algorithm's state transitions, such models are able to align perfectly with the original algorithm.
arXiv Detail & Related papers (2024-02-18T16:03:04Z)
- Interacting Particle Systems on Networks: joint inference of the network and the interaction kernel [8.535430501710712]
We jointly infer the weight matrix of the network and the interaction kernel, which determine the rules of interaction between agents.
We use two algorithms, one of which is a new algorithm named operator regression with alternating least squares.
Both algorithms are scalable, and we provide conditions guaranteeing identifiability and well-posedness.
arXiv Detail & Related papers (2024-02-13T12:29:38Z)
- The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks [59.26515696183751]
We show that algorithm discovery in neural networks is sometimes more complex.
We show that even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z)
- Towards Diverse Evaluation of Class Incremental Learning: A Representation Learning Perspective [67.45111837188685]
Class incremental learning (CIL) algorithms aim to continually learn new object classes from incrementally arriving data.
We experimentally analyze neural network models trained by CIL algorithms using various evaluation protocols in representation learning.
arXiv Detail & Related papers (2022-06-16T11:44:11Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- Non-Parametric Neuro-Adaptive Coordination of Multi-Agent Systems [29.22096249070293]
We develop a learning-based algorithm for the distributed formation control of networked multi-agent systems.
The proposed algorithm integrates neural network-based learning with adaptive control in a two-step procedure.
We provide formal theoretical guarantees on the achievement of the formation task.
arXiv Detail & Related papers (2021-10-11T10:04:08Z)
- Breaking the Deadly Triad with a Target Network [80.82586530205776]
The deadly triad refers to the instability of a reinforcement learning algorithm when it employs off-policy learning, function approximation, and bootstrapping simultaneously.
We provide the first convergent linear $Q$-learning algorithms under nonrestrictive and changing behavior policies without bi-level optimization.
arXiv Detail & Related papers (2021-01-21T21:50:10Z)
- Meta-learning with Stochastic Linear Bandits [120.43000970418939]
We consider a class of bandit algorithms that implement a regularized version of the well-known OFUL algorithm, where the regularization is a squared Euclidean distance to a bias vector.
We show both theoretically and experimentally, that when the number of tasks grows and the variance of the task-distribution is small, our strategies have a significant advantage over learning the tasks in isolation.
arXiv Detail & Related papers (2020-05-18T08:41:39Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale problems in which the model is a deep neural network.
Our method requires far fewer communication rounds in theory.
Our experiments on several datasets demonstrate the effectiveness of the method and confirm the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
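The meta-learning entry above regularizes OFUL toward a bias vector b; the core estimator is ridge regression biased toward b, with closed form theta_hat = (X^T X + lam*I)^(-1) (X^T y + lam*b). A minimal stdlib sketch in two dimensions (the data, bias vector, and regularization weight are invented for illustration):

```python
import random

random.seed(1)

theta_true = [1.0, -2.0]        # unknown task parameter
b = [0.9, -1.8]                 # bias vector, e.g. learned from previous tasks
lam = 5.0                       # regularization weight (invented)

# synthetic linear data: y = <x, theta_true> + small noise
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(20)]
y = [xi[0] * theta_true[0] + xi[1] * theta_true[1] + random.gauss(0, 0.1)
     for xi in X]

# accumulate A = X^T X + lam*I and v = X^T y + lam*b
A = [[lam, 0.0], [0.0, lam]]
v = [lam * b[0], lam * b[1]]
for xi, yi in zip(X, y):
    A[0][0] += xi[0] * xi[0]; A[0][1] += xi[0] * xi[1]
    A[1][0] += xi[1] * xi[0]; A[1][1] += xi[1] * xi[1]
    v[0] += xi[0] * yi; v[1] += xi[1] * yi

# solve the 2x2 system A @ theta_hat = v by explicit inversion
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
theta_hat = [( A[1][1] * v[0] - A[0][1] * v[1]) / det,
             (-A[1][0] * v[0] + A[0][0] * v[1]) / det]
```

When tasks cluster around b, this biased estimator needs fewer samples per task than plain ridge regression toward zero, which is the advantage the paper quantifies.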
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.