Robust Asynchronous and Network-Independent Cooperative Learning
- URL: http://arxiv.org/abs/2010.09993v1
- Date: Tue, 20 Oct 2020 03:54:20 GMT
- Title: Robust Asynchronous and Network-Independent Cooperative Learning
- Authors: Eduardo Mojica-Nava and David Yanguas-Rojas and César A. Uribe
- Abstract summary: We consider the model of cooperative learning via distributed non-Bayesian learning, where a network of agents tries to jointly agree on a hypothesis that best describes a sequence of locally available observations.
We show that our proposed learning dynamics guarantee that all agents in the network will have an exponential decay of their beliefs on the wrong hypothesis.
- Score: 1.712689361909955
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the model of cooperative learning via distributed non-Bayesian
learning, where a network of agents tries to jointly agree on a hypothesis that
best describes a sequence of locally available observations. Building upon
recently proposed weak communication network models, we propose a robust
cooperative learning rule that allows asynchronous communications, message
delays, unpredictable message losses, and directed communication among nodes.
We show that our proposed learning dynamics guarantee that all agents in the
network will have an asymptotic exponential decay of their beliefs on the wrong
hypothesis, indicating that the beliefs of all agents will concentrate on the
optimal hypotheses. Numerical experiments on a number of network setups
corroborate these guarantees.
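The abstract does not reproduce the learning rule itself. As a point of reference, distributed non-Bayesian learning is typically built from a local Bayesian update combined with log-linear (geometric) averaging of neighbors' beliefs; the minimal Python sketch below implements that generic scheme, with randomly dropped messages standing in for asynchrony, delays, and losses. The topology, delivery probability, and likelihood models are illustrative assumptions, not the paper's exact robust dynamics.

```python
import numpy as np

# Minimal sketch of distributed non-Bayesian learning (NOT the paper's
# exact robust rule): each agent log-linearly averages the last beliefs
# it received from its in-neighbors, then performs a local Bayesian
# update with its private observation.

rng = np.random.default_rng(0)
n_agents, n_hyp, n_obs, n_steps = 4, 3, 5, 100
true_hyp = 1  # index of the data-generating hypothesis

# Row-stochastic mixing matrix over a directed ring (assumed topology).
A = 0.5 * (np.eye(n_agents) + np.roll(np.eye(n_agents), 1, axis=1))

# Per-agent likelihood models l_i(s | theta) over n_obs possible signals.
lik = rng.dirichlet(np.ones(n_obs), size=(n_agents, n_hyp))

beliefs = np.full((n_agents, n_hyp), 1.0 / n_hyp)
stale = beliefs.copy()  # last belief each agent managed to deliver

for t in range(n_steps):
    # Each agent's broadcast gets through with probability 0.8 (assumed);
    # otherwise neighbors keep using its stale, previously delivered belief.
    delivered = rng.random(n_agents) < 0.8
    stale[delivered] = beliefs[delivered]

    # Log-linear (geometric) averaging of the available neighbor beliefs.
    log_b = A @ np.log(stale)

    # Local Bayesian update with a fresh private signal.
    for i in range(n_agents):
        s = rng.choice(n_obs, p=lik[i, true_hyp])
        log_b[i] += np.log(lik[i, :, s])

    beliefs = np.exp(log_b - log_b.max(axis=1, keepdims=True))
    beliefs /= beliefs.sum(axis=1, keepdims=True)

print(beliefs.round(3))  # belief mass should concentrate on hypothesis 1
```

Under such dynamics the belief mass of every agent concentrates on the true hypothesis, mirroring the exponential decay of beliefs on wrong hypotheses established in the paper.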
Related papers
- Improving Network Interpretability via Explanation Consistency Evaluation [56.14036428778861]
We propose a framework that acquires more explainable activation heatmaps and simultaneously increases model performance.
Specifically, our framework introduces a new metric, i.e., explanation consistency, to reweight the training samples adaptively in model learning.
Our framework then promotes model learning by paying closer attention to those training samples whose explanations differ the most (a hypothetical reweighting sketch follows this entry).
arXiv Detail & Related papers (2024-08-08T17:20:08Z)
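As a rough illustration of the reweighting idea in the entry above, the hypothetical sketch below scales per-sample losses by an externally supplied consistency score; the weighting form and the `reweighted_loss` helper are assumptions, not the paper's metric.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of consistency-based sample reweighting (the
# paper's metric is not reproduced here): samples whose explanations
# are less consistent receive larger loss weights, so training pays
# closer attention to them.

def reweighted_loss(logits, targets, consistency):
    """consistency in [0, 1]; 1.0 means fully consistent explanations."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    weights = 1.0 + (1.0 - consistency)   # emphasize inconsistent samples
    weights = weights / weights.mean()    # keep the overall loss scale stable
    return (weights.detach() * per_sample).mean()

# Usage with dummy tensors; `consistency` stands in for the real metric.
logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
consistency = torch.rand(8)
loss = reweighted_loss(logits, targets, consistency)
loss.backward()
```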
- Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically-identical agents.
Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies.
arXiv Detail & Related papers (2024-04-04T06:24:11Z)
- Networked Communication for Decentralised Agents in Mean-Field Games [59.01527054553122]
We introduce networked communication to the mean-field game framework.
We prove that our architecture has sample guarantees bounded between those of the centralised- and independent-learning cases.
arXiv Detail & Related papers (2023-06-05T10:45:39Z)
- Modality Competition: What Makes Joint Training of Multi-modal Network Fail in Deep Learning? (Provably) [75.38159612828362]
It has been observed that the best uni-modal network outperforms the jointly trained multi-modal network.
This work provides a theoretical explanation for the emergence of such a performance gap in neural networks for the prevalent joint training framework.
arXiv Detail & Related papers (2022-03-23T06:21:53Z)
- Finite-Time Consensus Learning for Decentralized Optimization with Nonlinear Gossiping [77.53019031244908]
We present a novel decentralized learning framework based on nonlinear gossiping (NGO), which enjoys an appealing finite-time consensus property to achieve better synchronization (a toy nonlinear gossip step is sketched after this entry).
Our analysis on how communication delay and randomized chats affect learning further enables the derivation of practical variants.
arXiv Detail & Related papers (2021-11-04T15:36:25Z)
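For the nonlinear gossiping entry above, the sketch below shows one common form of nonlinear consensus dynamics, the signed-power update used by classical finite-time consensus protocols; it is an assumed stand-in, not the paper's NGO dynamics.

```python
import numpy as np

# Assumed stand-in for nonlinear gossiping: the signed-power update
# sign(d) * |d|**alpha with alpha in (0, 1), a classical ingredient of
# finite-time consensus protocols. NOT the paper's NGO dynamics.

def nonlinear_gossip_step(x, A, alpha=0.5, step=0.05):
    d = x[None, :] - x[:, None]  # d[i, j] = x_j - x_i (pairwise disagreement)
    return x + step * (A * np.sign(d) * np.abs(d) ** alpha).sum(axis=1)

n = 6
A = np.zeros((n, n))             # undirected ring; symmetry preserves the mean
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0

rng = np.random.default_rng(1)
x = rng.normal(size=n)
mean0 = x.mean()

for _ in range(2000):
    x = nonlinear_gossip_step(x, A)
print(x.round(2), mean0.round(2))  # states cluster near the initial mean
```

Because the pairwise terms are antisymmetric and the graph is undirected, the sum of the states is conserved, so agreement is reached on the initial average.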
- Scalable Average Consensus with Compressed Communications [0.8702432681310401]
We propose a new decentralized average consensus algorithm with compressed communication that scales linearly with the network size n.
We prove that the proposed method converges to the average of the initial values held locally by the agents of the network when agents are allowed to communicate with compressed messages (a compressed-gossip sketch follows this entry).
arXiv Detail & Related papers (2021-09-14T22:26:06Z)
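For the compressed-communication consensus entry above, here is a hedged sketch in the style of error-feedback gossip schemes (e.g., CHOCO-Gossip): agents maintain public compressed copies of their states and run the mixing step on those copies. The quantizer and step size are illustrative choices, not the paper's algorithm.

```python
import numpy as np

# Hedged sketch of average consensus with compressed messages, in the
# style of error-feedback gossip schemes (e.g., CHOCO-Gossip); NOT the
# paper's exact algorithm. Each agent keeps a public copy x_hat of its
# state and only ever transmits coarsely quantized corrections to it.

def quantize(v, delta=0.05):
    """Toy elementwise grid quantizer standing in for a compressor."""
    return delta * np.round(v / delta)

n, gamma, steps = 8, 0.3, 400
W = np.zeros((n, n))                     # doubly stochastic ring mixing
for i in range(n):
    W[i, i], W[i, (i - 1) % n], W[i, (i + 1) % n] = 0.5, 0.25, 0.25

rng = np.random.default_rng(2)
x = rng.normal(size=n)                   # private initial values
x_hat = np.zeros(n)                      # publicly known compressed copies
target = x.mean()                        # exactly preserved by symmetric W

for _ in range(steps):
    x_hat = x_hat + quantize(x - x_hat)  # broadcast a compressed correction
    x = x + gamma * (W @ x_hat - x_hat)  # mix using only the public copies

# Agreement is reached up to a few quantization steps around the average.
print(np.abs(x - target).max())
```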
- Neural Enhanced Belief Propagation for Cooperative Localization [6.787897491422112]
Location-aware networks will introduce innovative services and applications for modern convenience, applied ocean sciences, and public safety.
We establish a hybrid method for model-based and data-driven inference.
We consider a cooperative localization (CL) scenario where the mobile agents in a wireless network aim to localize themselves by performing pairwise observations with other agents and by exchanging location information.
arXiv Detail & Related papers (2021-05-27T01:42:54Z)
- Learning to Communicate and Correct Pose Errors [75.03747122616605]
We study the setting proposed in V2VNet, where nearby self-driving vehicles jointly perform object detection and motion forecasting in a cooperative manner.
We propose a novel neural reasoning framework that learns to communicate, to estimate potential errors, and to reach a consensus about those errors.
arXiv Detail & Related papers (2020-11-10T18:19:40Z)
- Distributed Inference with Sparse and Quantized Communication [7.155594644943642]
We consider the problem of distributed inference where agents in a network observe a stream of private signals generated by an unknown state.
We develop a novel event-triggered distributed learning rule that is based on the principle of diffusing low beliefs on each false hypothesis.
We show that by sequentially refining the range of the quantizers, every agent can learn the truth exponentially fast almost surely, while using just 1 bit to encode its belief on each hypothesis (a toy illustration of such 1-bit encoding follows this entry).
arXiv Detail & Related papers (2020-04-02T23:08:51Z)
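For the 1-bit entry above, the toy snippet below illustrates how a belief vector can be summarized with one bit per hypothesis against a sequentially refined threshold; the decay and refinement rates are made-up numbers, not the paper's quantizer design.

```python
import numpy as np

# Toy illustration (NOT the paper's learning rule): a belief vector is
# summarized with one bit per hypothesis -- "is my belief on hypothesis
# k below the current threshold?" -- while the threshold is refined
# over time. Decay and refinement rates below are made-up numbers.

n_hyp, true_hyp, rho = 3, 1, 0.8   # rho: threshold refinement rate
belief = np.full(n_hyp, 1.0 / n_hyp)

for t in range(1, 16):
    # Beliefs on false hypotheses decay exponentially (as in the theory);
    # their decay rate (0.6) must outpace the threshold refinement (0.8).
    belief[np.arange(n_hyp) != true_hyp] *= 0.6
    belief /= belief.sum()

    threshold = rho ** t                            # refined quantizer range
    bits = (belief < threshold).astype(np.uint8)    # 1 bit per hypothesis
    if t in (1, 5, 10, 15):
        print(t, bits)  # over time the bits flag exactly the false hypotheses
```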