Asynchronous Bayesian Learning over a Network
- URL: http://arxiv.org/abs/2211.08603v1
- Date: Wed, 16 Nov 2022 01:21:36 GMT
- Title: Asynchronous Bayesian Learning over a Network
- Authors: Kinjal Bhar, He Bai, Jemin George, Carl Busart
- Abstract summary: We present a practical asynchronous data fusion model for networked agents to perform distributed Bayesian learning without sharing raw data.
Our algorithm uses a gossip-based approach where pairs of randomly selected agents employ unadjusted Langevin dynamics for parameter sampling.
We introduce an event-triggered mechanism to further reduce communication between gossiping agents.
- Score: 18.448653247778143
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present a practical asynchronous data fusion model for networked agents to
perform distributed Bayesian learning without sharing raw data. Our algorithm
uses a gossip-based approach where pairs of randomly selected agents employ
unadjusted Langevin dynamics for parameter sampling. We also introduce an
event-triggered mechanism to further reduce communication between gossiping
agents. These mechanisms drastically reduce communication overhead and help
avoid bottlenecks commonly experienced with distributed algorithms. In
addition, the reduced link utilization by the algorithm is expected to increase
resiliency to occasional link failure. We establish mathematical guarantees for
our algorithm and demonstrate its effectiveness via numerical experiments.
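
The abstract names three ingredients: pairwise gossip between randomly selected agents, parameter sampling via unadjusted Langevin dynamics (ULA), and an event-triggered rule that suppresses redundant transmissions. The sketch below illustrates how these pieces could fit together on a toy Gaussian model; it is a minimal illustration, not the authors' implementation, and the drift-based trigger rule, the pairwise-averaging fusion step, the function names, and all hyperparameters are assumptions made for the example.

```python
# Minimal sketch (assumed, not the paper's code) of gossip-based
# asynchronous Bayesian learning with unadjusted Langevin dynamics (ULA)
# and an event-triggered communication rule, on a toy Gaussian model.
import numpy as np

rng = np.random.default_rng(0)

def grad_log_posterior(theta, data):
    # Toy model: Gaussian likelihood with identity covariance and a flat
    # prior, so grad log p(theta | data) = sum_i (x_i - theta).
    return (data - theta).sum(axis=0)

def ula_step(theta, data, step=1e-3):
    # One unadjusted Langevin step: a gradient step on the log-posterior
    # plus Gaussian noise scaled by sqrt(2 * step).
    noise = rng.normal(size=theta.shape)
    return theta + step * grad_log_posterior(theta, data) + np.sqrt(2 * step) * noise

def gossip_round(thetas, last_sent, datasets, step=1e-3, trigger=0.05):
    # One asynchronous round: a randomly selected pair of agents gossips.
    i, j = rng.choice(len(thetas), size=2, replace=False)
    for k in (i, j):
        # Event trigger (assumed rule): retransmit only if the local state
        # has drifted enough since the last broadcast; otherwise the peer
        # reuses the previously received value.
        if np.linalg.norm(thetas[k] - last_sent[k]) > trigger:
            last_sent[k] = thetas[k].copy()
    fused = 0.5 * (last_sent[i] + last_sent[j])  # pairwise fusion of shared states
    for k in (i, j):
        thetas[k] = ula_step(fused, datasets[k], step)  # local ULA sampling

# Usage: 5 agents, each holding 50 local observations of a shared mean.
datasets = [rng.normal(loc=1.0, scale=1.0, size=(50, 2)) for _ in range(5)]
thetas = [rng.normal(size=2) for _ in range(5)]
last_sent = [t.copy() for t in thetas]
for _ in range(2000):
    gossip_round(thetas, last_sent, datasets)
print(np.mean(thetas, axis=0))  # samples should concentrate near the data mean
```

Note that only two agents are active per round and each transmits only when the trigger fires, which is the source of the communication savings the abstract claims.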
Related papers
- Convergence Visualizer of Decentralized Federated Distillation with Reduced Communication Costs [3.2098126952615442]
Federated learning (FL) achieves collaborative learning without the need for data sharing, thus preventing privacy leakage.
This study solves two unresolved challenges of CMFD: (1) communication cost reduction and (2) visualization of model convergence.
arXiv Detail & Related papers (2023-12-19T07:23:49Z)
- Towards a Better Theoretical Understanding of Independent Subnetwork Training [56.24689348875711]
We take a closer theoretical look at Independent Subnetwork Training (IST), a recently proposed and highly effective technique for alleviating the communication and memory bottlenecks of distributed training.
We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication.
arXiv Detail & Related papers (2023-06-28T18:14:22Z)
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- Gradient Sparsification for Efficient Wireless Federated Learning with Differential Privacy [25.763777765222358]
Federated learning (FL) enables distributed clients to collaboratively train a machine learning model without sharing raw data with each other.
As the model size grows, training latency increases due to limited transmission bandwidth, and model performance degrades when differential privacy (DP) protection is employed.
We propose a gradient sparsification empowered FL framework over wireless channels to improve training efficiency without sacrificing convergence performance.
arXiv Detail & Related papers (2023-04-09T05:21:15Z)
- Compressed Regression over Adaptive Networks [58.79251288443156]
We derive the performance achievable by a network of distributed agents that solve, adaptively and in the presence of communication constraints, a regression problem.
We devise an optimized allocation strategy where the parameters necessary for the optimization can be learned online by the agents.
arXiv Detail & Related papers (2023-04-07T13:41:08Z)
- Federated Learning for Heterogeneous Bandits with Unobserved Contexts [0.0]
We study the problem of federated multi-armed contextual bandits with unknown contexts.
We propose an elimination-based algorithm and prove the regret bound for linearly parametrized reward functions.
arXiv Detail & Related papers (2023-03-29T22:06:24Z)
- Scalable Hierarchical Over-the-Air Federated Learning [3.8798345704175534]
This work introduces a new two-level learning method designed to handle both interference and device data heterogeneity.
We present a comprehensive mathematical approach to derive the convergence bound for the proposed algorithm.
Despite the interference and data heterogeneity, the proposed algorithm achieves high learning accuracy for a variety of parameters.
arXiv Detail & Related papers (2022-11-29T12:46:37Z)
- Communication-Efficient Distributionally Robust Decentralized Learning [23.612400109629544]
Decentralized learning algorithms empower interconnected edge devices to share data and computational resources.
We propose a single-loop decentralized gradient descent/ascent algorithm (AD-GDA) to solve the underlying minimax optimization problem.
arXiv Detail & Related papers (2022-05-31T09:00:37Z)
- Finite-Time Consensus Learning for Decentralized Optimization with Nonlinear Gossiping [77.53019031244908]
We present a novel decentralized learning framework based on nonlinear gossiping (NGO) that enjoys an appealing finite-time consensus property to achieve better synchronization.
Our analysis on how communication delay and randomized chats affect learning further enables the derivation of practical variants.
arXiv Detail & Related papers (2021-11-04T15:36:25Z)
- Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
- Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z)