Bayesian Federated Learning over Wireless Networks
- URL: http://arxiv.org/abs/2012.15486v1
- Date: Thu, 31 Dec 2020 07:32:44 GMT
- Title: Bayesian Federated Learning over Wireless Networks
- Authors: Seunghoon Lee, Chanho Park, Song-Nam Hong, Yonina C. Eldar, Namyoon Lee
- Abstract summary: Federated learning is a privacy-preserving and distributed training method using heterogeneous data sets stored at local devices.
This paper presents an efficient modified BFL algorithm called scalable-BFL (SBFL).
- Score: 87.37301441859925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is a privacy-preserving and distributed training method
using heterogeneous data sets stored at local devices. Federated learning over
wireless networks requires aggregating locally computed gradients at a server
where the mobile devices send statistically distinct gradient information over
heterogeneous communication links. This paper proposes a Bayesian federated
learning (BFL) algorithm to aggregate the heterogeneous quantized gradient
information optimally in the sense of minimizing the mean-squared error (MSE).
The idea of BFL is to aggregate the one-bit quantized local gradients at the
server by jointly exploiting i) the prior distributions of the local gradients,
ii) the gradient quantizer function, and iii) channel distributions.
Implementing BFL requires high communication and computational costs as the
number of mobile devices increases. To address this challenge, we also present
an efficient modified BFL algorithm called scalable-BFL (SBFL). In SBFL, we
assume a simplified distribution on the local gradient. Each mobile device
sends its one-bit quantized local gradient together with two scalar parameters
representing this distribution. The server then aggregates the noisy and faded
quantized gradients to minimize the MSE. We provide a convergence analysis of
SBFL for a class of non-convex loss functions. Our analysis elucidates how the
parameters of communication channels and the gradient priors affect
convergence. From simulations, we demonstrate that SBFL considerably
outperforms the conventional sign stochastic gradient descent algorithm when
training and testing neural networks using MNIST data sets over heterogeneous
wireless networks.
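To make the aggregation idea concrete, the sketch below illustrates an SBFL-style pipeline under simplifying assumptions: each gradient coordinate is given a Gaussian prior summarized by the two scalars a device transmits, and each heterogeneous link is reduced to a bit-flip probability. The function names, the Gaussian-prior choice, and the binary-symmetric channel model are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch of SBFL-style one-bit gradient aggregation.
# Assumptions (not the paper's exact formulation): each coordinate of a
# device's local gradient follows a Gaussian prior N(mu, sigma^2), the two
# scalars sent per device are (mu, sigma), and each wireless link is modelled
# as a binary symmetric channel with flip probability eps.
import numpy as np
from scipy.stats import norm


def quantize_device_gradient(grad):
    """Device side: one-bit quantization plus two scalar prior parameters."""
    bits = np.sign(grad)                      # one bit per coordinate (+1 / -1)
    mu, sigma = grad.mean(), grad.std() + 1e-12
    return bits, mu, sigma


def mmse_dequantize(bits_rx, mu, sigma, eps):
    """Server side: MMSE estimate of each gradient coordinate given the
    (possibly flipped) received sign, the Gaussian prior, and the channel."""
    alpha = -mu / sigma
    p_pos = 1.0 - norm.cdf(alpha)             # prior P(g > 0)
    p_neg = norm.cdf(alpha)                   # prior P(g < 0)
    # Conditional means of the Gaussian truncated on each side of zero.
    m_pos = mu + sigma * norm.pdf(alpha) / max(p_pos, 1e-12)
    m_neg = mu - sigma * norm.pdf(alpha) / max(p_neg, 1e-12)
    # Posterior over the transmitted sign given the received bit.
    like_pos = np.where(bits_rx > 0, 1.0 - eps, eps)
    like_neg = np.where(bits_rx > 0, eps, 1.0 - eps)
    post_pos, post_neg = like_pos * p_pos, like_neg * p_neg
    z = post_pos + post_neg
    return (post_pos * m_pos + post_neg * m_neg) / z


def aggregate(received, eps_per_device):
    """Average the per-device MMSE estimates to form the global gradient."""
    estimates = [mmse_dequantize(b, mu, s, e)
                 for (b, mu, s), e in zip(received, eps_per_device)]
    return np.mean(estimates, axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_grad = rng.normal(0.1, 1.0, size=1000)
    devices, eps = [], []
    for _ in range(8):
        local = true_grad + 0.3 * rng.normal(size=true_grad.shape)
        bits, mu, sigma = quantize_device_gradient(local)
        e = 0.05 + 0.1 * rng.random()         # heterogeneous link quality
        flips = rng.random(bits.shape) < e    # channel flips some signs
        devices.append((np.where(flips, -bits, bits), mu, sigma))
        eps.append(e)
    est = aggregate(devices, eps)
    naive = np.mean([b for b, _, _ in devices], axis=0)  # sign-averaging baseline
    print("MSE (Bayesian aggregation):", np.mean((est - true_grad) ** 2))
    print("MSE (sign average):", np.mean((naive - true_grad) ** 2))
```

Because the estimate weights each received sign by both its prior plausibility and its channel reliability, poor links contribute less to the aggregate than in a plain sign-averaging update, which is the intuition behind the MSE gains described in the abstract.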
Related papers
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve this intractable problem, in which we provide closed-form solutions for the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
- Joint Power Control and Data Size Selection for Over-the-Air Computation Aided Federated Learning [19.930700426682982]
Federated learning (FL) has emerged as an appealing machine learning approach to deal with massive raw data generated at multiple mobile devices.
We propose to jointly optimize the signal amplification factors at the base station and the mobile devices.
The proposed method can greatly reduce the mean-squared error (MSE) and thereby improve FL performance.
arXiv Detail & Related papers (2023-08-17T16:01:02Z) - Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical federated learning (HFL) in wireless networks to reduce the neural network scale.
We show that the proposed HFL with model pruning achieves learning accuracy similar to HFL without pruning while reducing communication cost by about 50 percent.
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
- SlimFL: Federated Learning with Superposition Coding over Slimmable Neural Networks [56.68149211499535]
Federated learning (FL) is a key enabler of efficient communication and computing that leverages devices' distributed computing capabilities.
This paper proposes a novel learning framework that integrates FL and width-adjustable slimmable neural networks (SNNs).
We propose a communication- and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2022-03-26T15:06:13Z)
- Joint Superposition Coding and Training for Federated Learning over Multi-Width Neural Networks [52.93232352968347]
This paper aims to integrate two synergetic technologies: federated learning (FL) and width-adjustable slimmable neural networks (SNNs).
FL preserves data privacy by exchanging the locally trained models of mobile devices. Applying FL to SNNs is, however, non-trivial, particularly over wireless connections with time-varying channel conditions.
We propose a communication- and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2021-12-05T11:17:17Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
arXiv Detail & Related papers (2020-07-05T19:00:07Z)