Low-Latency Federated Learning over Wireless Channels with Differential
Privacy
- URL: http://arxiv.org/abs/2106.13039v1
- Date: Sun, 20 Jun 2021 13:51:18 GMT
- Title: Low-Latency Federated Learning over Wireless Channels with Differential
Privacy
- Authors: Kang Wei, Jun Li, Chuan Ma, Ming Ding, Cailian Chen, Shi Jin, Zhu Han
and H. Vincent Poor
- Abstract summary: In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement.
- Score: 142.5983499872664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In federated learning (FL), model training is distributed over clients and
local models are aggregated by a central server. The performance of uploaded
models in such situations can vary widely due to imbalanced data distributions,
potential demands on privacy protections, and quality of transmissions. In this
paper, we aim to minimize FL training delay over wireless channels, constrained
by overall training performance as well as each client's differential privacy
(DP) requirement. We solve this problem in the framework of multi-agent
multi-armed bandit (MAMAB) to deal with the situation where there are multiple
clients confronting different unknown transmission environments, e.g., channel
fading and interference. Specifically, we first transform the long-term
constraints on both training performance and each client's DP into a virtual
queue based on the Lyapunov drift technique. Then, we convert the MAMAB to a
max-min bipartite matching problem at each communication round, by estimating
rewards with the upper confidence bound (UCB) approach. More importantly, we
propose two efficient solutions to this matching problem: a modified
Hungarian algorithm and greedy matching with a better alternative (GMBA). The
former achieves the optimal solution at high complexity, while the latter
offers a better trade-off, achieving verified low complexity with little
performance loss. In addition, we develop an upper bound on the expected
regret of this MAMAB-based FL framework, which grows linearly in the
logarithm of the number of communication rounds, justifying its theoretical
feasibility. Extensive experiments validate the effectiveness of the proposed
algorithms, and the impacts of various parameters on FL performance over
wireless edge networks are also discussed.
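To make the per-round procedure concrete, below is a minimal sketch (Python with NumPy) of the round structure the abstract describes: UCB reward estimates over client-channel pairs, a greedy low-complexity matching as a stand-in for GMBA, and Lyapunov-style virtual-queue updates for the long-term DP and training-performance constraints. All variable names, the reward model, the queue-update form, and the constants are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of the per-round scheduling loop: UCB + greedy matching +
# virtual-queue (Lyapunov drift) updates. Reward model and constants are made up.
import numpy as np

rng = np.random.default_rng(0)

n_clients, n_channels = 4, 4       # assumed square client/channel assignment
T = 200                            # communication rounds
dp_budget = 1.0                    # assumed per-round DP budget target per client
perf_target = 0.5                  # assumed per-round training-contribution target

counts = np.ones((n_clients, n_channels))        # pull counts (init 1 to avoid div-by-zero)
mean_reward = np.zeros((n_clients, n_channels))  # empirical mean reward per (client, channel)

q_dp = np.zeros(n_clients)   # virtual queue per client for the DP constraint
q_perf = 0.0                 # virtual queue for the training-performance constraint

def observe_reward(i, k):
    """Placeholder for the unknown transmission reward of client i on channel k."""
    base = 0.3 + 0.5 * ((i + k) % n_channels) / n_channels
    return float(np.clip(rng.normal(base, 0.1), 0.0, 1.0))

for t in range(1, T + 1):
    # UCB estimate for every client-channel pair.
    ucb = mean_reward + np.sqrt(2.0 * np.log(t) / counts)

    # Queue-weighted utility: clients with larger backlogs get priority. This is a
    # simplified surrogate for the max-min objective that the modified Hungarian
    # algorithm solves exactly in the paper.
    utility = (1.0 + q_dp[:, None] + q_perf) * ucb

    # Greedy matching (low-complexity stand-in; the paper's GMBA adds a
    # "better alternative" refinement not modeled here).
    assignment = {}
    free_clients, free_channels = set(range(n_clients)), set(range(n_channels))
    while free_clients and free_channels:
        i, k = max(((i, k) for i in free_clients for k in free_channels),
                   key=lambda p: utility[p])
        assignment[i] = k
        free_clients.remove(i)
        free_channels.remove(k)

    # Play the matching, observe rewards, update UCB statistics and virtual queues.
    for i, k in assignment.items():
        r = observe_reward(i, k)
        counts[i, k] += 1
        mean_reward[i, k] += (r - mean_reward[i, k]) / counts[i, k]

        dp_cost = 0.8 * r  # hypothetical per-round privacy cost
        q_dp[i] = max(q_dp[i] + dp_cost - dp_budget, 0.0)   # backlog grows if budget is exceeded
        q_perf = max(q_perf + perf_target - r, 0.0)          # backlog grows if contribution falls short

print("final mean reward estimates:\n", np.round(mean_reward, 3))
print("residual DP queues:", np.round(q_dp, 3), " perf queue:", round(q_perf, 3))
```

In this sketch, replacing the greedy loop with an exact max-min assignment over the same queue-weighted UCB utilities would correspond to the paper's modified Hungarian algorithm.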
Related papers
- pFedWN: A Personalized Federated Learning Framework for D2D Wireless Networks with Heterogeneous Data [1.9188272016043582]
Traditional Federated Learning approaches often struggle with data heterogeneity across clients.
Personalized federated learning (PFL) emerges as a solution to the challenges posed by non-independent and identically distributed (non-IID) and unbalanced data across clients.
We formulate a joint optimization problem that incorporates the underlying device-to-device (D2D) wireless channel conditions into a server-free PFL approach.
arXiv Detail & Related papers (2025-01-16T20:16:49Z) - Over-the-Air Fair Federated Learning via Multi-Objective Optimization [52.295563400314094]
We propose an over-the-air fair federated learning algorithm (OTA-FFL) to train fair FL models.
Experiments demonstrate the superiority of OTA-FFL in achieving fairness and robust performance.
arXiv Detail & Related papers (2025-01-06T21:16:51Z) - One-Shot Federated Learning with Bayesian Pseudocoresets [19.53527340816458]
We show that distributed function-space inference is tightly related to learning Bayesian pseudocoresets.
We show that this approach achieves prediction performance competitive to state-of-the-art while showing a striking reduction in communication cost of up to two orders of magnitude.
arXiv Detail & Related papers (2024-06-04T10:14:39Z) - Gradient Sparsification for Efficient Wireless Federated Learning with
Differential Privacy [25.763777765222358]
Federated learning (FL) enables distributed clients to collaboratively train a machine learning model without sharing raw data with each other.
As the model size grows, the training latency increases due to limited transmission bandwidth, and model performance degrades under differential privacy (DP) protection.
We propose a sparsification-empowered FL framework over wireless channels to improve training efficiency without sacrificing convergence performance.
arXiv Detail & Related papers (2023-04-09T05:21:15Z) - Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated
Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and biasness of the global model.
Experiments based on (semi-supervised) image classification tasks demonstrate superiority of FedVRA over the existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z) - FedFM: Anchor-based Feature Matching for Data Heterogeneity in Federated
Learning [91.74206675452888]
We propose a novel method FedFM, which guides each client's features to match shared category-wise anchors.
To achieve higher efficiency and flexibility, we propose a FedFM variant, called FedFM-Lite, where clients communicate with server with fewer synchronization times and communication bandwidth costs.
arXiv Detail & Related papers (2022-10-14T08:11:34Z) - Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated
Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z) - Over-the-Air Federated Learning via Second-Order Optimization [37.594140209854906]
Federated learning (FL) could result in task-oriented data traffic flows over wireless networks with limited radio resources.
We propose a novel over-the-air second-order federated optimization algorithm to simultaneously reduce the communication rounds and enable low-latency global model aggregation.
arXiv Detail & Related papers (2022-03-29T12:39:23Z) - Harnessing Wireless Channels for Scalable and Privacy-Preserving
Federated Learning [56.94644428312295]
Wireless connectivity is instrumental in enabling federated learning (FL).
Channel randomness perturbs each worker's model update, while multiple workers' updates incur significant interference over limited bandwidth.
In A-FADMM, all workers upload their model updates to the parameter server using a single channel via analog transmissions.
This not only saves communication bandwidth, but also hides each worker's exact model update trajectory from any eavesdropper.
arXiv Detail & Related papers (2020-07-03T16:31:15Z)