Harnessing Wireless Channels for Scalable and Privacy-Preserving
Federated Learning
- URL: http://arxiv.org/abs/2007.01790v2
- Date: Tue, 17 Nov 2020 09:17:13 GMT
- Title: Harnessing Wireless Channels for Scalable and Privacy-Preserving
Federated Learning
- Authors: Anis Elgabli, Jihong Park, Chaouki Ben Issaid, Mehdi Bennis
- Abstract summary: Wireless connectivity is instrumental in enabling federated learning (FL).
Channel randomness perturbs each worker's model update, while multiple workers' updates incur significant interference under limited bandwidth.
In A-FADMM, all workers upload their model updates to the parameter server using a single channel via analog transmissions.
This not only saves communication bandwidth, but also hides each worker's exact model update trajectory from any eavesdropper.
- Score: 56.94644428312295
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Wireless connectivity is instrumental in enabling scalable federated learning
(FL), yet wireless channels bring challenges for model training, in which
channel randomness perturbs each worker's model update while multiple workers'
updates incur significant interference under limited bandwidth. To address
these challenges, in this work we formulate a novel constrained optimization
problem, and propose an FL framework harnessing wireless channel perturbations
and interference for improving privacy, bandwidth-efficiency, and scalability.
The resultant algorithm is coined analog federated ADMM (A-FADMM) based on
analog transmissions and the alternating direction method of multipliers
(ADMM). In A-FADMM, all workers upload their model updates to the parameter
server (PS) using a single channel via analog transmissions, during which all
models are perturbed and aggregated over-the-air. This not only saves
communication bandwidth, but also hides each worker's exact model update
trajectory from any eavesdropper including the honest-but-curious PS, thereby
preserving data privacy against model inversion attacks. We formally prove the
convergence and privacy guarantees of A-FADMM for convex functions under
time-varying channels, and numerically show the effectiveness of A-FADMM under
noisy channels and stochastic non-convex functions, in terms of convergence
speed and scalability, as well as communication bandwidth and energy
efficiency.
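For intuition, the following is a minimal NumPy sketch of analog over-the-air aggregation, not the authors' A-FADMM update itself; the worker count, Rayleigh fading model, channel-inversion power control, and noise level are illustrative assumptions. It only shows how simultaneously transmitted analog signals superpose, so the PS observes a noisy sum rather than any individual worker's update.

```python
import numpy as np

rng = np.random.default_rng(0)
num_workers, dim = 10, 1000          # illustrative sizes
noise_std = 0.05                     # receiver noise level (assumption)

# Local model updates computed by each worker (placeholders for the ADMM primal updates).
updates = [rng.normal(size=dim) for _ in range(num_workers)]

# Each worker knows its own fading coefficient h_k and pre-scales its analog
# transmit signal by 1/h_k (simple channel inversion; A-FADMM's actual power
# control differs).
h = rng.rayleigh(scale=1.0, size=num_workers) + 0.1   # avoid near-zero gains
tx_signals = [updates[k] / h[k] for k in range(num_workers)]

# All workers transmit simultaneously over the SAME channel resource:
# the PS receives only the superposition plus noise, never individual updates.
rx = sum(h[k] * tx_signals[k] for k in range(num_workers))
rx += noise_std * rng.normal(size=dim)

# The PS recovers a noisy estimate of the average update in one shot.
avg_update_est = rx / num_workers
true_avg = np.mean(updates, axis=0)
print("estimation error:", np.linalg.norm(avg_update_est - true_avg))
```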
Related papers
- Generating High Dimensional User-Specific Wireless Channels using Diffusion Models [28.270917362301972]
This paper introduces a novel method for generating synthetic wireless channel data using diffusion-based models.
We generate synthetic high fidelity channel samples using user positions as conditional inputs, creating larger augmented datasets to overcome measurement scarcity.
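As a rough illustration of the diffusion-based channel generation idea, the sketch below implements a single DDPM-style training step with the user position as a conditioning input; the MLP denoiser, noise schedule, and tensor sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

channel_dim, pos_dim = 64, 3
denoiser = nn.Sequential(nn.Linear(channel_dim + pos_dim + 1, 256),
                         nn.ReLU(),
                         nn.Linear(256, channel_dim))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

def training_step(h0, pos):
    """h0: (B, channel_dim) measured channels, pos: (B, pos_dim) user positions."""
    B = h0.shape[0]
    t = torch.randint(0, T, (B,))
    a_bar = alphas_bar[t].unsqueeze(1)
    eps = torch.randn_like(h0)
    # Forward noising: h_t = sqrt(a_bar) * h0 + sqrt(1 - a_bar) * eps
    ht = a_bar.sqrt() * h0 + (1 - a_bar).sqrt() * eps
    # Predict the injected noise from (noisy channel, position, timestep).
    inp = torch.cat([ht, pos, t.float().unsqueeze(1) / T], dim=1)
    loss = ((denoiser(inp) - eps) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Dummy batch standing in for real channel measurements.
print(training_step(torch.randn(32, channel_dim), torch.rand(32, pos_dim)))
```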
arXiv Detail & Related papers (2024-09-05T22:08:28Z)
- Hierarchical Federated Learning in Wireless Networks: Pruning Tackles Bandwidth Scarcity and System Heterogeneity [32.321021292376315]
We propose pruning-enabled hierarchical federated learning (PHFL) in heterogeneous networks (HetNets).
We first derive an upper bound of the convergence rate that clearly demonstrates the impact of the model pruning and wireless communications.
We validate the effectiveness of our proposed PHFL algorithm in terms of test accuracy, wall clock time, energy consumption and bandwidth requirement.
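A minimal sketch of the pruning ingredient, assuming simple magnitude-based pruning of a local model vector before uplink transmission; the keep ratio and model size are illustrative, and PHFL's actual convergence-aware pruning is more involved.

```python
import numpy as np

def magnitude_prune(params: np.ndarray, keep_ratio: float = 0.3):
    """Keep only the largest-magnitude entries; zero the rest.

    Only ~keep_ratio of the entries (plus their indices) need to be
    transmitted, reducing the uplink bandwidth requirement.
    """
    k = max(1, int(keep_ratio * params.size))
    idx = np.argpartition(np.abs(params), -k)[-k:]
    pruned = np.zeros_like(params)
    pruned[idx] = params[idx]
    return pruned, idx

rng = np.random.default_rng(1)
local_model = rng.normal(size=10_000)
pruned_model, kept_idx = magnitude_prune(local_model, keep_ratio=0.3)
print("transmitted entries:", kept_idx.size, "of", local_model.size)
```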
arXiv Detail & Related papers (2023-08-03T07:03:33Z)
- Gradient Sparsification for Efficient Wireless Federated Learning with Differential Privacy [25.763777765222358]
Federated learning (FL) enables distributed clients to collaboratively train a machine learning model without sharing raw data with each other.
As the model size grows, the training latency increases due to limited transmission bandwidth, and the model performance degrades when using differential privacy (DP) protection.
We propose a gradient-sparsification-empowered FL framework over wireless channels to improve training efficiency without sacrificing convergence performance.
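A minimal sketch of the combination, assuming top-k gradient sparsification followed by clipping and Gaussian noise in the style of the Gaussian mechanism; the sparsity level, clipping bound, and noise multiplier are illustrative assumptions rather than the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(2)

def sparsify_and_privatize(grad, k_ratio=0.1, clip=1.0, noise_mult=1.2):
    """Top-k sparsification, clipping, and Gaussian noise before upload.

    Only the k largest-magnitude coordinates are kept (shorter transmission);
    noise is added to the surviving coordinates before they leave the client.
    """
    k = max(1, int(k_ratio * grad.size))
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    # Clip to bound per-client sensitivity, then add Gaussian noise.
    norm = np.linalg.norm(sparse)
    sparse *= min(1.0, clip / (norm + 1e-12))
    sparse[idx] += noise_mult * clip * rng.normal(size=k)
    return sparse

grad = rng.normal(size=5000)
private_sparse_grad = sparsify_and_privatize(grad)
print("nonzero entries uploaded:", np.count_nonzero(private_sparse_grad))
```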
arXiv Detail & Related papers (2023-04-09T05:21:15Z)
- Digital Over-the-Air Federated Learning in Multi-Antenna Systems [30.137208705209627]
We study the performance optimization of federated learning (FL) over a realistic wireless communication system with digital modulation and over-the-air computation (AirComp)
We propose a modified federated averaging (FedAvg) algorithm that combines digital modulation with AirComp to mitigate wireless fading while ensuring communication efficiency.
An artificial neural network (ANN) is used to estimate the local FL models of all devices and adjust the beamforming matrices at the PS for future model transmission.
arXiv Detail & Related papers (2023-02-04T07:26:06Z)
- SlimFL: Federated Learning with Superposition Coding over Slimmable Neural Networks [56.68149211499535]
Federated learning (FL) is a key enabler for efficient communication and computing leveraging devices' distributed computing capabilities.
This paper proposes a novel learning framework by integrating FL and width-adjustable slimmable neural networks (SNNs)
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
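To illustrate only the width-adjustable (slimmable) ingredient, the sketch below runs a single layer at different widths by slicing one shared weight matrix; the layer sizes and two-width setup are illustrative assumptions, and superposition coding is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(3)

# One full-width layer; narrower configurations reuse its leading rows,
# so a single parameter set supports several model widths (the SNN idea).
W_full = rng.normal(size=(128, 64)) * 0.05
b_full = np.zeros(128)

def slimmable_forward(x, width_ratio=1.0):
    """Run the layer at a fraction of its full width by slicing the weights."""
    out_dim = int(W_full.shape[0] * width_ratio)
    h = W_full[:out_dim] @ x + b_full[:out_dim]
    return np.maximum(h, 0.0)  # ReLU

x = rng.normal(size=64)
print("half width output:", slimmable_forward(x, 0.5).shape)   # (64,)
print("full width output:", slimmable_forward(x, 1.0).shape)   # (128,)
```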
arXiv Detail & Related papers (2022-03-26T15:06:13Z)
- Joint Superposition Coding and Training for Federated Learning over Multi-Width Neural Networks [52.93232352968347]
This paper aims to integrate two synergetic technologies, federated learning (FL) and width-adjustable slimmable neural network (SNN)
FL preserves data privacy by exchanging the locally trained models of mobile devices. Applying SNNs in FL is, however, non-trivial, particularly under wireless connections with time-varying channel conditions.
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2021-12-05T11:17:17Z)
- Low-Latency Federated Learning over Wireless Channels with Differential Privacy [142.5983499872664]
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement.
arXiv Detail & Related papers (2021-06-20T13:51:18Z)
- Adaptive Subcarrier, Parameter, and Power Allocation for Partitioned Edge Learning Over Broadband Channels [69.18343801164741]
Partitioned edge learning (PARTEL) implements parameter-server training, a well-known distributed learning method, in a wireless network.
We consider the case of deep neural network (DNN) models which can be trained using PARTEL by introducing some auxiliary variables.
arXiv Detail & Related papers (2020-10-08T15:27:50Z)
- Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm which enhances the common local stochastic gradient descent (SGD) FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
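A minimal sketch of the precoding idea, assuming a simple power-constrained scaling of simultaneously transmitted updates; the power budget, noise level, and scaling rule are illustrative, not COTAF's exact precoders. As local updates shrink over training, the precoder amplifies them, so the effective aggregation noise after de-scaling shrinks as well.

```python
import numpy as np

rng = np.random.default_rng(4)
num_workers, dim, noise_std, power = 8, 500, 0.1, 1.0

def ota_round(updates, noise_std):
    """Precode, transmit simultaneously, and de-scale at the server.

    The precoder alpha enforces the transmit power budget; dividing the
    received sum by alpha means the effective noise shrinks as updates shrink.
    """
    max_energy = max(np.sum(u ** 2) for u in updates)   # proxy for E||update||^2
    alpha = np.sqrt(power * dim / (max_energy + 1e-12))
    rx = sum(alpha * u for u in updates) + noise_std * rng.normal(size=dim)
    return rx / (alpha * len(updates))                  # noisy average update

# Updates typically shrink over training; the effective noise shrinks with them.
for scale in [1.0, 0.1, 0.01]:
    updates = [scale * rng.normal(size=dim) for _ in range(num_workers)]
    est = ota_round(updates, noise_std)
    err = np.linalg.norm(est - np.mean(updates, axis=0))
    print(f"update scale {scale:>5}: aggregation error {err:.4f}")
```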
arXiv Detail & Related papers (2020-09-27T08:28:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.