Communication-Efficient Federated Learning over Wireless Channels via Gradient Sketching
- URL: http://arxiv.org/abs/2410.23424v1
- Date: Wed, 30 Oct 2024 20:01:08 GMT
- Title: Communication-Efficient Federated Learning over Wireless Channels via Gradient Sketching
- Authors: Vineet Sunil Gattani, Junshan Zhang, Gautam Dasarathy
- Abstract summary: We propose Federated Proximal Sketching (FPS), tailored towards band-limited wireless channels.
FPS uses a count sketch data structure to address the bandwidth bottleneck and enable efficient compression.
We demonstrate the stability, accuracy, and efficiency of FPS in comparison to state-of-the-art methods on both synthetic and real-world datasets.
- Score: 23.523969065599193
- License:
- Abstract: Large-scale federated learning (FL) over wireless multiple access channels (MACs) has emerged as a crucial learning paradigm with a wide range of applications. However, its widespread adoption is hindered by several major challenges, including limited bandwidth shared by many edge devices, noisy and erroneous wireless communications, and heterogeneous datasets with different distributions across edge devices. To overcome these fundamental challenges, we propose Federated Proximal Sketching (FPS), tailored to band-limited wireless channels and to handling data heterogeneity across edge devices. FPS uses a count sketch data structure to address the bandwidth bottleneck and enable efficient compression while maintaining accurate estimation of significant coordinates. Additionally, we modify the loss function in FPS so that it can deal with varying degrees of data heterogeneity. We establish the convergence of the FPS algorithm under mild technical conditions and characterize how the bias induced by factors such as data heterogeneity and noisy wireless channels affects the overall result. We complement the proposed theoretical framework with numerical experiments that demonstrate the stability, accuracy, and efficiency of FPS in comparison to state-of-the-art methods on both synthetic and real-world datasets. Overall, our results show that FPS is a promising solution for tackling the above challenges of FL over wireless MACs.
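The count sketch named in the abstract is the paper's core compression primitive. The following is a minimal, self-contained Python sketch of how such a data structure can compress a gradient vector into a small table and recover its largest-magnitude coordinates; the table dimensions, hashing scheme, and top-k recovery step here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class CountSketch:
    """Minimal count sketch for compressing a gradient vector.

    Illustrative only: sizes, hashing, and top-k recovery are assumptions
    made for clarity, not the FPS implementation from the paper.
    """

    def __init__(self, dim, rows=5, cols=1024, seed=0):
        rng = np.random.default_rng(seed)
        self.dim, self.rows, self.cols = dim, rows, cols
        # One bucket index and one random sign per row for every coordinate.
        self.bucket = rng.integers(0, cols, size=(rows, dim))
        self.sign = rng.choice([-1.0, 1.0], size=(rows, dim))

    def compress(self, grad):
        """Project a length-`dim` gradient into a small rows x cols table."""
        table = np.zeros((self.rows, self.cols))
        for r in range(self.rows):
            np.add.at(table[r], self.bucket[r], self.sign[r] * grad)
        return table  # this table is what would be transmitted

    def estimate(self, table):
        """Recover coordinates as the median across rows of signed bucket values."""
        est = np.stack([self.sign[r] * table[r, self.bucket[r]]
                        for r in range(self.rows)])
        return np.median(est, axis=0)

    def top_k(self, table, k):
        """Keep only the k largest-magnitude recovered coordinates."""
        est = self.estimate(table)
        out = np.zeros(self.dim)
        idx = np.argpartition(np.abs(est), -k)[-k:]
        out[idx] = est[idx]
        return out


# Usage: a gradient whose energy sits in a few "significant" coordinates
# survives the compress/recover round trip with small relative error.
if __name__ == "__main__":
    dim = 10_000
    g = np.zeros(dim)
    g[:20] = np.random.default_rng(1).normal(5.0, 1.0, size=20)
    cs = CountSketch(dim)
    recovered = cs.top_k(cs.compress(g), k=20)
    print("relative error:", np.linalg.norm(recovered - g) / np.linalg.norm(g))
```

On the optimization side, the modified loss mentioned in the abstract plausibly involves a proximal-style regularization toward the global model (suggested by the name Federated Proximal Sketching), for example adding (mu/2) * ||w - w_global||^2 to each client's local objective; this is an interpretation, and the paper's exact modification is not reproduced here.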
Related papers
- Online-Score-Aided Federated Learning: Taming the Resource Constraints in Wireless Networks [26.74283774805648]
We propose a new FL algorithm called OSAFL, specifically designed to learn tasks relevant to wireless applications.
Our extensive simulation results on two different tasks -- each with three different datasets -- with four popular ML models validate the effectiveness of OSAFL.
arXiv Detail & Related papers (2024-08-12T01:27:06Z) - Hierarchical Over-the-Air Federated Learning with Awareness of Interference and Data Heterogeneity [3.8798345704175534]
We introduce a scalable transmission scheme that efficiently uses a single wireless resource through over-the-air computation.
We show that despite the interference and the data heterogeneity, the proposed scheme achieves high learning accuracy and can significantly outperform the conventional hierarchical algorithm.
arXiv Detail & Related papers (2024-01-02T21:43:01Z) - Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For the different types of local updates that edge devices can transmit (i.e., model, gradient, or model difference), we reveal that transmitting them in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve communication efficiency and extend the convergence analysis to the different forms of model aggregation error caused by these schemes (a toy AirComp aggregation sketch is given after this list).
arXiv Detail & Related papers (2023-10-16T05:49:28Z) - Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate the loss function minimization problem under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of the learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z) - Disentangled Representation Learning for RF Fingerprint Extraction under Unknown Channel Statistics [77.13542705329328]
We propose a framework of disentangled representation learning (DRL) that first learns to factor the input signals into a device-relevant component and a device-irrelevant component via adversarial learning.
The implicit data augmentation in the proposed framework imposes a regularization on the RFF extractor to avoid the possible overfitting of device-irrelevant channel statistics.
Experiments validate that the proposed approach, referred to as DR-RFF, outperforms conventional methods in terms of generalizability to unknown complicated propagation environments.
arXiv Detail & Related papers (2022-08-04T15:46:48Z) - Low-Latency Federated Learning over Wireless Channels with Differential Privacy [142.5983499872664]
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement.
arXiv Detail & Related papers (2021-06-20T13:51:18Z) - Fast Federated Learning in the Presence of Arbitrary Device Unavailability [26.368873771739715]
Federated Learning (FL) coordinates heterogeneous devices to collaboratively train a shared model while preserving user privacy.
One challenge arises when edge devices drop out of the training process beyond the control of the central server.
We propose Memory-augmented Impatient Federated Averaging (MIFA) to solve this problem.
arXiv Detail & Related papers (2021-06-08T07:46:31Z) - Blind Federated Edge Learning [93.29571175702735]
We study federated edge learning (FEEL), where wireless edge devices, each with its own dataset, learn a global model.
We propose an analog over-the-air aggregation scheme, in which the devices transmit their local updates in an uncoded fashion.
arXiv Detail & Related papers (2020-10-19T16:22:28Z) - Harnessing Wireless Channels for Scalable and Privacy-Preserving Federated Learning [56.94644428312295]
Wireless connectivity is instrumental in enabling federated learning (FL).
Channel randomness perturbs each worker's model update, while multiple workers' updates incur significant interference under limited bandwidth.
In A-FADMM, all workers upload their model updates to the parameter server using a single channel via analog transmissions.
This not only saves communication bandwidth, but also hides each worker's exact model update trajectory from any eavesdropper.
arXiv Detail & Related papers (2020-07-03T16:31:15Z)
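Several of the entries above (the hierarchical over-the-air scheme, AirFedAvg, blind federated edge learning, and A-FADMM) rely on over-the-air computation, in which all devices transmit analog updates in the same resource block and the multiple access channel itself performs the summation. The toy Python sketch below, referenced from the AirFedAvg entry, illustrates only that aggregation step; unit channel gains, perfect synchronization, and additive Gaussian receiver noise are simplifying assumptions, so this is not any single paper's scheme.

```python
import numpy as np

def aircomp_aggregate(local_updates, noise_std=0.01, rng=None):
    """Toy over-the-air aggregation: all clients transmit their analog
    updates simultaneously, so the receiver observes their superposition
    plus additive noise, then divides by the number of clients.

    Unit channel gains and perfect power control are assumed purely for
    illustration; real AirComp schemes must handle fading and alignment.
    """
    rng = rng or np.random.default_rng(0)
    stacked = np.stack(local_updates)          # shape: (num_clients, dim)
    superposed = stacked.sum(axis=0)           # the sum the MAC "computes"
    noisy = superposed + rng.normal(0.0, noise_std, size=superposed.shape)
    return noisy / len(local_updates)          # noisy estimate of the FedAvg mean


# Usage: compare the noisy over-the-air mean against exact FedAvg averaging.
updates = [np.random.default_rng(i).normal(size=1000) for i in range(10)]
exact = np.mean(updates, axis=0)
approx = aircomp_aggregate(updates, noise_std=0.05)
print("aggregation error:", np.linalg.norm(approx - exact))
```

The gap between the returned noisy mean and the exact FedAvg mean is the kind of aggregation error that the AirFedAvg convergence analysis above accounts for.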
This list is automatically generated from the titles and abstracts of the papers on this site.