Spectrum Breathing: Protecting Over-the-Air Federated Learning Against
Interference
- URL: http://arxiv.org/abs/2305.05933v1
- Date: Wed, 10 May 2023 07:05:43 GMT
- Title: Spectrum Breathing: Protecting Over-the-Air Federated Learning Against
Interference
- Authors: Zhanwei Wang, Kaibin Huang, and Yonina C. Eldar
- Abstract summary: The deployment of FL in mobile networks can be compromised by interference from neighboring cells or jammers.
We propose Spectrum Breathing, which cascades stochastic gradient pruning and spread spectrum to suppress interference without bandwidth expansion.
We show a performance tradeoff between the gradient-pruning error and the interference-induced error, as regulated by the breathing depth.
- Score: 101.9031141868695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a widely embraced paradigm for distilling
artificial intelligence from distributed mobile data. However, the deployment
of FL in mobile networks can be compromised by exposure to interference from
neighboring cells or jammers. Existing interference mitigation techniques
require multi-cell cooperation or at least interference channel state
information, which is expensive in practice. On the other hand, power control
that treats interference as noise may not be effective due to limited power
budgets, and this mechanism can also trigger countermeasures by
interference sources. As a practical approach for protecting FL against
interference, we propose Spectrum Breathing, which cascades stochastic-gradient
pruning and spread spectrum to suppress interference without bandwidth
expansion. The cost is higher learning latency, incurred by exploiting the graceful
degradation of learning speed caused by pruning. We synchronize the two operations
such that their levels are controlled by the same parameter, Breathing Depth.
To optimally control the parameter, we develop a martingale-based approach to
convergence analysis of Over-the-Air FL with spectrum breathing, termed
AirBreathing FL. We show a performance tradeoff between the gradient-pruning error and
the interference-induced error, as regulated by the breathing depth. Given the receive
SIR and model size, the optimization of the tradeoff yields two schemes for
controlling the breathing depth that can be either fixed or adaptive to
channels and the learning process. As shown by experiments, in scenarios where
traditional Over-the-Air FL fails to converge in the presence of strong
interference, AirBreathing FL with either fixed or adaptive breathing depth can
ensure convergence, with the adaptive scheme achieving close-to-ideal
performance.
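To make the cascaded operation concrete, below is a minimal NumPy sketch of the idea, written under stated assumptions rather than reproducing the paper's implementation: random gradient pruning with a mask shared by all devices, a single binary pseudo-noise spreading code, one over-the-air aggregation round, and Gaussian interference. The breathing depth sets both the pruning ratio and the spreading factor, so total bandwidth is unchanged, and despreading suppresses the interference by the processing gain.

```python
# Toy sketch of spectrum breathing: prune, spread, aggregate over the air, despread.
# Assumptions (not from the paper's code): random pruning, shared PN code, AWGN interference.
import numpy as np

rng = np.random.default_rng(0)

def prune(grad, depth, rng):
    """Keep a random 1/depth fraction of gradient entries; return values and mask."""
    d = grad.size
    keep = rng.choice(d, size=d // depth, replace=False)
    mask = np.zeros(d, dtype=bool)
    mask[keep] = True
    return grad[mask], mask

def spread(symbols, code):
    """Direct-sequence spreading: repeat each symbol over the PN code chips."""
    return np.outer(symbols, code).ravel()

def despread(chips, code):
    """Correlate chip blocks with the PN code; processing gain = code length."""
    return chips.reshape(-1, code.size) @ code / code.size

d_model = 1000
depth = 10                       # breathing depth: pruning ratio = spreading factor
num_devices = 8
code = rng.choice([-1.0, 1.0], size=depth)          # shared binary PN code
true_grads = rng.normal(size=(num_devices, d_model))

# All devices use the same pruning mask (assumed synchronized by the server).
_, mask = prune(true_grads[0], depth, rng)

# Simultaneous analog transmission: pruned-and-spread signals add over the air.
tx = sum(spread(g[mask], code) for g in true_grads)
interference = rng.normal(scale=4.0, size=tx.size)   # strong wideband interference
rx = tx + interference

# Despreading suppresses the interference by the processing gain (= depth),
# then the server rescales to estimate the average pruned gradient.
agg = np.zeros(d_model)
agg[mask] = despread(rx, code) / num_devices

ideal = true_grads.mean(axis=0)
err = np.linalg.norm(agg[mask] - ideal[mask]) / np.linalg.norm(ideal[mask])
print(f"relative error on retained entries: {err:.3f}")
```

Increasing the breathing depth retains a smaller fraction of the gradient but yields stronger interference suppression, which is the pruning-versus-interference tradeoff that the breathing depth regulates.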
Related papers
- DEeR: Deviation Eliminating and Noise Regulating for Privacy-preserving Federated Low-rank Adaptation [29.30782543513243]
We propose a privacy-preserving federated fine-tuning framework called Deviation Eliminating and Noise Regulating (DEeR).
We show that DEeR achieves better performance on public medical datasets than state-of-the-art approaches.
arXiv Detail & Related papers (2024-10-16T18:11:52Z)
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For the different types of local updates that edge devices can transmit (i.e., model, gradient, or model difference), we reveal that their transmission in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve communication efficiency and extend the convergence analysis to the different forms of model aggregation error caused by these schemes.
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
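The entry above centers on AirComp aggregation. The sketch below shows one such aggregation step under simplifying assumptions (real-valued fading known at the devices, full channel inversion with no power limit); it is an illustration of the mechanism, not the AirFedAvg implementation, and the residual noise plays the role of the aggregation error mentioned above.

```python
# Toy AirComp aggregation step: pre-equalized simultaneous transmissions add over the air.
import numpy as np

rng = np.random.default_rng(1)
num_devices, d = 10, 512
noise_std = 0.1

local_updates = rng.normal(size=(num_devices, d))     # models, gradients, or model diffs
channels = rng.rayleigh(scale=1.0, size=num_devices)  # real fading gains, assumed known

# Each device pre-equalizes its own channel; in practice a truncation/power
# limit would apply, which is one source of aggregation error.
tx = local_updates / channels[:, None]
rx = (channels[:, None] * tx).sum(axis=0) + rng.normal(scale=noise_std, size=d)

air_avg = rx / num_devices                 # AirComp estimate of the model average
ideal_avg = local_updates.mean(axis=0)
print("aggregation error (RMSE):", float(np.sqrt(np.mean((air_avg - ideal_avg) ** 2))))
```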
- Channel and Gradient-Importance Aware Device Scheduling for Over-the-Air Federated Learning [31.966999085992505]
Federated learning (FL) is a privacy-preserving distributed training scheme.
We propose a device scheduling framework for over-the-air FL, named PO-FL, to mitigate the negative impact of channel noise distortion.
arXiv Detail & Related papers (2023-05-26T12:04:59Z)
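The PO-FL entry above concerns probabilistic, channel- and gradient-aware device scheduling. The sketch below shows a generic version of that idea with inverse-probability weighting to keep the aggregate unbiased; the importance score and selection probabilities are illustrative assumptions, not the scheduling policy from the paper.

```python
# Toy probabilistic device scheduling with inverse-probability weighting.
import numpy as np

rng = np.random.default_rng(2)
num_devices, d, budget = 20, 128, 5
grads = rng.normal(size=(num_devices, d))
channel_gain = rng.rayleigh(size=num_devices)

# Illustrative importance score: better channel and larger gradient -> higher
# scheduling probability (a stand-in, NOT the PO-FL rule).
score = channel_gain * np.linalg.norm(grads, axis=1)
probs = np.minimum(1.0, budget * score / score.sum())  # about `budget` devices in expectation

scheduled = rng.random(num_devices) < probs
# Inverse-probability weighting keeps the aggregated update unbiased.
estimate = (grads[scheduled] / probs[scheduled, None]).sum(axis=0) / num_devices
rmse = np.sqrt(np.mean((estimate - grads.mean(axis=0)) ** 2))
print(f"scheduled {int(scheduled.sum())} devices, RMSE vs full average: {rmse:.3f}")
```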
- Amplitude-Varying Perturbation for Balancing Privacy and Utility in Federated Learning [86.08285033925597]
This paper presents a new DP perturbation mechanism with a time-varying noise amplitude to protect the privacy of federated learning.
We derive an online refinement of the noise-amplitude series to prevent FL from premature convergence caused by excessive perturbation noise.
The contribution of the new DP mechanism to the convergence and accuracy of privacy-preserving FL is corroborated in comparison with the state-of-the-art Gaussian noise mechanism, which uses a persistent noise amplitude.
arXiv Detail & Related papers (2023-03-07T22:52:40Z)
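To illustrate the time-varying noise amplitude described in the entry above, the sketch below runs a toy training loop whose perturbation amplitude decays geometrically over rounds; the decay schedule, learning rate, and gradients are assumptions for illustration, not the series derived in the paper.

```python
# Toy DP-style perturbation whose amplitude decays over communication rounds.
import numpy as np

rng = np.random.default_rng(3)
rounds, d, lr = 50, 64, 0.1
sigma0, decay = 1.0, 0.95        # assumed geometric schedule, not the paper's series

model = np.zeros(d)
for t in range(rounds):
    grad = rng.normal(size=d)                    # stand-in for an aggregated gradient
    sigma_t = sigma0 * decay ** t                # time-varying perturbation amplitude
    model -= lr * (grad + rng.normal(scale=sigma_t, size=d))

print(f"noise amplitude: round 0 = {sigma0:.2f}, "
      f"final round = {sigma0 * decay ** (rounds - 1):.2f}")
```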
- Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing training accuracy.
In this work, we aim to minimize both the privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z)
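The entry above adds correlated perturbations to OtA gradient uploads. The sketch below shows one simple instance of that idea, zero-sum noise that cancels in the aggregate while perturbing each individual upload; it is a toy construction, not the paper's perturbation design.

```python
# Toy zero-sum correlated perturbations: strong per-user noise, untouched aggregate.
import numpy as np

rng = np.random.default_rng(4)
num_users, d = 6, 256
updates = rng.normal(size=(num_users, d))

# Construct correlated noises that sum to (numerically) zero across users.
raw = rng.normal(scale=2.0, size=(num_users, d))
zero_sum_noise = raw - raw.mean(axis=0)
perturbed = updates + zero_sum_noise

print("per-user perturbation power:", float(np.mean(zero_sum_noise ** 2)))
print("max distortion of the aggregate:",
      float(np.abs(perturbed.sum(axis=0) - updates.sum(axis=0)).max()))
```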
- Over-the-Air Federated Learning with Retransmissions (Extended Version) [21.37147806100865]
We study the impact of estimation errors on the convergence of Federated Learning (FL) over resource-constrained wireless networks.
We propose retransmissions as a method to improve FL convergence over resource-constrained wireless networks.
arXiv Detail & Related papers (2021-11-19T15:17:15Z)
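For the retransmission entry above, the sketch below illustrates the underlying variance argument: averaging M independently noisy receptions of the same aggregated update reduces the estimation-error MSE by roughly a factor of M. Parameters are illustrative.

```python
# Toy retransmission example: averaging repeated noisy receptions of one update.
import numpy as np

rng = np.random.default_rng(5)
d, noise_std, retransmissions = 256, 0.5, 4
true_update = rng.normal(size=d)                 # ideal aggregated update

# The same update is received several times with independent noise.
receptions = true_update + rng.normal(scale=noise_std, size=(retransmissions, d))
mse_single = float(np.mean((receptions[0] - true_update) ** 2))
mse_avg = float(np.mean((receptions.mean(axis=0) - true_update) ** 2))
print(f"MSE single reception: {mse_single:.4f}, "
      f"after {retransmissions} retransmissions: {mse_avg:.4f}")
```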
- Harnessing Wireless Channels for Scalable and Privacy-Preserving Federated Learning [56.94644428312295]
Wireless connectivity is instrumental in enabling federated learning (FL).
Channel randomness perturbs each worker's model update, while the updates of multiple workers incur significant interference under limited bandwidth.
In A-FADMM, all workers upload their model updates to the parameter server over a single channel via analog transmissions.
This not only saves communication bandwidth, but also hides each worker's exact model update trajectory from any eavesdropper.
arXiv Detail & Related papers (2020-07-03T16:31:15Z)
- Federated Learning in the Sky: Joint Power Allocation and Scheduling with UAV Swarms [98.78553146823829]
Unmanned aerial vehicle (UAV) swarms must exploit machine learning (ML) in order to execute various tasks.
In this paper, a novel framework is proposed to implement distributed federated learning (FL) algorithms within a UAV swarm.
arXiv Detail & Related papers (2020-02-19T14:04:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information above (including all listed details) and is not responsible for any consequences of its use.