Over-the-Air Federated Learning with Privacy Protection via Correlated
Additive Perturbations
- URL: http://arxiv.org/abs/2210.02235v1
- Date: Wed, 5 Oct 2022 13:13:35 GMT
- Title: Over-the-Air Federated Learning with Privacy Protection via Correlated
Additive Perturbations
- Authors: Jialing Liao, Zheng Chen, and Erik G. Larsson
- Abstract summary: We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection at the cost of training accuracy.
In this work, we aim to minimize both the privacy leakage to the adversary and the degradation of model accuracy at the edge server.
- Score: 57.20885629270732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider privacy aspects of wireless federated learning
(FL) with Over-the-Air (OtA) transmission of gradient updates from multiple
users/agents to an edge server. By exploiting the waveform superposition
property of multiple access channels, OtA FL enables the users to transmit
their updates simultaneously with linear processing techniques, which improves
resource efficiency. However, this setting is vulnerable to privacy leakage,
since an adversary node can directly overhear the uncoded messages. Traditional
perturbation-based methods provide privacy protection but sacrifice training
accuracy due to the reduced signal-to-noise ratio. In this work, we aim to
simultaneously minimize the privacy leakage to the adversary and the
degradation of model accuracy at the edge server. Specifically, spatially
correlated perturbations are added to the gradient vectors at the users before
transmission. Owing to the zero-sum property of the correlated perturbations,
their side effect on the aggregated gradients at the edge server is minimized.
Meanwhile, the added perturbations do not cancel out at the adversary, which
prevents privacy leakage. Theoretical analysis of the perturbation covariance
matrix, differential privacy, and model convergence is provided, based on which
an optimization problem is formulated to jointly design the covariance matrix
and the power scaling factor to balance privacy protection against convergence
performance. Simulation results validate that the correlated perturbation
approach provides a strong defense while maintaining high learning accuracy.
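As a concrete illustration of the zero-sum mechanism described above, the following toy sketch (our own, not the authors' code) draws correlated Gaussian perturbations across K users by projecting i.i.d. noise onto the zero-sum subspace; the perturbations cancel in the over-the-air aggregate but still mask each individual transmission. The mean-subtraction projection is just one feasible zero-sum construction, whereas the paper jointly optimizes the covariance matrix and a power scaling factor.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 5, 8        # number of users, gradient dimension
sigma = 1.0        # perturbation scale (illustrative)

# Draw i.i.d. Gaussian noise per user, then subtract the per-coordinate
# mean across users: the resulting perturbations are spatially correlated
# (off-diagonal covariance -sigma^2/K) and sum exactly to zero.
Z = sigma * rng.standard_normal((K, d))
N = Z - Z.mean(axis=0, keepdims=True)

grads = rng.standard_normal((K, d))   # stand-in local gradients
perturbed = grads + N

# Edge server: OtA superposition sums the K signals, and the correlated
# perturbations cancel out of the aggregate.
assert np.allclose(perturbed.sum(axis=0), grads.sum(axis=0))

# Adversary: any single intercepted signal remains masked by noise.
print("residual noise power at the adversary:", N[0].var())
```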
Related papers
- Binary Federated Learning with Client-Level Differential Privacy [7.854806519515342] (arXiv, 2023-08-07)
Federated learning (FL) is a privacy-preserving collaborative learning framework.
Existing FL systems typically adopt Federated Averaging (FedAvg) as the training algorithm.
We propose a communication-efficient FL training algorithm with a client-level differential privacy guarantee.
- Spectrum Breathing: Protecting Over-the-Air Federated Learning Against Interference [73.63024765499719] (arXiv, 2023-05-10)
Mobile networks can be compromised by interference from neighboring cells or jammers.
We propose Spectrum Breathing, which cascades gradient pruning and spread spectrum to suppress interference without bandwidth expansion.
We show a performance tradeoff between gradient pruning and interference-induced error, as regulated by the breathing depth.
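A rough, self-contained sketch of the prune-then-spread idea in this entry (our reading, not the authors' code; in particular, interpreting the breathing depth as the spreading factor is our assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
d, rho = 16, 4     # gradient dimension; spreading factor ("breathing depth", assumed)
g = rng.standard_normal(d)

# Gradient pruning: keep only the d/rho largest-magnitude entries, so that
# spreading each survivor over rho chips costs no extra bandwidth.
k = d // rho
idx = np.sort(np.argsort(np.abs(g))[-k:])
symbols = g[idx]

# Spread spectrum: modulate each kept entry onto a +/-1 chip sequence;
# despreading at the receiver averages out uncorrelated interference.
chips = rng.choice([-1.0, 1.0], size=(k, rho))
tx = symbols[:, None] * chips                    # k*rho == d channel uses

rx = tx + 0.5 * rng.standard_normal(tx.shape)    # additive interference
recovered = (rx * chips).mean(axis=1)
print("max recovery error:", np.abs(recovered - symbols).max())
```

A deeper breathing depth (larger rho) suppresses more interference per symbol but forces more aggressive pruning, which is the tradeoff the paper quantifies.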
- Amplitude-Varying Perturbation for Balancing Privacy and Utility in Federated Learning [86.08285033925597] (arXiv, 2023-03-07)
This paper presents a new DP perturbation mechanism with a time-varying noise amplitude to protect the privacy of federated learning.
We derive an online refinement of the noise-amplitude series to prevent FL from converging prematurely due to excessive perturbation noise.
The contribution of the new DP mechanism to the convergence and accuracy of privacy-preserving FL is corroborated in comparison with the state-of-the-art Gaussian noise mechanism, which uses a constant noise amplitude.
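A minimal sketch of the time-varying-amplitude idea on a toy quadratic objective; the geometric decay schedule below is a placeholder of ours, not the series the paper derives, and the online refinement is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)
d, T, lr = 10, 50, 0.1
sigma0, q = 2.0, 0.9   # initial noise amplitude and decay ratio (assumed)

w = np.ones(d)         # toy objective f(w) = ||w||^2 / 2, so grad f(w) = w

for t in range(T):
    # Amplitude-varying perturbation: the per-round noise scale follows a
    # decaying series, so later rounds are perturbed less and training is
    # not stalled by a constant, overly large noise amplitude.
    sigma_t = sigma0 * q ** t
    noisy_grad = w + sigma_t * rng.standard_normal(d)
    w = w - lr * noisy_grad

print("distance to optimum:", np.linalg.norm(w))
```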
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197] (arXiv, 2022-02-14)
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in realistic FL use cases, and we provide a new baseline attack.
- BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning [0.0] (arXiv, 2022-02-06)
We present BEAS, the first blockchain-based framework for N-party Federated Learning.
It provides strict privacy guarantees for training data using gradient pruning.
Anomaly detection protocols are used to minimize the risk of data-poisoning attacks.
We also define a novel protocol to prevent premature convergence in heterogeneous learning environments.
- Stochastic Coded Federated Learning with Convergence and Privacy Guarantees [8.2189389638822] (arXiv, 2022-01-25)
Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework.
This paper proposes a coded federated learning framework, namely stochastic coded federated learning (SCFL), to mitigate the straggler issue.
We characterize the privacy guarantee via mutual information differential privacy (MI-DP) and analyze the convergence performance in federated learning.
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097] (arXiv, 2021-06-25)
This paper empirically demonstrates that clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
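For reference, a minimal sketch of the clipped, client-level DP aggregation step that this entry analyzes (constants illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
K, d = 10, 6
C, noise_mult = 1.0, 0.8   # clipping norm and noise multiplier (illustrative)

# Stand-in client model updates of widely varying magnitude.
updates = rng.standard_normal((K, d)) * rng.uniform(0.5, 3.0, size=(K, 1))

# Clip each client's update to l2-norm at most C, bounding any single
# client's influence on the average to C / K (the sensitivity).
norms = np.linalg.norm(updates, axis=1, keepdims=True)
clipped = updates * np.minimum(1.0, C / norms)

# Average the clipped updates and add Gaussian noise scaled to the
# sensitivity, yielding a client-level DP model update.
dp_update = clipped.mean(axis=0) + (noise_mult * C / K) * rng.standard_normal(d)
print(dp_update)
```

The clipping bias discussed in the entry comes from the rescaling step: when many clients' updates exceed C, the clipped average can point in a different direction than the true average.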
- Wireless Federated Learning with Limited Communication and Differential Privacy [21.328507360172203] (arXiv, 2021-06-01)
This paper investigates the role of dimensionality reduction in achieving communication efficiency and differential privacy (DP) for the local datasets of remote users in over-the-air computation (AirComp)-based federated learning (FL).
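The entry does not specify the reduction scheme; as one possible instantiation, the sketch below uses a shared random projection so each user transmits m symbols instead of d over the AirComp channel:

```python
import numpy as np

rng = np.random.default_rng(4)
K, d, m = 8, 100, 20   # users, gradient dimension, reduced dimension (assumed)

# Shared random projection known to the server and all users; entries are
# scaled so that E[A^T A] = I.
A = rng.standard_normal((m, d)) / np.sqrt(m)
grads = rng.standard_normal((K, d))

# Each user sends its projected gradient; AirComp superposes the K signals.
rx = (grads @ A.T).sum(axis=0)

# The server lifts the m-dimensional aggregate back with the transpose map;
# recovery is only approximate, which is the price of the compression.
est = A.T @ rx
true = grads.sum(axis=0)
cos = est @ true / (np.linalg.norm(est) * np.linalg.norm(true))
print("cosine similarity to the true aggregate:", round(float(cos), 3))
```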
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644] (arXiv, 2020-05-01)
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides statistical protection against such attacks, at the price of significantly degrading the accuracy or utility of the trained models.
This list is automatically generated from the titles and abstracts of the papers in this site.