An Accuracy-Lossless Perturbation Method for Defending Privacy Attacks
in Federated Learning
- URL: http://arxiv.org/abs/2002.09843v5
- Date: Sun, 15 Aug 2021 16:48:08 GMT
- Title: An Accuracy-Lossless Perturbation Method for Defending Privacy Attacks
in Federated Learning
- Authors: Xue Yang, Yan Feng, Weijun Fang, Jun Shao, Xiaohu Tang, Shu-Tao Xia,
Rongxing Lu
- Abstract summary: Federated learning improves the privacy of training data by exchanging local gradients or parameters rather than raw data.
An adversary can still leverage these local gradients and parameters to obtain local training data by launching reconstruction and membership inference attacks.
To defend against such privacy attacks, many noise perturbation methods have been designed.
- Score: 82.80836918594231
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although federated learning improves the privacy of training data by exchanging
local gradients or parameters rather than raw data, an adversary can still
leverage those gradients and parameters to obtain local training data by
launching reconstruction and membership inference attacks. To defend against such
privacy attacks, many noise perturbation methods (such as differential privacy or
the CountSketch matrix) have been designed. However, these schemes cannot ensure
strong defence ability and high learning accuracy at the same time, which impedes
the wide application of FL in practice (especially for medical or financial
institutions that require both high accuracy and a strong privacy guarantee). To
overcome this issue, in this paper we propose \emph{an efficient model
perturbation method for federated learning} to defend against reconstruction and
membership inference attacks launched by curious clients. On the one hand,
similar to differential privacy, our method selects random numbers as
perturbation noise added to the global model parameters, so it is very efficient
and easy to integrate in practice. Meanwhile, the randomly selected noise values
are positive real numbers that can be arbitrarily large, which ensures strong
defence ability. On the other hand, unlike differential privacy and other
perturbation methods that cannot eliminate the added noise, our method allows
the server to recover the true gradients by eliminating the added noise.
Therefore, our method does not hinder learning accuracy at all.
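The abstract only sketches the construction, so the snippet below is a minimal toy illustration in NumPy of the general pattern it describes: the server keeps large positive random noise secret, ships only the perturbed parameters to clients, and later cancels that noise from the aggregated updates. The function names (`server_perturb`, `client_update`, `server_recover`) are hypothetical, and the gradients are random stand-ins, so this shows only the add-then-cancel bookkeeping, not the authors' actual accuracy-lossless scheme, in which clients genuinely compute gradients on the perturbed model.

```python
# Toy sketch of "add removable noise, then cancel it" (NOT the paper's construction).
import numpy as np

rng = np.random.default_rng(0)

def server_perturb(global_params):
    """Server draws secret positive noise (can be arbitrarily large) and
    sends only the perturbed parameters to clients."""
    noise = rng.uniform(1.0, 1e3, size=global_params.shape)  # positive, large
    return global_params + noise, noise                      # noise never leaves the server

def client_update(perturbed_params, local_grad, lr=0.1):
    """A curious client only ever sees the perturbed parameters."""
    return perturbed_params - lr * local_grad

def server_recover(returned_params, noise):
    """Server aggregates the returned parameters and cancels its own noise."""
    aggregated = np.mean(returned_params, axis=0)
    return aggregated - noise

w = np.zeros(4)                                      # toy global model
w_tilde, r = server_perturb(w)
fake_grads = [rng.normal(size=4) for _ in range(3)]  # stand-ins for local gradients
returned = [client_update(w_tilde, g) for g in fake_grads]
w_new = server_recover(returned, r)
print(w_new)  # equals mean(w - 0.1 * g_k): the server's noise cancels exactly
```

In this toy the cancellation is exact only because the stand-in gradients do not depend on the perturbation; handling gradients that are actually computed on the perturbed model is precisely what the paper's method is designed to do.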
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvement in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z) - FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated
Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method achieves improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z) - Decorrelative Network Architecture for Robust Electrocardiogram
Classification [4.808817930937323]
It is not possible to train networks that are accurate in all scenarios.
Deep learning methods sample the model parameter space to estimate uncertainty.
These parameters are often subject to the same vulnerabilities, which can be exploited by adversarial attacks.
We propose a novel ensemble approach based on feature decorrelation and Fourier partitioning for teaching networks diverse complementary features.
arXiv Detail & Related papers (2022-07-19T02:36:36Z) - Defense Against Gradient Leakage Attacks via Learning to Obscure Data [48.67836599050032]
Federated learning is considered an effective privacy-preserving learning mechanism.
In this paper, we propose a new defense method to protect the privacy of clients' data by learning to obscure data.
arXiv Detail & Related papers (2022-06-01T21:03:28Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z) - Gradient Leakage Attack Resilient Deep Learning [7.893378392969824]
Gradient leakage attacks are considered one of the wickedest privacy threats in deep learning.
Deep learning with differential privacy is the de facto standard for publishing deep learning models with a differential privacy guarantee.
This paper investigates alternative approaches to gradient leakage resilient deep learning with differential privacy.
arXiv Detail & Related papers (2021-12-25T03:33:02Z) - A Shuffling Framework for Local Differential Privacy [40.92785300658643]
LDP deployments are vulnerable to inference attacks, as an adversary can link a user's noisy responses to their identity.
An alternative model, shuffle DP, prevents this by shuffling the noisy responses uniformly at random.
We show that systematic shuffling of the noisy responses can thwart specific inference attacks while retaining some meaningful data learnability.
arXiv Detail & Related papers (2021-06-11T20:36:23Z) - Federated Learning in Adversarial Settings [0.8701566919381224]
Federated learning schemes provide different trade-offs between robustness, privacy, bandwidth efficiency, and model accuracy.
We show that this extension performs as efficiently as the non-private but robust scheme, even with stringent privacy requirements.
This suggests a possible fundamental trade-off between Differential Privacy and robustness.
arXiv Detail & Related papers (2020-10-15T14:57:02Z) - Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)