Evaluating Gradient Inversion Attacks and Defenses in Federated Learning
- URL: http://arxiv.org/abs/2112.00059v1
- Date: Tue, 30 Nov 2021 19:34:16 GMT
- Title: Evaluating Gradient Inversion Attacks and Defenses in Federated Learning
- Authors: Yangsibo Huang, Samyak Gupta, Zhao Song, Kai Li, Sanjeev Arora
- Abstract summary: This paper evaluates existing gradient inversion attacks and the defenses proposed against them.
We show the trade-offs between privacy leakage and data utility for three proposed defense mechanisms.
Our findings suggest that state-of-the-art attacks can currently be defended against with minor data-utility loss.
- Score: 43.993693910541275
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gradient inversion attacks (i.e., recovering inputs from gradients) are an
emerging threat to the security and privacy of federated learning: malicious
eavesdroppers or participants in the protocol can partially recover clients'
private data. This paper evaluates existing attacks and defenses. We find that
some attacks make strong assumptions about the setup; relaxing such assumptions
can substantially weaken these attacks. We then evaluate the benefits of three
proposed defense mechanisms against gradient inversion attacks. We show the
trade-offs between privacy leakage and data utility for these defenses, and
find that combining them appropriately makes the attack less effective, even
under the original strong assumptions. We also estimate the computational cost
of end-to-end recovery of a single image under each evaluated defense. Our
findings suggest that state-of-the-art attacks can currently be defended
against with minor data-utility loss, as summarized in a list of potential
strategies. Our code is available at:
https://github.com/Princeton-SysML/GradAttack.
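To make the threat model concrete, below is a minimal, self-contained sketch of a gradient-matching inversion attack in PyTorch, in the spirit of the attacks evaluated in the paper (not the GradAttack API). It assumes the eavesdropper observes a client's gradients and already knows the true label, which is exactly the kind of strong assumption the paper scrutinizes; `SmallNet`, `client_gradients`, `invert_gradients`, and `prune_gradients` are illustrative names, and gradient pruning is included only as a representative defense.

```python
# Sketch of a gradient-matching inversion attack (not the GradAttack API).
# Assumption: the attacker knows the true label `y` of the target batch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallNet(nn.Module):
    """Toy classifier standing in for the shared federated model."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_classes))

    def forward(self, x):
        return self.net(x)


def client_gradients(model, x, y):
    """Gradients a client would share for one (x, y) batch."""
    loss = F.cross_entropy(model(x), y)
    return torch.autograd.grad(loss, tuple(model.parameters()))


def prune_gradients(grads, keep_ratio=0.1):
    """Representative defense: keep only the largest-magnitude gradient
    entries and zero out the rest (gradient pruning)."""
    pruned = []
    for g in grads:
        k = max(1, int(keep_ratio * g.numel()))
        thresh = g.abs().flatten().kthvalue(g.numel() - k + 1).values
        pruned.append(torch.where(g.abs() >= thresh, g, torch.zeros_like(g)))
    return pruned


def invert_gradients(model, target_grads, y, shape, steps=300, lr=0.1):
    """Attacker: optimize a dummy input so its gradients match the shared ones."""
    x_hat = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([x_hat], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x_hat), y)
        grads = torch.autograd.grad(loss, tuple(model.parameters()), create_graph=True)
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        match.backward()
        opt.step()
    return x_hat.detach()


if __name__ == "__main__":
    model = SmallNet()
    x, y = torch.rand(1, 3, 32, 32), torch.tensor([3])   # private client data
    shared = client_gradients(model, x, y)                # what an eavesdropper observes
    # shared = prune_gradients(shared)                    # applying the defense degrades recovery
    recon = invert_gradients(model, shared, y, x.shape)
    print("reconstruction MSE:", F.mse_loss(recon, x).item())
```

With this toy linear model, recovery from unmodified gradients is nearly exact; published attacks on deeper models add image priors such as total-variation regularization, and the defenses evaluated in the paper aim to make this matching objective uninformative at an acceptable utility cost.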
Related papers
- SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks [53.28390057407576]
Modern NLP models are often trained on public datasets drawn from diverse sources.
Data poisoning attacks can manipulate the model's behavior in ways engineered by the attacker.
Several strategies have been proposed to mitigate the risks associated with backdoor attacks.
arXiv Detail & Related papers (2024-05-19T14:50:09Z)
- OASIS: Offsetting Active Reconstruction Attacks in Federated Learning [14.644814818768172]
Federated Learning (FL) has garnered significant attention for its potential to protect user privacy.
Recent research has demonstrated that FL protocols can be easily compromised by active reconstruction attacks.
We propose a defense mechanism based on image augmentation that effectively counteracts active reconstruction attacks.
arXiv Detail & Related papers (2023-11-23T00:05:17Z)
- Rethinking Backdoor Attacks [122.1008188058615]
In a backdoor attack, an adversary inserts maliciously constructed backdoor examples into a training set to make the resulting model vulnerable to manipulation.
Defending against such attacks typically involves viewing these inserted examples as outliers in the training set and using techniques from robust statistics to detect and remove them.
We show that without structural information about the training data distribution, backdoor attacks are indistinguishable from naturally-occurring features in the data.
arXiv Detail & Related papers (2023-07-19T17:44:54Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor attacks are an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- Learning to Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning [31.374376311614675]
Gradient inversion attack enables recovery of training samples from model gradients in federated learning.
We show that existing defenses can be broken by a simple adaptive attack.
arXiv Detail & Related papers (2022-10-19T20:41:30Z)
- Defense Against Gradient Leakage Attacks via Learning to Obscure Data [48.67836599050032]
Federated learning is considered an effective privacy-preserving learning mechanism.
In this paper, we propose a new defense method to protect the privacy of clients' data by learning to obscure data.
arXiv Detail & Related papers (2022-06-01T21:03:28Z)
- Certifiers Make Neural Networks Vulnerable to Availability Attacks [70.69104148250614]
We show for the first time that fallback strategies can be deliberately triggered by an adversary.
In addition to naturally occurring abstains for some inputs and perturbations, the adversary can use training-time attacks to deliberately trigger the fallback.
We design two novel availability attacks, which show the practical relevance of these threats.
arXiv Detail & Related papers (2021-08-25T15:49:10Z)
- Mitigating Gradient-based Adversarial Attacks via Denoising and Compression [7.305019142196582]
Gradient-based adversarial attacks on deep neural networks pose a serious threat.
They can be deployed by adding imperceptible perturbations to the test data of any network.
Denoising and dimensionality reduction are two distinct methods that have been investigated to combat such attacks.
arXiv Detail & Related papers (2021-04-03T22:57:01Z)
- Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review [40.36824357892676]
This work provides the community with a timely comprehensive review of backdoor attacks and countermeasures on deep learning.
The attack surface is wide, and attacks are categorized according to the attacker's capability and the stage of the machine learning pipeline they affect.
Countermeasures are categorized into four general classes: blind backdoor removal, offline backdoor inspection, online backdoor inspection, and post backdoor removal.
arXiv Detail & Related papers (2020-07-21T12:49:12Z)