Dropout is NOT All You Need to Prevent Gradient Leakage
- URL: http://arxiv.org/abs/2208.06163v1
- Date: Fri, 12 Aug 2022 08:29:44 GMT
- Title: Dropout is NOT All You Need to Prevent Gradient Leakage
- Authors: Daniel Scheliga, Patrick Mäder and Marco Seeland
- Abstract summary: We analyze the effect of dropout on iterative gradient inversion attacks.
We propose a novel Dropout Inversion Attack (DIA) that jointly optimizes for client data and dropout masks.
We find that our proposed attack bypasses the protection seemingly induced by dropout and reconstructs client data with high fidelity.
- Score: 0.6021787236982659
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gradient inversion attacks on federated learning systems reconstruct client
training data from exchanged gradient information. To defend against such
attacks, a variety of defense mechanisms were proposed. However, they usually
lead to an unacceptable trade-off between privacy and model utility. Recent
observations suggest that dropout could mitigate gradient leakage and improve
model utility if added to neural networks. Unfortunately, this phenomenon has
not been systematically researched yet. In this work, we thoroughly analyze the
effect of dropout on iterative gradient inversion attacks. We find that state
of the art attacks are not able to reconstruct the client data due to the
stochasticity induced by dropout during model training. Nonetheless, we argue
that dropout does not offer reliable protection if the dropout induced
stochasticity is adequately modeled during attack optimization. Consequently,
we propose a novel Dropout Inversion Attack (DIA) that jointly optimizes for
client data and dropout masks to approximate the stochastic client model. We
conduct an extensive systematic evaluation of our attack on four seminal model
architectures and three image classification datasets of increasing complexity.
We find that our proposed attack bypasses the protection seemingly induced by
dropout and reconstructs client data with high fidelity. Our work demonstrates
that privacy inducing changes to model architectures alone cannot be assumed to
reliably protect from gradient leakage and therefore should be combined with
complementary defense mechanisms.
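For intuition, a minimal sketch of the joint optimization behind DIA is given below. It assumes a hypothetical PyTorch model that accepts explicit dropout masks in place of its stochastic dropout, and a hypothetical `mask_shape` attribute per dropout layer; the labels are assumed to be known or inferred from the gradients, as is common in gradient inversion work. This is an illustrative sketch, not the authors' implementation.

```python
# Sketch of DIA-style gradient matching: dummy inputs AND relaxed dropout masks
# are optimized jointly so the resulting gradients match the observed client gradients.
import torch
import torch.nn.functional as F

def dia_reconstruct(model, observed_grads, labels, input_shape,
                    dropout_layers, steps=2000, lr=0.1):
    """Jointly optimize dummy data and per-layer dropout masks (assumed interface)."""
    dummy_x = torch.randn(input_shape, requires_grad=True)
    # One learnable, continuously relaxed mask per dropout layer (hypothetical mask_shape).
    masks = [torch.full(l.mask_shape, 0.5, requires_grad=True) for l in dropout_layers]
    opt = torch.optim.Adam([dummy_x, *masks], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        # Assumed model interface: explicit masks replace the stochastic dropout.
        pred = model(dummy_x, dropout_masks=[torch.sigmoid(m) for m in masks])
        loss = F.cross_entropy(pred, labels)
        dummy_grads = torch.autograd.grad(loss, tuple(model.parameters()), create_graph=True)
        # Gradient-matching objective (L2 distance; cosine distance is another common choice).
        match = sum(((dg - og) ** 2).sum() for dg, og in zip(dummy_grads, observed_grads))
        match.backward()
        opt.step()
    return dummy_x.detach()
```

Relaxing the binary dropout masks to continuous values (here via a sigmoid) is one plausible way to make them amenable to gradient-based optimization; the paper's exact parameterization may differ.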
Related papers
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing their data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Privacy Preserving Federated Learning with Convolutional Variational Bottlenecks [2.1301560294088318]
Recent work has proposed to prevent gradient leakage without loss of model utility by incorporating a PRivacy EnhanCing mODulE (PRECODE) based on variational modeling.
We show that variational modeling introduces stochasticity into the gradients of PRECODE and the subsequent layers in a neural network.
We formulate an attack that disables the privacy preserving effect of PRECODE by purposefully omitting the stochastic gradients during attack optimization.
arXiv Detail & Related papers (2023-09-08T16:23:25Z)
- Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks [51.51023951695014]
Existing model stealing defenses add deceptive perturbations to the victim's posterior probabilities to mislead the attackers.
This paper proposes Isolation and Induction (InI), a novel and effective training framework for model stealing defenses.
In contrast to adding perturbations over model predictions that harm the benign accuracy, we train models to produce uninformative outputs against stealing queries.
arXiv Detail & Related papers (2023-08-02T05:54:01Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed defense, MESAS, is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Refiner: Data Refining against Gradient Leakage Attacks in Federated Learning [28.76786159247595]
Gradient leakage attacks exploit clients' uploaded gradients to reconstruct their sensitive data.
In this paper, we explore a novel defensive paradigm that departs from conventional gradient perturbation approaches.
We design Refiner, which jointly optimizes two metrics for privacy protection and performance maintenance.
arXiv Detail & Related papers (2022-12-05T05:36:15Z)
- Defense Against Gradient Leakage Attacks via Learning to Obscure Data [48.67836599050032]
Federated learning is considered as an effective privacy-preserving learning mechanism.
In this paper, we propose a new defense method to protect the privacy of clients' data by learning to obscure data.
arXiv Detail & Related papers (2022-06-01T21:03:28Z)
- Beyond Gradients: Exploiting Adversarial Priors in Model Inversion Attacks [7.49320945341034]
Collaborative machine learning settings can be susceptible to adversarial interference and attacks.
One class of such attacks is termed model inversion attacks, characterised by the adversary reverse-engineering the model to extract representations.
We propose a novel model inversion framework that builds on the foundations of gradient-based model inversion attacks.
arXiv Detail & Related papers (2022-03-01T14:22:29Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage [0.8029049649310213]
Collaborative training of neural networks leverages distributed data by exchanging gradient information between different clients.
Gradient perturbation techniques have been proposed to enhance privacy, but they come at the cost of reduced model performance, increased convergence time, or increased data demand.
We introduce PRECODE, a PRivacy EnhanCing mODulE that can be used as a generic extension for arbitrary model architectures.
arXiv Detail & Related papers (2021-08-10T14:43:17Z)