Auditing Privacy Defenses in Federated Learning via Generative Gradient
Leakage
- URL: http://arxiv.org/abs/2203.15696v1
- Date: Tue, 29 Mar 2022 15:59:59 GMT
- Title: Auditing Privacy Defenses in Federated Learning via Generative Gradient
Leakage
- Authors: Zhuohang Li, Jiaxin Zhang, Luyang Liu, Jian Liu
- Abstract summary: Federated Learning (FL) framework brings privacy benefits to distributed learning systems.
Recent studies have revealed that private information can still be leaked through shared information.
We propose a new type of leakage, i.e., Generative Gradient Leakage (GGL).
- Score: 9.83989883339971
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) framework brings privacy benefits to distributed
learning systems by allowing multiple clients to participate in a learning task
under the coordination of a central server without exchanging their private
data. However, recent studies have revealed that private information can still
be leaked through shared gradient information. To further protect users'
privacy, several defense mechanisms have been proposed to prevent privacy
leakage via gradient information degradation methods, such as using additive
noise or gradient compression before sharing it with the server. In this work,
we validate that the private training data can still be leaked under certain
defense settings with a new type of leakage, i.e., Generative Gradient Leakage
(GGL). Unlike existing methods that only rely on gradient information to
reconstruct data, our method leverages the latent space of generative
adversarial networks (GAN) learned from public image datasets as a prior to
compensate for the informational loss during gradient degradation. To address
the nonlinearity caused by the gradient operator and the GAN model, we explore
various gradient-free optimization methods (e.g., evolution strategies and
Bayesian optimization) and empirically show their superiority in reconstructing
high-quality images from gradients compared to gradient-based optimizers. We
hope the proposed method can serve as a tool for empirically measuring the
amount of privacy leakage to facilitate the design of more robust defense
mechanisms.
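To make the attack recipe concrete, the following is a minimal sketch of the GGL idea in PyTorch. A toy linear classifier stands in for the FL model, a toy generator stands in for the GAN prior trained on public data, the shared gradient is degraded with additive Gaussian noise and top-k compression (the defenses named above), and a simple evolution strategy serves as a stand-in for the CMA-ES and Bayesian optimization explored in the paper. All sizes, hyperparameters, and the assumption that the label has already been recovered are illustrative choices, not the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy sizes (illustrative assumptions, not the paper's setup).
D_LATENT, IMG, N_CLASSES = 32, 16, 10
classifier = nn.Sequential(nn.Flatten(), nn.Linear(IMG * IMG, N_CLASSES))   # stand-in FL model
generator = nn.Sequential(nn.Linear(D_LATENT, IMG * IMG), nn.Tanh())        # stand-in GAN prior

# Client side: compute the gradient and degrade it before sharing (noise + top-k compression).
private_x, private_y = torch.rand(1, 1, IMG, IMG), torch.tensor([3])
loss = F.cross_entropy(classifier(private_x), private_y)
grad = torch.cat([g.reshape(-1) for g in torch.autograd.grad(loss, classifier.parameters())])
grad = grad + 0.01 * torch.randn_like(grad)               # additive Gaussian noise defense
keep = grad.abs().topk(int(0.1 * grad.numel())).indices   # gradient compression: keep top 10%
mask = torch.zeros_like(grad)
mask[keep] = 1.0
shared_grad = grad * mask

# Attacker side: search the GAN latent space so the induced gradient matches the shared one.
def objective(z):
    """Gradient-matching loss at latent z; only evaluated, never differentiated through the GAN."""
    with torch.no_grad():
        fake = generator(z).reshape(1, 1, IMG, IMG)
    fake_loss = F.cross_entropy(classifier(fake), private_y)   # label assumed recovered separately
    fake_grad = torch.cat([g.reshape(-1) for g in
                           torch.autograd.grad(fake_loss, classifier.parameters())])
    return F.mse_loss(fake_grad, shared_grad).item() + 0.01 * float(z.norm())  # latent regularizer

# Simple (1, lambda) evolution strategy as a stand-in for CMA-ES / Bayesian optimization.
z, sigma, best = torch.zeros(D_LATENT), 0.5, float("inf")
for step in range(200):
    candidates = z + sigma * torch.randn(16, D_LATENT)
    scores = torch.tensor([objective(c) for c in candidates])
    if scores.min() < best:
        best, z = scores.min().item(), candidates[scores.argmin()]
    sigma *= 0.99                                          # anneal the mutation step size

reconstruction = generator(z).detach().reshape(IMG, IMG)   # candidate recovery of the private image
print("final gradient-matching loss:", best)
```

Because the objective is only evaluated, never differentiated through the generator or the degraded gradient, the search is unaffected by the nonlinearity and sparsity that the defense introduces, which is the motivation for gradient-free optimizers stated in the abstract.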
Related papers
- Ungeneralizable Examples [70.76487163068109]
Current approaches to creating unlearnable data involve incorporating small, specially designed noises.
We extend the concept of unlearnable data to conditional data learnability and introduce UnGeneralizable Examples (UGEs).
UGEs exhibit learnability for authorized users while maintaining unlearnability for potential hackers.
arXiv Detail & Related papers (2024-04-22T09:29:14Z) - Revisiting Gradient Pruning: A Dual Realization for Defending against
Gradient Attacks [16.037627212754487]
Collaborative learning (CL) is a distributed learning framework that allows users to jointly train a model by sharing their gradient updates only.
However, gradient inversion attacks (GIAs), which recover users' training data from shared gradients, pose severe privacy threats to CL.
We propose a novel defense method, Dual Gradient Pruning (DGP), which can improve communication efficiency while preserving the utility and privacy of CL.
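As a rough illustration of the pruning ingredient of such a defense, the snippet below applies generic magnitude-based top-k sparsification to a client update before sharing; this is a hedged stand-in, not the paper's exact dual pruning rule.

```python
import torch

def prune_gradient(grad: torch.Tensor, keep_ratio: float = 0.1) -> torch.Tensor:
    """Magnitude-based gradient sparsification: keep the largest entries, zero the rest."""
    flat = grad.reshape(-1)
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = flat.abs().topk(k).values.min()
    mask = (flat.abs() >= threshold).to(flat.dtype)
    return (flat * mask).reshape(grad.shape)

# A client prunes its update to cut communication cost and blunt inversion attacks.
update = torch.randn(64, 128)
sparse_update = prune_gradient(update, keep_ratio=0.05)
print("nonzero fraction:", (sparse_update != 0).float().mean().item())
```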
arXiv Detail & Related papers (2024-01-30T02:18:30Z) - Segue: Side-information Guided Generative Unlearnable Examples for
Facial Privacy Protection in Real World [64.4289385463226]
We propose Segue: Side-information guided generative unlearnable examples.
To improve transferability, we introduce side information such as true labels and pseudo labels.
It can resist JPEG compression, adversarial training, and some standard data augmentations.
arXiv Detail & Related papers (2023-10-24T06:22:37Z) - Understanding Deep Gradient Leakage via Inversion Influence Functions [53.1839233598743]
Deep Gradient Leakage (DGL) is a highly effective attack that recovers private training images from gradient vectors.
We propose a novel Inversion Influence Function (I$^2$F) that establishes a closed-form connection between the recovered images and the private gradients.
We empirically demonstrate that I$^2$F effectively approximates DGL across different model architectures, datasets, attack implementations, and perturbation-based defenses.
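The summary above describes a closed-form link between gradient perturbations and the recovered image. As a hedged first-order illustration of that idea (not the paper's exact I$^2$F definition), one can form the Jacobian of the shared gradient with respect to the input and see how a gradient perturbation, such as defense noise, maps back into input space.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny model and a single private sample, kept small so the Jacobian is cheap to form.
model, loss_fn = nn.Linear(16, 3), nn.CrossEntropyLoss()
x, y = torch.randn(1, 16), torch.tensor([1])

def flat_param_grad(inp):
    """Parameter gradient (flattened) as a differentiable function of the input."""
    loss = loss_fn(model(inp), y)
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

# Jacobian of the shared gradient w.r.t. the input entries: shape (n_params, n_inputs).
J = torch.autograd.functional.jacobian(flat_param_grad, x).reshape(-1, x.numel())

# First-order view: a perturbation delta added to the shared gradient (e.g. defense noise)
# maps back to an input-space displacement of roughly pinv(J) @ delta, so its norm gauges
# how much protection that perturbation buys against inversion.
delta = 1e-3 * torch.randn(J.shape[0])
print("approx. input-space displacement:", (torch.linalg.pinv(J) @ delta).norm().item())
```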
arXiv Detail & Related papers (2023-09-22T17:26:24Z) - GIFD: A Generative Gradient Inversion Method with Feature Domain
Optimization [52.55628139825667]
Federated Learning (FL) has emerged as a promising distributed machine learning framework to preserve clients' privacy.
Recent studies find that an attacker can invert the shared gradients and recover sensitive data against an FL system by leveraging pre-trained generative adversarial networks (GAN) as prior knowledge.
We propose Gradient Inversion over Feature Domains (GIFD), which disassembles the GAN model and searches the feature domains of the intermediate layers.
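A minimal sketch of the feature-domain idea, under illustrative sizes and hyperparameters rather than the paper's GIFD implementation: split a toy generator into two stages and optimize an intermediate feature directly against a gradient-matching loss, instead of optimizing the latent code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
D_LATENT, D_FEAT, IMG = 32, 64, 16   # hypothetical toy sizes

# A toy generator split into two stages, standing in for a pretrained GAN.
g_front = nn.Sequential(nn.Linear(D_LATENT, D_FEAT), nn.ReLU())
g_back = nn.Sequential(nn.Linear(D_FEAT, IMG * IMG), nn.Tanh())
classifier = nn.Sequential(nn.Flatten(), nn.Linear(IMG * IMG, 10))

# Gradient the victim shared for one private image (label assumed recovered).
private_x, private_y = torch.rand(1, 1, IMG, IMG), torch.tensor([7])
loss = F.cross_entropy(classifier(private_x), private_y)
shared = torch.cat([g.reshape(-1) for g in torch.autograd.grad(loss, classifier.parameters())])

# Feature-domain search: optimize the intermediate feature h = g_front(z) directly.
h = g_front(torch.randn(1, D_LATENT)).detach().requires_grad_(True)
opt = torch.optim.Adam([h], lr=0.05)
for step in range(300):
    opt.zero_grad()
    fake = g_back(h).reshape(1, 1, IMG, IMG)
    fake_loss = F.cross_entropy(classifier(fake), private_y)
    fake_grad = torch.cat([g.reshape(-1) for g in
                           torch.autograd.grad(fake_loss, classifier.parameters(), create_graph=True)])
    match = F.mse_loss(fake_grad, shared) + 1e-3 * h.norm()   # small norm penalty keeps h in range
    match.backward()
    opt.step()
```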
arXiv Detail & Related papers (2023-08-09T04:34:21Z) - Gradient Leakage Defense with Key-Lock Module for Federated Learning [14.411227689702997]
Federated Learning (FL) is a widely adopted privacy-preserving machine learning approach.
Recent findings reveal that privacy may be compromised and sensitive information potentially recovered from shared gradients.
We propose a new gradient leakage defense technique that secures arbitrary model architectures using a private key-lock module.
arXiv Detail & Related papers (2023-05-06T16:47:52Z) - Over-the-Air Federated Learning with Privacy Protection via Correlated
Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
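A simplified illustration of correlated additive perturbations (not the paper's exact noise design): each user adds a large zero-sum noise term, so individual over-the-air transmissions are masked while the aggregated signal decoded at the edge server stays essentially intact.

```python
import torch

torch.manual_seed(0)
num_users, dim = 8, 100

# Each user's true gradient update (toy stand-ins).
updates = [torch.randn(dim) for _ in range(num_users)]

# Correlated perturbations: draw independent noises, then subtract their mean so the
# perturbations cancel (exactly, up to float error) in the over-the-air sum.
raw = [torch.randn(dim) for _ in range(num_users)]
mean = torch.stack(raw).mean(dim=0)
perturbations = [5.0 * (n - mean) for n in raw]   # large per-user noise, zero-sum across users

shared = [u + p for u, p in zip(updates, perturbations)]   # what an eavesdropper sees per user
aggregate = torch.stack(shared).sum(dim=0)                 # what the edge server decodes
clean_aggregate = torch.stack(updates).sum(dim=0)

print("per-user masking noise norm:", perturbations[0].norm().item())
print("aggregation error norm     :", (aggregate - clean_aggregate).norm().item())  # ~0
```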
arXiv Detail & Related papers (2022-10-05T13:13:35Z) - Gradient Obfuscation Gives a False Sense of Security in Federated
Learning [41.36621813381792]
We present a new data reconstruction attack framework targeting the image classification task in federated learning.
Contrary to prior studies, we argue that privacy enhancement should not be treated as a byproduct of gradient compression.
arXiv Detail & Related papers (2022-06-08T13:01:09Z) - Defense Against Gradient Leakage Attacks via Learning to Obscure Data [48.67836599050032]
Federated learning is considered as an effective privacy-preserving learning mechanism.
In this paper, we propose a new defense method to protect the privacy of clients' data by learning to obscure data.
arXiv Detail & Related papers (2022-06-01T21:03:28Z) - Gradient Inversion with Generative Image Prior [37.03737843861339]
Federated Learning (FL) is a distributed learning framework in which local data never leaves clients' devices, preserving privacy.
We show that data privacy can be easily breached by exploiting a generative model pretrained on the data distribution.
We experimentally show that the prior, in the form of a generative model, is learnable from iterative interactions in FL.
arXiv Detail & Related papers (2021-10-28T09:04:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.