PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage
- URL: http://arxiv.org/abs/2108.04725v1
- Date: Tue, 10 Aug 2021 14:43:17 GMT
- Title: PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage
- Authors: Daniel Scheliga and Patrick Mäder and Marco Seeland
- Abstract summary: Collaborative training of neural networks leverages distributed data by exchanging gradient information between different clients.
Gradient perturbation techniques have been proposed to enhance privacy, but they come at the cost of reduced model performance, increased convergence time, or increased data demand.
We introduce PRECODE, a PRivacy EnhanCing mODulE that can be used as generic extension for arbitrary model architectures.
- Score: 0.8029049649310213
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collaborative training of neural networks leverages distributed data by
exchanging gradient information between different clients. Although training
data entirely resides with the clients, recent work shows that training data
can be reconstructed from such exchanged gradient information. To enhance
privacy, gradient perturbation techniques have been proposed. However, they
come at the cost of reduced model performance, increased convergence time, or
increased data demand. In this paper, we introduce PRECODE, a PRivacy EnhanCing
mODulE that can be used as generic extension for arbitrary model architectures.
We propose a simple yet effective realization of PRECODE using variational
modeling. The stochastic sampling induced by variational modeling effectively
prevents privacy leakage from gradients and in turn preserves privacy of data
owners. We evaluate PRECODE using state-of-the-art gradient inversion attacks
on two different model architectures trained on three datasets. In contrast to
commonly used defense mechanisms, we find that our proposed modification
consistently reduces the attack success rate to 0% while having almost no
negative impact on model training and final performance. As a result, PRECODE
reveals a promising path towards privacy enhancing model extensions.
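To make the described mechanism concrete, below is a minimal, hypothetical sketch of a PRECODE-style variational bottleneck in PyTorch (not the authors' released implementation; layer sizes, latent dimension, and placement are illustrative assumptions). The module maps intermediate features to a mean and log-variance, draws a sample via the reparameterization trick, and passes the sample to the subsequent layers, so the exchanged gradients depend on stochastic sampling rather than deterministically on the input.

```python
import torch
import torch.nn as nn


class VariationalBottleneck(nn.Module):
    """Illustrative PRECODE-style privacy enhancing module (sketch only).

    Inserted between arbitrary layers, it replaces the deterministic
    representation with a stochastic sample before the remaining layers.
    """

    def __init__(self, in_features: int, latent_dim: int):
        super().__init__()
        self.to_mu = nn.Linear(in_features, latent_dim)
        self.to_logvar = nn.Linear(in_features, latent_dim)
        self.decode = nn.Linear(latent_dim, in_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mu, logvar = self.to_mu(x), self.to_logvar(x)
        # Reparameterization trick: stochastic sampling that stays differentiable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # A KL regularization term on (mu, logvar) would typically be added
        # to the training loss; it is omitted in this sketch.
        return self.decode(z)


# Example: extending an arbitrary architecture with the module
# (hypothetical MNIST-sized classifier used only for illustration).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(),
    VariationalBottleneck(in_features=256, latent_dim=64),
    nn.Linear(256, 10),
)
```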
Related papers
- PATE-TripleGAN: Privacy-Preserving Image Synthesis with Gaussian Differential Privacy [4.586288671392977]
We present a privacy-preserving training framework called PATE-TripleGAN.
It incorporates a classifier to pre-classify unlabeled data to reduce dependence on labeled data.
PATE-TripleGAN can generate a higher quality labeled image dataset while ensuring privacy of the training data.
arXiv Detail & Related papers (2024-04-19T09:22:20Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- DPSUR: Accelerating Differentially Private Stochastic Gradient Descent Using Selective Update and Release [29.765896801370612]
This paper proposes a Differentially Private training framework based on Selective Updates and Release (DPSUR).
The main challenges lie in two aspects -- privacy concerns, and gradient selection strategy for model update.
Experiments conducted on MNIST, FMNIST, CIFAR-10, and IMDB datasets show that DPSUR significantly outperforms previous works in terms of convergence speed.
arXiv Detail & Related papers (2023-11-23T15:19:30Z)
- Sparsity-Preserving Differentially Private Training of Large Embedding Models [67.29926605156788]
DP-SGD is a training algorithm that combines differential privacy with gradient descent.
Applying DP-SGD naively to embedding models can destroy gradient sparsity, leading to reduced training efficiency.
We present two new algorithms, DP-FEST and DP-AdaFEST, that preserve gradient sparsity during private training of large embedding models.
arXiv Detail & Related papers (2023-11-14T17:59:51Z)
- Segue: Side-information Guided Generative Unlearnable Examples for Facial Privacy Protection in Real World [64.4289385463226]
We propose Segue: Side-information guided generative unlearnable examples.
To improve transferability, we introduce side information such as true labels and pseudo labels.
It can resist JPEG compression, adversarial training, and some standard data augmentations.
arXiv Detail & Related papers (2023-10-24T06:22:37Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attacks (GIA), which aim to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Privacy Preserving Federated Learning with Convolutional Variational Bottlenecks [2.1301560294088318]
Recent work has proposed to prevent gradient leakage without loss of model utility by incorporating a PRivacy EnhanCing mODulE (PRECODE) based on variational modeling.
We show that variational modeling introduces stochasticity into the gradients of PRECODE and the subsequent layers in a neural network.
We formulate an attack that disables the privacy preserving effect of PRECODE by purposefully omitting stochastic gradients during attack optimization.
arXiv Detail & Related papers (2023-09-08T16:23:25Z)
- Dropout is NOT All You Need to Prevent Gradient Leakage [0.6021787236982659]
We analyze the effect of dropout on iterative gradient inversion attacks.
We propose a novel Dropout Inversion Attack (DIA) that jointly optimizes for client data and dropout masks.
We find that our proposed attack bypasses the protection seemingly induced by dropout and reconstructs client data with high fidelity.
arXiv Detail & Related papers (2022-08-12T08:29:44Z)
- Combining Variational Modeling with Partial Gradient Perturbation to Prevent Deep Gradient Leakage [0.6021787236982659]
Gradient inversion attacks are a ubiquitous threat in collaborative learning of neural networks (a generic sketch of this attack family appears after this list).
Recent work proposed a PRivacy EnhanCing mODulE (PRECODE) based on variational modeling as an extension for arbitrary model architectures.
In this work, we investigate the effect of PRECODE on gradient inversion attacks to reveal its underlying working principle.
We show that our approach requires less gradient perturbation to effectively preserve privacy without harming model performance.
arXiv Detail & Related papers (2022-08-09T13:23:29Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models [56.0250919557652]
Federated learning has quickly gained popularity with its promises of increased user privacy and efficiency.
Previous attacks on user privacy have been limited in scope and do not scale to gradient updates aggregated over even a handful of data points.
We introduce a new threat model based on minimal but malicious modifications of the shared model architecture.
arXiv Detail & Related papers (2021-10-25T15:52:06Z)
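For context on the threat model that recurs throughout the entries above, the following is a minimal, hypothetical sketch of a DLG-style gradient inversion loop in PyTorch (function name, interface, and hyperparameters are assumptions, not taken from any of the listed papers): dummy data and labels are optimized so that the gradients they induce match the gradients shared by a client.

```python
import torch
import torch.nn.functional as F


def gradient_inversion(model, shared_grads, input_shape, num_classes,
                       steps=300, lr=0.1):
    """Toy DLG-style reconstruction (illustrative sketch): optimize dummy
    data and soft labels so their gradients match the client's shared gradients."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)
    opt = torch.optim.Adam([dummy_x, dummy_y], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        pred = model(dummy_x)
        # Soft-label cross entropy between dummy labels and model predictions.
        loss = torch.sum(-F.softmax(dummy_y, dim=-1) * F.log_softmax(pred, dim=-1))
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Distance between the dummy gradients and the observed (shared) gradients.
        grad_diff = sum(((g - sg) ** 2).sum() for g, sg in zip(grads, shared_grads))
        grad_diff.backward()
        opt.step()
    return dummy_x.detach(), dummy_y.detach()
```

Defenses such as PRECODE target exactly this matching objective: once stochastic sampling is injected into the forward pass, the shared gradients are no longer a deterministic function of the client data, so minimizing the gradient distance no longer pins down a faithful reconstruction.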
This list is automatically generated from the titles and abstracts of the papers in this site.