Seeing the Forest through the Trees: Data Leakage from Partial Transformer Gradients
- URL: http://arxiv.org/abs/2406.00999v2
- Date: Fri, 04 Oct 2024 04:00:46 GMT
- Title: Seeing the Forest through the Trees: Data Leakage from Partial Transformer Gradients
- Authors: Weijun Li, Qiongkai Xu, Mark Dras
- Abstract summary: Gradients from a single Transformer layer, or even a single linear component with 0.54% of the parameters, are susceptible to training data leakage.
Applying differential privacy on gradients during training offers limited protection against the novel vulnerability of data disclosure.
- Abstract: Recent studies have shown that distributed machine learning is vulnerable to gradient inversion attacks, where private training data can be reconstructed by analyzing the gradients of the models shared in training. Previous attacks established that such reconstructions are possible using gradients from all parameters of the entire model. However, we hypothesize that most of the involved modules, or even their sub-modules, are at risk of training data leakage, and we validate such vulnerabilities in various intermediate layers of language models. Our extensive experiments reveal that gradients from a single Transformer layer, or even a single linear component with 0.54% of the parameters, are susceptible to training data leakage. Additionally, we show that applying differential privacy to gradients during training offers limited protection against this novel vulnerability of data disclosure.
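To make the threat model concrete, the following is a minimal gradient-matching sketch in the style of DLG-type inversion attacks, restricted so that the attacker observes only the gradient of one linear sub-module. The toy model, the choice of sub-module, the optimizer settings, and the assumption that labels are already known are illustrative assumptions, not the authors' exact procedure.

```python
# Hypothetical sketch: gradient-matching inversion that observes only the
# gradient of ONE linear sub-module (toy stand-in for a Transformer layer).
import torch
import torch.nn as nn

torch.manual_seed(0)

embed_dim, seq_len, vocab = 32, 8, 100
embedding = nn.Embedding(vocab, embed_dim)
proj = nn.Linear(embed_dim, embed_dim)   # the single linear component whose gradient leaks
head = nn.Linear(embed_dim, vocab)

def forward_loss(inputs_embeds, labels):
    h = torch.tanh(proj(inputs_embeds))              # stand-in for one sub-layer
    logits = head(h)
    return nn.functional.cross_entropy(logits.view(-1, vocab), labels.view(-1))

# Victim side: compute the gradient that would be shared, but only for `proj`.
true_ids = torch.randint(0, vocab, (1, seq_len))
true_embeds = embedding(true_ids).detach()
target_grads = torch.autograd.grad(forward_loss(true_embeds, true_ids),
                                   tuple(proj.parameters()))
target_grads = [g.detach() for g in target_grads]

# Attacker side: optimize dummy embeddings so their partial gradient matches.
dummy_embeds = torch.randn(1, seq_len, embed_dim, requires_grad=True)
opt = torch.optim.Adam([dummy_embeds], lr=0.1)
for step in range(300):
    opt.zero_grad()
    dummy_loss = forward_loss(dummy_embeds, true_ids)   # labels assumed known here
    dummy_grads = torch.autograd.grad(dummy_loss, tuple(proj.parameters()),
                                      create_graph=True)
    grad_dist = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, target_grads))
    grad_dist.backward()
    opt.step()

print("final gradient-matching distance:", grad_dist.item())
# Recovered dummy_embeds can be mapped back to tokens, e.g. by nearest-neighbour
# lookup in embedding.weight. A DP defense would clip and noise target_grads before
# sharing; per the abstract, this offers only limited protection.
```

In the paper's setting the matched component is a linear projection inside a real Transformer layer rather than this toy block; the sketch only illustrates why matching a small fraction of the parameters' gradients can still constrain the input.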
Related papers
- Open-Set Deepfake Detection: A Parameter-Efficient Adaptation Method with Forgery Style Mixture [58.60915132222421]
We introduce an approach that is both general and parameter-efficient for face forgery detection.
We design a forgery-style mixture formulation that augments the diversity of forgery source domains.
We show that the designed model achieves state-of-the-art generalizability with significantly reduced trainable parameters.
arXiv Detail & Related papers (2024-08-23T01:53:36Z)
- R-CONV: An Analytical Approach for Efficient Data Reconstruction via Convolutional Gradients [40.209183669098735]
This paper introduces an advanced data leakage method to efficiently exploit convolutional layers' gradients.
To the best of our knowledge, this is the first analytical approach that successfully reconstructs convolutional layer inputs directly from the gradients (a fully-connected analogue of this analytic recovery is sketched after this list).
arXiv Detail & Related papers (2024-06-06T16:28:04Z)
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
- Gradient Leakage Defense with Key-Lock Module for Federated Learning [14.411227689702997]
Federated Learning (FL) is a widely adopted privacy-preserving machine learning approach.
Recent findings reveal that privacy may be compromised and sensitive information potentially recovered from shared gradients.
We propose a new gradient leakage defense technique that secures arbitrary model architectures using a private key-lock module.
arXiv Detail & Related papers (2023-05-06T16:47:52Z)
- Reconstructing Training Data from Model Gradient, Provably [68.21082086264555]
We reconstruct the training samples from a single gradient query at a randomly chosen parameter value.
As a provable attack that reveals sensitive training data, our findings suggest potential severe threats to privacy.
arXiv Detail & Related papers (2022-12-07T15:32:22Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in real FL use cases, and we provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Understanding Training-Data Leakage from Gradients in Neural Networks for Image Classification [11.272188531829016]
In many applications, we need to protect the training data from being leaked due to IP or privacy concerns.
Recent works have demonstrated that it is possible to reconstruct the training data from gradients for an image-classification model when its architecture is known.
We formulate the problem of training data reconstruction as solving an optimisation problem iteratively for each layer.
We are able to attribute the potential leakage of the training data in a deep network to its architecture.
arXiv Detail & Related papers (2021-11-19T12:14:43Z)
- Churn Reduction via Distillation [54.5952282395487]
We show an equivalence between training with distillation using the base model as the teacher and training with an explicit constraint on the predictive churn.
We then show that distillation performs strongly for low-churn training against a number of recent baselines.
arXiv Detail & Related papers (2021-06-04T18:03:31Z)
- Quantifying Information Leakage from Gradients [8.175697239083474]
Sharing deep neural networks' gradients instead of training data could facilitate data privacy in collaborative learning.
In practice, however, gradients can disclose both private latent attributes and original data.
Mathematical metrics are needed to quantify both original and latent information leakages from gradients computed over the training data.
arXiv Detail & Related papers (2021-05-28T15:47:44Z)
- Contrastive Model Inversion for Data-Free Knowledge Distillation [60.08025054715192]
We propose Contrastive Model Inversion, where the data diversity is explicitly modeled as an optimizable objective.
Our main observation is that, under the constraint of the same amount of data, higher data diversity usually indicates stronger instance discrimination.
Experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CMI achieves significantly superior performance when the generated data are used for knowledge distillation.
arXiv Detail & Related papers (2021-05-18T15:13:00Z)
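The R-CONV and "Reconstructing Training Data from Model Gradient, Provably" entries above rest on the fact that some layer gradients determine their inputs in closed form. Below is a minimal sketch of the well-known fully-connected case; it is the FC analogue rather than R-CONV's convolutional derivation, and the layer sizes and loss are arbitrary assumptions.

```python
# Hypothetical sketch: analytic input recovery from a single fully-connected
# layer's gradients (FC analogue of the convolutional case; sizes are arbitrary).
import torch
import torch.nn as nn

torch.manual_seed(0)
fc = nn.Linear(16, 4)                      # y = W x + b
x_true = torch.randn(16)

# For any scalar loss L: dL/dW[i, j] = (dL/dy)_i * x_j and dL/db_i = (dL/dy)_i,
# so each row i of dL/dW equals (dL/db)_i * x.
loss = (fc(x_true) ** 2).sum()             # any scalar loss gives the same identity
grad_W, grad_b = torch.autograd.grad(loss, (fc.weight, fc.bias))

# Recover x from any row whose bias gradient is non-negligible.
i = grad_b.abs().argmax()
x_rec = grad_W[i] / grad_b[i]

print(torch.allclose(x_rec, x_true, atol=1e-5))   # True: exact recovery
```

The ratio trick works because the weight gradient of a linear layer is an outer product of the upstream gradient and the input; the convolutional case handled by R-CONV requires unrolling the convolution but follows the same logic.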