R-GAP: Recursive Gradient Attack on Privacy
- URL: http://arxiv.org/abs/2010.07733v3
- Date: Tue, 16 Mar 2021 11:16:25 GMT
- Title: R-GAP: Recursive Gradient Attack on Privacy
- Authors: Junyi Zhu and Matthew Blaschko
- Abstract summary: Federated learning is a promising approach to break the dilemma between demands on privacy and the promise of learning from large collections of distributed data.
We provide a closed-form recursion procedure to recover data from gradients in deep neural networks.
We also propose a Rank Analysis method to estimate the risk of gradient attacks inherent in certain network architectures.
- Score: 5.687523225718642
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning frameworks have been regarded as a promising approach to
break the dilemma between demands on privacy and the promise of learning from
large collections of distributed data. Many such frameworks only ask
collaborators to share their local update of a common model, i.e. gradients
with respect to locally stored data, instead of exposing their raw data to
other collaborators. However, recent optimization-based gradient attacks show
that raw data can often be accurately recovered from gradients. It has been
shown that minimizing the Euclidean distance between true gradients and those
calculated from estimated data is often effective in fully recovering private
data. However, there is a fundamental lack of theoretical understanding of how
and when gradients can lead to unique recovery of original data. Our research
fills this gap by providing a closed-form recursive procedure to recover data
from gradients in deep neural networks. We name it Recursive Gradient Attack on
Privacy (R-GAP). Experimental results demonstrate that R-GAP works as well as
or even better than optimization-based approaches at a fraction of the
computation under certain conditions. Additionally, we propose a Rank Analysis
method, which can be used to estimate the risk of gradient attacks inherent in
certain network architectures, regardless of whether an optimization-based or
closed-form-recursive attack is used. Experimental results demonstrate the
utility of the rank analysis towards improving the network's security. Source
code is available for download from https://github.com/JunyiZhu-AI/R-GAP.
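To make the closed-form idea concrete, below is a minimal sketch (not the authors' R-GAP implementation; the layer sizes and loss are arbitrary illustrative choices) of the base case such a recursion builds on: for a single fully-connected layer y = Wx + b, the shared gradients satisfy dL/dW = (dL/dy) x^T and dL/db = dL/dy, so the private input x can be read off by dividing a row of dL/dW by the matching non-zero entry of dL/db.

```python
# Minimal sketch, not the authors' R-GAP code: closed-form recovery of the
# input to one fully-connected layer from its shared gradients.
# Since dL/dW = (dL/dy) x^T and dL/db = dL/dy, any row of dL/dW divided by
# the corresponding non-zero entry of dL/db equals x exactly.
import torch

torch.manual_seed(0)
x_true = torch.randn(16)                         # private input to be recovered
layer = torch.nn.Linear(16, 8)
loss = layer(x_true).pow(2).mean()               # any differentiable loss works here
grad_W, grad_b = torch.autograd.grad(loss, [layer.weight, layer.bias])

row = grad_b.abs().argmax()                      # pick a row where dL/db is non-zero
x_recovered = grad_W[row] / grad_b[row]

print(torch.allclose(x_recovered, x_true, atol=1e-5))  # True: exact recovery
```

R-GAP chains per-layer reconstructions of this kind recursively through deeper networks, and the rank analysis estimates, per architecture, whether the available gradient constraints suffice for unique recovery of the input.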
Related papers
- QBI: Quantile-Based Bias Initialization for Efficient Private Data Reconstruction in Federated Learning [0.5497663232622965]
Federated learning enables the training of machine learning models on distributed data without compromising user privacy.
Recent research has shown that the central entity can perfectly reconstruct private data from shared model updates.
arXiv Detail & Related papers (2024-06-26T20:19:32Z)
- R-CONV: An Analytical Approach for Efficient Data Reconstruction via Convolutional Gradients [40.209183669098735]
This paper introduces an advanced data leakage method to efficiently exploit convolutional layers' gradients.
To the best of our knowledge, this is the first analytical approach that successfully reconstructs convolutional layer inputs directly from the gradients.
arXiv Detail & Related papers (2024-06-06T16:28:04Z)
- Rethinking PGD Attack: Is Sign Function Necessary? [131.6894310945647]
We present a theoretical analysis of how such sign-based update algorithm influences step-wise attack performance.
We propose a new raw gradient descent (RGD) algorithm that eliminates the use of sign.
The effectiveness of the proposed RGD algorithm has been demonstrated extensively in experiments (a minimal sketch of the sign-free update follows this entry).
arXiv Detail & Related papers (2023-12-03T02:26:58Z)
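As a purely illustrative contrast (not the authors' exact RGD formulation; step size and projection are omitted), the difference targeted above is a single attack step that uses the sign of the gradient versus the raw gradient itself:

```python
# Illustrative only: sign-based (PGD-style) update vs. a sign-free raw-gradient update.
import torch

def sign_step(x: torch.Tensor, grad: torch.Tensor, alpha: float = 0.01) -> torch.Tensor:
    return x + alpha * grad.sign()   # keeps only the per-coordinate direction

def raw_step(x: torch.Tensor, grad: torch.Tensor, alpha: float = 0.01) -> torch.Tensor:
    return x + alpha * grad          # retains gradient magnitude information
```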
- Sample Complexity of Preference-Based Nonparametric Off-Policy Evaluation with Deep Networks [58.469818546042696]
We study the sample efficiency of OPE with human preference and establish a statistical guarantee for it.
By appropriately selecting the size of a ReLU network, we show that one can leverage any low-dimensional manifold structure in the Markov decision process.
arXiv Detail & Related papers (2023-10-16T16:27:06Z)
- GIFD: A Generative Gradient Inversion Method with Feature Domain Optimization [52.55628139825667]
Federated Learning (FL) has emerged as a promising distributed machine learning framework to preserve clients' privacy.
Recent studies find that an attacker can invert the shared gradients and recover sensitive data against an FL system by leveraging pre-trained generative adversarial networks (GAN) as prior knowledge.
We propose Gradient Inversion over Feature Domains (GIFD), which disassembles the GAN model and searches the feature domains of the intermediate layers.
arXiv Detail & Related papers (2023-08-09T04:34:21Z)
- Temporal Gradient Inversion Attacks with Robust Optimization [18.166835997248658]
Federated Learning (FL) has emerged as a promising approach for collaborative model training without sharing private data.
Gradient Inversion Attacks (GIAs) have been proposed to reconstruct the private data retained by local clients from the exchanged gradients.
However, as data dimensions and model complexity increase, data reconstruction by GIAs becomes harder.
We propose TGIAs-RO, which recovers private data without any prior knowledge by leveraging multiple temporal gradients.
arXiv Detail & Related papers (2023-06-13T16:21:34Z)
- Scaling Forward Gradient With Local Losses [117.22685584919756]
Forward learning is a biologically plausible alternative to backprop for learning deep neural networks.
We show that it is possible to substantially reduce the variance of the forward gradient by applying perturbations to activations rather than weights.
Our approach matches backprop on MNIST and CIFAR-10 and significantly outperforms previously proposed backprop-free algorithms on ImageNet.
arXiv Detail & Related papers (2022-10-07T03:52:27Z)
- Enhancing Privacy against Inversion Attacks in Federated Learning by using Mixing Gradients Strategies [0.31498833540989407]
Federated learning reduces the risk of information leakage, but remains vulnerable to attacks.
We show how several neural network design decisions can defend against gradient inversion attacks.
These strategies are also shown to be useful for deep convolutional neural networks such as LeNet for image recognition.
arXiv Detail & Related papers (2022-04-26T12:08:28Z)
- Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
arXiv Detail & Related papers (2022-02-22T02:51:11Z)
- GRNN: Generative Regression Neural Network -- A Data Leakage Attack for Federated Learning [3.050919759387984]
We show that image-based private data can be fully recovered from the shared gradients alone via our proposed Generative Regression Neural Network (GRNN).
We evaluate our method on several image classification tasks. The results illustrate that our proposed GRNN outperforms state-of-the-art methods with better stability, stronger robustness, and higher accuracy.
arXiv Detail & Related papers (2021-05-02T18:39:37Z)
- Understanding Gradient Clipping in Private SGD: A Geometric Perspective [68.61254575987013]
Deep learning models are increasingly popular in many machine learning applications where the training data may contain sensitive information.
Many learning systems now incorporate differential privacy by training their models with (differentially) private SGD.
A key step in each private SGD update is gradient clipping that shrinks the gradient of an individual example whenever its L2 norm exceeds some threshold.
arXiv Detail & Related papers (2020-06-27T19:08:12Z)
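For the clipping step described in the last entry, a minimal sketch (not tied to any particular DP-SGD library; the noise-addition step is omitted) looks as follows: each example's gradient is rescaled so that its L2 norm never exceeds a threshold C before the clipped gradients are averaged.

```python
# Minimal sketch of per-example L2 gradient clipping (noise addition omitted).
import torch

def clip_and_average(per_example_grads: torch.Tensor, C: float) -> torch.Tensor:
    """per_example_grads: (batch, num_params) flattened per-example gradients."""
    norms = per_example_grads.norm(dim=1, keepdim=True)      # L2 norm of each example's gradient
    scale = torch.clamp(C / (norms + 1e-12), max=1.0)        # shrink only when the norm exceeds C
    return (per_example_grads * scale).mean(dim=0)           # average of clipped gradients
```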