Inverting Gradients -- How easy is it to break privacy in federated
learning?
- URL: http://arxiv.org/abs/2003.14053v2
- Date: Fri, 11 Sep 2020 11:41:10 GMT
- Title: Inverting Gradients -- How easy is it to break privacy in federated
learning?
- Authors: Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, Michael Moeller
- Abstract summary: Federated learning is designed to collaboratively train a neural network on a server.
Each user receives the current weights of the network and in turn sends parameter updates (gradients) based on local data.
Previous attacks have provided a false sense of security, by succeeding only in contrived settings.
We show that it is actually possible to faithfully reconstruct images at high resolution from the knowledge of their parameter gradients.
- Score: 13.632998588216523
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The idea of federated learning is to collaboratively train a neural network
on a server. Each user receives the current weights of the network and in turn
sends parameter updates (gradients) based on local data. This protocol has been
designed not only to train neural networks data-efficiently, but also to
provide privacy benefits for users, as their input data remains on device and
only parameter gradients are shared. But how secure is sharing parameter
gradients? Previous attacks have provided a false sense of security, by
succeeding only in contrived settings - even for a single image. However, by
exploiting a magnitude-invariant loss along with optimization strategies based
on adversarial attacks, we show that it is actually possible to faithfully
reconstruct images at high resolution from the knowledge of their parameter
gradients, and demonstrate that such a break of privacy is possible even for
trained deep networks. We analyze the effects of architecture as well as
parameters on the difficulty of reconstructing an input image and prove that
any input to a fully connected layer can be reconstructed analytically
independent of the remaining architecture. Finally we discuss settings
encountered in practice and show that even averaging gradients over several
iterations or several images does not protect the user's privacy in federated
learning applications in computer vision.
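As a rough sketch of the attack described in the abstract, the snippet below reconstructs a
single input by minimizing a magnitude-invariant (cosine) distance between the gradients a user
shared and the gradients produced by a candidate image, plus a total-variation prior. It assumes
PyTorch, a known label, and illustrative names (model, target_grads, shape, tv_weight); the
paper's full procedure additionally uses signed-gradient Adam updates, step-size decay, and
multiple restarts, which are omitted here.
```python
# Minimal gradient-inversion sketch (assumed PyTorch setup, not the authors'
# reference implementation). target_grads is a list of per-parameter gradient
# tensors, in the same order as model.parameters().
import torch
import torch.nn.functional as F

def total_variation(x):
    """Anisotropic total-variation prior on the dummy image (N, C, H, W)."""
    return (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean() + \
           (x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean()

def cosine_gradient_loss(dummy_grads, target_grads):
    """Magnitude-invariant (cosine) distance between concatenated gradients."""
    dot, g_norm, t_norm = 0.0, 0.0, 0.0
    for g, t in zip(dummy_grads, target_grads):
        dot += (g * t).sum()
        g_norm += g.pow(2).sum()
        t_norm += t.pow(2).sum()
    return 1.0 - dot / (g_norm.sqrt() * t_norm.sqrt() + 1e-12)

def invert_gradients(model, target_grads, label, shape=(1, 3, 32, 32),
                     steps=2000, lr=0.1, tv_weight=1e-2):
    """Reconstruct an input matching target_grads; label is a LongTensor (1,)."""
    x = torch.randn(shape, requires_grad=True)      # random dummy image
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        model.zero_grad()
        loss = F.cross_entropy(model(x), label)
        dummy_grads = torch.autograd.grad(loss, model.parameters(),
                                          create_graph=True)
        rec_loss = cosine_gradient_loss(dummy_grads, target_grads) \
                   + tv_weight * total_variation(x)
        rec_loss.backward()                         # gradient w.r.t. the image
        optimizer.step()
    return x.detach()
```
Here target_grads stands for the per-parameter gradients the server observes for one user, i.e.
exactly the updates that the federated protocol shares.

The abstract's analytic claim about fully connected layers can also be checked directly: for a
biased layer y = Wx + b, the chain rule gives dl/dW = (dl/dy) x^T and dl/db = dl/dy, so any row i
with a non-zero bias gradient yields x = (dl/dW)_i / (dl/db)_i, independent of how the loss l is
computed downstream. A minimal numeric check with hypothetical shapes:
```python
# Hypothetical check of the analytic fully-connected-layer reconstruction.
import torch

x = torch.randn(8)                        # unknown layer input
W = torch.randn(4, 8, requires_grad=True)
b = torch.randn(4, requires_grad=True)
loss = ((W @ x + b) ** 2).sum()           # any differentiable loss works
loss.backward()
i = torch.argmax(b.grad.abs())            # row with non-zero bias gradient
x_rec = W.grad[i] / b.grad[i]             # analytic reconstruction of x
print(torch.allclose(x_rec, x, atol=1e-5))
```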
Related papers
- Federated Learning Nodes Can Reconstruct Peers' Image Data [27.92271597111756]
Federated learning (FL) is a privacy-preserving machine learning framework that enables multiple nodes to train models on their local data.
Prior work has shown that the gradient-sharing steps in FL can be vulnerable to data reconstruction attacks from an honest-but-curious central server.
We show that an honest-but-curious node/client can also launch attacks to reconstruct peers' image data in a centralized system, presenting a severe privacy risk.
arXiv Detail & Related papers (2024-10-07T00:18:35Z)
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing training data.
The paper proposes a novel federated face forgery detection learning scheme with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- MGIC: A Multi-Label Gradient Inversion Attack based on Canny Edge Detection on Federated Learning [6.721419921063687]
We present a novel gradient inversion strategy based on Canny edge detection (MGIC) for both multi-label and single-label datasets.
Our proposed strategy produces better visual inversion results than the most widely used strategies, while saving more than 78% of the time cost on the ImageNet dataset.
arXiv Detail & Related papers (2024-03-13T06:34:49Z)
- Understanding Deep Gradient Leakage via Inversion Influence Functions [53.1839233598743]
Deep Gradient Leakage (DGL) is a highly effective attack that recovers private training images from gradient vectors.
We propose a novel Inversion Influence Function (I$^2$F) that establishes a closed-form connection between the recovered images and the private gradients.
We empirically demonstrate that I$^2$F effectively approximates DGL across different model architectures, datasets, attack implementations, and perturbation-based defenses.
arXiv Detail & Related papers (2023-09-22T17:26:24Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership or properties, or even to reconstruct participant data outright.
We show that simple linear models can effectively capture client-specific properties from the aggregated model updates alone.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation [110.61853418925219]
We build a stronger version of the dataset reconstruction attack and show how it can provably recover the entire training set in the infinite width regime.
We show, both theoretically and empirically, that reconstructed images tend to be "outliers" in the dataset.
These reconstruction attacks can be used for dataset distillation, that is, we can retrain on reconstructed images and obtain high predictive accuracy.
arXiv Detail & Related papers (2023-02-02T21:41:59Z)
- Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To alleviate attacks on graph properties, obfuscated features that contain information from both vectors are communicated instead.
arXiv Detail & Related papers (2022-10-21T04:38:25Z)
- Gradient Obfuscation Gives a False Sense of Security in Federated Learning [41.36621813381792]
We present a new data reconstruction attack framework targeting the image classification task in federated learning.
Contrary to prior studies, we argue that privacy enhancement should not be treated as a byproduct of gradient compression.
arXiv Detail & Related papers (2022-06-08T13:01:09Z)
- Enhancing Privacy against Inversion Attacks in Federated Learning by using Mixing Gradients Strategies [0.31498833540989407]
Federated learning reduces the risk of information leakage, but remains vulnerable to attacks.
We show how several neural network design decisions can defend against gradient inversion attacks.
These strategies are also shown to be useful for deep convolutional neural networks such as LeNet for image recognition.
arXiv Detail & Related papers (2022-04-26T12:08:28Z)
- GRNN: Generative Regression Neural Network -- A Data Leakage Attack for Federated Learning [3.050919759387984]
We show that image-based privacy data can be easily recovered in full from the shared gradient alone via our proposed Generative Regression Neural Network (GRNN).
We evaluate our method on several image classification tasks. The results illustrate that our proposed GRNN outperforms state-of-the-art methods with better stability, stronger robustness, and higher accuracy.
arXiv Detail & Related papers (2021-05-02T18:39:37Z)
- See through Gradients: Image Batch Recovery via GradInversion [103.26922860665039]
We introduce GradInversion, with which input images from a larger batch can also be recovered for large networks such as ResNets (50 layers).
We show that gradients encode a surprisingly large amount of information, such that all the individual images can be recovered with high fidelity via GradInversion, even for complex datasets, deep networks, and large batch sizes.
arXiv Detail & Related papers (2021-04-15T16:43:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.