Temporal Gradient Inversion Attacks with Robust Optimization
- URL: http://arxiv.org/abs/2306.07883v1
- Date: Tue, 13 Jun 2023 16:21:34 GMT
- Title: Temporal Gradient Inversion Attacks with Robust Optimization
- Authors: Bowen Li, Hanlin Gu, Ruoxin Chen, Jie Li, Chentao Wu, Na Ruan, Xueming Si, Lixin Fan
- Abstract summary: Federated Learning (FL) has emerged as a promising approach for collaborative model training without sharing private data.
Gradient Inversion Attacks (GIAs) have been proposed to reconstruct the private data retained by local clients from the exchanged gradients.
As data dimensions and model complexity increase, data reconstruction by GIAs becomes increasingly difficult.
We propose TGIAs-RO, which recovers private data without any prior knowledge by leveraging multiple temporal gradients.
- Score: 18.166835997248658
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated Learning (FL) has emerged as a promising approach for collaborative
model training without sharing private data. However, privacy concerns
regarding information exchanged during FL have received significant research
attention. Gradient Inversion Attacks (GIAs) have been proposed to reconstruct
the private data retained by local clients from the exchanged gradients. As
data dimensions and model complexity increase, data reconstruction by GIAs
becomes increasingly difficult. Existing methods adopt prior
knowledge about private data to overcome those challenges. In this paper, we
first observe that GIAs with gradients from a single iteration fail to
reconstruct private data due to insufficient dimensions of leaked gradients,
complex model architectures, and invalid gradient information. We investigate a
Temporal Gradient Inversion Attack with a Robust Optimization framework, called
TGIAs-RO, which recovers private data without any prior knowledge by leveraging
multiple temporal gradients. To eliminate the negative impact of outliers,
e.g., invalid gradients, on the collaborative optimization, robust statistics
are proposed. Theoretical guarantees on the recovery performance and robustness of
TGIAs-RO against invalid gradients are also provided. Extensive empirical
results on MNIST, CIFAR10, ImageNet and Reuters 21578 datasets show that the
proposed TGIAs-RO with 10 temporal gradients improves reconstruction
performance compared to state-of-the-art methods, even for large batch sizes
(up to 128), complex models like ResNet18, and large datasets like ImageNet
(224×224 pixels). Furthermore, the proposed attack method inspires further
exploration of privacy-preserving methods in the context of FL.
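The core idea can be illustrated on a toy linear model: each leaked gradient of a squared-error loss is parallel to the private input, so a single gradient leaves a scale/label ambiguity; multiple temporal gradients resolve it, and a simple consistency statistic filters out an invalid gradient. The following numpy sketch is an illustration of this idea, not the paper's algorithm; the model, shapes, constants, and the outlier-filtering rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 4, 6
x_true = rng.normal(size=d)       # private input (unknown to the attacker)
y_true = 1.5                      # private label (unknown to the attacker)

# Client leaks gradients of 0.5*(w.x - y)^2 at T temporal weight vectors w_t:
#   g_t = (w_t.x - y) * x  -- every clean gradient is parallel to x.
W = rng.normal(size=(T, d))
G = np.array([(w @ x_true - y_true) * x_true for w in W])
G[2] = 10.0 * rng.normal(size=d)  # one invalid (outlier) gradient

# Robust statistic: drop the gradient whose direction disagrees most with
# the others (assumes at most one invalid gradient, for simplicity).
U = G / np.linalg.norm(G, axis=1, keepdims=True)
scores = np.abs(U @ U.T).sum(axis=1)   # clean gradients are all collinear
keep = np.argsort(scores)[1:]

# Common direction u of the kept gradients (sign-aligned average).
ref = U[keep[0]]
u = sum(np.sign(ref @ U[k]) * U[k] for k in keep)
u /= np.linalg.norm(u)

# Multiple temporal gradients resolve the remaining ambiguity: writing
# x = a*u gives  g_t.u = a^2 * (w_t.u) - a*y,  which is linear in w_t.u.
slope, intercept = np.polyfit(W[keep] @ u, G[keep] @ u, 1)
a = np.sqrt(slope)                 # inherent sign flip: (x, y) vs (-x, -y)
x_hat, y_hat = a * u, -intercept / a
```

A single gradient only pins down the direction of x; the linear fit over several temporal gradients recovers its scale and the label, up to the inherent (x, y) vs (-x, -y) ambiguity, which no gradient observation can break in this model.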
Related papers
- GI-SMN: Gradient Inversion Attack against Federated Learning without Prior Knowledge [4.839514405631815]
Federated learning (FL) has emerged as a privacy-preserving machine learning approach.
Gradient inversion attacks can exploit the gradients of FL to recreate the original user data.
We propose a novel Gradient Inversion attack based on a Style Migration Network (GI-SMN).
arXiv Detail & Related papers (2024-05-06T14:29:24Z)
- GI-PIP: Do We Require Impractical Auxiliary Dataset for Gradient Inversion Attacks? [7.203272199091038]
Gradient Inversion Attack using Practical Image Prior (GI-PIP) is proposed under a revised threat model.
GI-PIP exploits anomaly detection models to capture the underlying distribution from less data, while GAN-based methods consume significantly more data to synthesize images.
Experimental results show that GI-PIP achieves a 16.12 dB PSNR recovery using only 3.8% data of ImageNet, while GAN-based methods necessitate over 70%.
arXiv Detail & Related papers (2024-01-22T08:20:47Z)
- Sparsity-Preserving Differentially Private Training of Large Embedding Models [67.29926605156788]
DP-SGD is a training algorithm that combines differential privacy with gradient descent.
Applying DP-SGD naively to embedding models can destroy gradient sparsity, leading to reduced training efficiency.
We present two new algorithms, DP-FEST and DP-AdaFEST, that preserve gradient sparsity during private training of large embedding models.
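For context, the clip-and-add-noise step that DP-SGD layers on top of ordinary gradient descent can be sketched as follows. This is a minimal numpy illustration under assumed values; the clipping norm, noise multiplier, and batch shape are illustrative, and real implementations operate on per-example gradients of actual model parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_update(per_example_grads, clip_norm=1.0, noise_mult=1.1, lr=0.1):
    """One DP-SGD step: clip each example's gradient to clip_norm in L2,
    average, then add calibrated Gaussian noise before the update."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(clipped),
                       size=avg.shape)
    return -lr * (avg + noise)

grads = 3.0 * rng.normal(size=(8, 5))   # 8 per-example gradients, 5 params
update = dp_sgd_update(grads)
```

Note that the Gaussian noise is added to every coordinate, so even a sparse per-example gradient (as in an embedding layer) becomes a dense update; this is the efficiency loss that sparsity-preserving variants target.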
arXiv Detail & Related papers (2023-11-14T17:59:51Z)
- Understanding Deep Gradient Leakage via Inversion Influence Functions [53.1839233598743]
Deep Gradient Leakage (DGL) is a highly effective attack that recovers private training images from gradient vectors.
We propose a novel Inversion Influence Function (I$^2$F) that establishes a closed-form connection between the recovered images and the private gradients.
We empirically demonstrate that I$^2$F effectively approximates DGL across different model architectures, datasets, attack implementations, and perturbation-based defenses.
arXiv Detail & Related papers (2023-09-22T17:26:24Z)
- Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation [151.70234052015948]
We propose a novel approach that encourages the optimization algorithm to seek a flat trajectory.
We show that weights trained on synthetic data are robust against accumulated error perturbations when regularized towards a flat trajectory.
Our method, called Flat Trajectory Distillation (FTD), is shown to boost the performance of gradient-matching methods by up to 4.7%.
arXiv Detail & Related papers (2022-11-20T15:49:11Z)
- Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage [9.83989883339971]
The Federated Learning (FL) framework brings privacy benefits to distributed learning systems.
Recent studies have revealed that private information can still be leaked through shared information.
We propose a new type of leakage, i.e., Generative Gradient Leakage (GGL).
arXiv Detail & Related papers (2022-03-29T15:59:59Z)
- Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models [56.0250919557652]
Federated learning has quickly gained popularity with its promises of increased user privacy and efficiency.
Previous attacks on user privacy have been limited in scope and do not scale to gradient updates aggregated over even a handful of data points.
We introduce a new threat model based on minimal but malicious modifications of the shared model architecture.
arXiv Detail & Related papers (2021-10-25T15:52:06Z)
- Semantic Perturbations with Normalizing Flows for Improved Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that latent adversarial perturbations that adapt to the classifier throughout its training are most effective.
arXiv Detail & Related papers (2021-08-18T03:20:00Z)
- PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage [0.8029049649310213]
Collaborative training of neural networks leverages distributed data by exchanging gradient information between different clients.
Gradient perturbation techniques have been proposed to enhance privacy, but they come at the cost of reduced model performance, increased convergence time, or increased data demand.
We introduce PRECODE, a PRivacy EnhanCing mODulE that can be used as a generic extension for arbitrary model architectures.
arXiv Detail & Related papers (2021-08-10T14:43:17Z)
- R-GAP: Recursive Gradient Attack on Privacy [5.687523225718642]
Federated learning is a promising approach to resolving the tension between privacy demands and the promise of learning from large collections of distributed data.
We provide a closed-form recursion procedure to recover data from gradients in deep neural networks.
We also propose a Rank Analysis method to estimate the risk of gradient attacks inherent in certain network architectures.
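The flavor of such a closed-form step can be seen on a single fully-connected layer (this is only the one-layer case that motivates the recursion, with illustrative shapes; R-GAP itself chains such steps through the network):

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out = 6, 4
x = rng.normal(size=d_in)        # private input to the layer
delta = rng.normal(size=d_out)   # upstream error signal dL/dz

# For a layer z = W @ x + b:  dL/dW = outer(delta, x)  and  dL/db = delta.
grad_W = np.outer(delta, x)
grad_b = delta

# Closed form: row i of dL/dW is delta_i * x, and dL/db leaks delta itself,
# so x is recovered exactly by projecting the weight gradient onto delta.
x_rec = grad_b @ grad_W / (grad_b @ grad_b)
```

For a single example the recovery is exact; gradients aggregated over a batch mix many such rank-one terms, which is why feasibility depends on the architecture and motivates the rank analysis above.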
arXiv Detail & Related papers (2020-10-15T13:22:40Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.