AGIC: Approximate Gradient Inversion Attack on Federated Learning
- URL: http://arxiv.org/abs/2204.13784v1
- Date: Thu, 28 Apr 2022 21:15:59 GMT
- Title: AGIC: Approximate Gradient Inversion Attack on Federated Learning
- Authors: Jin Xu, Chi Hong, Jiyue Huang, Lydia Y. Chen, Jérémie Decouchant
- Abstract summary: Federated learning is a private-by-design distributed learning paradigm where clients train local models on their own data before a central server aggregates their local updates to compute a global model.
Recent reconstruction attacks apply a gradient inversion optimization on the gradient update of a single minibatch to reconstruct the private data used by clients during training.
We propose AGIC, a novel Approximate Gradient Inversion Attack that efficiently and effectively reconstructs images from both model and gradient updates.
- Score: 7.289310150187218
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning is a private-by-design distributed learning paradigm where
clients train local models on their own data before a central server aggregates
their local updates to compute a global model. Depending on the aggregation
method used, the local updates are either the gradients or the weights of local
learning models. Recent reconstruction attacks apply a gradient inversion
optimization on the gradient update of a single minibatch to reconstruct the
private data used by clients during training. As state-of-the-art
reconstruction attacks focus solely on a single update, realistic adversarial
scenarios are overlooked, such as observations across multiple updates and
updates trained from multiple mini-batches. A few studies consider a more
challenging adversarial scenario where only model updates based on multiple
mini-batches are observable, and resort to computationally expensive simulation
to untangle the underlying samples for each local step. In this paper, we
propose AGIC, a novel Approximate Gradient Inversion Attack that efficiently
and effectively reconstructs images from both model and gradient updates, and
across multiple epochs. In a nutshell, AGIC (i) approximates gradient updates
of used training samples from model updates to avoid costly simulation
procedures, (ii) leverages gradient/model updates collected from multiple
epochs, and (iii) assigns increasing weights to layers with respect to the
neural network structure for reconstruction quality. We extensively evaluate
AGIC on three datasets, CIFAR-10, CIFAR-100 and ImageNet. Our results show that
AGIC increases the peak signal-to-noise ratio (PSNR) by up to 50% compared to
two representative state-of-the-art gradient inversion attacks. Furthermore,
AGIC is faster than the state-of-the-art simulation based attack, e.g., it is
5x faster when attacking FedAvg with 8 local steps in between model updates.
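The core of such an attack is a gradient-matching optimization. The following minimal numpy sketch (a toy two-layer network with finite-difference descent, not the paper's implementation) illustrates the weighted per-layer matching loss behind point (iii), where later layers receive larger weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network: h = tanh(W1 @ x), out = W2 @ h, loss = ||out - y||^2.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
y_target = rng.normal(size=2)

def layer_grads(x):
    """Manual backprop: per-layer gradients of the loss for input x."""
    h = np.tanh(W1 @ x)
    d_out = 2.0 * (W2 @ h - y_target)
    g_W2 = np.outer(d_out, h)
    d_pre = (W2.T @ d_out) * (1.0 - h ** 2)
    g_W1 = np.outer(d_pre, x)
    return g_W1, g_W2

x_secret = rng.normal(size=3)
g1_obs, g2_obs = layer_grads(x_secret)   # the gradients the server observes

weights = [1.0, 2.0]                     # later layer weighted more heavily

def match_loss(x):
    g1, g2 = layer_grads(x)
    return (weights[0] * np.sum((g1 - g1_obs) ** 2)
            + weights[1] * np.sum((g2 - g2_obs) ** 2))

# Reconstruct the input by normalized finite-difference descent on the
# weighted gradient-matching loss (a dependency-free stand-in for Adam).
x_rec = rng.normal(size=3)
loss0 = best = match_loss(x_rec)
eps = 1e-5
for _ in range(2000):
    g = np.array([(match_loss(x_rec + eps * e) - match_loss(x_rec - eps * e))
                  / (2 * eps) for e in np.eye(3)])
    x_rec -= 0.01 * g / (np.linalg.norm(g) + 1e-12)
    best = min(best, match_loss(x_rec))
print(best < loss0)
```

The weighting simply scales each layer's term in the matching objective; AGIC's actual attack applies this idea to full convolutional networks and image batches.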
Related papers
- Federated Learning under Attack: Improving Gradient Inversion for Batch of Images [1.5749416770494706]
Federated Learning (FL) has emerged as a machine learning approach able to preserve the privacy of users' data.
Deep Leakage from Gradients with Feedback Blending (DLG-FB) is able to improve the inverting gradient attack.
arXiv Detail & Related papers (2024-09-26T12:02:36Z) - Recovering Labels from Local Updates in Federated Learning [14.866327821524854]
Gradient inversion (GI) attacks present a threat to the privacy of clients in federated learning (FL).
We present a novel label recovery scheme, Recovering Labels from Local Updates (RLU)
RLU achieves high performance even in realistic settings where an FL system runs multiple local epochs and trains on heterogeneous data.
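RLU's exact procedure is not reproduced here, but the classic single-sample observation such schemes build on is easy to illustrate: with softmax cross-entropy, the gradient with respect to the last-layer bias equals softmax(logits) minus the one-hot label, so the only negative entry reveals the label. A minimal sketch with synthetic values:

```python
import numpy as np

rng = np.random.default_rng(1)
num_classes = 5
logits = rng.normal(size=num_classes)

# Softmax probabilities (numerically stable form).
p = np.exp(logits - logits.max())
p /= p.sum()

y = 3  # the client's true label (unknown to the attacker)

# Cross-entropy gradient w.r.t. the last-layer bias is softmax - one_hot(y).
g_bias = p.copy()
g_bias[y] -= 1.0

# Only the true class has a negative entry, so the label can be read off
# directly; RLU generalizes this to multiple local epochs and batches.
recovered = int(np.argmin(g_bias))
print(recovered)  # → 3
```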
arXiv Detail & Related papers (2024-05-02T02:33:15Z) - Approximate and Weighted Data Reconstruction Attack in Federated Learning [1.802525429431034]
Federated learning (FL) enables clients to collaborate on building a machine learning model without sharing their private data.
Recent data reconstruction attacks demonstrate that an attacker can recover clients' training data based on the parameters shared in FL.
We propose an approximation method, which makes attacking FedAvg scenarios feasible by generating the intermediate model updates of the clients' local training processes.
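The approximation idea can be sketched directly: when a client runs several local SGD steps, the observed model update equals minus the learning rate times the sum of per-step gradients, so dividing the update by lr × steps recovers their average without simulating the client's local training run. A toy numpy illustration (synthetic gradients, learning rate assumed known to the attacker):

```python
import numpy as np

rng = np.random.default_rng(2)
lr, steps = 0.1, 8
w0 = rng.normal(size=4)

# Simulate a client running several local SGD steps (hidden from the server).
w = w0.copy()
per_step = []
for _ in range(steps):
    g = rng.normal(size=4) * 0.1   # stand-in for a minibatch gradient
    per_step.append(g)
    w = w - lr * g

# The server only observes the model update w - w0.  Averaging it over the
# local steps recovers the mean per-step gradient without any simulation:
g_approx = (w0 - w) / (lr * steps)
g_mean = np.mean(per_step, axis=0)
print(np.allclose(g_approx, g_mean))  # → True
```

For plain SGD this average is exact; the attack papers then feed such approximated gradients into a standard gradient-inversion optimization.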
arXiv Detail & Related papers (2023-08-13T17:40:56Z) - Just One Byte (per gradient): A Note on Low-Bandwidth Decentralized
Language Model Finetuning Using Shared Randomness [86.61582747039053]
Language model training in distributed settings is limited by the communication cost of exchanges.
We extend recent work using shared randomness to perform distributed fine-tuning with low bandwidth.
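One way shared randomness saves bandwidth: both parties seed the same RNG, so a full update direction can be regenerated locally from the seed and only a scalar coefficient needs to be transmitted per step. A hedged sketch of that general idea (not the paper's exact protocol; the scalar here is a placeholder for e.g. a finite-difference loss estimate):

```python
import numpy as np

def update_from_scalar(seed, dim, scalar, lr=0.1):
    """Regenerate the shared random direction from the seed, so only
    (seed, scalar) ever crosses the wire."""
    z = np.random.default_rng(seed).normal(size=dim)
    return -lr * scalar * z

seed, dim = 42, 6

# Client side: forms its update along the shared direction and sends
# a single scalar (assumed to encode a directional loss estimate).
z = np.random.default_rng(seed).normal(size=dim)
scalar = 0.37
client_update = -0.1 * scalar * z

# Server side: reconstructs the identical update from (seed, scalar) alone.
server_update = update_from_scalar(seed, dim, scalar)
print(np.allclose(client_update, server_update))  # → True
```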
arXiv Detail & Related papers (2023-06-16T17:59:51Z) - Federated Adversarial Learning: A Framework with Convergence Analysis [28.136498729360504]
Federated learning (FL) is a trending training paradigm to utilize decentralized training data.
FL allows clients to update model parameters locally for several epochs, then share them to a global model for aggregation.
This training paradigm with multi-local step updating before aggregation exposes unique vulnerabilities to adversarial attacks.
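The multi-local-step FedAvg paradigm the paper analyzes can be sketched in a few lines (least-squares clients and plain weight averaging are illustrative choices here, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(3)

def local_train(w, data, lr=0.05, epochs=3):
    """Several local gradient steps on a least-squares loss before sharing."""
    X, y = data
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# One FedAvg round: each client trains locally, the server averages weights.
w_global = np.zeros(2)
clients = [(rng.normal(size=(8, 2)), rng.normal(size=8)) for _ in range(3)]
local_models = [local_train(w_global, d) for d in clients]
w_global = np.mean(local_models, axis=0)
print(w_global.shape)  # → (2,)
```

It is exactly this gap of several unobserved local epochs between aggregations that creates the attack surface the paper studies.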
arXiv Detail & Related papers (2022-08-07T04:17:34Z) - Scaling Private Deep Learning with Low-Rank and Sparse Gradients [5.14780936727027]
We propose a framework that exploits the low-rank and sparse structure of neural networks to reduce the dimension of gradient updates.
A novel strategy is utilized to sparsify the gradients, resulting in low-dimensional, less noisy updates.
Empirical evaluation on natural language processing and computer vision tasks shows that our method outperforms other state-of-the-art baselines.
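The combination of low-rank and sparse structure can be illustrated with a rank-1 SVD approximation of a gradient matrix plus top-k sparsification of the residual (an illustrative decomposition, not the paper's exact method):

```python
import numpy as np

rng = np.random.default_rng(4)
G = rng.normal(size=(6, 5))          # stand-in for a layer's gradient matrix

# Low-rank part: keep only the top singular direction of the gradient.
U, s, Vt = np.linalg.svd(G, full_matrices=False)
G_lowrank = s[0] * np.outer(U[:, 0], Vt[0])

# Sparse part: keep the k largest-magnitude entries of the residual,
# yielding a low-dimensional, less noisy update to communicate.
k = 5
resid = G - G_lowrank
idx = np.argsort(np.abs(resid), axis=None)[-k:]
S = np.zeros_like(G)
S.flat[idx] = resid.flat[idx]

G_compressed = G_lowrank + S
err_lowrank = np.linalg.norm(G - G_lowrank)
err_compressed = np.linalg.norm(G - G_compressed)
print(err_compressed < err_lowrank)  # → True (sparse part reduces error)
```

The compressed update needs only one column, one row, one singular value, and k (index, value) pairs instead of the full matrix.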
arXiv Detail & Related papers (2022-07-06T14:09:47Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm which enhances the common local stochastic gradient descent (SGD) FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
arXiv Detail & Related papers (2020-09-27T08:28:25Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z) - A Transfer Learning approach to Heatmap Regression for Action Unit
intensity estimation [50.261472059743845]
Action Units (AUs) are geometrically-based atomic facial muscle movements.
We propose a novel AU modelling problem that consists of jointly estimating their localisation and intensity.
A Heatmap models whether an AU occurs or not at a given spatial location.
arXiv Detail & Related papers (2020-04-14T16:51:13Z) - Top-k Training of GANs: Improving GAN Performance by Throwing Away Bad
Samples [67.11669996924671]
We introduce a simple (one line of code) modification to the Generative Adversarial Network (GAN) training algorithm.
When updating the generator parameters, we zero out the gradient contributions from the elements of the batch that the critic scores as least realistic.
We show that this "top-k update" procedure is a generally applicable improvement.
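The one-line modification is easy to mimic with synthetic per-sample gradients: mask out the batch elements with the lowest critic scores before averaging (shapes and values below are placeholders, not GAN code):

```python
import numpy as np

rng = np.random.default_rng(5)
batch = 8
critic_scores = rng.normal(size=batch)          # higher = "more realistic"
per_sample_grads = rng.normal(size=(batch, 3))  # generator grads per sample

# Top-k update: zero out gradient contributions from the batch elements
# the critic scores as least realistic, then average over the survivors.
k = 5
keep = np.argsort(critic_scores)[-k:]
mask = np.zeros(batch)
mask[keep] = 1.0
gen_grad = (mask[:, None] * per_sample_grads).sum(axis=0) / k
print(gen_grad.shape)  # → (3,)
```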
arXiv Detail & Related papers (2020-02-14T19:27:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.