AFGI: Towards Accurate and Fast-convergent Gradient Inversion Attack in Federated Learning
- URL: http://arxiv.org/abs/2403.08383v3
- Date: Wed, 31 Jul 2024 08:57:57 GMT
- Title: AFGI: Towards Accurate and Fast-convergent Gradient Inversion Attack in Federated Learning
- Authors: Can Liu, Jin Wang, Yipeng Zhou, Yachao Yuan, Quanzheng Sheng, and Kejie Lu
- Abstract summary: Federated learning (FL) empowers privacy preservation in model training by exposing only users' model gradients.
Yet, FL users are susceptible to gradient inversion attacks (GIAs) which can reconstruct ground-truth training data.
We present an Accurate and Fast-convergent Gradient Inversion attack algorithm, called AFGI, with two components.
- Score: 13.104809524506132
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) empowers privacy preservation in model training by exposing only users' model gradients. Yet, FL users are susceptible to gradient inversion attacks (GIAs), which can reconstruct ground-truth training data such as images from model gradients. However, reconstructing high-resolution images with existing GIAs faces two challenges: inferior accuracy and slow convergence, especially when duplicating labels exist in the training batch. To address these challenges, we present an Accurate and Fast-convergent Gradient Inversion attack algorithm, called AFGI, with two components: the Label Recovery Block (LRB), which accurately restores the duplicating labels of private images from exposed gradients, and the VME regularization term, which combines the total variance of reconstructed images with the discrepancies in three-channel means, edges, and values between quantities derived from the exposed gradients and the reconstructed images. AFGI can be regarded as a white-box attack strategy that reconstructs images by leveraging the labels recovered by LRB. In particular, AFGI is efficient enough to accurately reconstruct ground-truth images when users' training batch size is up to 48. Our experimental results show that AFGI reduces time costs by 85% while achieving superb inversion quality on the ImageNet dataset. Finally, our study reveals the shortcomings of FL in privacy preservation, prompting the development of more advanced countermeasures.
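The abstract describes a white-box gradient-matching recipe: optimize dummy inputs so that the gradients they induce reproduce the exposed ones, with recovered labels fixing the loss and image priors steering the optimization. The sketch below illustrates that recipe in PyTorch; it is not the authors' code, and the `target_channel_means` prior, the regularizer weights, and the assumption that labels have already been recovered by an LRB-like step are illustrative simplifications of the VME term and LRB. The `exposed_grads` list is assumed to be ordered exactly like `model.parameters()`.

```python
# Minimal gradient-matching inversion sketch (not the AFGI implementation).
import torch
import torch.nn.functional as F

def total_variation(x):
    """Anisotropic total variation of a batch of images with shape (B, C, H, W)."""
    tv_h = (x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean()
    tv_w = (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean()
    return tv_h + tv_w

def invert_gradients(model, exposed_grads, recovered_labels, image_shape,
                     target_channel_means=None, steps=2000, lr=0.1,
                     tv_weight=1e-2, mean_weight=1e-2):
    """Optimize dummy images so the gradients they induce match the exposed ones."""
    params = [p for p in model.parameters() if p.requires_grad]
    dummy = torch.randn(image_shape, requires_grad=True)   # (B, C, H, W)
    optimizer = torch.optim.Adam([dummy], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(model(dummy), recovered_labels)
        dummy_grads = torch.autograd.grad(loss, params, create_graph=True)

        # Gradient-matching term: squared distance to the exposed gradients.
        match = sum(((dg - g) ** 2).sum()
                    for dg, g in zip(dummy_grads, exposed_grads))

        # Image priors: total variation plus a per-channel mean discrepancy,
        # a rough stand-in for the VME-style regularization described above.
        reg = tv_weight * total_variation(dummy)
        if target_channel_means is not None:
            reg = reg + mean_weight * (dummy.mean(dim=(0, 2, 3))
                                       - target_channel_means).pow(2).sum()

        (match + reg).backward()
        optimizer.step()

    return dummy.detach()
```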
Related papers
- Exploring User-level Gradient Inversion with a Diffusion Prior [17.2657358645072]
We propose a novel gradient inversion attack that applies a denoising diffusion model as a strong image prior to enhance recovery in the large batch setting.
Unlike traditional attacks, which aim to reconstruct individual samples and suffer at large batch and image sizes, our approach instead aims to recover a representative image that captures the sensitive shared semantic information corresponding to the underlying user.
arXiv Detail & Related papers (2024-09-11T14:20:47Z) - MGIC: A Multi-Label Gradient Inversion Attack based on Canny Edge Detection on Federated Learning [6.721419921063687]
We present a novel gradient inversion strategy based on Canny edge detection (MGIC) for both multi-label and single-label datasets.
Our proposed strategy produces better visual inversion results than the most widely used strategies, while saving more than 78% of time costs on the ImageNet dataset.
arXiv Detail & Related papers (2024-03-13T06:34:49Z) - BFRFormer: Transformer-based generator for Real-World Blind Face Restoration [37.77996097891398]
We propose a Transformer-based blind face restoration method, named BFRFormer, to reconstruct images with more identity-preserved details in an end-to-end manner.
Our method outperforms state-of-the-art methods on a synthetic dataset and four real-world datasets.
arXiv Detail & Related papers (2024-02-29T02:31:54Z) - Towards Eliminating Hard Label Constraints in Gradient Inversion Attacks [88.12362924175741]
Gradient inversion attacks aim to reconstruct local training data from intermediate gradients exposed in the federated learning framework.
Previous methods, starting from reconstructing a single data point and then relaxing the single-image limit to batch level, are only tested under hard label constraints.
We are the first to propose an algorithm that simultaneously recovers the ground-truth augmented label and the input feature of the last fully-connected layer from single-input gradients.
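For a single input, this kind of closed-form recovery follows from the structure of the last fully-connected layer's gradients under cross-entropy loss: the bias gradient equals softmax minus one-hot, and each weight-gradient row is that coefficient times the input feature. The sketch below illustrates the well-known single-sample, hard-label case only; it is not the cited paper's algorithm, which handles augmented (soft) labels.

```python
import torch

def recover_label_and_feature(grad_W: torch.Tensor, grad_b: torch.Tensor):
    """Recover (label, last-layer input feature) from single-sample gradients.

    Assumes a classifier head z = W h + b trained with cross-entropy, so that
    grad_b = softmax(z) - onehot(y) and grad_W = grad_b[:, None] * h.
    grad_W has shape (num_classes, feat_dim); grad_b has shape (num_classes,).
    """
    # The ground-truth class is the only one whose bias-gradient entry is
    # negative (its softmax probability minus 1), so argmin identifies it.
    label = int(torch.argmin(grad_b).item())

    # Any row of grad_W divided by the matching grad_b entry yields h; using
    # the largest-magnitude entry avoids division by a near-zero coefficient.
    i = int(torch.argmax(grad_b.abs()).item())
    feature = grad_W[i] / grad_b[i]
    return label, feature
```

In a batch, the exposed last-layer gradients are sums of such per-sample terms, which is why recovering duplicating labels, as AFGI's LRB aims to do, requires more than this argmin rule.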
arXiv Detail & Related papers (2024-02-05T15:51:34Z) - DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior [70.46245698746874]
We present DiffBIR, a general restoration pipeline that could handle different blind image restoration tasks.
DiffBIR decouples the blind image restoration problem into two stages: 1) degradation removal, which removes image-independent content, and 2) information regeneration, which generates the lost image content.
In the first stage, we use restoration modules to remove degradations and obtain high-fidelity restored results.
For the second stage, we propose IRControlNet that leverages the generative ability of latent diffusion models to generate realistic details.
arXiv Detail & Related papers (2023-08-29T07:11:52Z) - DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration [66.01846902242355]
Blind face restoration usually synthesizes degraded low-quality data with a pre-defined degradation model for training.
It is expensive and infeasible to include every type of degradation to cover real-world cases in the training data.
We propose Robust Degradation Remover (DR2) to first transform the degraded image to a coarse but degradation-invariant prediction, then employ an enhancement module to restore the coarse prediction to a high-quality image.
arXiv Detail & Related papers (2023-03-13T06:05:18Z) - Unsupervised Restoration of Weather-affected Images using Deep Gaussian Process-based CycleGAN [92.15895515035795]
We describe an approach for supervising deep networks that are based on CycleGAN.
We introduce new losses for training CycleGAN that lead to more effective training, resulting in high-quality reconstructions.
We demonstrate that the proposed method can be effectively applied to different restoration tasks like de-raining, de-hazing and de-snowing.
arXiv Detail & Related papers (2022-04-23T01:30:47Z) - Invertible Network for Unpaired Low-light Image Enhancement [78.33382003460903]
We propose to leverage an invertible network to enhance low-light images in the forward process and degrade normal-light ones inversely, trained with unpaired learning.
In addition to the adversarial loss, we design various loss functions to ensure the stability of training and preserve more image details.
We present a progressive self-guided enhancement process for low-light images and achieve favorable performance against the SOTAs.
arXiv Detail & Related papers (2021-12-24T17:00:54Z) - LTT-GAN: Looking Through Turbulence by Inverting GANs [86.25869403782957]
We propose the first turbulence mitigation method that makes use of visual priors encapsulated by a well-trained GAN.
Based on the visual priors, we propose to learn to preserve the identity of restored images on a periodic contextual distance.
Our method significantly outperforms prior art in both the visual quality and face verification accuracy of restored results.
arXiv Detail & Related papers (2021-12-04T16:42:13Z) - Towards General Deep Leakage in Federated Learning [13.643899029738474]
Federated learning (FL) improves the performance of the global model by sharing and aggregating local models rather than local data to protect the users' privacy.
Some research has demonstrated that an attacker can still recover private data based on the shared gradient information.
We propose methods that can reconstruct the training data from shared gradients or weights, corresponding to the FedSGD and FedAvg usage scenarios.
arXiv Detail & Related papers (2021-10-18T07:49:52Z)
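The FedSGD and FedAvg scenarios in the last entry differ mainly in what the attacker observes: a raw gradient versus updated weights. Below is a minimal sketch of how a gradient-like signal can be obtained in each case; the FedAvg approximation assumes plain local SGD with a learning rate and step count known to the attacker, which is an assumption for illustration rather than something guaranteed by the protocol.

```python
import torch

def grads_from_fedsgd(shared_update):
    """FedSGD: the shared update is already the per-round gradient."""
    return [g.clone() for g in shared_update]

def grads_from_fedavg(old_weights, new_weights, client_lr, local_steps):
    """FedAvg: approximate an averaged gradient from the weight delta,
    assuming plain SGD on the client: w_new ~ w_old - lr * steps * avg_grad."""
    return [(w_old - w_new) / (client_lr * local_steps)
            for w_old, w_new in zip(old_weights, new_weights)]
```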