RAF-GI: Towards Robust, Accurate and Fast-Convergent Gradient Inversion Attack in Federated Learning
- URL: http://arxiv.org/abs/2403.08383v1
- Date: Wed, 13 Mar 2024 09:48:04 GMT
- Title: RAF-GI: Towards Robust, Accurate and Fast-Convergent Gradient Inversion Attack in Federated Learning
- Authors: Can Liu and Jin Wang and Dongyang Yu
- Abstract summary: We present a Robust, Accurate and Fast-convergent GI attack algorithm, called RAF-GI, with two components.
RAF-GI reduces time costs by 94% while achieving superb inversion quality on the ImageNet dataset.
With a batch size of 1, RAF-GI achieves a Peak Signal-to-Noise Ratio (PSNR) 7.89 dB higher than the state-of-the-art baselines.
- Score: 5.689524859498987
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) empowers privacy-preservation in model training by
only exposing users' model gradients. Yet, FL users are susceptible to the
gradient inversion (GI) attack which can reconstruct ground-truth training data
such as images from model gradients. However, existing GI attacks face two
challenges when reconstructing high-resolution images: inferior accuracy and
slow convergence, especially in complicated settings, e.g., when the training
batch size on each FL user is much greater than 1. To address these
challenges, we present a Robust, Accurate and
Fast-convergent GI attack algorithm, called RAF-GI, with two components: 1)
Additional Convolution Block (ACB) which can restore labels with up to 20%
improvement compared with existing works; 2) Total variance, three-channel mEan
and cAnny edge detection regularization term (TEA), which is a white-box attack
strategy to reconstruct images based on labels inferred by ACB. Moreover,
RAF-GI is robust in that it can still accurately reconstruct ground-truth data
when the users' training batch size is no more than 48. Our experimental
results show that RAF-GI reduces time costs by 94% while achieving superb
inversion quality on the ImageNet dataset. Notably, with a batch size of 1,
RAF-GI achieves a Peak Signal-to-Noise Ratio (PSNR) 7.89 dB higher than the
state-of-the-art baselines.
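As a rough illustration of the regularization idea behind TEA, the sketch below implements two of its three ingredients, a total-variation smoothness term and a per-channel mean prior, in plain Python. The Canny edge term, the loss weights, and the exact functional forms are not given in this abstract, so everything here is an assumed, minimal stand-in rather than the paper's formulation.

```python
# Hedged sketch of a TEA-like regularizer: total variation (smoothness)
# plus a per-channel mean prior. The Canny edge term and the paper's
# actual weights/forms are omitted; all names here are illustrative.

def total_variation(channel):
    """Anisotropic total variation of a 2D channel (list of rows)."""
    h, w = len(channel), len(channel[0])
    tv = 0.0
    for i in range(h):
        for j in range(w):
            if j + 1 < w:
                tv += abs(channel[i][j + 1] - channel[i][j])
            if i + 1 < h:
                tv += abs(channel[i + 1][j] - channel[i][j])
    return tv

def channel_mean_penalty(channel, target_mean):
    """Deviation of the channel mean from a dataset-level prior mean."""
    vals = [v for row in channel for v in row]
    return abs(sum(vals) / len(vals) - target_mean)

def tea_like_regularizer(image, target_means, w_tv=1.0, w_mean=1.0):
    """image: list of 3 channels; target_means: per-channel prior means."""
    return sum(w_tv * total_variation(c) + w_mean * channel_mean_penalty(c, m)
               for c, m in zip(image, target_means))
```

In a white-box GI attack, a term like this would be added to the gradient-matching loss and minimized jointly while optimizing the dummy image.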
Related papers
- Non-Visible Light Data Synthesis and Application: A Case Study for Synthetic Aperture Radar Imagery [30.590315753622132]
We explore the "hidden" ability of large-scale pre-trained image generation models, such as Stable Diffusion and Imagen, in non-visible light domains.
We propose a 2-stage low-rank adaptation method, and we call it 2LoRA.
In the first stage, the model is adapted using aerial-view regular image data (whose structure matches SAR), followed by the second stage where the base model from the first stage is further adapted using SAR modality data.
arXiv Detail & Related papers (2023-11-29T09:48:01Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- GIFD: A Generative Gradient Inversion Method with Feature Domain Optimization [52.55628139825667]
Federated Learning (FL) has emerged as a promising distributed machine learning framework to preserve clients' privacy.
Recent studies find that an attacker can invert the shared gradients and recover sensitive data against an FL system by leveraging pre-trained generative adversarial networks (GAN) as prior knowledge.
We propose Gradient Inversion over Feature Domains (GIFD), which disassembles the GAN model and searches the feature domains of the intermediate layers.
arXiv Detail & Related papers (2023-08-09T04:34:21Z)
- Breaking Through the Haze: An Advanced Non-Homogeneous Dehazing Method based on Fast Fourier Convolution and ConvNeXt [14.917290578644424]
Haze usually leads to deteriorated images with low contrast, color shift and structural distortion.
We propose a novel two-branch network that leverages the 2D discrete wavelet transform (DWT), fast Fourier convolution (FFC) residual blocks and a pretrained ConvNeXt model.
Our model is able to effectively explore global contextual information and produce images with better perceptual quality.
arXiv Detail & Related papers (2023-05-08T02:59:02Z)
- Contrastive Feature Loss for Image Prediction [55.373404869092866]
Training supervised image synthesis models requires a critic to compare two images: the ground truth and the result.
We introduce an information theory based approach to measuring similarity between two images.
We show that our formulation boosts the perceptual realism of output images when used as a drop-in replacement for the L1 loss.
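The abstract does not give the exact formulation, but information-theoretic similarity losses of this kind are commonly instantiated as an InfoNCE-style contrastive objective: the predicted feature should match its ground-truth feature (the positive) better than a set of negatives. The sketch below is a generic, assumed instance, not the paper's loss.

```python
import math

# Hedged sketch of an InfoNCE-style contrastive loss. The positive is
# the matching ground-truth feature; negatives come from other
# locations or images. The paper's exact objective may differ.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def info_nce(pred, positive, negatives, temperature=0.1):
    """-log( exp(s_pos/t) / (exp(s_pos/t) + sum_k exp(s_neg_k/t)) )"""
    s_pos = dot(pred, positive) / temperature
    s_negs = [dot(pred, n) / temperature for n in negatives]
    denom = math.exp(s_pos) + sum(math.exp(s) for s in s_negs)
    return -math.log(math.exp(s_pos) / denom)
```

Used as a drop-in replacement for an L1 loss, the critic compares feature vectors rather than raw pixel differences.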
arXiv Detail & Related papers (2021-11-12T20:39:52Z)
- Towards General Deep Leakage in Federated Learning [13.643899029738474]
Federated learning (FL) improves the performance of the global model by sharing and aggregating local models rather than local data to protect the users' privacy.
Some research has demonstrated that an attacker can still recover private data based on the shared gradient information.
We propose methods that can reconstruct the training data from shared gradients or weights, corresponding to the FedSGD and FedAvg usage scenarios.
arXiv Detail & Related papers (2021-10-18T07:49:52Z)
- Ultra-Data-Efficient GAN Training: Drawing A Lottery Ticket First, Then Training It Toughly [114.81028176850404]
Training generative adversarial networks (GANs) with limited data generally results in deteriorated performance and collapsed models.
We decompose the data-hungry GAN training into two sequential sub-problems.
Such a coordinated framework enables us to focus on lower-complexity and more data-efficient sub-problems.
arXiv Detail & Related papers (2021-02-28T05:20:29Z)
- Differentiable Augmentation for Data-Efficient GAN Training [48.920992130257595]
We propose DiffAugment, a simple method that improves the data efficiency of GANs by imposing various types of differentiable augmentations on both real and fake samples.
Our method can generate high-fidelity images using only 100 images without pre-training, while being on par with existing transfer learning algorithms.
arXiv Detail & Related papers (2020-06-18T17:59:01Z)
- RAFT: Recurrent All-Pairs Field Transforms for Optical Flow [78.92562539905951]
We introduce Recurrent All-Pairs Field Transforms (RAFT), a new deep network architecture for optical flow.
RAFT extracts per-pixel features, builds multi-scale 4D correlation volumes for all pairs of pixels, and iteratively updates a flow field.
RAFT achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-03-26T17:12:42Z)
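The all-pairs correlation idea above can be sketched directly: for two H×W feature maps with D-dimensional features, the 4D volume stores the dot product between every pair of pixels. This minimal sketch uses plain Python lists and omits RAFT's multi-scale pooling and recurrent flow updates.

```python
# Minimal sketch of RAFT's all-pairs 4D correlation volume:
# corr[i][j][k][l] = <f1[i][j], f2[k][l]>. Multi-scale pooling and the
# iterative flow-field updates are omitted.

def correlation_volume(f1, f2):
    """f1, f2: H x W grids of D-dimensional feature vectors."""
    h, w = len(f1), len(f1[0])
    return [[[[sum(a * b for a, b in zip(f1[i][j], f2[k][l]))
               for l in range(w)]
              for k in range(h)]
             for j in range(w)]
            for i in range(h)]
```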
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.