GIFD: A Generative Gradient Inversion Method with Feature Domain
Optimization
- URL: http://arxiv.org/abs/2308.04699v2
- Date: Mon, 11 Sep 2023 02:00:51 GMT
- Title: GIFD: A Generative Gradient Inversion Method with Feature Domain
Optimization
- Authors: Hao Fang, Bin Chen, Xuan Wang, Zhi Wang, Shu-Tao Xia
- Abstract summary: Federated Learning (FL) has emerged as a promising distributed machine learning framework to preserve clients' privacy.
Recent studies find that an attacker can invert the shared gradients and recover sensitive data against an FL system by leveraging pre-trained generative adversarial networks (GAN) as prior knowledge.
We propose Gradient Inversion over Feature Domains (GIFD), which disassembles the GAN model and searches the feature domains of the intermediate layers.
- Score: 52.55628139825667
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Federated Learning (FL) has recently emerged as a promising distributed
machine learning framework to preserve clients' privacy, by allowing multiple
clients to upload the gradients calculated from their local data to a central
server. Recent studies find that the exchanged gradients also take the risk of
privacy leakage, e.g., an attacker can invert the shared gradients and recover
sensitive data against an FL system by leveraging pre-trained generative
adversarial networks (GAN) as prior knowledge. However, performing gradient
inversion attacks in the latent space of the GAN model limits its
expressiveness and generalizability. To tackle these challenges, we propose
\textbf{G}radient \textbf{I}nversion over \textbf{F}eature \textbf{D}omains
(GIFD), which disassembles the GAN model and searches the feature domains of
the intermediate layers. Instead of optimizing only over the initial latent
code, we progressively change the optimized layer, from the initial latent
space to intermediate layers closer to the output images. In addition, we
design a regularizer to avoid unreal image generation by adding a small $\ell_1$-ball
constraint on the search range. We also extend GIFD to the
out-of-distribution (OOD) setting, which weakens the assumption that the
training sets of GANs and FL tasks obey the same data distribution. Extensive
experiments demonstrate that our method can achieve pixel-level reconstruction
and is superior to the existing methods. Notably, GIFD also shows great
generalizability under different defense strategy settings and batch sizes.
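The attack loop the abstract describes can be sketched in miniature: an attacker optimizes a dummy input so that the gradient it induces matches the client's shared gradient, with an $\ell_1$-ball projection standing in for the paper's search-range constraint. The sketch below is illustrative only, under strong simplifications: a single linear layer replaces the GAN generator and FL network, and a numerical gradient replaces backpropagation. All function names here are hypothetical, not from the paper's code.

```python
import numpy as np

def l1_ball_project(v, radius):
    """Project v onto the l1 ball of the given radius (sort-based method)."""
    if np.abs(v).sum() <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def client_gradient(W, x, y):
    """Gradient of the client loss 0.5 * ||W x - y||^2 with respect to W."""
    return np.outer(W @ x - y, x)

def invert_gradient(W, y, g_shared, x0, steps=3000, lr=0.05, radius=None):
    """Recover the client's input by gradient matching (toy stand-in for the attack)."""
    x_hat = x0.astype(float).copy()
    eps = 1e-6
    losses = []
    for _ in range(steps):
        # Gradient-matching loss between the dummy gradient and the shared one.
        base = np.sum((client_gradient(W, x_hat, y) - g_shared) ** 2)
        losses.append(base)
        grad = np.zeros_like(x_hat)
        for i in range(x_hat.size):  # numerical gradient of the matching loss
            xp = x_hat.copy()
            xp[i] += eps
            grad[i] = (np.sum((client_gradient(W, xp, y) - g_shared) ** 2) - base) / eps
        x_hat = x_hat - lr * grad
        if radius is not None:  # l1-ball constraint around the starting point
            x_hat = x0 + l1_ball_project(x_hat - x0, radius)
    return x_hat, losses
```

In GIFD the variable being optimized would be a latent code or an intermediate GAN feature map rather than the raw input, and the optimization would move progressively through the generator's layers; the constraint and matching objective play the same roles as here.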
Related papers
- MS$^3$D: A RG Flow-Based Regularization for GAN Training with Limited Data [16.574346252357653]
We propose a novel regularization method based on the idea of renormalization group (RG) in physics.
We show that our method can effectively enhance the performance and stability of GANs under limited data scenarios.
arXiv Detail & Related papers (2024-08-20T18:37:37Z) - A Closer Look at GAN Priors: Exploiting Intermediate Features for Enhanced Model Inversion Attacks [43.98557963966335]
Model Inversion (MI) attacks aim to reconstruct privacy-sensitive training data from released models by utilizing output information.
Recent advances in generative adversarial networks (GANs) have contributed significantly to the improved performance of MI attacks.
We propose a novel method, Intermediate Features enhanced Generative Model Inversion (IF-GMI), which disassembles the GAN structure and exploits features between intermediate blocks.
arXiv Detail & Related papers (2024-07-18T19:16:22Z) - Gradient Inversion of Federated Diffusion Models [4.1355611383748005]
Diffusion models are becoming the de facto generative models, generating exceptionally high-resolution image data.
In this paper, we study the privacy risk of gradient inversion attacks.
We propose GIDM+, a triple-optimization method that coordinates the optimization of the unknown data.
arXiv Detail & Related papers (2024-05-30T18:00:03Z) - Unsupervised Discovery of Interpretable Directions in h-space of
Pre-trained Diffusion Models [63.1637853118899]
We propose the first unsupervised and learning-based method to identify interpretable directions in h-space of pre-trained diffusion models.
We employ a shift control module that works on h-space of pre-trained diffusion models to manipulate a sample into a shifted version of itself.
By jointly optimizing them, the model will spontaneously discover disentangled and interpretable directions.
arXiv Detail & Related papers (2023-10-15T18:44:30Z) - Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z) - LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral
Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging because the high-dimensional nature of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization to control the low-dimensional representation variance for the autoencoder training and to achieve high diversity of samples generated with the GAN.
arXiv Detail & Related papers (2023-04-29T00:25:02Z) - Subspace based Federated Unlearning [75.90552823500633]
Federated unlearning (FU) aims to remove a specified target client's contribution in FL to satisfy the user's right to be forgotten.
Most existing federated unlearning algorithms require the server to store the history of the parameter updates.
We propose a simple-yet-effective subspace based federated unlearning method, dubbed SFU, that lets the global model perform gradient ascent.
arXiv Detail & Related papers (2023-02-24T04:29:44Z) - FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z) - Recycling Model Updates in Federated Learning: Are Gradient Subspaces
Low-Rank? [26.055358499719027]
We propose the "Look-back Gradient Multiplier" (LBGM) algorithm, which exploits this low-rank property to enable gradient recycling.
We analytically characterize the convergence behavior of LBGM, revealing the nature of the trade-off between communication savings and model performance.
We show that LBGM is a general plug-and-play algorithm that can be used standalone or stacked on top of existing sparsification techniques for distributed model training.
arXiv Detail & Related papers (2022-02-01T09:05:32Z)
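The low-rank idea behind the LBGM entry above can be illustrated with a toy version of gradient recycling: if successive gradients lie near a common direction, a client can send a single scalar that rescales a gradient the server already holds, instead of a full gradient vector. This is a hedged reading of the abstract, not the paper's actual algorithm; the function name is hypothetical.

```python
import numpy as np

def lookback_multiplier(g_new, g_anchor):
    """Least-squares scalar rho minimizing ||g_new - rho * g_anchor||.

    Communicating rho (one float) replaces transmitting the full
    d-dimensional gradient, at the cost of the projection residual.
    """
    rho = float(g_new @ g_anchor) / float(g_anchor @ g_anchor)
    return rho, rho * g_anchor
```

When the approximation error grows too large, a real system would fall back to transmitting a fresh full gradient as the new anchor; that refresh schedule is where the communication/accuracy trade-off the summary mentions arises.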
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.