CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling
- URL: http://arxiv.org/abs/2501.15718v1
- Date: Mon, 27 Jan 2025 01:06:23 GMT
- Title: CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling
- Authors: Kaiyuan Zhang, Siyuan Cheng, Guangyu Shen, Bruno Ribeiro, Shengwei An, Pin-Yu Chen, Xiangyu Zhang, Ninghui Li,
- Abstract summary: Federated learning collaboratively trains a neural network on a global server.
Each local client receives the current global model weights and sends back parameter updates (gradients) based on its local private data.
Existing gradient inversion attacks can exploit this vulnerability to recover private training instances from a client's gradient vectors.
We present a novel defense tailored for large neural network models.
- Score: 63.07948989346385
- Abstract: Federated learning collaboratively trains a neural network on a global server, where each local client receives the current global model weights and sends back parameter updates (gradients) based on its local private data. The process of sending these model updates may leak a client's private data. Existing gradient inversion attacks can exploit this vulnerability to recover private training instances from a client's gradient vectors. Recently, researchers have proposed advanced gradient inversion techniques that existing defenses struggle to handle effectively. In this work, we present a novel defense tailored for large neural network models. Our defense capitalizes on the high dimensionality of the model parameters to perturb gradients within a subspace orthogonal to the original gradient. By leveraging cold posteriors over orthogonal subspaces, our defense implements a refined gradient update mechanism. This enables the selection of an optimal gradient that not only safeguards against gradient inversion attacks but also maintains model utility. We conduct comprehensive experiments across three different datasets and evaluate our defense against various state-of-the-art attacks and defenses. Code is available at https://censor-gradient.github.io.
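The core geometric step described in the abstract, perturbing the gradient only within the subspace orthogonal to it, can be illustrated in a few lines. The NumPy sketch below is a toy under stated assumptions (plain Gaussian noise and an illustrative `noise_scale` parameter); it does not reproduce the paper's cold-posterior sampling over orthogonal subspaces or its selection of an optimal candidate gradient.

```python
import numpy as np

def orthogonal_perturbation(grad, noise_scale=0.1, rng=None):
    """Add noise restricted to the subspace orthogonal to `grad`.

    The Gaussian sampler and `noise_scale` are illustrative assumptions,
    not the sampling scheme used by CENSOR.
    """
    rng = rng or np.random.default_rng()
    g = grad.ravel().astype(float)
    noise = rng.normal(size=g.shape)
    g_norm_sq = float(g @ g)
    if g_norm_sq > 0.0:
        # Remove the component of the noise parallel to the gradient:
        # noise_perp = noise - (noise . g / ||g||^2) * g
        noise -= (noise @ g) / g_norm_sq * g
    # Scale the orthogonal noise relative to the gradient's magnitude.
    noise *= noise_scale * np.linalg.norm(g) / (np.linalg.norm(noise) + 1e-12)
    return (g + noise).reshape(grad.shape)

# Sanity check: the perturbation is orthogonal to the original gradient,
# so the inner product with the true gradient is unchanged.
g = np.random.default_rng(0).normal(size=(4, 3))
g_tilde = orthogonal_perturbation(g)
print(np.allclose((g_tilde - g).ravel() @ g.ravel(), 0.0))  # True
```

Because the injected noise has no component along the original gradient, the perturbed update keeps the same inner product with the true gradient, which is one way to obscure what an attacker inverts while preserving the update's contribution along the descent direction.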
Related papers
- Gradient Purification: Defense Against Poisoning Attack in Decentralized Federated Learning [21.892850886276317]
The gradient purification defense, named GPD, integrates seamlessly with existing DFL aggregation to defend against poisoning attacks.
It mitigates the harm in model gradients while retaining the benefit in model weights, thereby enhancing accuracy.
It significantly outperforms state-of-the-art defenses in terms of accuracy against various poisoning attacks.
arXiv Detail & Related papers (2025-01-08T12:14:00Z) - Understanding Deep Gradient Leakage via Inversion Influence Functions [53.1839233598743]
Deep Gradient Leakage (DGL) is a highly effective attack that recovers private training images from gradient vectors.
We propose a novel Inversion Influence Function (I$^2$F) that establishes a closed-form connection between the recovered images and the private gradients.
We empirically demonstrate that I$^2$F effectively approximates DGL across different model architectures, datasets, attack implementations, and perturbation-based defenses.
arXiv Detail & Related papers (2023-09-22T17:26:24Z) - Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without directly sharing data with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z) - Privacy Preserving Federated Learning with Convolutional Variational Bottlenecks [2.1301560294088318]
Recent work has proposed to prevent gradient leakage without loss of model utility by incorporating a PRivacy EnhanCing mODulE (PRECODE) based on variational modeling.
We show that variational modeling introduces stochasticity into the gradients of PRECODE and the subsequent layers in a neural network.
We formulate an attack that disables the privacy-preserving effect of PRECODE by purposefully omitting these stochastic gradients during attack optimization.
arXiv Detail & Related papers (2023-09-08T16:23:25Z) - GIFD: A Generative Gradient Inversion Method with Feature Domain Optimization [52.55628139825667]
Federated Learning (FL) has emerged as a promising distributed machine learning framework to preserve clients' privacy.
Recent studies find that an attacker can invert the shared gradients and recover sensitive data against an FL system by leveraging pre-trained generative adversarial networks (GAN) as prior knowledge.
We propose Gradient Inversion over Feature Domains (GIFD), which disassembles the GAN model and searches the feature domains of the intermediate layers.
arXiv Detail & Related papers (2023-08-09T04:34:21Z) - Subspace based Federated Unlearning [75.90552823500633]
Federated unlearning aims to remove a specified target client's contribution from federated learning (FL) to satisfy the user's right to be forgotten.
Most existing federated unlearning algorithms require the server to store the history of the parameter updates.
We propose a simple-yet-effective subspace based federated unlearning method, dubbed SFU, that lets the global model perform gradient ascent in a subspace orthogonal to the gradient space formed by the other clients, removing the target client's contribution without additional storage.
arXiv Detail & Related papers (2023-02-24T04:29:44Z) - Dropout is NOT All You Need to Prevent Gradient Leakage [0.6021787236982659]
We analyze the effect of dropout on iterative gradient inversion attacks.
We propose a novel Dropout Inversion Attack (DIA) that jointly optimizes for client data and dropout masks.
We find that our proposed attack bypasses the protection seemingly induced by dropout and reconstructs client data with high fidelity.
arXiv Detail & Related papers (2022-08-12T08:29:44Z) - Defense Against Gradient Leakage Attacks via Learning to Obscure Data [48.67836599050032]
Federated learning is considered an effective privacy-preserving learning mechanism.
In this paper, we propose a new defense method to protect the privacy of clients' data by learning to obscure data.
arXiv Detail & Related papers (2022-06-01T21:03:28Z) - PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage [0.8029049649310213]
Collaborative training of neural networks leverages distributed data by exchanging gradient information between different clients.
Gradient perturbation techniques have been proposed to enhance privacy, but they come at the cost of reduced model performance, increased convergence time, or increased data demand.
We introduce PRECODE, a PRivacy EnhanCing mODulE that can be used as a generic extension for arbitrary model architectures.
arXiv Detail & Related papers (2021-08-10T14:43:17Z) - R-GAP: Recursive Gradient Attack on Privacy [5.687523225718642]
Federated learning is a promising approach to break the dilemma between demands on privacy and the promise of learning from large collections of distributed data.
We provide a closed-form recursion procedure to recover data from gradients in deep neural networks; a minimal single-layer illustration of this kind of closed-form recovery is sketched after this list.
We also propose a Rank Analysis method to estimate the risk of gradient attacks inherent in certain network architectures.
arXiv Detail & Related papers (2020-10-15T13:22:40Z)
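To make concrete why closed-form recovery from gradients is possible at all, the sketch below works through the textbook single-layer case that motivates recursions like R-GAP's: for a linear layer z = W x + b evaluated on one example, the shared gradients satisfy dL/dW = (dL/dz) x^T and dL/db = dL/dz, so any row with a nonzero bias gradient reveals x exactly. This is only the building block; R-GAP's full layer-by-layer recursion and its rank analysis are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# One private example x passed through a linear layer z = W x + b.
x = rng.normal(size=5)
W = rng.normal(size=(3, 5))
b = rng.normal(size=3)
z = W @ x + b                       # forward pass (shown for context only)
dL_dz = rng.normal(size=3)          # stand-in for the loss gradient at the layer output

# Gradients a client would share with the server:
grad_W = np.outer(dL_dz, x)         # dL/dW = (dL/dz) x^T
grad_b = dL_dz                      # dL/db = dL/dz

# Closed-form recovery: any row i with a nonzero bias gradient reveals x exactly.
i = int(np.argmax(np.abs(grad_b)))
x_recovered = grad_W[i] / grad_b[i]
print(np.allclose(x_recovered, x))  # True
```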