Privacy Preserving Federated Learning with Convolutional Variational
Bottlenecks
- URL: http://arxiv.org/abs/2309.04515v1
- Date: Fri, 8 Sep 2023 16:23:25 GMT
- Title: Privacy Preserving Federated Learning with Convolutional Variational
Bottlenecks
- Authors: Daniel Scheliga, Patrick Mäder, Marco Seeland
- Abstract summary: Recent work has proposed to prevent gradient leakage without loss of model utility by incorporating a PRivacy EnhanCing mODulE (PRECODE) based on variational modeling.
We show that variational modeling introduces stochasticity into the gradients of PRECODE and the subsequent layers in a neural network.
We formulate an attack that disables the privacy preserving effect of PRECODE by purposefully omitting stochastic gradients during attack optimization.
- Score: 2.1301560294088318
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gradient inversion attacks are a ubiquitous threat in federated learning as
they exploit gradient leakage to reconstruct supposedly private training data.
Recent work has proposed to prevent gradient leakage without loss of model
utility by incorporating a PRivacy EnhanCing mODulE (PRECODE) based on
variational modeling. Without further analysis, it was shown that PRECODE
successfully protects against gradient inversion attacks. In this paper, we
make multiple contributions. First, we investigate the effect of PRECODE on
gradient inversion attacks to reveal its underlying working principle. We show
that variational modeling introduces stochasticity into the gradients of
PRECODE and the subsequent layers in a neural network. The stochastic gradients
of these layers prevent iterative gradient inversion attacks from converging.
Second, we formulate an attack that disables the privacy preserving effect of
PRECODE by purposefully omitting stochastic gradients during attack
optimization. To preserve the privacy preserving effect of PRECODE, our
analysis reveals that variational modeling must be placed early in the network.
However, early placement of PRECODE is typically not feasible due to reduced
model utility and the exploding number of additional model parameters.
Therefore, as a third contribution, we propose a novel privacy module -- the
Convolutional Variational Bottleneck (CVB) -- that can be placed early in a
neural network without suffering from these drawbacks. We conduct an extensive
empirical study on three seminal model architectures and six image
classification datasets. We find that all architectures are susceptible to
gradient leakage attacks, which can be prevented by our proposed CVB. Compared
to PRECODE, we show that our novel privacy module requires fewer trainable
parameters, and thus computational and communication costs, to effectively
preserve privacy.
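To make the mechanism concrete, the following PyTorch-style sketch places a PRECODE-style fully connected variational bottleneck next to a convolutional variational bottleneck (CVB). It is a minimal illustration under assumed names and sizes (VariationalBottleneck, ConvVariationalBottleneck, latent_dim, latent_channels are illustrative choices), not the authors' implementation; the point is that both modules sample their output via the reparameterization trick, which is what makes their gradients, and those of all subsequent layers, stochastic. A matching sketch of an iterative inversion attack that omits these stochastic gradients follows the related-papers list below.

# Minimal sketch under assumed names and shapes; not the paper's code.
import torch
import torch.nn as nn

class VariationalBottleneck(nn.Module):
    """PRECODE-style dense bottleneck: flatten -> (mu, log_var) -> sample -> project back."""
    def __init__(self, in_features: int, latent_dim: int = 256):
        super().__init__()
        self.to_mu = nn.Linear(in_features, latent_dim)
        self.to_log_var = nn.Linear(in_features, latent_dim)
        self.back = nn.Linear(latent_dim, in_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shape = x.shape
        h = x.flatten(1)  # dense layers need flattened features
        mu, log_var = self.to_mu(h), self.to_log_var(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)  # reparameterization trick
        return self.back(z).view(shape)

class ConvVariationalBottleneck(nn.Module):
    """CVB sketch: 1x1 convolutions keep the spatial layout, so the parameter count
    scales with the number of channels rather than with the full feature-map size."""
    def __init__(self, channels: int, latent_channels: int = 8):
        super().__init__()
        self.to_mu = nn.Conv2d(channels, latent_channels, kernel_size=1)
        self.to_log_var = nn.Conv2d(channels, latent_channels, kernel_size=1)
        self.back = nn.Conv2d(latent_channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mu, log_var = self.to_mu(x), self.to_log_var(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)  # reparameterization trick
        return self.back(z)

Under these illustrative sizes, a dense bottleneck placed behind an early 64x32x32 feature map already needs roughly 64*32*32*256 ≈ 16.8M weights per projection, whereas the 1x1 convolutional variant needs only 64*8 = 512; this is the parameter explosion that makes early placement of PRECODE infeasible and motivates the CVB.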
Related papers
- Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable [70.77600345240867]
A novel arbitrary-in-arbitrary-out (AIAO) strategy makes watermarks resilient to fine-tuning-based removal.
Unlike the existing methods of designing a backdoor for the input/output space of diffusion models, in our method, we propose to embed the backdoor into the feature space of sampled subpaths.
Our empirical studies on the MS-COCO, AFHQ, LSUN, CUB-200, and DreamBooth datasets confirm the robustness of AIAO.
arXiv Detail & Related papers (2024-05-01T12:03:39Z) - Rethinking PGD Attack: Is Sign Function Necessary? [131.6894310945647]
We present a theoretical analysis of how such a sign-based update algorithm influences step-wise attack performance.
We propose a new raw gradient descent (RGD) algorithm that eliminates the use of sign.
The effectiveness of the proposed RGD algorithm has been demonstrated extensively in experiments.
arXiv Detail & Related papers (2023-12-03T02:26:58Z) - DPSUR: Accelerating Differentially Private Stochastic Gradient Descent
Using Selective Update and Release [29.765896801370612]
This paper proposes a differentially private training framework based on selective update and release (DPSUR).
The main challenges lie in two aspects: privacy concerns and the gradient selection strategy for model updates.
Experiments conducted on MNIST, FMNIST, CIFAR-10, and IMDB datasets show that DPSUR significantly outperforms previous works in terms of convergence speed.
arXiv Detail & Related papers (2023-11-23T15:19:30Z) - A Theoretical Insight into Attack and Defense of Gradient Leakage in
Transformer [11.770915202449517]
The Deep Leakage from Gradient (DLG) attack has emerged as a prevalent and highly effective method for extracting sensitive training data by inspecting exchanged gradients.
This research presents a comprehensive analysis of the gradient leakage method when applied specifically to transformer-based models.
arXiv Detail & Related papers (2023-11-22T09:58:01Z) - Understanding Deep Gradient Leakage via Inversion Influence Functions [53.1839233598743]
Deep Gradient Leakage (DGL) is a highly effective attack that recovers private training images from gradient vectors.
We propose a novel Inversion Influence Function (I$^2$F) that establishes a closed-form connection between the recovered images and the private gradients.
We empirically demonstrate that I$^2$F effectively approximates DGL across different model architectures, datasets, attack implementations, and perturbation-based defenses.
arXiv Detail & Related papers (2023-09-22T17:26:24Z) - GIFD: A Generative Gradient Inversion Method with Feature Domain
Optimization [52.55628139825667]
Federated Learning (FL) has emerged as a promising distributed machine learning framework to preserve clients' privacy.
Recent studies find that an attacker can invert the shared gradients and recover sensitive data against an FL system by leveraging pre-trained generative adversarial networks (GAN) as prior knowledge.
We propose Gradient Inversion over Feature Domains (GIFD), which disassembles the GAN model and searches the feature domains of the intermediate layers.
arXiv Detail & Related papers (2023-08-09T04:34:21Z) - Towards Practical Control of Singular Values of Convolutional Layers [65.25070864775793]
Convolutional neural networks (CNNs) are easy to train, but their essential properties, such as generalization error and adversarial robustness, are hard to control.
Recent research demonstrated that singular values of convolutional layers significantly affect such elusive properties.
We offer a principled approach to alleviating constraints of the prior art at the expense of an insignificant reduction in layer expressivity.
arXiv Detail & Related papers (2022-11-24T19:09:44Z) - Dropout is NOT All You Need to Prevent Gradient Leakage [0.6021787236982659]
We analyze the effect of dropout on iterative gradient inversion attacks.
We propose a novel Dropout Inversion Attack (DIA) that jointly optimizes for client data and dropout masks.
We find that our proposed attack bypasses the protection seemingly induced by dropout and reconstructs client data with high fidelity.
arXiv Detail & Related papers (2022-08-12T08:29:44Z) - Combining Variational Modeling with Partial Gradient Perturbation to
Prevent Deep Gradient Leakage [0.6021787236982659]
Gradient inversion attacks are a ubiquitous threat in collaborative learning of neural networks.
Recent work proposed a PRivacy EnhanCing mODulE (PRECODE) based on variational modeling as an extension for arbitrary model architectures.
In this work, we investigate the effect of PRECODE on gradient inversion attacks to reveal its underlying working principle.
We show that our approach requires less gradient perturbation to effectively preserve privacy without harming model performance.
arXiv Detail & Related papers (2022-08-09T13:23:29Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z) - PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage [0.8029049649310213]
Collaborative training of neural networks leverages distributed data by exchanging gradient information between different clients.
Gradient perturbation techniques have been proposed to enhance privacy, but they come at the cost of reduced model performance, increased convergence time, or increased data demand.
We introduce PRECODE, a PRivacy EnhanCing mODulE that can be used as generic extension for arbitrary model architectures.
arXiv Detail & Related papers (2021-08-10T14:43:17Z)
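As a companion to the sketch above, and to the DLG-style attacks summarized in this list, the following is a minimal sketch of an iterative gradient inversion loop in the spirit of the abstract's second contribution: dummy inputs are optimized until their gradients match the leaked ones, and an omit set lets the attacker purposefully exclude the stochastic gradients of the variational bottleneck (and later layers) from the matching objective. The function name, optimizer choice, L2 matching loss, and the assumption of known labels are all illustrative, not taken from any of the papers above.

# Illustrative DLG-style inversion loop; assumed API and known labels.
import torch

def invert_gradients(model, loss_fn, leaked_grads, x_shape, y, steps=2000, omit=frozenset()):
    """Optimize dummy inputs so that their gradients match the leaked gradients.

    omit holds indices of parameters (e.g. the variational bottleneck and all later
    layers) whose stochastic gradients are excluded from the matching loss."""
    params = list(model.parameters())
    dummy_x = torch.randn(x_shape, requires_grad=True)
    optimizer = torch.optim.Adam([dummy_x], lr=0.1)
    for _ in range(steps):
        optimizer.zero_grad()
        dummy_grads = torch.autograd.grad(
            loss_fn(model(dummy_x), y), params, create_graph=True
        )
        matching_loss = sum(
            ((dg - lg) ** 2).sum()
            for i, (dg, lg) in enumerate(zip(dummy_grads, leaked_grads))
            if i not in omit
        )
        matching_loss.backward()
        optimizer.step()
    return dummy_x.detach()

With omit empty, the noise injected by the bottleneck keeps the matching objective from converging, which is the protective effect analyzed in the abstract; excluding the affected parameter indices restores convergence, which is the essence of the attack that disables PRECODE.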