Combining Variational Modeling with Partial Gradient Perturbation to
Prevent Deep Gradient Leakage
- URL: http://arxiv.org/abs/2208.04767v1
- Date: Tue, 9 Aug 2022 13:23:29 GMT
- Title: Combining Variational Modeling with Partial Gradient Perturbation to
Prevent Deep Gradient Leakage
- Authors: Daniel Scheliga and Patrick Mäder and Marco Seeland
- Abstract summary: Gradient inversion attacks are a ubiquitous threat in collaborative learning of neural networks.
Recent work proposed a PRivacy EnhanCing mODulE (PRECODE) based on variational modeling as an extension for arbitrary model architectures.
In this work, we investigate the effect of PRECODE on gradient inversion attacks to reveal its underlying working principle.
We show that our approach requires less gradient perturbation to effectively preserve privacy without harming model performance.
- Score: 0.6021787236982659
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Exploiting gradient leakage to reconstruct supposedly private training data,
gradient inversion attacks are a ubiquitous threat in collaborative learning
of neural networks. To prevent gradient leakage without suffering from severe
loss in model performance, recent work proposed a PRivacy EnhanCing mODulE
(PRECODE) based on variational modeling as an extension for arbitrary model
architectures. In this work, we investigate the effect of PRECODE on gradient
inversion attacks to reveal its underlying working principle. We show that
variational modeling induces stochasticity on PRECODE's and its subsequent
layers' gradients that prevents gradient inversion attacks from converging. By
purposefully omitting those stochastic gradients during attack optimization, we
formulate an attack that can disable PRECODE's privacy preserving effects. To
ensure privacy preservation against such targeted attacks, we propose PRECODE
with Partial Perturbation (PPP), a strategic combination of variational
modeling and partial gradient perturbation. We conduct an extensive empirical
study on four seminal model architectures and two image classification
datasets. We find all architectures to be prone to gradient leakage, which can
be prevented by PPP. As a result, we show that our approach requires less
gradient perturbation to effectively preserve privacy without harming model
performance.
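The paper does not include code here, but the two ingredients named in the abstract, a variational bottleneck placed before the output layer (PRECODE) and gradient perturbation applied only to the layers that the bottleneck's stochasticity does not protect (the PPP idea), can be illustrated with a short PyTorch sketch. All class names, layer sizes, and the noise scale below are illustrative placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

class VariationalBottleneck(nn.Module):
    """PRECODE-style variational bottleneck: encode features into a Gaussian
    posterior and decode a reparameterized sample (source of stochastic gradients)."""
    def __init__(self, in_features: int, latent_dim: int):
        super().__init__()
        self.to_mu = nn.Linear(in_features, latent_dim)
        self.to_logvar = nn.Linear(in_features, latent_dim)
        self.decode = nn.Linear(latent_dim, in_features)

    def forward(self, x):
        mu, logvar = self.to_mu(x), self.to_logvar(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decode(z)

class ClassifierWithPrecode(nn.Module):
    """Toy classifier with the bottleneck inserted before the output layer."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 256), nn.ReLU())
        self.bottleneck = VariationalBottleneck(256, 64)
        self.head = nn.Linear(256, num_classes)

    def forward(self, x):
        return self.head(self.bottleneck(self.features(x)))

def partially_perturb_gradients(model, protected_prefixes=("features",), sigma=1e-2):
    """PPP idea (sketch): add noise only to gradients of layers that are NOT
    already covered by the bottleneck's stochasticity (here, the early layers)."""
    for name, p in model.named_parameters():
        if p.grad is not None and name.startswith(protected_prefixes):
            p.grad.add_(torch.randn_like(p.grad) * sigma)

# Usage sketch: compute gradients, perturb the early layers, then share them.
model = ClassifierWithPrecode()
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
partially_perturb_gradients(model)
shared_grads = [p.grad.clone() for p in model.parameters()]
```

The design point the abstract makes is that only part of the gradient needs perturbation, because the variational bottleneck already randomizes the gradients of its own and all subsequent layers.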
Related papers
- Gradient Diffusion: A Perturbation-Resilient Gradient Leakage Attack [13.764770382623812]
Gradient protection is a critical issue for the Federated Learning (FL) training process.
We propose the Perturbation-resilient Gradient Leakage Attack (PGLA).
Our insight is that capturing the disturbance level of perturbation during the diffusion reverse process can release the gradient denoising capability.
arXiv Detail & Related papers (2024-07-07T07:06:49Z) - A Theoretical Insight into Attack and Defense of Gradient Leakage in
Transformer [11.770915202449517]
The Deep Leakage from Gradients (DLG) attack has emerged as a prevalent and highly effective method for extracting sensitive training data by inspecting exchanged gradients.
This research presents a comprehensive analysis of the gradient leakage method when applied specifically to transformer-based models; a minimal sketch of such a gradient-matching attack appears after this list.
arXiv Detail & Related papers (2023-11-22T09:58:01Z) - Model-Based Reparameterization Policy Gradient Methods: Theory and
Practical Algorithms [88.74308282658133]
Reparameterization (RP) Policy Gradient Methods (PGMs) have been widely adopted for continuous control tasks in robotics and computer graphics.
Recent studies have revealed that, when applied to long-term reinforcement learning problems, model-based RP PGMs may experience chaotic and non-smooth optimization landscapes.
We propose a spectral normalization method to mitigate the exploding variance issue caused by long model unrolls.
arXiv Detail & Related papers (2023-10-30T18:43:21Z) - Privacy Preserving Federated Learning with Convolutional Variational
Bottlenecks [2.1301560294088318]
Recent work has proposed to prevent gradient leakage without loss of model utility by incorporating a PRivacy EnhanCing mODulE (PRECODE) based on variational modeling.
We show that variational modeling introduces stochasticity into the gradients of PRECODE and the subsequent layers in a neural network.
We formulate an attack that disables the privacy preserving effect of PRECODE by purposefully omitting these stochastic gradients during attack optimization.
arXiv Detail & Related papers (2023-09-08T16:23:25Z) - GIFD: A Generative Gradient Inversion Method with Feature Domain
Optimization [52.55628139825667]
Federated Learning (FL) has emerged as a promising distributed machine learning framework to preserve clients' privacy.
Recent studies find that an attacker can invert the shared gradients and recover sensitive data against an FL system by leveraging pre-trained generative adversarial networks (GAN) as prior knowledge.
We propose Gradient Inversion over Feature Domains (GIFD), which disassembles the GAN model and searches the feature domains of the intermediate layers.
arXiv Detail & Related papers (2023-08-09T04:34:21Z) - Securing Distributed SGD against Gradient Leakage Threats [13.979995939926154]
This paper presents a holistic approach to gradient leakage resilient distributed Stochastic Gradient Descent (SGD).
We analyze two types of strategies for privacy-enhanced federated learning: (i) gradient pruning with random selection or low-rank filtering and (ii) gradient perturbation with additive random noise or differential privacy noise; a toy sketch of both strategies appears after this list.
We present a gradient leakage resilient approach to securing distributed SGD in federated learning, with differential privacy controlled noise as the tool.
arXiv Detail & Related papers (2023-05-10T21:39:27Z) - Adaptive Perturbation for Adversarial Attack [50.77612889697216]
We propose a new gradient-based attack method for adversarial examples.
We use the exact gradient direction with a scaling factor for generating adversarial perturbations.
Our method exhibits higher transferability and outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-11-27T07:57:41Z) - PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage [0.8029049649310213]
Collaborative training of neural networks leverages distributed data by exchanging gradient information between different clients.
Gradient perturbation techniques have been proposed to enhance privacy, but they come at the cost of reduced model performance, increased convergence time, or increased data demand.
We introduce PRECODE, a PRivacy EnhanCing mODulE that can be used as a generic extension for arbitrary model architectures.
arXiv Detail & Related papers (2021-08-10T14:43:17Z) - Unleashing the Power of Contrastive Self-Supervised Visual Models via
Contrast-Regularized Fine-Tuning [94.35586521144117]
We investigate whether applying contrastive learning to fine-tuning would bring further benefits.
We propose Contrast-regularized tuning (Core-tuning), a novel approach for fine-tuning contrastive self-supervised visual models.
arXiv Detail & Related papers (2021-02-12T16:31:24Z) - Orthogonal Deep Models As Defense Against Black-Box Attacks [71.23669614195195]
We study the inherent weakness of deep models in black-box settings where the attacker may develop the attack using a model similar to the targeted model.
We introduce a novel gradient regularization scheme that encourages the internal representation of a deep model to be orthogonal to another.
We verify the effectiveness of our technique on a variety of large-scale models.
arXiv Detail & Related papers (2020-06-26T08:29:05Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
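Several of the entries above (DLG, GIFD, PGLA) build on the same gradient-matching principle: a dummy input and label are optimized until the gradients they induce match the gradients leaked by a client. The following is a minimal, generic PyTorch sketch of such a DLG-style inversion loop; it illustrates the principle only, not the procedure of any specific paper listed here, and the toy model and step counts are arbitrary.

```python
import torch
import torch.nn as nn

def dlg_style_inversion(model, observed_grads, input_shape, num_classes, steps=50):
    """Optimize dummy data and a soft label so their gradients match the leaked gradients."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)
    opt = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        opt.zero_grad()
        logits = model(dummy_x)
        # cross-entropy with a soft (optimized) label
        loss = torch.sum(-torch.softmax(dummy_y, dim=-1) * torch.log_softmax(logits, dim=-1))
        grads = torch.autograd.grad(loss, list(model.parameters()), create_graph=True)
        # gradient-matching objective: squared distance to the leaked gradients
        match = sum(((g - og) ** 2).sum() for g, og in zip(grads, observed_grads))
        match.backward()
        return match

    for _ in range(steps):
        opt.step(closure)
    return dummy_x.detach(), dummy_y.detach()

# Usage sketch on a toy model and a "leaked" gradient from one private sample.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x, y = torch.randn(1, 1, 28, 28), torch.tensor([3])
loss = nn.functional.cross_entropy(model(x), y)
observed = [g.detach() for g in torch.autograd.grad(loss, list(model.parameters()))]
recovered_x, recovered_y = dlg_style_inversion(model, observed, (1, 28, 28), 10)
```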
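The defenses surveyed in the "Securing Distributed SGD" entry, gradient pruning by random selection and noise-based gradient perturbation, are equally compact to state. The sketch below applies both to a client's gradients before sharing; the keep ratio, clipping norm, and noise scale are illustrative placeholders rather than recommended settings.

```python
import torch

def prune_gradients(grads, keep_ratio=0.5):
    """Gradient pruning by random selection: zero out a random subset of entries."""
    pruned = []
    for g in grads:
        mask = (torch.rand_like(g) < keep_ratio).to(g.dtype)
        pruned.append(g * mask)
    return pruned

def perturb_gradients(grads, clip_norm=1.0, noise_std=0.1):
    """DP-style perturbation: clip the total gradient norm, then add Gaussian noise."""
    total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = torch.clamp(clip_norm / (total_norm + 1e-12), max=1.0)
    return [g * scale + noise_std * torch.randn_like(g) for g in grads]

# Usage sketch: a client applies both defenses before sharing its gradients.
grads = [torch.randn(10, 5), torch.randn(10)]
shared = perturb_gradients(prune_gradients(grads))
```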