Gradient Leakage Attack Resilient Deep Learning
- URL: http://arxiv.org/abs/2112.13178v1
- Date: Sat, 25 Dec 2021 03:33:02 GMT
- Title: Gradient Leakage Attack Resilient Deep Learning
- Authors: Wenqi Wei and Ling Liu
- Abstract summary: Gradient leakage attacks are considered one of the most severe privacy threats in deep learning.
Deep learning with differential privacy is a de facto standard for publishing deep learning models with a differential privacy guarantee.
This paper investigates alternative approaches to gradient leakage resilient deep learning with differential privacy.
- Score: 7.893378392969824
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gradient leakage attacks are considered one of the most severe privacy
threats in deep learning: attackers covertly spy on gradient updates during
iterative training without compromising model training quality, yet secretly
reconstruct sensitive training data from the leaked gradients with a high
attack success rate. Although deep learning with differential privacy is a de
facto standard for publishing deep learning models with a differential privacy
guarantee, we show that differentially private algorithms with fixed privacy
parameters are vulnerable to gradient leakage attacks. This paper investigates
alternative approaches to gradient leakage resilient deep learning with
differential privacy (DP). First, we analyze existing implementations of deep
learning with differential privacy, which use a fixed noise variance to inject
constant noise into the gradients of all layers, based on fixed privacy
parameters. Despite the DP guarantee provided, this method suffers from low
accuracy and remains vulnerable to gradient leakage attacks. Second, we present
a gradient leakage resilient deep learning approach with a differential privacy
guarantee that uses dynamic privacy parameters. Unlike fixed-parameter
strategies, which result in constant noise variance, dynamic parameter
strategies introduce adaptive noise variance and adaptive noise injection,
closely aligned with the trend of gradient updates during differentially
private model training. Finally, we describe four complementary metrics to
evaluate and compare the alternative approaches.
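The contrast between fixed-noise and dynamic-noise DP-SGD described in the abstract can be sketched in plain Python. The clipping and Gaussian-noise steps are standard DP-SGD mechanics; the linear decay schedule in the adaptive variant is a hypothetical illustration of "noise aligned with the trend of gradient updates", not the paper's exact strategy:

```python
import math
import random

random.seed(0)

def l2_norm(v):
    return math.sqrt(sum(x * x for x in v))

def clip(grad, clip_norm):
    """Standard DP-SGD per-example clipping to L2 norm clip_norm."""
    scale = min(1.0, clip_norm / max(l2_norm(grad), 1e-12))
    return [x * scale for x in grad]

def noisy_mean(grads, sigma, clip_norm):
    """Sum clipped per-example gradients, add Gaussian noise of scale
    sigma * clip_norm per coordinate, and average over the batch."""
    clipped = [clip(g, clip_norm) for g in grads]
    dim = len(grads[0])
    total = [sum(g[i] for g in clipped) for i in range(dim)]
    return [(t + random.gauss(0.0, sigma * clip_norm)) / len(grads)
            for t in total]

def fixed_noise_step(grads, clip_norm=1.0, sigma=1.0):
    """Fixed-parameter DP-SGD: the same noise variance is injected
    at every step and every layer of training."""
    return noisy_mean(grads, sigma, clip_norm)

def adaptive_noise_step(grads, step, total_steps, clip_norm=1.0, sigma0=2.0):
    """Dynamic-parameter variant: the noise scale decays as training
    progresses, loosely tracking the shrinking magnitude of gradient
    updates. The decay schedule here is an illustrative choice."""
    sigma = sigma0 * (1.0 - 0.5 * step / total_steps)
    return noisy_mean(grads, sigma, clip_norm)
```

A formal DP accounting (the epsilon/delta budget spent by each schedule) is omitted here; in practice the dynamic schedule must still be composed into a total privacy budget.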
Related papers
- DPSUR: Accelerating Differentially Private Stochastic Gradient Descent
Using Selective Update and Release [29.765896801370612]
This paper proposes DPSUR, a differentially private training framework based on selective update and release.
The main challenges lie in two aspects: privacy concerns, and the gradient selection strategy for model updates.
Experiments conducted on MNIST, FMNIST, CIFAR-10, and IMDB datasets show that DPSUR significantly outperforms previous works in terms of convergence speed.
arXiv Detail & Related papers (2023-11-23T15:19:30Z)
- DP-SGD with weight clipping [1.0878040851638]
We present a novel approach that mitigates the bias arising from traditional gradient clipping.
By leveraging a public upper bound on the Lipschitz constant of the current model and its current location within the search domain, we can achieve refined noise-level adjustments.
arXiv Detail & Related papers (2023-10-27T09:17:15Z)
- Securing Distributed SGD against Gradient Leakage Threats [13.979995939926154]
This paper presents a holistic approach to gradient leakage resilient distributed stochastic gradient descent (SGD).
We analyze two types of strategies for privacy-enhanced federated learning: (i) gradient pruning with random selection or low-rank filtering and (ii) gradient perturbation with additive random noise or differential privacy noise.
We present a gradient leakage resilient approach to securing distributed SGD in federated learning, with differential privacy controlled noise as the tool.
arXiv Detail & Related papers (2023-05-10T21:39:27Z)
- Adap DP-FL: Differentially Private Federated Learning with Adaptive Noise [30.005017338416327]
Federated learning seeks to address the issue of isolated data islands by having clients disclose only their local training models.
Recently, differential privacy has been applied to federated learning to protect data privacy, but the added noise may substantially degrade learning performance.
We propose a differentially private scheme for federated learning with adaptive noise (Adap DP-FL).
arXiv Detail & Related papers (2022-11-29T03:20:40Z)
- On the Interaction Between Differential Privacy and Gradient Compression in Deep Learning [55.22219308265945]
We study how the Gaussian mechanism for differential privacy and gradient compression jointly impact test accuracy in deep learning.
We observe that while gradient compression generally has a negative impact on test accuracy in non-private training, it can sometimes improve test accuracy in differentially private training.
arXiv Detail & Related papers (2022-11-01T20:28:45Z)
- Mixed Precision Quantization to Tackle Gradient Leakage Attacks in Federated Learning [1.7205106391379026]
Federated Learning (FL) enables collaborative model building among a large number of participants without the need for explicit data sharing.
This approach shows vulnerabilities when privacy inference attacks are applied to it.
In particular, in the event of a gradient leakage attack, which has a high success rate in retrieving sensitive training data from model gradients, FL models are at elevated risk due to the communication inherent in their architecture.
arXiv Detail & Related papers (2022-10-22T04:24:32Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing privacy-preserving machine learning algorithms that ensure good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
- Adaptive Differentially Private Empirical Risk Minimization [95.04948014513226]
We propose an adaptive (stochastic) gradient perturbation method for differentially private empirical risk minimization.
We prove that the ADP method considerably improves the utility guarantee compared to the standard differentially private method in which vanilla random noise is added.
arXiv Detail & Related papers (2021-10-14T15:02:20Z)
- Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning [74.73901662374921]
A differentially private model degrades the utility drastically when the model comprises a large number of trainable parameters.
We propose an algorithm, Gradient Embedding Perturbation (GEP), for training differentially private deep models with decent accuracy.
arXiv Detail & Related papers (2021-02-25T04:29:58Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
- An Accuracy-Lossless Perturbation Method for Defending Privacy Attacks in Federated Learning [82.80836918594231]
Federated learning improves the privacy of training data by exchanging local gradients or parameters rather than raw data.
An adversary can still leverage local gradients and parameters to obtain local training data by launching reconstruction and membership inference attacks.
To defend against such privacy attacks, many noise perturbation methods have been designed.
arXiv Detail & Related papers (2020-02-23T06:50:20Z)
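Among the defenses surveyed above, the two strategy families from "Securing Distributed SGD against Gradient Leakage Threats" can be sketched minimally as follows. Function names and parameters are illustrative, not the paper's API, and the low-rank filtering and DP-accounting variants mentioned in that abstract are omitted:

```python
import random

random.seed(0)

def prune_random(grad, keep_fraction=0.5):
    """Gradient pruning by random selection: zero out a random subset of
    coordinates before sharing, so a leaked gradient carries less
    information about the training example."""
    keep_count = int(len(grad) * keep_fraction)
    kept = set(random.sample(range(len(grad)), keep_count))
    return [g if i in kept else 0.0 for i, g in enumerate(grad)]

def perturb_additive(grad, noise_scale=0.1):
    """Gradient perturbation with additive random noise: mask the true
    gradient values with zero-mean Gaussian noise before sharing."""
    return [g + random.gauss(0.0, noise_scale) for g in grad]
```

In a federated setting, each client would apply one of these transforms to its local gradient before upload; the tension in both is the same accuracy-versus-leakage trade-off the papers above study.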
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.