Securing Distributed SGD against Gradient Leakage Threats
- URL: http://arxiv.org/abs/2305.06473v1
- Date: Wed, 10 May 2023 21:39:27 GMT
- Title: Securing Distributed SGD against Gradient Leakage Threats
- Authors: Wenqi Wei, Ling Liu, Jingya Zhou, Ka-Ho Chow, and Yanzhao Wu
- Abstract summary: This paper presents a holistic approach to gradient leakage resilient distributed Stochastic Gradient Descent (SGD).
We analyze two types of strategies for privacy-enhanced federated learning: (i) gradient pruning with random selection or low-rank filtering and (ii) gradient perturbation with additive random noise or differential privacy noise.
We present a gradient leakage resilient approach to securing distributed SGD in federated learning, with differential privacy controlled noise as the tool.
- Score: 13.979995939926154
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a holistic approach to gradient leakage resilient
distributed Stochastic Gradient Descent (SGD). First, we analyze two types of
strategies for privacy-enhanced federated learning: (i) gradient pruning with
random selection or low-rank filtering and (ii) gradient perturbation with
additive random noise or differential privacy noise. We analyze the inherent
limitations of these approaches and their underlying impact on privacy
guarantee, model accuracy, and attack resilience. Next, we present a gradient
leakage resilient approach to securing distributed SGD in federated learning,
with differential privacy controlled noise as the tool. Unlike conventional
methods that inject noise per client with a fixed noise parameter, our approach
tracks the trend of per-example gradient updates and keeps the adaptive noise
injection closely aligned with that trend throughout federated model training.
Finally, we provide an empirical privacy analysis on
the privacy guarantee, model utility, and attack resilience of the proposed
approach. Extensive evaluation using five benchmark datasets demonstrates that
our gradient leakage resilient approach can outperform the state-of-the-art
methods with competitive accuracy performance, strong differential privacy
guarantee, and high resilience against gradient leakage attacks. The code
associated with this paper can be found at
https://github.com/git-disl/Fed-alphaCDP.
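To make the two baseline families in the abstract concrete, the sketch below illustrates (i) gradient pruning with random selection or low-rank filtering and (ii) gradient perturbation with additive Gaussian noise. This is a generic PyTorch illustration, not the code released in the Fed-alphaCDP repository; the function names and the keep_ratio, rank, and sigma parameters are assumed for illustration.

```python
# Illustrative sketch of the two baseline defense families analyzed in the paper.
# Generic techniques, not the authors' released implementation.
import torch

def prune_random(grad: torch.Tensor, keep_ratio: float = 0.1) -> torch.Tensor:
    """Gradient pruning with random selection: keep a random subset of entries."""
    mask = (torch.rand_like(grad) < keep_ratio).float()
    return grad * mask

def prune_low_rank(grad_2d: torch.Tensor, rank: int = 4) -> torch.Tensor:
    """Gradient pruning with low-rank filtering: keep the top singular components."""
    U, S, Vh = torch.linalg.svd(grad_2d, full_matrices=False)
    return U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank, :]

def perturb_gaussian(grad: torch.Tensor, sigma: float = 0.01) -> torch.Tensor:
    """Gradient perturbation with additive random (Gaussian) noise."""
    return grad + sigma * torch.randn_like(grad)
```

A client would apply one of these transforms to each layer's gradient before sharing its update; the abstract's analysis concerns how such choices trade off privacy guarantee, model accuracy, and attack resilience.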
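The paper's own defense injects differential-privacy-controlled noise whose scale is kept aligned with the trend of per-example gradient updates. The class below is one plausible reading of that idea, not the Fed-alphaCDP algorithm itself: clip per-example gradients, track a moving average of their norms, and scale the Gaussian noise with that trend. The class name, the decay parameter, and the adaptation rule are assumptions.

```python
import torch

class AdaptiveDPNoise:
    """Hypothetical sketch: DP-style noise whose scale follows the trend of
    per-example gradient norms. Not the paper's exact noise schedule."""

    def __init__(self, clip: float = 1.0, base_sigma: float = 1.0, decay: float = 0.9):
        self.clip = clip
        self.base_sigma = base_sigma
        self.decay = decay
        self.running_norm = None  # moving average of per-example gradient norms

    def step(self, per_example_grads: torch.Tensor) -> torch.Tensor:
        # per_example_grads: (batch_size, num_params), flattened per-example gradients.
        norms = per_example_grads.norm(dim=1)
        clipped = per_example_grads * (self.clip / norms.clamp(min=self.clip)).unsqueeze(1)

        # Track the trend of per-example gradient magnitudes across training.
        batch_norm = norms.mean()
        if self.running_norm is None:
            self.running_norm = batch_norm
        else:
            self.running_norm = self.decay * self.running_norm + (1 - self.decay) * batch_norm

        # Shrink the injected noise as gradients shrink; a real implementation must
        # fold this adaptation into the differential privacy accounting.
        sigma = self.base_sigma * float(self.running_norm) / self.clip
        noise = sigma * self.clip * torch.randn(per_example_grads.shape[1])
        return (clipped.sum(dim=0) + noise) / per_example_grads.shape[0]
```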
Related papers
- Rethinking Improved Privacy-Utility Trade-off with Pre-existing Knowledge for DP Training [31.559864332056648]
We propose a generic differential privacy framework with heterogeneous noise (DP-Hero).
Atop DP-Hero, we instantiate a heterogeneous version of DP-SGD, where the noise injected into gradient updates is heterogeneous and guided by prior-established model parameters.
We conduct comprehensive experiments to verify and explain the effectiveness of the proposed DP-Hero, showing improved training accuracy compared with state-of-the-art works.
arXiv Detail & Related papers (2024-09-05T08:40:54Z)
- Stable Neighbor Denoising for Source-free Domain Adaptive Segmentation [91.83820250747935]
Pseudo-label noise is mainly contained in unstable samples in which predictions of most pixels undergo significant variations during self-training.
We introduce the Stable Neighbor Denoising (SND) approach, which effectively discovers highly correlated stable and unstable samples.
SND consistently outperforms state-of-the-art methods in various SFUDA semantic segmentation settings.
arXiv Detail & Related papers (2024-06-10T21:44:52Z)
- Differentially Private SGD Without Clipping Bias: An Error-Feedback Approach [62.000948039914135]
Using Differentially Private Stochastic Gradient Descent with Gradient Clipping (DPSGD-GC) to ensure Differential Privacy (DP) comes at the cost of model performance degradation.
We propose a new error-feedback (EF) DP algorithm as an alternative to DPSGD-GC.
We establish an algorithm-specific DP analysis for the proposed algorithm, providing privacy guarantees based on Rényi DP (see the clipping and error-feedback sketch after this list).
arXiv Detail & Related papers (2023-11-24T17:56:44Z)
- A Theoretical Insight into Attack and Defense of Gradient Leakage in Transformer [11.770915202449517]
The Deep Leakage from Gradients (DLG) attack has emerged as a prevalent and highly effective method for extracting sensitive training data by inspecting exchanged gradients.
This research presents a comprehensive analysis of the gradient leakage method when applied specifically to transformer-based models (a generic DLG gradient-matching sketch appears after this list).
arXiv Detail & Related papers (2023-11-22T09:58:01Z)
- DP-SGD with weight clipping [1.0878040851638]
We present a novel approach that mitigates the bias arising from traditional gradient clipping.
By leveraging a public upper bound of the Lipschitz value of the current model and its current location within the search domain, we can achieve refined noise level adjustments.
arXiv Detail & Related papers (2023-10-27T09:17:15Z)
- Domain Generalization Guided by Gradient Signal to Noise Ratio of Parameters [69.24377241408851]
Overfitting to the source domain is a common issue in gradient-based training of deep neural networks.
We propose to base the selection on the gradient signal-to-noise ratio (GSNR) of the network's parameters (see the GSNR sketch after this list).
arXiv Detail & Related papers (2023-10-11T10:21:34Z)
- MAPS: A Noise-Robust Progressive Learning Approach for Source-Free Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z)
- Mixed Precision Quantization to Tackle Gradient Leakage Attacks in Federated Learning [1.7205106391379026]
Federated Learning (FL) enables collaborative model building among a large number of participants without the need for explicit data sharing.
This approach is vulnerable to privacy inference attacks.
In particular, gradient leakage attacks, which retrieve sensitive data from model gradients with a high success rate, put FL models at greater risk because gradient exchange is inherent to their architecture (see the gradient quantization sketch after this list).
arXiv Detail & Related papers (2022-10-22T04:24:32Z)
- Gradient Leakage Attack Resilient Deep Learning [7.893378392969824]
Gradient leakage attacks are considered among the most severe privacy threats in deep learning.
Deep learning with differential privacy is the de facto standard for publishing deep learning models with a differential privacy guarantee.
This paper investigates alternative approaches to gradient leakage resilient deep learning with differential privacy.
arXiv Detail & Related papers (2021-12-25T03:33:02Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
- An Accuracy-Lossless Perturbation Method for Defending Privacy Attacks in Federated Learning [82.80836918594231]
Federated learning improves privacy of training data by exchanging local gradients or parameters rather than raw data.
An adversary can leverage local gradients and parameters to obtain local training data by launching reconstruction and membership inference attacks.
To defend against such privacy attacks, many noise perturbation methods have been designed.
arXiv Detail & Related papers (2020-02-23T06:50:20Z)
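For the "Differentially Private SGD Without Clipping Bias: An Error-Feedback Approach" entry above, the sketch below contrasts a standard DPSGD-GC step (per-sample clipping plus Gaussian noise) with a simple error-feedback accumulator that re-injects the clipped-away residual in later steps. It is a generic illustration of the error-feedback idea under assumed names and parameters, not that paper's algorithm, and re-using the residual changes the privacy analysis (which that paper addresses with Rényi DP).

```python
import torch

def dpsgd_gc_step(per_example_grads: torch.Tensor, clip: float = 1.0, sigma: float = 1.0) -> torch.Tensor:
    """Standard DPSGD-GC step: clip each example's gradient to `clip`, add Gaussian noise."""
    norms = per_example_grads.norm(dim=1, keepdim=True)
    clipped = per_example_grads * torch.clamp(clip / norms, max=1.0)
    noise = sigma * clip * torch.randn(per_example_grads.shape[1])
    return (clipped.sum(dim=0) + noise) / per_example_grads.shape[0]

class ErrorFeedbackDPSGD:
    """Generic error-feedback sketch: remember what clipping removed and feed it
    back into later updates to reduce the bias introduced by clipping."""

    def __init__(self, dim: int, clip: float = 1.0, sigma: float = 1.0):
        self.residual = torch.zeros(dim)
        self.clip = clip
        self.sigma = sigma

    def step(self, per_example_grads: torch.Tensor) -> torch.Tensor:
        corrected = per_example_grads + self.residual  # add the carried-over residual
        norms = corrected.norm(dim=1, keepdim=True)
        clipped = corrected * torch.clamp(self.clip / norms, max=1.0)
        # Accumulate the part removed by clipping, averaged over the batch.
        self.residual = (corrected - clipped).mean(dim=0)
        noise = self.sigma * self.clip * torch.randn(per_example_grads.shape[1])
        return (clipped.sum(dim=0) + noise) / per_example_grads.shape[0]
```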
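For the gradient leakage threat discussed throughout this page and in the "A Theoretical Insight into Attack and Defense of Gradient Leakage in Transformer" entry, below is a minimal gradient-matching sketch in the spirit of Deep Leakage from Gradients (DLG): optimize dummy inputs and labels so that their gradients match the observed ones. The toy setup (single example, soft labels, LBFGS settings) is assumed for illustration and is not taken from that paper.

```python
import torch
import torch.nn.functional as F

def dlg_attack(model, observed_grads, x_shape, num_classes, steps=100, lr=0.1):
    """Generic DLG sketch: recover a training example by optimizing dummy data
    whose gradients match the leaked gradients.

    `observed_grads` is a list of tensors in the same order as model.parameters();
    `x_shape` includes a batch dimension of 1, e.g. (1, 3, 32, 32)."""
    dummy_x = torch.randn(x_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)  # soft-label logits
    opt = torch.optim.LBFGS([dummy_x, dummy_y], lr=lr)

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(model(dummy_x), dummy_y.softmax(dim=1))
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Gradient-matching objective: squared distance to the leaked gradients.
        match = sum(((g - og) ** 2).sum() for g, og in zip(grads, observed_grads))
        match.backward()
        return match

    for _ in range(steps):
        opt.step(closure)
    return dummy_x.detach(), dummy_y.detach()
```

Against the defenses surveyed above, `observed_grads` would be pruned, quantized, or noised, which is what degrades the reconstruction quality of such an attack.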
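For the "Domain Generalization Guided by Gradient Signal to Noise Ratio of Parameters" entry, the gradient signal-to-noise ratio of a parameter is commonly computed as the squared mean of its per-sample gradients divided by their variance. A minimal sketch of that computation, with assumed names:

```python
import torch

def gsnr(per_example_grads: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Gradient signal-to-noise ratio per parameter:
    squared mean over samples divided by variance over samples."""
    mean = per_example_grads.mean(dim=0)
    var = per_example_grads.var(dim=0, unbiased=False)
    return mean.pow(2) / (var + eps)
```

Parameters with high GSNR carry gradient signal that is consistent across samples, which is the selection criterion that entry builds on.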
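For the "Mixed Precision Quantization to Tackle Gradient Leakage Attacks in Federated Learning" entry, the defense direction is to quantize the shared gradients so the signal available to a reconstruction attack is coarsened. Below is a minimal uniform-quantization sketch with assumed names; the per-layer bit-width assignment, which is that paper's contribution, is not reproduced here.

```python
import torch

def quantize_gradient(grad: torch.Tensor, bits: int = 8) -> torch.Tensor:
    """Uniform symmetric quantization (and dequantization) of a gradient tensor."""
    qmax = 2 ** (bits - 1) - 1
    scale = grad.abs().max().clamp(min=1e-12) / qmax
    return torch.round(grad / scale).clamp(-qmax - 1, qmax) * scale

# A mixed-precision scheme would choose `bits` per layer (e.g. fewer bits for the
# layers that leak the most information) rather than one global bit width.
```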