Mixed Precision Quantization to Tackle Gradient Leakage Attacks in
Federated Learning
- URL: http://arxiv.org/abs/2210.13457v1
- Date: Sat, 22 Oct 2022 04:24:32 GMT
- Title: Mixed Precision Quantization to Tackle Gradient Leakage Attacks in
Federated Learning
- Authors: Pretom Roy Ovi, Emon Dey, Nirmalya Roy, Aryya Gangopadhyay
- Abstract summary: Federated Learning (FL) enables collaborative model building among a large number of participants without the need for explicit data sharing.
This approach is vulnerable to privacy inference attacks.
In particular, gradient leakage attacks, which retrieve sensitive data from model gradients with a high success rate, put FL models at elevated risk because gradient communication is inherent to their architecture.
- Score: 1.7205106391379026
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) enables collaborative model building among a large
number of participants without explicit data sharing. However, this approach is
vulnerable to privacy inference attacks. In particular, gradient leakage
attacks, which retrieve sensitive data from model gradients with a high success
rate, put FL models at greater risk because gradient communication is inherent
to their architecture. What makes these attacks especially alarming is that
they can be carried out covertly: the attacker reconstructs information about
the raw data from the gradients without hampering training performance. The two
most commonly proposed countermeasures are homomorphic encryption and noise
addition with differential privacy parameters, and each has a major drawback:
key generation becomes tedious as the number of clients grows, and noise-based
differential privacy causes a significant drop in global model accuracy. As a
countermeasure, we propose a mixed-precision quantized FL scheme and
empirically show that it resolves both issues. In addition, our approach
provides greater robustness because different layers of the deep model are
quantized with different precisions and quantization modes. We empirically
validate our method on three benchmark datasets and observe only a minimal
drop in global model accuracy after applying quantization.
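To make the scheme concrete, below is a minimal Python sketch of per-layer
mixed-precision gradient quantization on the client side. This is an
illustrative sketch, not the authors' implementation: the layer names, the
bit-width plan, and the symmetric/asymmetric uniform quantizers are assumptions
standing in for whatever per-layer assignment the paper actually uses.

```python
import numpy as np

def quantize_layer(grad, num_bits=8, mode="symmetric"):
    """Uniformly quantize one layer's gradient to num_bits, then de-quantize,
    returning the lossy gradient that the client would actually share."""
    levels = 2 ** num_bits - 1
    if mode == "symmetric":
        # Zero-point fixed at 0; representable range is [-max|g|, +max|g|].
        max_abs = float(np.max(np.abs(grad)))
        scale = max_abs / (levels // 2) if max_abs > 0 else 1.0
        q = np.clip(np.round(grad / scale), -(levels // 2), levels // 2)
        return q * scale
    # Asymmetric (affine) mode: map [min(g), max(g)] onto the integer grid.
    g_min, g_max = float(grad.min()), float(grad.max())
    scale = (g_max - g_min) / levels if g_max > g_min else 1.0
    return np.round((grad - g_min) / scale) * scale + g_min

# Hypothetical mixed-precision plan: layer name -> (bits, quantization mode).
PRECISION_PLAN = {
    "conv1": (8, "symmetric"),
    "conv2": (6, "asymmetric"),
    "fc1":   (4, "symmetric"),
    "fc2":   (4, "asymmetric"),
}

def quantize_client_update(gradients, plan=PRECISION_PLAN):
    """Apply the per-layer plan to a dict of layer name -> gradient array
    before the client uploads its update to the server."""
    return {name: quantize_layer(g, *plan.get(name, (8, "symmetric")))
            for name, g in gradients.items()}
```

Per the abstract, the design intent is that quantizing different layers with
different precisions and modes limits what a gradient-inversion attacker can
recover while keeping the drop in global model accuracy minimal.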
Related papers
- Model Inversion Attacks Through Target-Specific Conditional Diffusion Models [54.69008212790426]
Model inversion attacks (MIAs) aim to reconstruct private images from a target classifier's training set, thereby raising privacy concerns in AI applications.
Previous GAN-based MIAs tend to suffer from inferior generative fidelity due to GAN's inherent flaws and biased optimization within latent space.
We propose Diffusion-based Model Inversion (Diff-MI) attacks to alleviate these issues.
arXiv Detail & Related papers (2024-07-16T06:38:49Z) - Advancing the Robustness of Large Language Models through Self-Denoised Smoothing [50.54276872204319]
Large language models (LLMs) have achieved significant success, but their vulnerability to adversarial perturbations has raised considerable concerns.
We propose to leverage the multitasking nature of LLMs to first denoise the noisy inputs and then to make predictions based on these denoised versions.
Unlike previous denoised smoothing techniques in computer vision, which require training a separate model, our method offers significantly better efficiency and flexibility for enhancing the robustness of LLMs.
arXiv Detail & Related papers (2024-04-18T15:47:00Z) - Securing Distributed SGD against Gradient Leakage Threats [13.979995939926154]
This paper presents a holistic approach to gradient leakage resilient distributed stochastic gradient descent (SGD).
We analyze two types of strategies for privacy-enhanced federated learning: (i) gradient pruning with random selection or low-rank filtering and (ii) gradient perturbation with additive random noise or differential privacy noise (a generic sketch of both strategies appears after this list).
We present a gradient leakage resilient approach to securing distributed SGD in federated learning, with differential privacy controlled noise as the tool.
arXiv Detail & Related papers (2023-05-10T21:39:27Z) - Gradient Leakage Defense with Key-Lock Module for Federated Learning [14.411227689702997]
Federated Learning (FL) is a widely adopted privacy-preserving machine learning approach.
Recent findings reveal that privacy may be compromised and sensitive information potentially recovered from shared gradients.
We propose a new gradient leakage defense technique that secures arbitrary model architectures using a private key-lock module.
arXiv Detail & Related papers (2023-05-06T16:47:52Z) - Enhancing Multiple Reliability Measures via Nuisance-extended
Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data may instead stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z) - FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated
Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method achieves robustness improvement with guarantees.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z) - Concealing Sensitive Samples against Gradient Leakage in Federated
Learning [41.43099791763444]
Federated Learning (FL) is a distributed learning paradigm that enhances users' privacy by eliminating the need for clients to share raw, private data with the server.
Recent studies expose the vulnerability of FL to model inversion attacks, where adversaries reconstruct users' private data by eavesdropping on the shared gradient information.
We present a simple, yet effective defense strategy that obfuscates the gradients of the sensitive data with concealed samples.
arXiv Detail & Related papers (2022-09-13T04:19:35Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z) - PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage [0.8029049649310213]
Collaborative training of neural networks leverages distributed data by exchanging gradient information between different clients.
Gradient perturbation techniques have been proposed to enhance privacy, but they come at the cost of reduced model performance, increased convergence time, or increased data demand.
We introduce PRECODE, a PRivacy EnhanCing mODulE that can be used as generic extension for arbitrary model architectures.
arXiv Detail & Related papers (2021-08-10T14:43:17Z) - Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training of these models with a novel loss function and centroid updating scheme, matching the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
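As referenced in the "Securing Distributed SGD against Gradient Leakage Threats" entry
above, the two generic defense strategies it analyzes, gradient pruning and
gradient perturbation, can be sketched as follows. This is a minimal
illustrative sketch, not code from that paper; the clipping bound, noise
multiplier, and keep ratio are arbitrary assumptions.

```python
import numpy as np

def perturb_gradient(grad, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Gradient perturbation: clip the client gradient to an L2 bound, then add
    Gaussian noise calibrated to that bound (differential-privacy style)."""
    rng = rng or np.random.default_rng()
    norm = float(np.linalg.norm(grad))
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)

def prune_gradient(grad, keep_ratio=0.1, random_selection=True, rng=None):
    """Gradient pruning: zero out all but a fraction of the entries, chosen
    either at random or by largest magnitude."""
    rng = rng or np.random.default_rng()
    flat = grad.ravel().copy()
    k = max(1, int(keep_ratio * flat.size))
    if random_selection:
        keep = rng.choice(flat.size, size=k, replace=False)
    else:
        keep = np.argpartition(np.abs(flat), -k)[-k:]
    mask = np.zeros(flat.size, dtype=bool)
    mask[keep] = True
    flat[~mask] = 0.0
    return flat.reshape(grad.shape)
```

Both defenses trade reconstruction difficulty for some utility loss, which is
the accuracy trade-off the main paper's abstract argues its mixed-precision
quantization scheme largely avoids.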
This list is automatically generated from the titles and abstracts of the papers on this site.