Gradient-Leakage Resilient Federated Learning
- URL: http://arxiv.org/abs/2107.01154v1
- Date: Fri, 2 Jul 2021 15:51:07 GMT
- Title: Gradient-Leakage Resilient Federated Learning
- Authors: Wenqi Wei, Ling Liu, Yanzhao Wu, Gong Su, Arun Iyengar
- Abstract summary: Federated learning (FL) is an emerging distributed learning paradigm with default client privacy.
Recent studies reveal that gradient leakages in FL may compromise the privacy of client training data.
This paper presents a gradient-leakage resilient approach to privacy-preserving federated learning with per-training-example client differential privacy.
- Score: 8.945356237213007
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is an emerging distributed learning paradigm with default client privacy because clients can keep sensitive data on their devices and only share local training parameter updates with the federated server. However, recent studies reveal that gradient leakages in FL may compromise the privacy of client training data. This paper presents a gradient-leakage resilient approach to privacy-preserving federated learning with per-training-example client differential privacy, coined as Fed-CDP. It makes three original contributions. First, we identify three types of client gradient leakage threats in federated learning, even with encrypted client-server communications, and articulate when and why the conventional server-coordinated differential privacy approach, coined as Fed-SDP, is insufficient to protect the privacy of the training data. Second, we introduce Fed-CDP, the per-example client differential privacy algorithm, provide a formal analysis of Fed-CDP with the $(\epsilon, \delta)$ differential privacy guarantee, and formally compare Fed-CDP and Fed-SDP in terms of privacy accounting. Third, we formally analyze the privacy-utility trade-off of the differential privacy guarantee provided by Fed-CDP and present a dynamic decay noise-injection policy to further improve the accuracy and resiliency of Fed-CDP. We evaluate and compare Fed-CDP and Fed-CDP(decay) with Fed-SDP in terms of differential privacy guarantee and gradient leakage resilience over five benchmark datasets. The results show that Fed-CDP outperforms the conventional Fed-SDP in resilience to client gradient leakage while offering competitive accuracy in federated learning.
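To make the per-example mechanism concrete, here is a minimal numpy sketch of a Fed-CDP-style client step: each training example's gradient is clipped to bound its sensitivity, then Gaussian noise is injected before anything leaves the device. The function name, the exponential form of the decay schedule, and all parameter defaults are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def fed_cdp_client_update(per_example_grads, clip_C=1.0, sigma=1.0,
                          round_t=0, decay_rate=0.0, rng=None):
    """Per-example clip-and-noise in the spirit of Fed-CDP (illustrative)."""
    rng = rng or np.random.default_rng()
    # Clip each example's gradient to L2 norm at most clip_C, bounding
    # the sensitivity of any single training example.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_C / np.maximum(norms, 1e-12))
    # Dynamic decay of the noise scale over rounds (exponential form assumed).
    sigma_t = sigma * np.exp(-decay_rate * round_t)
    # Inject Gaussian noise per example, then average; the server only
    # ever sees the noised aggregate of this client's batch.
    noise = rng.normal(0.0, sigma_t * clip_C, size=clipped.shape)
    return (clipped + noise).mean(axis=0)
```

Because noise is added per example at the client, even an honest-but-curious server (or an eavesdropper on the update channel) never observes a raw gradient, which is exactly what the gradient-leakage threats exploit.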
Related papers
- VFEFL: Privacy-Preserving Federated Learning against Malicious Clients via Verifiable Functional Encryption [3.329039715890632]
Federated learning is a promising distributed learning paradigm that enables collaborative model training without exposing local client data. The distributed nature of federated learning makes it particularly vulnerable to attacks mounted by malicious clients. This paper proposes a privacy-preserving federated learning framework based on verifiable functional encryption.
arXiv Detail & Related papers (2025-06-15T13:38:40Z)
- FedRE: Robust and Effective Federated Learning with Privacy Preference [20.969342596181246]
Federated Learning (FL) employs gradient aggregation at the server for distributed training to prevent the privacy leakage of raw data. Private information can still be divulged through analysis of the gradients uploaded by clients. Existing methods fail to take practical issues into account, merely perturbing each sample with the same mechanism (a toy preference-aware perturbation sketch follows this entry).
arXiv Detail & Related papers (2025-05-08T01:50:27Z)
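As a strawman for sample-wise privacy preferences, the sketch below scales Gaussian noise per sample by an individual epsilon, so samples demanding stronger privacy receive more noise. The function name, interface, and Gaussian-mechanism calibration are assumptions for illustration, not FedRE's actual mechanism.

```python
import numpy as np

def preference_aware_perturb(grads, eps_prefs, clip_C=1.0, delta=1e-5, rng=None):
    """Per-sample Gaussian noise scaled to each sample's privacy preference."""
    rng = rng or np.random.default_rng()
    # Clip per-sample gradients to bound sensitivity.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads * np.minimum(1.0, clip_C / np.maximum(norms, 1e-12))
    # Standard Gaussian-mechanism calibration, one sigma per sample:
    # smaller epsilon (stronger privacy demand) means larger noise.
    sigmas = clip_C * np.sqrt(2.0 * np.log(1.25 / delta)) / np.asarray(eps_prefs)
    return clipped + rng.normal(size=clipped.shape) * sigmas[:, None]
```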
- Privacy-Preserving Federated Embedding Learning for Localized Retrieval-Augmented Generation [60.81109086640437]
We propose a novel framework called Federated Retrieval-Augmented Generation (FedE4RAG).
FedE4RAG facilitates collaborative training of client-side RAG retrieval models.
We apply homomorphic encryption within federated learning to safeguard model parameters (an additively homomorphic toy aggregation follows this entry).
arXiv Detail & Related papers (2025-04-27T04:26:02Z)
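To illustrate the additively homomorphic aggregation such designs rely on, here is a toy example with the python-paillier (`phe`) package: the server sums ciphertexts without ever decrypting an individual update. The key size, the two-client setup, and the tiny updates are purely illustrative; FedE4RAG's actual scheme and parameters may differ.

```python
from functools import reduce
from operator import add

from phe import paillier  # pip install phe

# Key pair held by the clients (or a key service), never by the server.
pub, priv = paillier.generate_paillier_keypair(n_length=1024)

# Each client encrypts its flattened parameter update elementwise.
client_updates = [[0.10, -0.20], [0.30, 0.05]]
encrypted = [[pub.encrypt(v) for v in update] for update in client_updates]

# The server adds ciphertexts coordinate-wise; Paillier addition of
# ciphertexts corresponds to addition of the underlying plaintexts.
aggregate = [reduce(add, column) for column in zip(*encrypted)]

# Only the private-key holder recovers the averaged update.
average = [priv.decrypt(c) / len(client_updates) for c in aggregate]
print(average)  # [0.2, -0.075]
```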
- FedEM: A Privacy-Preserving Framework for Concurrent Utility Preservation in Federated Learning [17.853502904387376]
Federated Learning (FL) enables collaborative training of models across distributed clients without sharing local data, addressing privacy concerns in decentralized systems.
We propose Federated Error Minimization (FedEM), a novel algorithm that incorporates controlled perturbations through adaptive noise injection.
Experimental results on benchmark datasets demonstrate that FedEM significantly reduces privacy risks and preserves model accuracy, achieving a robust balance between privacy protection and utility preservation.
arXiv Detail & Related papers (2025-03-08T02:48:00Z)
- Federated Instruction Tuning of LLMs with Domain Coverage Augmentation [35.54111318340366]
Federated Domain-specific Instruction Tuning (FedDIT) utilizes limited cross-client private data together with server-side public data for instruction augmentation.
We propose FedDCA, which optimizes domain coverage through greedy client center selection and retrieval-based augmentation (a generic greedy-selection sketch follows this entry).
We also investigate privacy preservation against memory extraction attacks utilizing various amounts of public data.
arXiv Detail & Related papers (2024-09-30T09:34:31Z)
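Greedy client-center selection is often instantiated as farthest-point (k-center) selection over client representations, where each step adds the client farthest from the centers chosen so far. The sketch below shows that generic pattern; the embedding input, random first pick, and function name are assumptions, not FedDCA's exact procedure.

```python
import numpy as np

def greedy_center_selection(client_embs, k, rng=None):
    """Pick k clients that greedily maximize coverage of the embedding space."""
    rng = rng or np.random.default_rng()
    chosen = [int(rng.integers(len(client_embs)))]
    # Distance from every client to its nearest selected center.
    dists = np.linalg.norm(client_embs - client_embs[chosen[0]], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dists))  # farthest client = biggest coverage gap
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(client_embs - client_embs[nxt], axis=1))
    return chosen
```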
- Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective [57.35402286842029]
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
Differential privacy (DP) is a classical approach to quantifying and ensuring the reliability of privacy protections.
arXiv Detail & Related papers (2024-08-28T08:22:21Z)
- Privacy-Preserving, Dropout-Resilient Aggregation in Decentralized Learning [3.9166000694570076]
Decentralized learning (DL) offers a novel paradigm in machine learning by distributing training across clients without central aggregation.
DL's peer-to-peer model raises challenges in protecting against inference attacks and privacy leaks.
This work proposes three secret sharing-based dropout resilience approaches for privacy-preserving DL (a textbook threshold-sharing sketch follows this entry).
arXiv Detail & Related papers (2024-04-27T19:17:02Z)
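Dropout resilience in secret-sharing designs typically rests on a threshold scheme: any t of n shares reconstruct a client's masked value, so the protocol survives up to n - t dropouts. Below is a textbook Shamir sketch over a prime field; the prime, the interfaces, and the (omitted) quantization of real-valued updates are illustrative assumptions, not the paper's three specific constructions.

```python
import random

PRIME = 2**61 - 1  # Mersenne prime, comfortably larger than toy secrets

def share(secret, n, t, rng=random.Random(0)):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

# Any 3 of the 5 shares recover the secret, tolerating two dropouts.
assert reconstruct(share(42, n=5, t=3)[:3]) == 42
```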
- Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection (a minimal pairwise-masking sketch of secure aggregation follows this entry).
arXiv Detail & Related papers (2024-03-26T15:07:58Z)
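The finding is easier to appreciate against the canonical pairwise-masking construction behind secure aggregation, sketched below: pairwise masks cancel in the sum, so the server learns only the exact aggregate, and the paper's point is that this aggregate alone can still leak membership. Key agreement and dropout recovery are omitted; this is an illustration, not the full SecAgg protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 3, 4
updates = rng.normal(size=(n_clients, dim))

# One shared mask per client pair (i < j); in SecAgg these come from
# pairwise key agreement rather than a trusted dealer.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

masked = []
for i in range(n_clients):
    m = updates[i].copy()
    for j in range(n_clients):
        if i < j:
            m += masks[(i, j)]   # client i adds the pair mask ...
        elif j < i:
            m -= masks[(j, i)]   # ... and client j subtracts it
    masked.append(m)

# Individual masked updates look random, yet the masks cancel in the sum:
assert np.allclose(np.sum(masked, axis=0), updates.sum(axis=0))
```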
- FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution that consolidates collaborative training across multiple data owners.
FedIT encounters limitations such as scarcity of instructional data and risk of exposure to training data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z)
- Personalized Federated Learning with Attention-based Client Selection [57.71009302168411]
We propose FedACS, a new PFL algorithm with an Attention-based Client Selection mechanism.
FedACS integrates an attention mechanism to enhance collaboration among clients with similar data distributions (a generic attention-weighting sketch follows this entry).
Experiments on CIFAR10 and FMNIST validate FedACS's superiority.
arXiv Detail & Related papers (2023-12-23T03:31:46Z)
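Attention-based client selection is commonly realized as a softmax over similarity between client representations, so that clients with similar data distributions receive larger aggregation weights. The cosine similarity, temperature, and function name below are illustrative assumptions rather than FedACS's exact formulation.

```python
import numpy as np

def attention_client_weights(target_emb, client_embs, temp=1.0):
    """Softmax attention weights over client-distribution similarity."""
    sims = client_embs @ target_emb / (
        np.linalg.norm(client_embs, axis=1) * np.linalg.norm(target_emb) + 1e-12)
    w = np.exp(sims / temp)  # lower temp sharpens toward the most similar clients
    return w / w.sum()
```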
- Stochastic Coded Federated Learning with Convergence and Privacy Guarantees [8.2189389638822]
Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework.
This paper proposes a stochastic coded federated learning (SCFL) framework to mitigate the straggler issue.
We characterize the privacy guarantee via mutual information differential privacy (MI-DP) and analyze the convergence performance in federated learning.
arXiv Detail & Related papers (2022-01-25T04:43:29Z)
- PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy [14.678119872268198]
Federated Learning (FL) allows multiple participating clients to train machine learning models collaboratively by keeping their datasets local and only exchanging model updates.
Existing FL protocol designs have been shown to be vulnerable to attacks that aim to compromise data privacy and/or model robustness.
We develop a framework called PRECAD, which simultaneously achieves differential privacy (DP) and enhances robustness against model poisoning attacks with the help of cryptography.
arXiv Detail & Related papers (2021-10-22T04:08:42Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates (a client-level clipping sketch follows this entry).
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
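For contrast with the per-example scheme sketched earlier, client-level DP clips each client's whole model delta and noises the server-side average, so the unit of privacy is a client rather than a training example. A minimal sketch, with the function name and defaults assumed for illustration:

```python
import numpy as np

def dp_fedavg_aggregate(client_updates, clip_C=1.0, noise_mult=1.0, rng=None):
    """Client-level clipping + server-side Gaussian noise (DP-FedAvg style)."""
    rng = rng or np.random.default_rng()
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        # Clipping bounds each client's contribution -- and introduces the
        # bias whose interaction with update heterogeneity the paper studies.
        clipped.append(u * min(1.0, clip_C / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_C / len(client_updates), size=avg.shape)
    return avg + noise
```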
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides statistical protection against such attacks, at the price of significantly degrading the accuracy or utility of the trained models (a toy Laplacian-smoothing sketch follows this entry).
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
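Laplacian smoothing denoises a DP-noised gradient by solving $(I - \sigma \Delta)s = g$, where $\Delta$ is a circulant second-difference operator that an FFT diagonalizes; the smoothing damps the high-frequency part of the injected noise while preserving the gradient signal. The sketch below follows that textbook recipe and should be read as an illustration of the idea, not the paper's exact operator or parameters.

```python
import numpy as np

def laplacian_smooth(noisy_grad, sigma=1.0):
    """Solve (I - sigma * Delta) s = g in the Fourier domain (1-D, circulant)."""
    d = noisy_grad.size
    # Eigenvalues of the circulant [1, -2, 1] Laplacian: 2 cos(2 pi k / d) - 2 <= 0,
    # so the linear system is always well conditioned.
    eig = 2.0 * np.cos(2.0 * np.pi * np.arange(d) / d) - 2.0
    return np.real(np.fft.ifft(np.fft.fft(noisy_grad) / (1.0 - sigma * eig)))
```

In a DP-SGD-style loop this would be applied right after the Gaussian noise is added, before the parameter update.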
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.