Privacy-Preserving Federated Learning Scheme with Mitigating Model Poisoning Attacks: Vulnerabilities and Countermeasures
- URL: http://arxiv.org/abs/2506.23622v1
- Date: Mon, 30 Jun 2025 08:39:01 GMT
- Title: Privacy-Preserving Federated Learning Scheme with Mitigating Model Poisoning Attacks: Vulnerabilities and Countermeasures
- Authors: Jiahui Wu, Fucai Luo, Tiecheng Sun, Haiyan Wang, Weizhe Zhang
- Abstract summary: We propose an enhanced privacy-preserving and Byzantine-robust federated learning scheme. Our scheme guarantees privacy preservation and resilience against model poisoning attacks.
- Score: 10.862166653863571
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The privacy-preserving federated learning schemes based on the setting of two honest-but-curious and non-colluding servers offer promising solutions in terms of security and efficiency. However, our investigation reveals that these schemes still suffer from privacy leakage when considering model poisoning attacks from malicious users. Specifically, we demonstrate that the privacy-preserving computation process for defending against model poisoning attacks inadvertently leaks privacy to one of the honest-but-curious servers, enabling it to access users' gradients in plaintext. To address both privacy leakage and model poisoning attacks, we propose an enhanced privacy-preserving and Byzantine-robust federated learning (PBFL) scheme, comprising three components: (1) a two-trapdoor fully homomorphic encryption (FHE) scheme to bolster users' privacy protection; (2) a novel secure normalization judgment method to preemptively thwart gradient poisoning; and (3) an innovative secure cosine similarity measurement method for detecting model poisoning attacks without compromising data privacy. Our scheme guarantees privacy preservation and resilience against model poisoning attacks, even in scenarios with heterogeneous, non-IID (Independent and Identically Distributed) datasets. Theoretical analyses substantiate the security and efficiency of our scheme, and extensive experiments corroborate the efficacy of our privacy attacks. Furthermore, the experimental results demonstrate that our scheme accelerates training speed while reducing communication overhead compared to state-of-the-art PBFL schemes.
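To make components (2) and (3) concrete, here is a minimal plaintext sketch of the two robustness checks: a normalization judgment that rejects gradients whose norm deviates from an agreed bound, followed by a cosine-similarity test against a trusted server-side root gradient. In the actual scheme these steps run under two-trapdoor FHE between the two servers; the function names, the unit-norm convention, and the similarity threshold below are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened gradient vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def robust_aggregate(user_grads, root_grad, sim_threshold=0.0):
    """Plaintext analogue (illustrative only) of the two checks:
    (2) normalization judgment -- accept only (approximately) unit-norm
        gradients, pre-empting norm-inflation poisoning;
    (3) cosine similarity against a trusted root gradient -- discard
        updates that point away from it."""
    accepted = []
    for g in user_grads:
        if not np.isclose(np.linalg.norm(g), 1.0, atol=1e-3):
            continue  # fails the normalization judgment
        if cosine_similarity(g, root_grad) <= sim_threshold:
            continue  # flagged as a potentially poisoned update
        accepted.append(g)
    if not accepted:
        return root_grad  # fall back to the trusted gradient
    return np.mean(accepted, axis=0)
```

The point of the paper is precisely that evaluating these comparisons without exposing the plaintext gradients to either server is where prior two-server schemes leak.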
Related papers
- DP2Guard: A Lightweight and Byzantine-Robust Privacy-Preserving Federated Learning Scheme for Industrial IoT [37.44256772381154]
DP2Guard is a lightweight PPFL framework that enhances both privacy and robustness. A hybrid defense strategy is proposed, which extracts gradient features using singular value decomposition and cosine similarity. A trust score-based adaptive aggregation scheme adjusts client weights according to historical behavior.
arXiv Detail & Related papers (2025-07-22T01:06:39Z)
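The hybrid defense described above lends itself to a compact sketch. Below is a hypothetical plaintext rendering of the two stages named in the abstract: gradient features from an SVD plus cosine similarity, and trust-score-weighted aggregation. The feature choices, thresholds, and trust-update rule are assumptions for illustration; DP2Guard's actual design may differ.

```python
import numpy as np

def dp2guard_style_aggregate(grads, trust, lr=0.1):
    """Hypothetical sketch: SVD + cosine-similarity features feed an
    anomaly flag, which updates per-client trust scores used as
    aggregation weights."""
    G = np.stack(grads)                      # (clients, dim)
    # Feature 1: projection onto the dominant singular direction.
    _, _, vt = np.linalg.svd(G, full_matrices=False)
    proj = G @ vt[0]
    # Feature 2: cosine similarity to the coordinate-wise median update.
    med = np.median(G, axis=0)
    cos = G @ med / (np.linalg.norm(G, axis=1) * np.linalg.norm(med) + 1e-12)
    # Clients far from the pack on either feature look anomalous.
    score = np.abs(proj - np.median(proj)) / (np.std(proj) + 1e-12)
    anomaly = (score > 2.0) | (cos < 0.0)
    # Trust adapts with historical behavior (persist across rounds).
    trust = np.clip(trust + lr * np.where(anomaly, -1.0, 1.0), 0.0, 1.0)
    w = trust / (trust.sum() + 1e-12)
    return (w[:, None] * G).sum(axis=0), trust
```

A caller would carry `trust` across rounds, so repeat offenders are progressively down-weighted rather than rejected on a single observation.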
- VFEFL: Privacy-Preserving Federated Learning against Malicious Clients via Verifiable Functional Encryption [3.329039715890632]
Federated learning is a promising distributed learning paradigm that enables collaborative model training without exposing local client data. Its distributed nature, however, makes it particularly vulnerable to attacks mounted by malicious clients. This paper proposes a privacy-preserving federated learning framework based on verifiable functional encryption.
arXiv Detail & Related papers (2025-06-15T13:38:40Z)
- Dual Defense: Enhancing Privacy and Mitigating Poisoning Attacks in Federated Learning [10.102889257118145]
Federated learning (FL) is inherently susceptible to privacy breaches and poisoning attacks. This paper introduces a Dual Defense Federated learning (DDFed) framework. DDFed boosts privacy protection and mitigates poisoning attacks without introducing new participant roles or disrupting the existing FL topology.
arXiv Detail & Related papers (2025-02-08T12:28:20Z)
- TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) framework, as well as the Byzantine resilience of the proposed algorithm.
arXiv Detail & Related papers (2024-02-16T16:41:14Z)
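As a rough illustration of how a ternary compressor and coordinate-wise majority voting fit together, consider the sketch below. The flip probability, the RNG, and the aggregation details are assumptions made here for clarity; the paper's actual mechanism and its f-DP accounting are more involved.

```python
import numpy as np

def ternary_compress(grad, p_flip=0.1, rng=None):
    """Quantize a gradient to {-1, 0, +1}; random sign flips with
    probability p_flip act as a randomized-response-style noise source
    (parameters illustrative, not the paper's calibration)."""
    if rng is None:
        rng = np.random.default_rng(0)
    t = np.sign(grad).astype(np.int8)
    flips = rng.random(t.shape) < p_flip
    t[flips] = -t[flips]
    return t

def majority_vote(ternary_grads):
    """Coordinate-wise majority vote over compressed gradients: a
    bounded-influence aggregate, so any single Byzantine worker can
    move each coordinate by at most one vote."""
    votes = np.sum(np.stack(ternary_grads), axis=0)
    return np.sign(votes)
```

Each worker thus sends at most two bits per coordinate, which is where the communication efficiency comes from.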
- Protecting Model Adaptation from Trojans in the Unlabeled Data [120.42853706967188]
This paper explores potential trojan attacks on model adaptation launched via well-designed poisoning of the target data. We propose a plug-and-play method named DiffAdapt, which can be seamlessly integrated with existing adaptation algorithms.
arXiv Detail & Related papers (2024-01-11T16:42:10Z)
- PROFL: A Privacy-Preserving Federated Learning Method with Stringent Defense Against Poisoning Attacks [2.6487166137163007]
Federated Learning (FL) faces two major issues: privacy leakage and poisoning attacks.
We propose a novel privacy-preserving Byzantine-robust FL framework PROFL.
PROFL is based on a two-trapdoor additive homomorphic encryption algorithm and blinding techniques.
arXiv Detail & Related papers (2023-12-02T06:34:37Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM-format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
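For context, the trade-off-function definition underlying $f$-DP (standard in the literature, following Dong, Roth, and Su; not taken from this abstract) can be stated compactly:

```latex
% A mechanism M is f-DP if distinguishing M(S) from M(S') for any pair
% of neighboring datasets S, S' is at least as hard as the trade-off
% function f prescribes:
\[
  T\bigl(M(S),\, M(S')\bigr)(\alpha) \;\ge\; f(\alpha)
  \qquad \text{for all } \alpha \in [0,1],
\]
% where the trade-off function between distributions P and Q is
\[
  T(P, Q)(\alpha) \;=\; \inf\bigl\{\, \beta_{\phi} \;:\; \alpha_{\phi} \le \alpha \,\bigr\},
\]
% the minimal type-II error \beta_\phi over all tests \phi whose
% type-I error \alpha_\phi is at most \alpha.
```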
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Defense Against Gradient Leakage Attacks via Learning to Obscure Data [48.67836599050032]
Federated learning is considered an effective privacy-preserving learning mechanism.
In this paper, we propose a new defense method to protect the privacy of clients' data by learning to obscure data.
arXiv Detail & Related papers (2022-06-01T21:03:28Z)
- Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks [31.34410250008759]
This paper measures the trade-off between model accuracy and privacy losses incurred by reconstruction, tracing and membership attacks.
Experiments show that model accuracies are improved on average by 5-20% compared with baseline mechanisms.
arXiv Detail & Related papers (2020-06-20T15:48:57Z)