BAPFL: Exploring Backdoor Attacks Against Prototype-based Federated Learning
- URL: http://arxiv.org/abs/2509.12964v1
- Date: Tue, 16 Sep 2025 11:15:19 GMT
- Title: BAPFL: Exploring Backdoor Attacks Against Prototype-based Federated Learning
- Authors: Honghong Zeng, Jiong Lou, Zhe Wang, Hefeng Zhou, Chentao Wu, Wei Zhao, Jie Li
- Abstract summary: Prototype-based federated learning is inherently resistant to existing backdoor attacks. We propose BAPFL, the first backdoor attack method specifically designed for PFL frameworks. BAPFL achieves a 35%-75% improvement in attack success rate compared to traditional backdoor attacks.
- Score: 13.929771530924976
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prototype-based federated learning (PFL) has emerged as a promising paradigm to address data heterogeneity problems in federated learning, as it leverages mean feature vectors as prototypes to enhance model generalization. However, its robustness against backdoor attacks remains largely unexplored. In this paper, we identify that PFL is inherently resistant to existing backdoor attacks due to its unique prototype learning mechanism and local data heterogeneity. To further explore the security of PFL, we propose BAPFL, the first backdoor attack method specifically designed for PFL frameworks. BAPFL integrates a prototype poisoning strategy with a trigger optimization mechanism. The prototype poisoning strategy manipulates the trajectories of global prototypes to mislead the prototype training of benign clients, pushing their local prototypes of clean samples away from the prototypes of trigger-embedded samples. Meanwhile, the trigger optimization mechanism learns a unique and stealthy trigger for each potential target label, and guides the prototypes of trigger-embedded samples to align closely with the global prototype of the target label. Experimental results across multiple datasets and PFL variants demonstrate that BAPFL achieves a 35%-75% improvement in attack success rate compared to traditional backdoor attacks, while preserving main task accuracy. These results highlight the effectiveness, stealthiness, and adaptability of BAPFL in PFL.
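To make the abstract's two components concrete, the sketch below illustrates what a prototype poisoning step and a per-label trigger optimization loop could look like in PyTorch. Everything here (the function names, the `feature_extractor` head, the `drift` and `mask` parameters) is an illustrative assumption based only on the abstract, not the authors' reference implementation.

```python
# Hypothetical sketch of BAPFL's two components as described in the abstract.
# All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F

def poison_prototypes(global_protos, target_label, drift=0.1):
    """Prototype poisoning (sketch): perturb the global prototypes reported
    by the malicious client so that benign clients' prototypes of clean
    samples drift away from the target label's prototype, clearing feature
    space around it for trigger-embedded samples."""
    target_proto = global_protos[target_label]
    poisoned = {}
    for label, proto in global_protos.items():
        if label == target_label:
            poisoned[label] = proto.clone()
        else:
            # push each non-target prototype directly away from the target
            direction = F.normalize(proto - target_proto, dim=0)
            poisoned[label] = proto + drift * direction
    return poisoned

def optimize_trigger(feature_extractor, trigger, mask, loader,
                     target_proto, steps=50, lr=0.01):
    """Trigger optimization (sketch): learn a trigger whose embedded samples
    yield features close to the target label's global prototype."""
    trigger = trigger.clone().requires_grad_(True)
    opt = torch.optim.Adam([trigger], lr=lr)
    for _ in range(steps):
        for x, _ in loader:
            # stamp the trigger onto the batch (mask broadcasts over x)
            x_trig = (1 - mask) * x + mask * trigger
            feats = feature_extractor(x_trig)
            # pull trigger-embedded features toward the target prototype
            loss = F.mse_loss(feats, target_proto.expand_as(feats))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return trigger.detach()
```

In a full attack, one such trigger would presumably be learned per candidate target label, as the abstract describes, with the poisoned prototypes submitted in place of the client's honest ones at each round.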
Related papers
- MARS: A Malignity-Aware Backdoor Defense in Federated Learning [51.77354308287098]
The recently proposed state-of-the-art (SOTA) attack, 3DFed, uses an indicator mechanism to determine whether backdoor models have been accepted by the defender. We propose a Malignity-Aware backdooR defenSe (MARS) that leverages backdoor energy to indicate the malicious extent of each neuron. Experiments demonstrate that MARS can defend against SOTA backdoor attacks and significantly outperforms existing defenses.
arXiv Detail & Related papers (2025-09-21T14:50:02Z) - On Evaluating the Poisoning Robustness of Federated Learning under Local Differential Privacy [12.421658400381082]
We propose a novel model poisoning attack framework tailored for LDPFL settings. Our approach is driven by the objective of maximizing the global training loss while adhering to local privacy constraints. We evaluate our framework across three representative LDPFL protocols, three benchmark datasets, and two types of deep neural networks.
arXiv Detail & Related papers (2025-09-05T17:23:03Z) - Probabilistic Prototype Calibration of Vision-Language Models for Generalized Few-shot Semantic Segmentation [75.18058114915327]
Generalized Few-Shot Semantic Segmentation (GFSS) aims to extend a segmentation model to novel classes with only a few annotated examples. We propose FewCLIP, a probabilistic prototype calibration framework over multi-modal prototypes from the pretrained CLIP. We show FewCLIP significantly outperforms state-of-the-art approaches across both GFSS and class-incremental settings.
arXiv Detail & Related papers (2025-06-28T18:36:22Z) - Just a Simple Transformation is Enough for Data Protection in Vertical Federated Learning [83.90283731845867]
We consider feature reconstruction attacks, a common risk targeting input data compromise. We show that such federated models are resistant to state-of-the-art feature reconstruction attacks.
arXiv Detail & Related papers (2024-12-16T12:02:12Z) - Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning [31.386836775526685]
We propose PFedBA, a stealthy and effective backdoor attack strategy applicable to PFL systems.
Our study sheds light on the subtle yet potent backdoor threats to PFL systems, urging the community to bolster defenses against emerging backdoor challenges.
arXiv Detail & Related papers (2024-06-10T12:14:05Z) - FedTGP: Trainable Global Prototypes with Adaptive-Margin-Enhanced Contrastive Learning for Data and Model Heterogeneity in Federated Learning [18.916282151435727]
Heterogeneous Federated Learning (HtFL) has attracted attention due to its ability to support heterogeneous models and data.
We introduce a novel HtFL approach called FedTGP, which leverages our Adaptive-margin-enhanced Contrastive Learning (ACL) to learn Trainable Global Prototypes (TGP) on the server.
arXiv Detail & Related papers (2024-01-06T14:43:47Z) - You Can Backdoor Personalized Federated Learning [18.91908598410108]
Existing research primarily focuses on backdoor attacks and defenses within the generic federated learning scenario.
We propose a two-pronged attack method, BapFL, which comprises two simple yet effective strategies.
arXiv Detail & Related papers (2023-07-29T12:25:04Z) - Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks [53.81129518924231]
We conduct the first study of backdoor attacks in the pFL framework.
We show that pFL methods with partial model-sharing can significantly boost robustness against backdoor attacks.
We propose a lightweight defense method, Simple-Tuning, which empirically improves defense performance against backdoor attacks.
arXiv Detail & Related papers (2023-02-03T11:58:14Z) - FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method can achieve robustness improvement with guarantees.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z) - Backdoor Defense in Federated Learning Using Differential Testing and Outlier Detection [24.562359531692504]
We propose DifFense, an automated defense framework to protect an FL system from backdoor attacks.
Our detection method reduces the average backdoor accuracy of the global model to below 4% and achieves a false negative rate of zero.
arXiv Detail & Related papers (2022-02-21T17:13:03Z) - CRFL: Certifiably Robust Federated Learning against Backdoor Attacks [59.61565692464579]
This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors.
Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification on backdoors with limited magnitude.
arXiv Detail & Related papers (2021-06-15T16:50:54Z)
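As a rough illustration of the clipping-and-smoothing step the CRFL abstract attributes its certification to, here is a minimal sketch; the function name, parameter names, and default values are assumptions, not the paper's code.

```python
# Minimal sketch of parameter clipping plus Gaussian smoothing on the global
# model, the mechanism the CRFL abstract describes. Names and defaults are
# illustrative assumptions.
import torch

def clip_and_smooth(params, clip_norm=1.0, sigma=0.01):
    """Clip the global model's parameter vector to a norm bound, then add
    Gaussian noise; the noise level controls global model smoothness."""
    total_norm = torch.sqrt(sum(p.pow(2).sum() for p in params))
    scale = min(1.0, clip_norm / (float(total_norm) + 1e-12))
    return [p * scale + sigma * torch.randn_like(p) for p in params]
```

Applied to the aggregated parameters each round, the bounded norm and Gaussian perturbation are what make a sample-wise robustness certificate against limited-magnitude backdoors possible.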