Watch Out for the Lifespan: Evaluating Backdoor Attacks Against Federated Model Adaptation
- URL: http://arxiv.org/abs/2511.14406v1
- Date: Tue, 18 Nov 2025 12:13:59 GMT
- Title: Watch Out for the Lifespan: Evaluating Backdoor Attacks Against Federated Model Adaptation
- Authors: Bastien Vuillod, Pierre-Alain Moellic, Jean-Max Dutertre
- Abstract summary: Large model adaptation through Federated Learning (FL) addresses a wide range of use cases and is enabled by Parameter-Efficient Fine-Tuning techniques such as Low-Rank Adaptation (LoRA). We present the first analysis of the influence of LoRA on state-of-the-art backdoor attacks targeting model adaptation in FL.
- Score: 1.2744523252873352
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large model adaptation through Federated Learning (FL) addresses a wide range of use cases and is enabled by Parameter-Efficient Fine-Tuning techniques such as Low-Rank Adaptation (LoRA). However, this distributed learning paradigm faces several security threats, particularly to its integrity, such as backdoor attacks that aim to inject malicious behavior during the local training steps of certain clients. We present the first analysis of the influence of LoRA on state-of-the-art backdoor attacks targeting model adaptation in FL. Specifically, we focus on backdoor lifespan, a critical characteristic in FL that varies with the attack scenario and the attacker's ability to effectively inject the backdoor. A key finding in our experiments is that for an optimally injected backdoor, the backdoor persists longer after the attack when the LoRA rank is lower. Importantly, our work highlights evaluation issues of backdoor attacks against FL and contributes to the development of more robust and fair evaluations of backdoor attacks, enhancing the reliability of risk assessments for critical FL systems. Our code is publicly available.
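Since the paper's key finding ties LoRA's rank to how long an injected backdoor survives, a minimal sketch of the two ingredients may help. The `LoRALinear` module below is the standard low-rank update h = Wx + (alpha/r)BAx that LoRA fine-tuning trains, and `backdoor_lifespan` shows one plausible way to measure lifespan: the number of rounds after the attacker's last poisoned round during which the attack success rate (ASR) stays above a threshold. The threshold-based definition and all names here are illustrative assumptions, not the paper's exact protocol.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update:
    h = W x + (alpha / r) * B A x, with A in R^{r x d_in}, B in R^{d_out x r}."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # the pretrained weights stay fixed
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

def backdoor_lifespan(asr_per_round, last_attack_round, threshold=0.5):
    """Illustrative lifespan metric (an assumption, not the paper's): count
    consecutive rounds after the attacker's last poisoned round during which
    the attack success rate stays at or above `threshold`."""
    lifespan = 0
    for r in range(last_attack_round + 1, len(asr_per_round)):
        if asr_per_round[r] < threshold:
            break
        lifespan = r - last_attack_round
    return lifespan
```

Under the paper's finding, an ASR curve measured this way would decay more slowly, and hence yield a longer lifespan, when the adapter rank `r` is lower.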
Related papers
- MARS: A Malignity-Aware Backdoor Defense in Federated Learning [51.77354308287098]
The recently proposed state-of-the-art (SOTA) attack 3DFed uses an indicator mechanism to determine whether backdoor models have been accepted by the defender. We propose a Malignity-Aware backdooR defenSe (MARS) that leverages backdoor energy to indicate the malicious extent of each neuron. Experiments demonstrate that MARS can defend against SOTA backdoor attacks and significantly outperforms existing defenses.
arXiv Detail & Related papers (2025-09-21T14:50:02Z)
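The summary does not define "backdoor energy", so the sketch below is only a hypothetical stand-in: it scores each output neuron by the norm of its incoming-weight change and flags clients whose update energy concentrates on a few neurons. MARS's actual scoring is defined in the paper itself.

```python
import torch

def neuron_update_energy(global_weight, client_weight):
    """Hypothetical per-neuron score: L2 norm of each output neuron's
    incoming-weight change relative to the global model. This is an
    illustrative proxy, not MARS's definition of backdoor energy."""
    delta = client_weight - global_weight      # (out_features, in_features)
    return delta.norm(dim=1)                   # one score per output neuron

def flag_suspicious(global_w, client_ws, k=3.0):
    """Flag clients whose update energy concentrates on a few neurons:
    max per-neuron energy far above that client's own median."""
    return [bool(neuron_update_energy(global_w, w).max()
                 > k * neuron_update_energy(global_w, w).median())
            for w in client_ws]
```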
- Coward: Toward Practical Proactive Federated Backdoor Defense via Collision-based Watermark [90.94234374893287]
We introduce a new proactive defense, dubbed Coward, inspired by our discovery of multi-backdoor collision effects. In general, we detect attackers by evaluating whether the server-injected, conflicting global watermark is erased during local training rather than retained.
arXiv Detail & Related papers (2025-08-04T06:51:33Z)
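A hedged sketch of the detection idea as described: the server checks, per client update, whether its injected conflicting watermark was retained or erased. The retention metric and the threshold `tau` are illustrative assumptions.

```python
import torch

@torch.no_grad()
def watermark_retention(model, wm_inputs, wm_labels):
    """Accuracy of a candidate client model on the server's watermark set."""
    preds = model(wm_inputs).argmax(dim=1)
    return (preds == wm_labels).float().mean().item()

def detect_attackers(client_models, wm_inputs, wm_labels, tau=0.5):
    """Hypothetical Coward-style check: a client whose local training erased
    the conflicting watermark (retention below tau) is flagged as malicious."""
    return [watermark_retention(m, wm_inputs, wm_labels) < tau
            for m in client_models]
```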
- Mind the Cost of Scaffold! Benign Clients May Even Become Accomplices of Backdoor Attack [16.104941796138128]
BadSFL is the first backdoor attack targeting Scaffold. It steers benign clients' local gradient updates towards the attacker's poisoned direction, effectively turning them into unwitting accomplices. BadSFL achieves superior attack durability, maintaining effectiveness for over 60 global rounds and lasting up to three times longer than existing baselines.
arXiv Detail & Related papers (2024-11-25T07:46:57Z)
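For context, the SCAFFOLD local step applies the correction y ← y − η(g + c − c_i) with global and local control variates c and c_i; the sketch below illustrates why a biased global control variate would steer every benign client's update. How BadSFL actually constructs the poisoned direction is beyond this sketch.

```python
def scaffold_client_step(params, grads, c_global, c_local, lr=0.1):
    """One SCAFFOLD local step: y <- y - lr * (g + c_global - c_local).
    If an attacker can bias c_global (e.g., toward a backdoor gradient),
    the correction term pushes every benign client in that direction."""
    return [p - lr * (g + cg - cl)
            for p, g, cg, cl in zip(params, grads, c_global, c_local)]
```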
- Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats [52.94388672185062]
We propose an efficient defense mechanism against backdoor threats using a concept known as machine unlearning.
This entails strategically creating a small set of poisoned samples to aid the model's rapid unlearning of backdoor vulnerabilities.
In the backdoor unlearning process, we present a novel token-based portion unlearning training regime.
arXiv Detail & Related papers (2024-09-29T02:55:38Z)
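A rough sketch of the unlearning idea under a strong simplification: plain gradient ascent on a small constructed poisoned set, so the trigger-to-target mapping is forgotten. The paper's token-based portion unlearning is more selective than this; the loop below shows only the generic mechanism.

```python
import torch
import torch.nn.functional as F

def unlearn_backdoor(model, poisoned_loader, lr=1e-4, steps=50):
    """Illustrative backdoor unlearning: gradient *ascent* on (triggered
    input, target label) pairs, raising the loss the backdoor relies on."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    it = iter(poisoned_loader)
    for _ in range(steps):
        try:
            x, y_target = next(it)
        except StopIteration:
            it = iter(poisoned_loader)
            x, y_target = next(it)
        loss = -F.cross_entropy(model(x), y_target)  # negated -> ascent
        opt.zero_grad(); loss.backward(); opt.step()
    return model
```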
- Non-Cooperative Backdoor Attacks in Federated Learning: A New Threat Landscape [7.00762739959285]
Federated Learning (FL) for privacy-preserving model training remains susceptible to backdoor attacks.
This research emphasizes the critical need for robust defenses against diverse backdoor attacks in the evolving FL landscape.
arXiv Detail & Related papers (2024-07-05T22:03:13Z)
- Revisiting Backdoor Attacks against Large Vision-Language Models from Domain Shift [104.76588209308666]
This paper explores backdoor attacks in LVLM instruction tuning across mismatched training and testing domains. We introduce a new evaluation dimension, backdoor domain generalization, to assess attack robustness. We propose a multimodal attribution backdoor attack (MABA) that injects domain-agnostic triggers into critical areas.
arXiv Detail & Related papers (2024-06-27T02:31:03Z)
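The summary says MABA places triggers in "critical areas". One plausible (assumed) reading is attribution-guided placement: compute an input-gradient saliency map and stamp the trigger on the most influential patch, as sketched below.

```python
import torch
import torch.nn.functional as F

def place_trigger_by_attribution(model, image, label, trigger, patch=8):
    """Hypothetical sketch: gradient saliency picks the most influential
    patch of `image` (shape C x H x W), and the trigger is stamped there."""
    img = image.clone().requires_grad_(True)
    model(img.unsqueeze(0))[0, label].backward()
    sal = img.grad.abs().sum(dim=0)                        # (H, W) saliency
    # sliding-window sum of saliency over patch-sized regions
    window = F.avg_pool2d(sal[None, None], patch, stride=1) * patch * patch
    idx = int(window.flatten().argmax())
    h, w = idx // window.shape[-1], idx % window.shape[-1]
    out = image.clone()
    out[:, h:h + patch, w:w + patch] = trigger             # trigger: C x patch x patch
    return out
```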
- Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning [20.69655306650485]
Federated Learning (FL) is a decentralized machine learning method that enables participants to collaboratively train a model without sharing their private data.
Despite its privacy and scalability benefits, FL is susceptible to backdoor attacks.
We propose DPOT, a backdoor attack strategy in FL that dynamically constructs backdoor objectives by optimizing a backdoor trigger.
arXiv Detail & Related papers (2024-05-10T02:44:25Z)
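A hedged sketch of trigger-optimized poisoning in DPOT's spirit: the attacker optimizes a small patch against the current global model so that stamped inputs are classified as the target label. Patch location, size, and optimizer are illustrative choices, not the paper's.

```python
import torch
import torch.nn.functional as F

def optimize_trigger(global_model, loader, target, size=5, steps=100, lr=0.1):
    """Optimize a size x size trigger patch so triggered inputs are
    classified as `target` by the current global model."""
    trigger = torch.zeros(3, size, size, requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=lr)
    global_model.eval()
    for _ in range(steps):
        for x, _ in loader:
            xt = x.clone()
            xt[:, :, :size, :size] = trigger       # stamp patch in a corner
            y = torch.full((x.size(0),), target)
            loss = F.cross_entropy(global_model(xt), y)
            opt.zero_grad(); loss.backward(); opt.step()
            with torch.no_grad():
                trigger.clamp_(0.0, 1.0)           # keep pixels in valid range
    return trigger.detach()
```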
- FedGrad: Mitigating Backdoor Attacks in Federated Learning Through Local Ultimate Gradients Inspection [3.3711670942444014]
Federated learning (FL) enables multiple clients to train a model without compromising sensitive data.
The decentralized nature of FL makes it susceptible to adversarial attacks, especially backdoor insertion during training.
We propose FedGrad, a backdoor-resistant defense for FL that withstands cutting-edge backdoor attacks.
arXiv Detail & Related papers (2023-04-29T19:31:44Z)
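A hypothetical reading of "ultimate gradients inspection": compare each client's final-layer update with the majority direction and drop outliers. FedGrad's real detector is more elaborate; this shows only the inspection idea.

```python
import torch
import torch.nn.functional as F

def filter_by_last_layer(client_deltas, sim_threshold=0.0):
    """Keep only clients whose final-layer update roughly agrees
    (positive cosine similarity) with the average update direction."""
    stacked = torch.stack([d.flatten() for d in client_deltas])
    mean_dir = stacked.mean(dim=0)
    cos = F.cosine_similarity(stacked, mean_dir[None], dim=1)
    return [i for i, c in enumerate(cos) if c > sim_threshold]
```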
- Backdoor Attacks and Defenses in Federated Learning: Survey, Challenges and Future Research Directions [3.6086478979425998]
Federated learning (FL) is a machine learning (ML) approach that allows the use of distributed data without compromising personal privacy.
The heterogeneous distribution of data among clients in FL can make it difficult for the orchestration server to validate the integrity of local model updates.
Backdoor attacks involve the insertion of malicious functionality into a targeted model through poisoned updates from malicious clients.
arXiv Detail & Related papers (2023-03-03T20:54:28Z)
- Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks [53.81129518924231]
We conduct the first study of backdoor attacks in the personalized federated learning (pFL) framework.
We show that pFL methods with partial model-sharing can significantly boost robustness against backdoor attacks.
We propose a lightweight defense method, Simple-Tuning, which empirically improves defense performance against backdoor attacks.
arXiv Detail & Related papers (2023-02-03T11:58:14Z)
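A hedged sketch of a Simple-Tuning-style defense: re-initialize the classifier head and retrain only that head on the client's own clean data, discarding a potentially backdoored feature-to-label mapping. The `.fc` attribute and hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def simple_tuning(model, clean_loader, epochs=3, lr=1e-3):
    """Re-initialize the classifier head and retrain it alone on the
    client's clean local data; the feature extractor stays frozen."""
    model.fc.reset_parameters()                  # assumes a `.fc` linear head
    for p in model.parameters():
        p.requires_grad = False
    for p in model.fc.parameters():
        p.requires_grad = True
    opt = torch.optim.Adam(model.fc.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in clean_loader:
            loss = F.cross_entropy(model(x), y)
            opt.zero_grad(); loss.backward(); opt.step()
    return model
```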
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients). We propose a trigger reverse engineering based defense and show that our method achieves improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
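FLIP's hardening builds on trigger reverse engineering; the sketch below shows the generic Neural-Cleanse-style synthesis step (not FLIP's full defense): optimize a small mask and pattern that flip any input to a target label, with an L1 penalty keeping the mask sparse.

```python
import torch
import torch.nn.functional as F

def reverse_engineer_trigger(model, loader, target, shape, steps=200, lam=1e-2):
    """Find a sparse mask and pattern (for inputs of `shape` = (C, H, W))
    that push arbitrary inputs toward the `target` class."""
    mask = torch.zeros(shape[1:], requires_grad=True)      # (H, W)
    pattern = torch.zeros(shape, requires_grad=True)       # (C, H, W)
    opt = torch.optim.Adam([mask, pattern], lr=0.1)
    for _ in range(steps):
        for x, _ in loader:
            m = torch.sigmoid(mask)                        # mask in [0, 1]
            xt = (1 - m) * x + m * torch.sigmoid(pattern)  # blend trigger in
            y = torch.full((x.size(0),), target)
            loss = F.cross_entropy(model(xt), y) + lam * m.sum()
            opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(mask).detach(), torch.sigmoid(pattern).detach()
```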
- Defending against Backdoors in Federated Learning with Robust Learning Rate [25.74681620689152]
Federated learning (FL) allows a set of agents to collaboratively train a model without sharing their potentially sensitive data.
In a backdoor attack, an adversary tries to embed a backdoor functionality into the model during training that can later be activated to cause a desired misclassification.
We propose a lightweight defense that requires minimal change to the FL protocol.
arXiv Detail & Related papers (2020-07-07T23:38:35Z)
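The lightweight defense here is the robust learning rate: per parameter dimension, the server flips the sign of the learning rate when too few clients agree on the update's sign. A minimal sketch, assuming flattened parameter vectors and an agreement threshold `theta`:

```python
import torch

def robust_lr_aggregate(global_params, client_deltas, lr=1.0, theta=4):
    """Robust-learning-rate aggregation sketch: for each parameter dimension,
    use +lr when at least `theta` clients (net) agree on the update's sign,
    and -lr otherwise, pushing disputed dimensions away from the update."""
    stacked = torch.stack(client_deltas)                   # (num_clients, dim)
    sign_agreement = stacked.sign().sum(dim=0).abs()       # per-dim agreement
    lr_vec = torch.where(sign_agreement >= theta,
                         torch.tensor(lr), torch.tensor(-lr))
    return global_params + lr_vec * stacked.mean(dim=0)
```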