FedUP: Efficient Pruning-based Federated Unlearning for Model Poisoning Attacks
- URL: http://arxiv.org/abs/2508.13853v1
- Date: Tue, 19 Aug 2025 14:14:58 GMT
- Title: FedUP: Efficient Pruning-based Federated Unlearning for Model Poisoning Attacks
- Authors: Nicolò Romandini, Cristian Borcea, Rebecca Montanari, Luca Foschini
- Abstract summary: Federated Unlearning (FU) is emerging as a solution to address such vulnerabilities. This work presents FedUP, a lightweight FU algorithm designed to efficiently mitigate malicious clients' influence. In all scenarios, FedUP reduces malicious influence, lowering accuracy on malicious data to match that of a model retrained from scratch.
- Score: 4.146693948527548
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) can be vulnerable to attacks, such as model poisoning, where adversaries send malicious local weights to compromise the global model. Federated Unlearning (FU) is emerging as a solution to address such vulnerabilities by selectively removing the influence of detected malicious contributors on the global model without complete retraining. However, unlike typical FU scenarios where clients are trusted and cooperative, applying FU with malicious and possibly colluding clients is challenging because their collaboration in unlearning their data cannot be assumed. This work presents FedUP, a lightweight FU algorithm designed to efficiently mitigate malicious clients' influence by pruning specific connections within the attacked model. Our approach achieves efficiency by relying only on clients' weights from the last training round before unlearning to identify which connections to inhibit. Isolating malicious influence is non-trivial due to overlapping updates from benign and malicious clients. FedUP addresses this by carefully selecting and zeroing the highest magnitude weights that diverge the most between the latest updates from benign and malicious clients while preserving benign information. FedUP is evaluated under a strong adversarial threat model, where up to 50%-1 of the clients could be malicious and have full knowledge of the aggregation process. We demonstrate the effectiveness, robustness, and efficiency of our solution through experiments across IID and Non-IID data, under label-flipping and backdoor attacks, and by comparing it with state-of-the-art (SOTA) FU solutions. In all scenarios, FedUP reduces malicious influence, lowering accuracy on malicious data to match that of a model retrained from scratch while preserving performance on benign data. FedUP achieves effective unlearning while consistently being faster and saving storage compared to the SOTA.
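The abstract describes the unlearning step concretely enough to sketch: average the latest benign and malicious weights, rank connections by how strongly the two diverge (scaled by weight magnitude), and zero the top fraction. Below is a minimal, illustrative Python sketch of that idea, not the authors' implementation; the pruning ratio `p` and the flattened-weight convention are assumptions.

```python
import numpy as np

def fedup_prune(global_w, benign_ws, malicious_ws, p=0.05):
    """Illustrative sketch of FedUP-style pruning (not the authors' code).

    global_w:     flattened global model weights, shape (d,)
    benign_ws:    flattened benign client weights from the last round
    malicious_ws: flattened weights from detected malicious clients
    p:            hypothetical fraction of weights to zero out
    """
    benign_mean = np.mean(benign_ws, axis=0)
    malicious_mean = np.mean(malicious_ws, axis=0)

    # Rank weights by benign/malicious divergence, scaled by magnitude,
    # so high-impact suspected connections rank first.
    divergence = np.abs(benign_mean - malicious_mean)
    score = divergence * np.abs(global_w)

    k = int(p * global_w.size)
    prune_idx = np.argsort(score)[-k:]   # top-k most suspicious weights

    pruned = global_w.copy()
    pruned[prune_idx] = 0.0              # inhibit suspected connections
    return pruned
```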
Related papers
- ProtegoFed: Backdoor-Free Federated Instruction Tuning with Interspersed Poisoned Data [50.142067708131826]
Federated Instruction Tuning (FIT) enables collaborative instruction tuning of large language models across multiple organizations (clients) in a cross-silo setting without requiring the sharing of private instructions. Recent findings suggest that poisoned samples may be pervasive and inadvertently embedded in real-world datasets, potentially distributed across all clients, even if the clients are benign. This paper introduces ProtegoFed, the first backdoor-free FIT framework that accurately detects and purifies poisoned data interspersed across clients during training.
arXiv Detail & Related papers (2026-02-28T07:25:32Z) - Robust Federated Learning Against Poisoning Attacks: A GAN-Based Defense Framework [0.6554326244334868]
Federated Learning (FL) enables collaborative model training across decentralized devices without sharing raw data.<n>We propose a privacy-preserving defense framework that leverages a Conditional Generative Adversarial Network (cGAN) to generate synthetic data at the server for authenticating client updates.
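The summary names the mechanism (server-generated synthetic data used to authenticate client updates) without details. A rough sketch under stated assumptions: the labeled synthetic samples are assumed to come from an already-trained cGAN, `eval_loss` is a hypothetical scoring function, and updates whose synthetic-data loss deviates sharply from the majority are rejected.

```python
import numpy as np

def authenticate_updates(updates, eval_loss, synthetic_data, tol=3.0):
    """Sketch of synthetic-data update vetting (illustrative only).

    updates:        candidate client model updates
    eval_loss:      callable (update, data) -> scalar loss on the data
    synthetic_data: labeled samples from a server-side cGAN (assumed given)
    tol:            hypothetical robust z-score threshold
    """
    losses = np.array([eval_loss(u, synthetic_data) for u in updates])
    med = np.median(losses)
    mad = np.median(np.abs(losses - med)) + 1e-12  # robust spread estimate
    z = np.abs(losses - med) / mad
    # Keep only updates whose synthetic-data loss matches the majority.
    return [u for u, zi in zip(updates, z) if zi <= tol]
```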
arXiv Detail & Related papers (2025-03-26T18:00:56Z) - Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense [3.685395311534351]
Federated Learning (FL) is a distributed machine learning paradigm that enables multiple clients to collaboratively train a global model without sharing their private local data.
FL systems are vulnerable to attacks launched by malicious clients through data poisoning and model poisoning.
Existing defense methods typically focus on mitigating specific types of poisoning and are often ineffective against unseen attack types.
arXiv Detail & Related papers (2024-08-05T20:27:45Z) - Fed-Credit: Robust Federated Learning with Credibility Management [18.349127735378048]
Federated Learning (FL) is an emerging machine learning approach enabling model training on decentralized devices or data sources.
We propose a robust FL approach based on the credibility management scheme, called Fed-Credit.
The results exhibit superior accuracy and resilience against adversarial attacks, all while maintaining comparatively low computational complexity.
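The abstract does not specify Fed-Credit's credibility rule, so the following is only a generic illustration of credibility-weighted aggregation: credibility drifts toward clients whose weights stay close to a robust reference, and aggregation weights are normalized credibilities. The distance measure and update rate `lr` are assumptions.

```python
import numpy as np

def credibility_aggregate(client_ws, cred, lr=0.1):
    """Illustrative credibility-weighted aggregation (not Fed-Credit's rule).

    client_ws: list of flattened client weight vectors
    cred:      running credibility score per client, shape (n,)
    lr:        hypothetical credibility update rate
    """
    W = np.stack(client_ws)
    ref = np.median(W, axis=0)                    # robust reference model
    # Clients closer to the robust reference gain credibility.
    dist = np.linalg.norm(W - ref, axis=1)
    cred = (1 - lr) * cred + lr * (1.0 / (1.0 + dist))
    weights = cred / cred.sum()
    return weights @ W, cred                      # weighted global model
```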
arXiv Detail & Related papers (2024-05-20T03:35:13Z) - Precision Guided Approach to Mitigate Data Poisoning Attacks in Federated Learning [4.907460152017894]
Federated Learning (FL) is a collaborative learning paradigm enabling participants to collectively train a shared machine learning model.
Current FL defense strategies against data poisoning attacks involve a trade-off between accuracy and robustness.
We present FedZZ, which harnesses a zone-based deviating update (ZBDU) mechanism to effectively counter data poisoning attacks in FL.
arXiv Detail & Related papers (2024-04-05T14:37:49Z) - Resilience in Online Federated Learning: Mitigating Model-Poisoning Attacks via Partial Sharing [6.957420925496431]
Federated learning (FL) allows training machine learning models on distributed data without compromising privacy.
FL is vulnerable to model-poisoning attacks where malicious clients tamper with their local models to manipulate the global model.
In this work, we investigate the resilience of the partial-sharing online FL (PSO-Fed) algorithm against such attacks.
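Partial-sharing online FL can be illustrated in a few lines: each round a client transmits only a random fraction of its parameters, and the server blends the shared entries into the global model. This sketch assumes a uniform random mask and a hypothetical blending step `step`; PSO-Fed's actual entry scheduling may differ.

```python
import numpy as np

def partial_share(local_w, frac=0.25, rng=None):
    """Client side: transmit only a random fraction of the parameters
    (illustrative of the partial-sharing idea, not PSO-Fed's exact scheme)."""
    rng = rng or np.random.default_rng()
    mask = rng.random(local_w.size) < frac   # which entries to share
    return mask, local_w[mask]

def merge_partial(global_w, mask, shared_vals, step=0.5):
    """Server side: blend the shared entries into the global model."""
    updated = global_w.copy()
    updated[mask] = (1 - step) * updated[mask] + step * shared_vals
    return updated
```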
arXiv Detail & Related papers (2024-03-19T19:15:38Z) - Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z) - Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a global shared model.
FL suffers from the cross-client generative adversarial network (GAN)-based attack (C-GANs attack).
We propose Fed-EDKD technique to improve the current popular FL schemes to resist C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z) - FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z) - G$^2$uardFL: Safeguarding Federated Learning Against Backdoor Attacks through Attributed Client Graph Clustering [116.4277292854053]
Federated Learning (FL) offers collaborative model training without data sharing.
FL is vulnerable to backdoor attacks, where poisoned model weights lead to compromised system integrity.
We present G$^2$uardFL, a protective framework that reinterprets the identification of malicious clients as an attributed graph clustering problem.
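A toy version of that framing: build a cosine-similarity graph over flattened client updates and take a two-way spectral cut. G$^2$uardFL's attributed clustering is more involved; this sketch only illustrates the idea.

```python
import numpy as np

def cluster_clients(updates):
    """Toy graph clustering of client updates (illustrative only).

    updates: list of flattened client update vectors
    Returns a boolean cluster label per client via a two-way spectral cut.
    """
    U = np.stack(updates)
    norms = np.linalg.norm(U, axis=1, keepdims=True) + 1e-12
    S = (U / norms) @ (U / norms).T   # cosine-similarity graph
    D = np.diag(S.sum(axis=1))
    L = D - S                         # graph Laplacian
    # The sign of the Fiedler vector partitions the client graph in two.
    _, vecs = np.linalg.eigh(L)
    return vecs[:, 1] > 0
```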
arXiv Detail & Related papers (2023-06-08T07:15:04Z) - FedCC: Robust Federated Learning against Model Poisoning Attacks [0.0]
Federated learning is a distributed framework designed to address privacy concerns. It introduces new attack surfaces, which are especially exposed when data is non-Independently and Identically Distributed (non-IID). We present FedCC, a simple yet effective novel defense algorithm against model poisoning attacks.
arXiv Detail & Related papers (2022-12-05T01:52:32Z) - FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method can achieve robustness improvements with guarantees.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z) - Robust Quantity-Aware Aggregation for Federated Learning [72.59915691824624]
Malicious clients can poison model updates and claim large quantities to amplify the impact of their model updates in the model aggregation.
Existing defense methods for FL, while all handling malicious model updates, either treat all quantities as benign or simply ignore/truncate the quantities of all clients.
We propose a robust quantity-aware aggregation algorithm for federated learning, called FedRA, to perform the aggregation with awareness of local data quantities.
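The idea of quantity-aware aggregation can be sketched directly: cap each client's claimed data quantity at a robust quantile before computing aggregation weights, so over-reporting cannot amplify a poisoned update. The quantile cap is an assumption, not FedRA's actual rule.

```python
import numpy as np

def quantity_aware_aggregate(client_ws, claimed_n, cap_quantile=0.75):
    """Sketch of quantity-aware aggregation (not FedRA's exact algorithm).

    client_ws: list of flattened client weight vectors
    claimed_n: claimed local data quantities, shape (n,)
    """
    W = np.stack(client_ws)
    cap = np.quantile(claimed_n, cap_quantile)
    n = np.minimum(claimed_n, cap)   # distrust unusually large claims
    weights = n / n.sum()
    return weights @ W
```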
arXiv Detail & Related papers (2022-05-22T15:13:23Z) - Performance Weighting for Robust Federated Learning Against Corrupted Sources [1.76179873429447]
Federated learning has emerged as a dominant computational paradigm for distributed machine learning.
In real-world applications, a federated environment may consist of a mixture of benevolent and malicious clients.
We show that the standard global aggregation scheme of local weights is inefficient in the presence of corrupted clients.
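A minimal sketch of performance weighting, assuming the server holds a small validation set: each local model's aggregation weight is made proportional to its measured validation accuracy, down-weighting corrupted sources. This is an illustration, not the paper's exact scheme.

```python
import numpy as np

def performance_weighted_aggregate(client_ws, val_acc):
    """Illustrative performance-weighted aggregation (assumed scheme).

    client_ws: list of flattened local model weight vectors
    val_acc:   accuracy of each local model on a server-held
               validation set, shape (n,)
    """
    W = np.stack(client_ws)
    acc = np.asarray(val_acc, dtype=float)
    weights = acc / acc.sum()   # higher-performing clients count more
    return weights @ W
```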
arXiv Detail & Related papers (2022-05-02T20:01:44Z)