FedCARE: Federated Unlearning with Conflict-Aware Projection and Relearning-Resistant Recovery
- URL: http://arxiv.org/abs/2601.22589v1
- Date: Fri, 30 Jan 2026 05:36:31 GMT
- Title: FedCARE: Federated Unlearning with Conflict-Aware Projection and Relearning-Resistant Recovery
- Authors: Yue Li, Mingmin Chu, Xilei Yang, Da Xiao, Ziqi Xu, Wei Shao, Qipeng Song, Hui Li
- Abstract summary: Federated learning (FL) enables collaborative model training without centralizing raw data, but privacy regulations such as the right to be forgotten require FL systems to remove the influence of previously used training data upon request. We propose FedCARE, a unified and low-overhead FU framework that enables conflict-aware unlearning and relearning-resistant recovery.
- Score: 7.9641700582177934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) enables collaborative model training without centralizing raw data, but privacy regulations such as the right to be forgotten require FL systems to remove the influence of previously used training data upon request. Retraining a federated model from scratch is prohibitively expensive, motivating federated unlearning (FU). However, existing FU methods suffer from high unlearning overhead, utility degradation caused by entangled knowledge, and unintended relearning during post-unlearning recovery. In this paper, we propose FedCARE, a unified and low-overhead FU framework that enables conflict-aware unlearning and relearning-resistant recovery. FedCARE leverages gradient ascent for efficient forgetting when target data are locally available and employs data-free model inversion to construct class-level proxies of shared knowledge. Based on these insights, FedCARE integrates a pseudo-sample generator, conflict-aware projected gradient ascent for utility-preserving unlearning, and a recovery strategy that suppresses rollback toward the pre-unlearning model. FedCARE supports client-, instance-, and class-level unlearning with modest overhead. Extensive experiments on multiple datasets and model architectures under both IID and non-IID settings show that FedCARE achieves effective forgetting, improved utility retention, and reduced relearning risk compared to state-of-the-art FU baselines.
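The abstract's core mechanism, conflict-aware projected gradient ascent, can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not FedCARE's actual update rule: it uses a PCGrad-style projection in which the forget-data ascent direction is made orthogonal to the retain-data gradient whenever the two conflict, so that erasing the target data does not degrade retained knowledge. The function name and the exact projection criterion are hypothetical.

```python
import numpy as np

def conflict_aware_ascent_step(w, g_forget, g_retain, lr=0.1):
    """One hypothetical unlearning step: ascend on the forget-data
    gradient, projecting out the component that conflicts with the
    retain-data gradient (a PCGrad-style sketch, not FedCARE itself)."""
    d = g_forget.astype(float).copy()  # ascent direction on forget loss
    dot = d @ g_retain
    if dot > 0:
        # Moving along d would also raise the retain-data loss
        # (positive inner product), so remove the conflicting component;
        # the resulting step is orthogonal to the retain gradient.
        d -= dot / (g_retain @ g_retain) * g_retain
    return w + lr * d  # gradient *ascent* on the forget objective

# Example: the forget gradient [1, 1] conflicts with the retain
# gradient [1, 0]; after projection only the orthogonal part remains.
w_new = conflict_aware_ascent_step(
    np.zeros(2), np.array([1.0, 1.0]), np.array([1.0, 0.0])
)
print(w_new)  # → [0.  0.1]
```

When no conflict exists (non-positive inner product), the step reduces to plain gradient ascent on the forget data, matching the abstract's note that gradient ascent alone suffices when forgetting does not entangle with retained knowledge.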
Related papers
- Deep Leakage with Generative Flow Matching Denoiser [54.05993847488204]
We introduce a new deep leakage (DL) attack that integrates a generative Flow Matching (FM) prior into the reconstruction process. Our approach consistently outperforms state-of-the-art attacks across pixel-level, perceptual, and feature-based similarity metrics.
arXiv Detail & Related papers (2026-01-21T14:51:01Z) - ToFU: Transforming How Federated Learning Systems Forget User Data [3.143298944776905]
Neural networks unintentionally memorize training data, creating privacy risks in federated learning (FL) systems. We propose a learning-to-unlearn Transformation-guided Federated Unlearning (ToFU) framework that incorporates transformations during the learning process to reduce memorization of specific instances. ToFU can work as a plug-and-play framework that improves the performance of existing Federated Unlearning methods.
arXiv Detail & Related papers (2025-09-19T10:54:25Z) - Sky of Unlearning (SoUL): Rewiring Federated Machine Unlearning via Selective Pruning [1.6818869309123574]
Federated learning (FL) enables drones to train machine learning models in a decentralized manner while preserving data privacy. Federated unlearning (FU) mitigates these risks by eliminating adversarial data contributions. This paper proposes sky of unlearning (SoUL), a federated unlearning framework that efficiently removes the influence of unlearned data while maintaining model performance.
arXiv Detail & Related papers (2025-04-02T13:07:30Z) - Federated Unlearning with Gradient Descent and Conflict Mitigation [11.263010875673492]
Federated Unlearning (FU) has been considered a promising way to remove data without full retraining. We propose Federated Unlearning with Orthogonal Steepest Descent (FedOSD).
arXiv Detail & Related papers (2024-12-28T16:23:10Z) - Streamlined Federated Unlearning: Unite as One to Be Highly Efficient [12.467630082668254]
Recently, the enactment of "right to be forgotten" laws and regulations has imposed new privacy requirements on federated learning (FL). We propose a streamlined federated unlearning approach (SFU) aimed at effectively removing the influence of the target data while preserving the model performance on retained data without degradation.
arXiv Detail & Related papers (2024-11-28T12:52:48Z) - FedQUIT: On-Device Federated Unlearning via a Quasi-Competent Virtual Teacher [8.592348476398149]
Federated Learning (FL) systems enable the collaborative training of machine learning models without requiring centralized collection of individual data. We propose FedQUIT, a novel algorithm that uses knowledge distillation to scrub the contribution of the data to forget from an FL global model.
arXiv Detail & Related papers (2024-08-14T14:36:28Z) - Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z) - Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z) - Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
Combination of adversarial training and federated learning can lead to the undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z) - Toward Smart Security Enhancement of Federated Learning Networks [109.20054130698797]
In this paper, we review the vulnerabilities of federated learning networks (FLNs) and give an overview of poisoning attacks.
We present a smart security enhancement framework for FLNs.
Deep reinforcement learning is applied to learn the behaving patterns of the edge devices (EDs) that can provide benign training results.
arXiv Detail & Related papers (2020-08-19T08:46:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.