ToFU: Transforming How Federated Learning Systems Forget User Data
- URL: http://arxiv.org/abs/2509.15861v1
- Date: Fri, 19 Sep 2025 10:54:25 GMT
- Title: ToFU: Transforming How Federated Learning Systems Forget User Data
- Authors: Van-Tuan Tran, Hong-Hanh Nguyen-Le, Quoc-Viet Pham,
- Abstract summary: Neural networks unintentionally memorize training data, creating privacy risks in federated learning (FL) systems. We propose a learning-to-unlearn Transformation-guided Federated Unlearning (ToFU) framework that incorporates transformations during the learning process to reduce memorization of specific instances. ToFU can work as a plug-and-play framework that improves the performance of existing Federated Unlearning methods.
- Score: 3.143298944776905
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks unintentionally memorize training data, creating privacy risks in federated learning (FL) systems, such as inference and reconstruction attacks on sensitive data. To mitigate these risks and to comply with privacy regulations, Federated Unlearning (FU) has been introduced to enable participants in FL systems to remove their data's influence from the global model. However, current FU methods primarily act post-hoc, struggling to efficiently erase information deeply memorized by neural networks. We argue that effective unlearning necessitates a paradigm shift: designing FL systems inherently amenable to forgetting. To this end, we propose a learning-to-unlearn Transformation-guided Federated Unlearning (ToFU) framework that incorporates transformations during the learning process to reduce memorization of specific instances. Our theoretical analysis reveals how transformation composition provably bounds instance-specific information, directly simplifying subsequent unlearning. Crucially, ToFU can work as a plug-and-play framework that improves the performance of existing FU methods. Experiments on CIFAR-10, CIFAR-100, and the MUFAC benchmark show that ToFU outperforms existing FU baselines, enhances performance when integrated with current methods, and reduces unlearning time.
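The abstract's core idea, training on randomly composed transformations of each instance so the global model memorizes less instance-specific information, can be illustrated with a minimal sketch. The paper's actual algorithm is not reproduced in this listing; the flip/shift transforms, the linear softmax model, and the single FedAvg round below are all illustrative assumptions, not ToFU's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_flip(x, rng):
    """Horizontal flip with probability 0.5 (x: H x W image)."""
    return x[:, ::-1].copy() if rng.random() < 0.5 else x

def random_shift(x, rng, max_shift=2):
    """Random circular shift of up to max_shift pixels per axis."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(x, (int(dy), int(dx)), axis=(0, 1))

def compose(x, rng):
    """Sample and apply a transformation composition to one instance."""
    return random_shift(random_flip(x, rng), rng)

def local_update(w, images, labels, rng, lr=0.1):
    """One local FL step on transformed instances.

    w: (D, C) weights of a linear softmax model;
    images: list of H x W arrays; labels: int array of class ids.
    Each instance is re-transformed before the gradient is computed,
    so the client never fits the raw pixels directly.
    """
    X = np.stack([compose(img, rng).ravel() for img in images])  # (B, D)
    logits = X @ w                                               # (B, C)
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(labels)), labels] -= 1.0                     # dL/dlogits
    grad = X.T @ p / len(labels)
    return w - lr * grad

# Toy round: 4 clients each take one step on transformed data;
# the server averages the resulting weights (FedAvg-style).
H = W = 8; C = 3; D = H * W
w_global = np.zeros((D, C))
client_ws = []
for _ in range(4):
    images = [rng.standard_normal((H, W)) for _ in range(16)]
    labels = rng.integers(0, C, size=16)
    client_ws.append(local_update(w_global, images, labels, rng))
w_global = np.mean(client_ws, axis=0)
```

Because each gradient step sees a freshly transformed view rather than the raw instance, the model's dependence on any single example is weakened, which is the property the paper's theoretical analysis formalizes as a bound on instance-specific information.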
Related papers
- FedCARE: Federated Unlearning with Conflict-Aware Projection and Relearning-Resistant Recovery [7.9641700582177934]
Federated learning (FL) enables collaborative model training without centralizing raw data, but privacy regulations such as the right to be forgotten require FL systems to remove the influence of previously used training data upon request. We propose FedCARE, a unified and low-overhead FU framework that enables conflict-aware unlearning and relearning-resistant recovery.
arXiv Detail & Related papers (2026-01-30T05:36:31Z) - DRAUN: An Algorithm-Agnostic Data Reconstruction Attack on Federated Unlearning Systems [6.792248470703829]
Federated Unlearning (FU) enables clients to remove the influence of specific data from a collaboratively trained global model. A malicious server may exploit unlearning updates to reconstruct the data requested for removal. This work presents DRAUN, the first framework for reconstruction attacks on unlearned data in FU systems.
arXiv Detail & Related papers (2025-06-02T15:20:54Z) - Sky of Unlearning (SoUL): Rewiring Federated Machine Unlearning via Selective Pruning [1.6818869309123574]
Federated learning (FL) enables drones to train machine learning models in a decentralized manner while preserving data privacy. Federated unlearning (FU) mitigates the risks posed by adversarial data contributions by eliminating them. This paper proposes sky of unlearning (SoUL), a federated unlearning framework that efficiently removes the influence of unlearned data while maintaining model performance.
arXiv Detail & Related papers (2025-04-02T13:07:30Z) - Accurate Forgetting for Heterogeneous Federated Continual Learning [89.08735771893608]
We propose a new concept, accurate forgetting (AF), and develop a novel generative-replay method which selectively utilizes previous knowledge in federated networks. We employ a probabilistic framework based on a normalizing flow model to quantify the credibility of previous knowledge.
arXiv Detail & Related papers (2025-02-20T02:35:17Z) - Streamlined Federated Unlearning: Unite as One to Be Highly Efficient [12.467630082668254]
Recently, the enactment of "right to be forgotten" laws and regulations has imposed new privacy requirements on federated learning (FL). We propose a streamlined federated unlearning approach (SFU) aimed at effectively removing the influence of the target data while preserving the model performance on retained data without degradation.
arXiv Detail & Related papers (2024-11-28T12:52:48Z) - R-SFLLM: Jamming Resilient Framework for Split Federated Learning with Large Language Models [65.04475956174959]
Split federated learning (SFL) is a compute-efficient paradigm in distributed machine learning (ML). A significant challenge in SFL, particularly when deployed over wireless channels, is the susceptibility of transmitted model parameters to adversarial jamming. This paper develops a physical layer framework for resilient SFL with large language models (LLMs) and vision language models (VLMs) over wireless networks.
arXiv Detail & Related papers (2024-07-16T12:21:29Z) - Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models [79.28821338925947]
Domain-Class Incremental Learning is a realistic but challenging continual learning scenario.
To handle these diverse tasks, pre-trained Vision-Language Models (VLMs) are introduced for their strong generalizability.
This incurs a new problem: the knowledge encoded in the pre-trained VLMs may be disturbed when adapting to new tasks, compromising their inherent zero-shot ability.
Existing methods tackle it by tuning VLMs with knowledge distillation on extra datasets, which demands heavy overhead.
We propose the Distribution-aware Interference-free Knowledge Integration (DIKI) framework, retaining the pre-trained knowledge of VLMs.
arXiv Detail & Related papers (2024-07-07T12:19:37Z) - Personalized Wireless Federated Learning for Large Language Models [75.22457544349668]
Large language models (LLMs) have driven profound transformations in wireless networks. Within wireless environments, the training of LLMs faces significant challenges related to security and privacy. This paper presents a systematic analysis of the training stages of LLMs in wireless networks, including pre-training, instruction tuning, and alignment tuning.
arXiv Detail & Related papers (2024-04-20T02:30:21Z) - Forgettable Federated Linear Learning with Certified Data Unlearning [34.532114070245576]
Federated Unlearning (FU) has emerged to address demands for the "right to be forgotten" and unlearning of the impact of poisoned clients without requiring retraining in FL.
Most FU algorithms require the cooperation of retained or target clients (the clients to be unlearned), incurring additional communication overhead and potential security risks.
We present FedRemoval, a certified, efficient, and secure unlearning strategy that enables the server to unlearn a target client without requiring client communication or adding additional storage.
arXiv Detail & Related papers (2023-06-03T23:53:57Z) - Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z) - SIFU: Sequential Informed Federated Unlearning for Efficient and Provable Client Unlearning in Federated Optimization [23.064896326146386]
Machine Unlearning (MU) aims at removing the contribution of a given data point from a training procedure.
While several Federated Unlearning (FU) methods have been proposed, we propose SIFU (Sequential Informed Federated Unlearning) as a new method.
arXiv Detail & Related papers (2022-11-21T17:15:46Z) - Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.