DRAUN: An Algorithm-Agnostic Data Reconstruction Attack on Federated Unlearning Systems
- URL: http://arxiv.org/abs/2506.01777v1
- Date: Mon, 02 Jun 2025 15:20:54 GMT
- Title: DRAUN: An Algorithm-Agnostic Data Reconstruction Attack on Federated Unlearning Systems
- Authors: Hithem Lamri, Manaar Alam, Haiyan Jiang, Michail Maniatakos
- Abstract summary: Federated Unlearning (FU) enables clients to remove the influence of specific data from a collaboratively trained global model. A malicious server may exploit unlearning updates to reconstruct the data requested for removal. This work presents DRAUN, the first attack framework to reconstruct unlearned data in FU systems.
- Score: 6.792248470703829
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Unlearning (FU) enables clients to remove the influence of specific data from a collaboratively trained shared global model, addressing regulatory requirements such as GDPR and CCPA. However, this unlearning process introduces a new privacy risk: A malicious server may exploit unlearning updates to reconstruct the data requested for removal, a form of Data Reconstruction Attack (DRA). While DRAs for machine unlearning have been studied extensively in centralized Machine Learning-as-a-Service (MLaaS) settings, their applicability to FU remains unclear due to the decentralized, client-driven nature of FU. This work presents DRAUN, the first attack framework to reconstruct unlearned data in FU systems. DRAUN targets optimization-based unlearning methods, which are widely adopted for their efficiency. We theoretically demonstrate why existing DRAs targeting machine unlearning in MLaaS fail in FU and show how DRAUN overcomes these limitations. We validate our approach through extensive experiments on four datasets and four model architectures, evaluating its performance against five popular unlearning methods, effectively demonstrating that state-of-the-art FU methods remain vulnerable to DRAs.
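DRAUN's own procedure is not reproduced here; as a rough illustration of the attack family it belongs to, the sketch below shows a generic gradient-matching reconstruction in the style of DLG-type inversion attacks, not DRAUN itself. The toy model, the assumption that the server can derive a per-sample gradient from the unlearning update, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for the global model the server holds.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
params = list(model.parameters())
loss_fn = nn.CrossEntropyLoss()

def reconstruct(observed_grads, label, steps=200, lr=0.1):
    """Optimize a dummy input so its gradient matches the observed one."""
    dummy = torch.randn(1, 1, 28, 28, requires_grad=True)
    opt = torch.optim.Adam([dummy], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(dummy), torch.tensor([label]))
        grads = torch.autograd.grad(loss, params, create_graph=True)
        # L2 gradient-matching objective between dummy and observed gradients.
        match = sum(((g - og) ** 2).sum() for g, og in zip(grads, observed_grads))
        match.backward()
        opt.step()
    return dummy.detach()

# Simulate the signal an attacker hopes to derive from an unlearning request
# on a single forgotten sample (label assumed known or separately inferred).
x_true, y_true = torch.randn(1, 1, 28, 28), 3
loss = loss_fn(model(x_true), torch.tensor([y_true]))
observed = torch.autograd.grad(loss, params)
x_rec = reconstruct(observed, y_true)
```

In a real FU deployment the server observes a model or parameter update rather than a clean per-sample gradient; extracting a usable gradient signal from client-driven unlearning updates is exactly the gap the abstract says DRAUN addresses.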
Related papers
- FedCARE: Federated Unlearning with Conflict-Aware Projection and Relearning-Resistant Recovery [7.9641700582177934]
Federated learning (FL) enables collaborative model training without centralizing raw data, but privacy regulations such as the right to be forgotten require FL systems to remove the influence of previously used training data upon request. We propose FedCARE, a unified, low-overhead FU framework that enables conflict-aware unlearning and relearning-resistant recovery.
arXiv Detail & Related papers (2026-01-30T05:36:31Z)
- Certified Unlearning in Decentralized Federated Learning [24.229643475639293]
In decentralized federated learning (DFL), clients exchange local updates only with neighbors, causing model information to propagate and mix across the network. We propose a novel certified unlearning framework for DFL based on Newton-style updates (a generic sketch of such an update follows this entry).
arXiv Detail & Related papers (2026-01-10T05:39:24Z)
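The paper's certified DFL procedure is not spelled out in this summary; as a hedged illustration of what a Newton-style unlearning step generally looks like, the sketch below applies an influence-function-style correction on a toy least-squares model. The explicit Hessian, the damping term, and the scaling are illustrative assumptions, not the paper's method.

```python
import torch

# Hedged illustration of a Newton-style (influence-function) unlearning step
# on a toy least-squares model; NOT the paper's certified DFL procedure.
torch.manual_seed(0)
n, m = 100, 10                              # total samples / samples to forget
X, y = torch.randn(n, 5), torch.randn(n)
w = torch.randn(5, requires_grad=True)      # assume w ~ the full-data minimizer

loss = ((X @ w - y) ** 2).mean()
grad = torch.autograd.grad(loss, w, create_graph=True)[0]
# Explicit Hessian, row by row (only feasible for tiny models).
H = torch.stack([torch.autograd.grad(grad[i], w, retain_graph=True)[0]
                 for i in range(5)])

# Mean gradient of the loss on the samples to be forgotten.
loss_f = ((X[:m] @ w - y[:m]) ** 2).mean()
g_f = torch.autograd.grad(loss_f, w)[0]

# If w minimizes the full loss, the retained-data gradient equals
# -(m / (n - m)) * g_f, so one damped Newton step toward the
# retained-data optimum is:
damping = 1e-3 * torch.eye(5)
w_unlearned = w.detach() + (m / (n - m)) * torch.linalg.solve(H.detach() + damping, g_f)
```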
- ToFU: Transforming How Federated Learning Systems Forget User Data [3.143298944776905]
Neural networks unintentionally memorize training data, creating privacy risks in federated learning (FL) systems. We propose a learning-to-unlearn Transformation-guided Federated Unlearning (ToFU) framework that incorporates transformations during the learning process to reduce memorization of specific instances. ToFU can work as a plug-and-play framework that improves the performance of existing federated unlearning methods.
arXiv Detail & Related papers (2025-09-19T10:54:25Z)
- DRAGD: A Federated Unlearning Data Reconstruction Attack Based on Gradient Differences [13.513041254208186]
Federated unlearning enables the removal of client data from federated models while preserving data privacy. The gradient exchanges during the unlearning process can leak sensitive information about deleted data. We introduce DRAGD, a novel attack that exploits gradient discrepancies before and after unlearning to reconstruct forgotten data (see the sketch after this entry). We also present DRAGDP, an enhanced version of DRAGD that leverages publicly available prior data to improve reconstruction accuracy.
arXiv Detail & Related papers (2025-07-13T12:16:43Z)
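As a hedged sketch of the gradient-difference idea (not DRAGD's actual pipeline): if an attacker can snapshot the model before and after an optimization-based unlearning step with a known or estimated step size, the parameter difference acts as a pseudo-gradient of the forgotten batch.

```python
import torch
import torch.nn as nn

# Hypothetical snapshots before/after one optimization-based unlearning round.
model_before, model_after = nn.Linear(784, 10), nn.Linear(784, 10)
lr = 0.1  # assumed known (or estimated) unlearning step size

# If unlearning ran one gradient-ascent step on the forgotten batch, then
# theta_after - theta_before ~= lr * grad(loss on forgotten data), so the
# parameter difference is a pseudo-gradient of the deleted samples.
pseudo_grad = [(pa - pb) / lr for pb, pa in
               zip(model_before.parameters(), model_after.parameters())]
# pseudo_grad can then replace `observed` in the gradient-matching
# reconstruction loop sketched after the abstract above.
```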
- RESTOR: Knowledge Recovery in Machine Unlearning [71.75834077528305]
Large language models trained on web-scale corpora can contain private or sensitive information. Several machine unlearning algorithms have been proposed to eliminate the effect of such datapoints. We propose the RESTOR framework for machine unlearning evaluation.
arXiv Detail & Related papers (2024-10-31T20:54:35Z)
- Verification of Machine Unlearning is Fragile [48.71651033308842]
We introduce two novel adversarial unlearning processes capable of circumventing both types of verification strategies.
This study highlights the vulnerabilities and limitations in machine unlearning verification, paving the way for further research into the safety of machine unlearning.
arXiv Detail & Related papers (2024-08-01T21:37:10Z)
- Learn while Unlearn: An Iterative Unlearning Framework for Generative Language Models [52.03511469562013]
We introduce the Iterative Contrastive Unlearning (ICU) framework, which consists of three core components. A Knowledge Unlearning Induction module targets specific knowledge for removal using an unlearning loss. A Contrastive Learning Enhancement module preserves the model's expressive capabilities against the pure unlearning goal. An Iterative Unlearning Refinement module dynamically adjusts the unlearning process through ongoing evaluation and updates (a simplified sketch of the first two components follows this entry).
arXiv Detail & Related papers (2024-07-25T07:09:35Z)
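A minimal sketch of the first two ideas: gradient ascent on the forget set combined with a preservation term on retained data. The plain retained-data loss here is a simplified stand-in for the paper's contrastive module, and the stopping threshold is an arbitrary assumption.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 4)                     # hypothetical stand-in model
ce = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.01)

x_forget, y_forget = torch.randn(8, 16), torch.randint(0, 4, (8,))
x_retain, y_retain = torch.randn(8, 16), torch.randint(0, 4, (8,))

for step in range(100):
    opt.zero_grad()
    # Unlearning induction: ascend the loss on the forget set
    # (negated, since the optimizer minimizes).
    loss_unlearn = -ce(model(x_forget), y_forget)
    # Preservation term on retained data: a simplified stand-in for the
    # paper's contrastive enhancement module.
    loss_retain = ce(model(x_retain), y_retain)
    (loss_unlearn + loss_retain).backward()
    opt.step()
    # Crude "iterative refinement": stop once the forget-set loss is high
    # enough (threshold is an arbitrary assumption).
    if ce(model(x_forget), y_forget).item() > 3.0:
        break
```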
- Silver Linings in the Shadows: Harnessing Membership Inference for Machine Unlearning [7.557226714828334]
We present a novel unlearning mechanism designed to remove the impact of specific data samples from a neural network.
To achieve this goal, we craft a loss function tailored to eliminate privacy-sensitive information from the weights and activation values of the target model.
Our results showcase the superior performance of our approach in terms of unlearning efficacy and latency as well as the fidelity of the primary task.
arXiv Detail & Related papers (2024-07-01T00:20:26Z)
- Privacy-Preserving Federated Unlearning with Certified Client Removal [18.36632825624581]
State-of-the-art methods for unlearning use historical data from FL clients, such as gradients or locally trained models.
We propose Starfish, a privacy-preserving federated unlearning scheme using Two-Party Computation (2PC) techniques and shared historical client data between two non-colluding servers.
arXiv Detail & Related papers (2024-04-15T12:27:07Z)
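Starfish's full 2PC protocol is not reproduced in this summary; the sketch below only illustrates the underlying additive-secret-sharing idea behind splitting historical client data between two non-colluding servers. Floating-point arithmetic stands in for the finite-field arithmetic a real 2PC deployment would use, where shares are information-theoretically hiding.

```python
import torch

# A client's historical update, to be hidden from each single server.
update = torch.randn(100)

# Additive secret sharing: a random mask and its complement. In a proper
# finite-field scheme, neither share alone reveals anything about `update`;
# their sum reconstructs it exactly.
share_a = torch.randn(100)        # sent to server A
share_b = update - share_a        # sent to server B

# Each server applies the same linear operation (here: scaling) to its
# share; recombining yields the result without exposing the plaintext.
result_a, result_b = 0.5 * share_a, 0.5 * share_b
assert torch.allclose(result_a + result_b, 0.5 * update, atol=1e-6)
```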
- Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearning [16.809644622465086]
We conduct the first investigation to understand the extent to which machine unlearning can leak the confidential content of unlearned data.
Under the Machine Learning as a Service setting, we propose unlearning inversion attacks that can reveal the feature and label information of an unlearned sample.
The experimental results indicate that the proposed attack can reveal the sensitive information of the unlearned data.
arXiv Detail & Related papers (2024-04-04T06:37:46Z)
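One well-known building block for recovering label information in this kind of inversion (the iDLG observation, shown here as background rather than as this paper's exact method): with softmax cross-entropy and a single sample, the gradient of the final-layer bias equals softmax(logits) - onehot(y), so its unique negative entry reveals the label.

```python
import torch
import torch.nn as nn

model = nn.Linear(32, 10)                  # hypothetical final layer
x, y = torch.randn(1, 32), torch.tensor([7])
nn.CrossEntropyLoss()(model(x), y).backward()

# d(loss)/d(bias) = softmax(logits) - onehot(y): only the true class entry
# is negative, so argmin of the bias gradient recovers the label.
inferred = model.bias.grad.argmin().item()
assert inferred == 7
```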
- The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z)
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
- Unlearn What You Want to Forget: Efficient Unlearning for LLMs [92.51670143929056]
Large language models (LLMs) have achieved significant progress from pre-training on and memorizing a wide range of textual data.
This process might suffer from privacy issues and violations of data protection regulations.
We propose an unlearning framework that can efficiently update LLMs without retraining the whole model after data removal.
arXiv Detail & Related papers (2023-10-31T03:35:59Z)
- A Duty to Forget, a Right to be Assured? Exposing Vulnerabilities in Machine Unlearning Services [31.347825826778276]
We explore the potential threats posed by unlearning services in Machine Learning (ML).
We propose two strategies that leverage over-unlearning to measure its impact on the trade-off between unlearning effectiveness and model utility.
Results indicate significant potential for both strategies to undermine model efficacy in unlearning scenarios.
arXiv Detail & Related papers (2023-09-15T08:00:45Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
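The RoFL entry above mentions secure aggregation; the sketch below illustrates the classic pairwise-masking construction (in the spirit of Bonawitz et al., heavily simplified, with no dropout or key-agreement handling): each pair of clients shares a random mask that one adds and the other subtracts, so all masks cancel in the aggregate.

```python
import torch

torch.manual_seed(0)
n_clients, dim = 4, 8
updates = [torch.randn(dim) for _ in range(n_clients)]

# One shared mask per client pair: client i adds it, client j subtracts it.
masks = {(i, j): torch.randn(dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

masked = []
for i in range(n_clients):
    m = updates[i].clone()
    for j in range(n_clients):
        if i < j:
            m += masks[(i, j)]
        elif j < i:
            m -= masks[(j, i)]
    masked.append(m)   # this is all the server ever sees from client i

# Every pairwise mask cancels, so the server learns only the aggregate.
assert torch.allclose(sum(masked), sum(updates), atol=1e-5)
```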