Model Inversion Attack against Federated Unlearning
- URL: http://arxiv.org/abs/2502.14558v3
- Date: Tue, 08 Apr 2025 15:54:55 GMT
- Title: Model Inversion Attack against Federated Unlearning
- Authors: Lei Zhou, Youwen Zhu
- Abstract summary: In this paper, we propose the federated unlearning inversion attack (FUIA). FUIA is specifically designed for the three types of FU (sample unlearning, client unlearning, and class unlearning). It significantly leaks the privacy of forgotten data and can target all types of FU.
- Score: 7.208310705506839
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the introduction of regulations related to the "right to be forgotten", federated learning (FL) is facing new privacy compliance challenges. To address these challenges, researchers have proposed federated unlearning (FU). However, existing FU research has primarily focused on improving the efficiency of unlearning, with less attention paid to the potential privacy vulnerabilities inherent in these methods. To address this gap, we draw inspiration from gradient inversion attacks in FL and propose the federated unlearning inversion attack (FUIA). The FUIA is specifically designed for the three types of FU (sample unlearning, client unlearning, and class unlearning), aiming to provide a comprehensive analysis of the privacy leakage risks associated with FU. In FUIA, the server acts as an honest-but-curious attacker, recording and exploiting the model differences before and after unlearning to expose the features and labels of forgotten data. FUIA significantly leaks the privacy of forgotten data and can target all types of FU. The attack thus runs counter to FU's goal of eliminating the influence of specific data, instead exploiting the vulnerabilities of unlearning to recover forgotten data and expose its privacy flaws. Extensive experimental results show that FUIA can effectively reveal the private information of forgotten data. To mitigate this privacy leakage, we also explore two potential defense methods, although these come at the cost of reduced unlearning effectiveness and the usability of the unlearned model.
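To make the attack mechanics concrete, here is a minimal gradient-inversion-style sketch of the idea in the abstract: the honest-but-curious server keeps the global model before and after unlearning, treats the parameter difference as a proxy for the gradient contributed by the forgotten data, and optimizes dummy inputs and labels until they induce a matching gradient. Everything here (the toy single-sample setting and names such as `invert_forgotten_data`) is our own illustrative assumption, not the authors' implementation.

```python
# Illustrative gradient-matching reconstruction against unlearning.
# Assumption: the parameter difference (before - after) is roughly
# proportional to the gradient of the loss on the forgotten data.
import torch
import torch.nn.functional as F

def invert_forgotten_data(model_before, model_after,
                          in_dim=32, n_classes=10, steps=300, lr=0.1):
    # Matching target recorded by the server.
    target = [pb.detach() - pa.detach()
              for pb, pa in zip(model_before.parameters(),
                                model_after.parameters())]
    # Dummy sample and soft label to be optimized.
    dummy_x = torch.randn(1, in_dim, requires_grad=True)
    dummy_y = torch.randn(1, n_classes, requires_grad=True)
    opt = torch.optim.Adam([dummy_x, dummy_y], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model_before(dummy_x),
                               dummy_y.softmax(dim=-1))
        grads = torch.autograd.grad(loss, model_before.parameters(),
                                    create_graph=True)
        # Push the dummy data's induced gradient toward the recorded
        # before/after parameter difference.
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, target))
        match.backward()
        opt.step()
    return dummy_x.detach(), dummy_y.softmax(dim=-1).detach()
```

The same gradient-matching objective powers Deep Leakage-style attacks in ordinary FL (see the FEDLAD and gradient inversion entries below); what the abstract adds is using the before/after-unlearning model difference as the matching target, which applies across sample, client, and class unlearning.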
Related papers
- SpooFL: Spoofing Federated Learning [54.05993847488204]
We introduce a spoofing-based defense that deceives attackers into believing they have recovered the true training data. Unlike prior synthetic-data defenses that share classes or distributions with the private data, SFL uses a state-of-the-art generative model trained on an external dataset. As a result, attackers are misled into recovering plausible yet completely irrelevant samples, preventing meaningful data leakage.
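As a rough illustration of the decoy idea described above (our own sketch; `decoy_update` and the generator interface are assumptions, not the paper's API), a client could compute its shared update on generator samples rather than on its private data:

```python
# Hypothetical decoy-update sketch: the shared gradient is computed on
# samples from an externally trained generator, so an inversion attack
# reconstructs generator output rather than private data.
import torch
import torch.nn as nn

def decoy_update(model: nn.Module, generator, batch_size: int = 16):
    z = torch.randn(batch_size, generator.latent_dim)  # random latent codes
    fake_x = generator(z)                              # plausible decoy samples
    fake_y = model(fake_x).argmax(dim=-1)              # self-labeled decoys
    loss = nn.functional.cross_entropy(model(fake_x), fake_y)
    # The update shared with the server now describes only the decoys.
    return torch.autograd.grad(loss, model.parameters())
```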
arXiv Detail & Related papers (2026-01-21T14:57:18Z) - EFU: Enforcing Federated Unlearning via Functional Encryption [3.7766323073490216]
Federated unlearning (FU) algorithms allow clients in federated settings to exercise their "right to be forgotten". Existing FU methods maintain data privacy by performing unlearning locally on the client side and sending targeted updates to the server without exposing forgotten data. We propose EFU (Enforced Federated Unlearning), a cryptographically enforced FU framework that enables clients to initiate unlearning while concealing its occurrence from the server.
arXiv Detail & Related papers (2025-08-11T11:44:21Z) - CRFU: Compressive Representation Forgetting Against Privacy Leakage on Machine Unlearning [14.061404670832097]
An effective unlearning method removes the information of the specified data from the trained model, resulting in different outputs for the same input before and after unlearning.
We introduce a Compressive Representation Forgetting Unlearning scheme (CRFU) to safeguard against privacy leakage on unlearning.
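The before/after behavioral difference described above is easy to probe directly; the following is a minimal sketch (our illustration, not CRFU itself) that measures how much a model's prediction on a sample shifts after unlearning:

```python
# Measure how much the prediction on x changes after unlearning.
# A large divergence on a forgotten sample is exactly the signal that
# inversion-style attacks on unlearning exploit.
import torch

def output_divergence(model_before, model_after, x):
    """KL divergence between pre- and post-unlearning predictions on x."""
    with torch.no_grad():
        p = model_before(x).softmax(dim=-1)
        q = model_after(x).softmax(dim=-1)
    return (p * (p / q).log()).sum(dim=-1)
```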
arXiv Detail & Related papers (2025-02-27T05:59:02Z) - Efficient Federated Unlearning with Adaptive Differential Privacy Preservation [15.8083997286637]
Federated unlearning (FU) offers a promising solution to erase the impact of specific clients' data on the global model in federated learning (FL).
Current state-of-the-art FU methods extend traditional FL frameworks by leveraging stored historical updates.
We propose FedADP, a method designed to achieve both efficiency and privacy preservation in FU.
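To illustrate the history-based mechanism mentioned above (a naive sketch under our own assumptions, not FedADP itself; practical methods must also repair the accuracy damage this rollback causes), client unlearning can be approximated by subtracting the target client's stored updates from the global model:

```python
# Naive history-based client unlearning: roll back the target client's
# recorded contributions. stored_updates is one {client_id: deltas} dict
# per round; each deltas entry is a list of per-parameter tensors.
import torch

def unlearn_client(global_params, stored_updates, target_id):
    params = [p.clone() for p in global_params]
    for round_updates in stored_updates:
        delta = round_updates.get(target_id)
        if delta is not None:
            params = [p - d for p, d in zip(params, delta)]
    return params
```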
arXiv Detail & Related papers (2024-11-17T11:45:15Z) - Model Inversion Attacks: A Survey of Approaches and Countermeasures [59.986922963781]
Recently, a new class of privacy attacks, model inversion attacks (MIAs), has emerged; these attacks aim to extract sensitive features of the private training data. Despite their significance, there is a lack of systematic studies that provide a comprehensive overview of and deeper insight into MIAs.
This survey aims to summarize up-to-date MIA methods in both attacks and defenses.
arXiv Detail & Related papers (2024-11-15T08:09:28Z) - Game-Theoretic Machine Unlearning: Mitigating Extra Privacy Leakage [12.737028324709609]
Recent legislation obligates organizations to remove requested data and its influence from a trained model.
We propose a game-theoretic machine unlearning algorithm that simulates the competitive relationship between unlearning performance and privacy protection.
arXiv Detail & Related papers (2024-11-06T13:47:04Z) - FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving decentralized machine learning paradigm. Recent research has revealed that private ground-truth data can be recovered through a gradient technique known as Deep Leakage. This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
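A benchmark of this kind needs a way to score how faithful a reconstruction is; a common choice is the peak signal-to-noise ratio between recovered and true images. The helper below is our own illustrative sketch, not FEDLAD's actual metric code:

```python
# Reconstruction-quality score: higher PSNR means the recovered image is
# closer to the private ground truth, i.e. the leakage attack worked better.
import torch

def psnr(x_true: torch.Tensor, x_rec: torch.Tensor, max_val: float = 1.0):
    mse = torch.mean((x_true - x_rec) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)
```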
arXiv Detail & Related papers (2024-11-05T11:42:26Z) - Exploring Federated Learning Dynamics for Black-and-White-Box DNN Traitor Tracing [49.1574468325115]
This paper explores the adaptation of black-and-white traitor tracing watermarking to Federated Learning.
Results show that collusion-resistant traitor tracing, identifying all data-owners involved in a suspected leak, is feasible in an FL framework, even in early stages of training.
arXiv Detail & Related papers (2024-07-02T09:54:35Z) - Federated Learning Privacy: Attacks, Defenses, Applications, and Policy Landscape - A Survey [27.859861825159342]
Deep learning has shown incredible potential across a vast array of tasks.
Recent concerns over privacy have further highlighted the challenges of accessing such data.
Federated learning has emerged as an important privacy-preserving technology.
arXiv Detail & Related papers (2024-05-06T16:55:20Z) - Defending against Data Poisoning Attacks in Federated Learning via User Elimination [0.0]
This paper introduces a novel framework focused on the strategic elimination of adversarial users within a federated model.
We detect anomalies in the aggregation phase of the federated algorithm by combining metadata gathered from the local training instances with differential privacy techniques.
Our experiments demonstrate the efficacy of our methods, significantly mitigating the risk of data poisoning while maintaining user privacy and model performance.
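As a rough sketch of anomaly detection at aggregation time (our own illustration using a simple robust-statistics rule; the paper's method relies on metadata and differential privacy rather than update norms), the server might drop clients whose update magnitudes are extreme outliers before averaging:

```python
# Drop clients whose flattened update norm is a strong outlier (median
# absolute deviation rule), then average the surviving updates.
import torch

def aggregate_with_elimination(updates, z_thresh: float = 2.5):
    norms = torch.stack([u.norm() for u in updates])
    med = norms.median()
    mad = (norms - med).abs().median() + 1e-8   # robust spread estimate
    keep = (norms - med).abs() / mad < z_thresh
    kept = [u for u, k in zip(updates, keep.tolist()) if k]
    return torch.stack(kept).mean(dim=0)
```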
arXiv Detail & Related papers (2024-04-19T10:36:00Z) - TOFU: A Task of Fictitious Unlearning for LLMs [99.92305790945507]
Large language models trained on massive corpora of data from the web can reproduce sensitive or private data, raising both legal and ethical concerns.
Unlearning, or tuning models to forget information present in their training data, provides us with a way to protect private data after training.
We present TOFU, a benchmark aimed at helping deepen our understanding of unlearning.
arXiv Detail & Related papers (2024-01-11T18:57:12Z) - Exploring Federated Unlearning: Analysis, Comparison, and Insights [101.64910079905566]
Federated unlearning enables the selective removal of data from models trained in federated systems. This paper examines existing federated unlearning approaches, analyzing their algorithmic efficiency, impact on model accuracy, and effectiveness in preserving privacy. We propose the OpenFederatedUnlearning framework, a unified benchmark for evaluating federated unlearning methods.
arXiv Detail & Related papers (2023-10-30T01:34:33Z) - FLTrojan: Privacy Leakage Attacks against Federated Language Models Through Selective Weight Tampering [2.2194815687410627]
We show how a malicious client can leak the privacy-sensitive data of some other users in FL even without any cooperation from the server.
Our best-performing method improves the membership inference recall by 29% and achieves up to 71% private data reconstruction.
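For context on the membership inference metric reported above, the classic baseline is a loss-threshold test: training members tend to have lower loss than non-members. The sketch below shows that baseline (our illustration; FLTrojan's actual attack works through selective weight tampering, not this simple test):

```python
# Loss-threshold membership inference baseline: samples the model fits
# unusually well are guessed to have been part of its training data.
import torch
import torch.nn.functional as F

def membership_scores(model, samples, labels):
    with torch.no_grad():
        losses = F.cross_entropy(model(samples), labels, reduction="none")
    return -losses  # higher score = more likely a training member

def infer_members(model, samples, labels, threshold: float):
    return membership_scores(model, samples, labels) > threshold
```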
arXiv Detail & Related papers (2023-10-24T19:50:01Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z) - Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL and a unique taxonomy covering 1) threat models, 2) poisoning attacks and robustness defenses, and 3) inference attacks and privacy defenses, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.