EFU: Enforcing Federated Unlearning via Functional Encryption
- URL: http://arxiv.org/abs/2508.07873v1
- Date: Mon, 11 Aug 2025 11:44:21 GMT
- Title: EFU: Enforcing Federated Unlearning via Functional Encryption
- Authors: Samaneh Mohammadi, Vasileios Tsouvalas, Iraklis Symeonidis, Ali Balador, Tanir Ozcelebi, Francesco Flammini, Nirvana Meratnia
- Abstract summary: Federated unlearning (FU) algorithms allow clients in federated settings to exercise their "right to be forgotten". Existing FU methods maintain data privacy by performing unlearning locally on the client-side and sending targeted updates to the server without exposing forgotten data. We propose EFU (Enforced Federated Unlearning), a cryptographically enforced FU framework that enables clients to initiate unlearning while concealing its occurrence from the server.
- Score: 3.7766323073490216
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated unlearning (FU) algorithms allow clients in federated settings to exercise their "right to be forgotten" by removing the influence of their data from a collaboratively trained model. Existing FU methods maintain data privacy by performing unlearning locally on the client-side and sending targeted updates to the server without exposing forgotten data; yet they often rely on server-side cooperation, revealing the client's intent and identity without enforcement guarantees - compromising autonomy and unlearning privacy. In this work, we propose EFU (Enforced Federated Unlearning), a cryptographically enforced FU framework that enables clients to initiate unlearning while concealing its occurrence from the server. Specifically, EFU leverages functional encryption to bind encrypted updates to specific aggregation functions, ensuring the server can neither perform unauthorized computations nor detect or skip unlearning requests. To further mask behavioral and parameter shifts in the aggregated model, we incorporate auxiliary unlearning losses based on adversarial examples and parameter importance regularization. Extensive experiments show that EFU achieves near-random accuracy on forgotten data while maintaining performance comparable to full retraining across datasets and neural architectures - all while concealing unlearning intent from the server. Furthermore, we demonstrate that EFU is agnostic to the underlying unlearning algorithm, enabling secure, function-hiding, and verifiable unlearning for any client-side FU mechanism that issues targeted updates.
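The abstract's core mechanism is that functional encryption binds encrypted client updates to one authorized aggregation function, so the server learns only that aggregate and cannot inspect or skip individual (possibly unlearning) updates. The toy sketch below illustrates that idea only; it is not EFU's actual scheme, and `ToyFE`, its padding construction, and all names are hypothetical simplifications (a real inner-product FE scheme works over a cryptographic group, not integer pads).

```python
import secrets

PRIME = 2**61 - 1  # toy modulus; a real FE scheme uses a cryptographic group


class ToyFE:
    """Minimal stand-in for inner-product functional encryption.

    A key authority holds per-client master secrets. The server receives a
    functional key bound to ONE weight vector y, so it can recover only the
    y-weighted sum of client updates -- never an individual update, and it
    cannot evaluate any aggregation other than the one the key authorizes.
    """

    def __init__(self, n_clients: int):
        self.master = [secrets.randbelow(PRIME) for _ in range(n_clients)]

    def encrypt(self, client_id: int, update: list[int]) -> list[int]:
        # each coordinate is masked with the client's secret pad
        pad = self.master[client_id]
        return [(u + pad) % PRIME for u in update]

    def functional_key(self, y: list[int]) -> int:
        # bound to y: removes exactly sum_i y_i * pad_i, nothing else
        return sum(w * s for w, s in zip(y, self.master)) % PRIME


def server_aggregate(cts: list[list[int]], y: list[int], fkey: int) -> list[int]:
    # weighted sum of ciphertexts, then strip the aggregate mask via the key
    dim = len(cts[0])
    out = []
    for d in range(dim):
        acc = sum(y[i] * cts[i][d] for i in range(len(cts))) % PRIME
        out.append((acc - fkey) % PRIME)
    return out


# Three clients; one may have folded an unlearning step into its update,
# but the server sees only ciphertexts and the authorized aggregate.
fe = ToyFE(n_clients=3)
updates = [[1, 2], [3, 4], [5, 6]]
cts = [fe.encrypt(i, u) for i, u in enumerate(updates)]
fkey = fe.functional_key([1, 1, 1])            # key for plain-sum aggregation
print(server_aggregate(cts, [1, 1, 1], fkey))  # [9, 12]
```

Because the functional key is bound to a fixed weight vector, the server cannot single out one client's ciphertext or substitute a different aggregation, which is the enforcement property the abstract attributes to functional encryption.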
Related papers
- ToFU: Transforming How Federated Learning Systems Forget User Data [3.143298944776905]
Neural networks unintentionally memorize training data, creating privacy risks in federated learning (FL) systems. We propose a learning-to-unlearn Transformation-guided Federated Unlearning (ToFU) framework that incorporates transformations during the learning process to reduce memorization of specific instances. ToFU can work as a plug-and-play framework that improves the performance of existing Federated Unlearning methods.
arXiv Detail & Related papers (2025-09-19T10:54:25Z) - Model Inversion Attack against Federated Unlearning [7.208310705506839]
In this paper, we propose the federated unlearning inversion attack (FUIA). FUIA is specifically designed for the three types of FU (sample unlearning, client unlearning, and class unlearning). It significantly leaks the privacy of forgotten data and can target all types of FU.
arXiv Detail & Related papers (2025-02-20T13:38:36Z) - Efficient Federated Unlearning with Adaptive Differential Privacy Preservation [15.8083997286637]
Federated unlearning (FU) offers a promising solution to erase the impact of specific clients' data on the global model in federated learning (FL).
Current state-of-the-art FU methods extend traditional FL frameworks by leveraging stored historical updates.
We propose FedADP, a method designed to achieve both efficiency and privacy preservation in FU.
arXiv Detail & Related papers (2024-11-17T11:45:15Z) - Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing training data.
The paper proposes a novel federated face forgery detection learning with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z) - Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z) - FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z) - Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - Warmup and Transfer Knowledge-Based Federated Learning Approach for IoT Continuous Authentication [34.6454670154373]
We propose a novel Federated Learning (FL) approach that protects user anonymity and maintains the security of their data.
Our experiments show a significant increase in user authentication accuracy while maintaining user privacy and data security.
arXiv Detail & Related papers (2022-11-10T15:51:04Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.