Towards Verifiable Federated Unlearning: Framework, Challenges, and The Road Ahead
- URL: http://arxiv.org/abs/2510.00833v1
- Date: Wed, 01 Oct 2025 12:45:46 GMT
- Title: Towards Verifiable Federated Unlearning: Framework, Challenges, and The Road Ahead
- Authors: Thanh Linh Nguyen, Marcela Tuler de Oliveira, An Braeken, Aaron Yi Ding, Quoc-Viet Pham
- Abstract summary: Federated unlearning (FUL) enables removing the data influence from the model trained across distributed clients. This article introduces veriFUL, a reference framework for verifiable FUL that formalizes verification entities, goals, approaches, and metrics.
- Score: 6.530323505784683
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated unlearning (FUL) enables removing the data influence from the model trained across distributed clients, upholding the right to be forgotten as mandated by privacy regulations. FUL facilitates a value exchange where clients gain privacy-preserving control over their data contributions, while service providers leverage decentralized computing and data freshness. However, this entire proposition is undermined because clients have no reliable way to verify that their data influence has been provably removed, as current metrics and simple notifications offer insufficient assurance. We envision unlearning verification becoming a pivotal and trust-by-design part of the FUL life-cycle development, essential for highly regulated and data-sensitive services and applications like healthcare. This article introduces veriFUL, a reference framework for verifiable FUL that formalizes verification entities, goals, approaches, and metrics. Specifically, we consolidate existing efforts and contribute new insights, concepts, and metrics to this domain. Finally, we highlight research challenges and identify potential applications and developments for verifiable FUL and veriFUL.
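The abstract argues that simple notifications give clients no real assurance that their data influence was removed. One family of verification approaches compares model behavior on the forgotten data against unseen data. The sketch below is a toy illustration of that idea; all names and the tolerance threshold are hypothetical, not from the veriFUL framework itself.

```python
# Toy sketch of one possible unlearning-verification metric (names and
# threshold are hypothetical): compare a model's average loss on the
# "forgotten" set against its loss on held-out data. If the forgotten
# samples are no longer memorized, the two losses should be close.

def avg_loss(model, samples):
    """model: callable sample -> loss (lower = better memorized)."""
    return sum(model(s) for s in samples) / len(samples)

def verify_unlearning(model, forgotten, held_out, tolerance=0.1):
    """Pass if forgotten data looks no 'easier' than held-out data."""
    gap = avg_loss(model, held_out) - avg_loss(model, forgotten)
    return gap <= tolerance

# A model that has truly unlearned assigns similar losses to both sets.
unlearned = lambda s: 1.0
# A model that still memorizes the forgotten set scores it far lower.
memorizing = lambda s: 0.2 if s in {1, 2, 3} else 1.0

forgotten, held_out = [1, 2, 3], [4, 5, 6]
print(verify_unlearning(unlearned, forgotten, held_out))   # True
print(verify_unlearning(memorizing, forgotten, held_out))  # False
```

A real verifier would use a statistical test (e.g., a membership-inference attack) rather than a fixed tolerance, but the pass/fail structure is the same.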
Related papers
- zkFL-Health: Blockchain-Enabled Zero-Knowledge Federated Learning for Medical AI Privacy [0.0]
zkFL-Health is an architecture that combines Federated Learning (FL) with zero-knowledge proofs (ZKPs) and Trusted Execution Environments (TEEs). Clients locally train and commit their updates; the aggregator operates within a TEE to compute the global update and produces a succinct ZK proof that it used exactly the committed inputs and the correct aggregation rule, without revealing any client update to the host. We outline system and threat models tailored to healthcare, the zkFL-Health protocol, security/privacy guarantees, and a performance evaluation plan spanning accuracy, privacy risk, latency, and cost.
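The commit-then-prove flow described above can be illustrated with a transparent stand-in: the real design hides updates behind a ZK proof inside a TEE, whereas this toy reveals them to the verifier, purely to show what the commitment check enforces. All function names are illustrative.

```python
import hashlib

# Transparent stand-in for a commit-then-prove aggregation check (real
# zkFL-Health uses ZK proofs inside a TEE; here updates are revealed).

def commit(update):
    """Client-side: hash-commit to a vector of model-update values."""
    return hashlib.sha256(repr(update).encode()).hexdigest()

def aggregate(updates):
    """Aggregation rule: plain coordinate-wise averaging (FedAvg-style)."""
    n = len(updates)
    return [sum(col) / n for col in zip(*updates)]

def verify(commitments, revealed_updates, claimed_global):
    """Check the aggregator used exactly the committed inputs and rule."""
    if [commit(u) for u in revealed_updates] != commitments:
        return False  # inputs differ from what clients committed to
    return aggregate(revealed_updates) == claimed_global

updates = [[1.0, 2.0], [3.0, 4.0]]
cs = [commit(u) for u in updates]
print(verify(cs, updates, [2.0, 3.0]))   # True: honest aggregation
print(verify(cs, updates, [9.0, 9.0]))   # False: wrong aggregate
```

A ZK proof replaces the `revealed_updates` argument: the verifier learns only that some inputs matching the commitments produced the claimed aggregate.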
arXiv Detail & Related papers (2025-12-24T08:29:28Z) - Stragglers Can Contribute More: Uncertainty-Aware Distillation for Asynchronous Federated Learning [61.249748418757946]
Asynchronous federated learning (FL) has recently gained attention for its enhanced efficiency and scalability. We propose FedEcho, a novel framework that incorporates uncertainty-aware distillation to enhance asynchronous FL performance. We demonstrate that FedEcho consistently outperforms existing asynchronous federated learning baselines.
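One common way to make distillation uncertainty-aware is to down-weight a client's soft predictions by the entropy of its output distribution, so that uncertain (possibly stale) predictions teach less. The sketch below illustrates that general idea; it is inspired by, not taken from, FedEcho, and all names are hypothetical.

```python
import math

# Hypothetical uncertainty-aware distillation sketch: weight each
# client's soft prediction by its confidence (1 - normalized entropy).

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def distill_weight(probs, num_classes):
    """Map entropy to [0, 1]: confident (low-entropy) predictions ~1."""
    return 1.0 - entropy(probs) / math.log(num_classes)

def weighted_soft_target(client_preds, num_classes):
    """Blend client predictions, weighting each by its confidence."""
    weights = [distill_weight(p, num_classes) for p in client_preds]
    total = sum(weights) or 1.0
    return [
        sum(w * p[k] for w, p in zip(weights, client_preds)) / total
        for k in range(num_classes)
    ]

confident = [0.9, 0.05, 0.05]   # low entropy -> weight near 1
uncertain = [1/3, 1/3, 1/3]     # maximum entropy -> weight near 0
target = weighted_soft_target([confident, uncertain], 3)
print([round(t, 3) for t in target])
```

The uniform (maximally uncertain) prediction contributes essentially nothing, so the blended target tracks the confident client.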
arXiv Detail & Related papers (2025-11-25T06:25:25Z) - Improving Regulatory Oversight in Online Content Moderation [2.1082552608122542]
The European Union introduced the Digital Services Act (DSA) to address the risks associated with digital platforms and promote a safer online environment. Despite the potential of components such as the Transparency Database, Transparency Reports, and Article 40 of the DSA to improve platform transparency, significant challenges remain. These include data inconsistencies and a lack of detailed information, which hinder transparency in content moderation practices.
arXiv Detail & Related papers (2025-06-04T16:38:25Z) - DP-RTFL: Differentially Private Resilient Temporal Federated Learning for Trustworthy AI in Regulated Industries [0.0]
This paper introduces Differentially Private Resilient Temporal Federated Learning (DP-RTFL). It is designed to ensure training continuity, precise state recovery, and strong data privacy. The framework is particularly suited for critical applications like credit risk assessment using sensitive financial data.
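The differential-privacy ingredient such frameworks rely on is typically gradient clipping followed by calibrated Gaussian noise. The sketch below shows that generic mechanism; the specific bounds and noise multiplier are illustrative, not DP-RTFL's actual parameters.

```python
import random

# Generic DP sketch (not DP-RTFL's exact mechanism): clip each client
# gradient to a fixed L2 norm, then add Gaussian noise scaled to that
# clipping bound, as in the standard Gaussian mechanism for DP-SGD.

def l2_norm(v):
    return sum(x * x for x in v) ** 0.5

def clip(grad, bound):
    """Scale the gradient down so its L2 norm is at most `bound`."""
    scale = min(1.0, bound / (l2_norm(grad) or 1.0))
    return [x * scale for x in grad]

def privatize(grad, bound=1.0, noise_mult=1.1, rng=random.Random(0)):
    """Clip, then add N(0, (noise_mult * bound)^2) noise per coordinate."""
    sigma = noise_mult * bound
    return [x + rng.gauss(0.0, sigma) for x in clip(grad, bound)]

g = [3.0, 4.0]                 # norm 5 -> clipped down to norm 1
print(l2_norm(clip(g, 1.0)))   # 1.0 (up to float error)
print(privatize(g))            # noisy, privacy-protected gradient
```

The clipping bound caps any single record's influence, which is what lets the noise scale translate into a formal privacy guarantee.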
arXiv Detail & Related papers (2025-05-27T16:30:25Z) - Privacy-Preserving Federated Embedding Learning for Localized Retrieval-Augmented Generation [60.81109086640437]
We propose a novel framework called Federated Retrieval-Augmented Generation (FedE4RAG). FedE4RAG facilitates collaborative training of client-side RAG retrieval models. We apply homomorphic encryption within federated learning to safeguard model parameters.
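The property homomorphic encryption provides here is that the server can sum client parameter vectors without seeing any individual one. Actual HE (e.g., Paillier) is not reproduced below; pairwise additive masks that cancel in the sum play the same role in this toy illustration, and all names are hypothetical.

```python
import random

# Toy stand-in for HE-protected aggregation: pairwise masks hide each
# client's vector from the server, yet cancel exactly in the sum.

def mask_updates(updates, rng=random.Random(42)):
    """Each client pair (i, j) shares a mask: +m for i, -m for j."""
    n, dim = len(updates), len(updates[0])
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(dim):
                m = rng.uniform(-10, 10)
                masked[i][k] += m   # client i adds the shared mask
                masked[j][k] -= m   # client j subtracts it
    return masked

def server_sum(masked):
    """Server only ever sees masked vectors, yet their sum is exact."""
    return [sum(col) for col in zip(*masked)]

updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
total = server_sum(mask_updates(updates))
print([round(t, 6) for t in total])  # [9.0, 12.0]: true sum recovered
```

With real HE the server instead sums ciphertexts and only the decryption key holder recovers the aggregate, but the information the server learns is the same: the sum and nothing else.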
arXiv Detail & Related papers (2025-04-27T04:26:02Z) - Empirical Analysis of Privacy-Fairness-Accuracy Trade-offs in Federated Learning: A Step Towards Responsible AI [6.671649946926508]
We present the first unified large-scale empirical study of privacy-fairness-utility trade-offs in Federated Learning (FL). We compare fairness-aware methods with Differential Privacy (DP), Homomorphic Encryption (HE), and Secure Multi-Party Computation (SMC). We uncover unexpected interactions: DP mechanisms can negatively impact fairness and introduce skew, and fairness-aware methods can inadvertently reduce privacy effectiveness.
arXiv Detail & Related papers (2025-03-20T15:31:01Z) - Setting the Course, but Forgetting to Steer: Analyzing Compliance with GDPR's Right of Access to Data by Instagram, TikTok, and YouTube [9.304421724270828]
The Right of Access aims to empower users with control over their personal data via Data Download Packages (DDPs). This paper conducts a comprehensive audit of DDPs from three social media platforms (TikTok, Instagram, and YouTube) to systematically assess critical compliance drawbacks.
arXiv Detail & Related papers (2025-02-16T17:15:11Z) - Privacy-Preserving Verifiable Neural Network Inference Service [4.131956503199438]
We develop vPIN, a verifiable CNN inference scheme that preserves the privacy of client data samples.
vPIN achieves high efficiency in terms of proof size, while providing client data privacy guarantees and provable verifiability.
arXiv Detail & Related papers (2024-11-12T01:09:52Z) - Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z) - Personalized Federated Learning with Attention-based Client Selection [57.71009302168411]
We propose FedACS, a new PFL algorithm with an Attention-based Client Selection mechanism.
FedACS integrates an attention mechanism to enhance collaboration among clients with similar data distributions.
Experiments on CIFAR10 and FMNIST validate FedACS's superiority.
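Attention-based client selection can be sketched as scoring clients by the similarity of their updates to a reference direction, converting scores to attention weights with a softmax, and keeping the top-k. The code below is a hypothetical illustration in the spirit of FedACS, not its actual algorithm; the cosine scoring and the server direction are assumptions.

```python
import math

# Hypothetical attention-style client selection: softmax over cosine
# similarity to the server's current update direction, then top-k.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def select_clients(client_updates, server_dir, k):
    """Return indices of the k clients with the highest attention weight."""
    weights = softmax([cosine(u, server_dir) for u in client_updates])
    ranked = sorted(range(len(weights)), key=lambda i: -weights[i])
    return ranked[:k]

updates = [[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0]]
print(select_clients(updates, [1.0, 0.0], k=2))  # [0, 1]: most aligned
```

In a PFL setting the reference vector would instead be each client's own update, so that clients with similar data distributions attend to each other.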
arXiv Detail & Related papers (2023-12-23T03:31:46Z) - PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named PS-FedGAN, this new framework enhances the GAN release and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z) - Auditing and Generating Synthetic Data with Controllable Trust Trade-offs [54.262044436203965]
We introduce a holistic auditing framework that comprehensively evaluates synthetic datasets and AI models.
It focuses on preventing bias and discrimination, ensuring fidelity to the source data, and assessing utility, robustness, and privacy preservation.
We demonstrate the framework's effectiveness by auditing various generative models across diverse use cases.
arXiv Detail & Related papers (2023-04-21T09:03:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.