VeriFi: Towards Verifiable Federated Unlearning
- URL: http://arxiv.org/abs/2205.12709v1
- Date: Wed, 25 May 2022 12:20:02 GMT
- Title: VeriFi: Towards Verifiable Federated Unlearning
- Authors: Xiangshan Gao, Xingjun Ma, Jingyi Wang, Youcheng Sun, Bo Li, Shouling
Ji, Peng Cheng, Jiming Chen
- Abstract summary: Federated learning (FL) is a collaborative learning paradigm where participants jointly train a powerful model without sharing their private data.
A leaving participant has the right to request that its private data be deleted from the global model.
We propose VeriFi, a unified framework integrating federated unlearning and verification.
- Score: 59.169431326438676
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a collaborative learning paradigm where
participants jointly train a powerful model without sharing their private data.
One desirable property for FL is the implementation of the right to be
forgotten (RTBF), i.e., a leaving participant has the right to request that
its private data be deleted from the global model. However, unlearning itself may
not be enough to implement RTBF unless the unlearning effect can be
independently verified, an important aspect that has been overlooked in the
current literature. In this paper, we propose the concept of verifiable
federated unlearning, and present VeriFi, a unified framework integrating
federated unlearning and verification that allows systematic analysis of the
unlearning and quantification of its effect, with different combinations of
multiple unlearning and verification methods. In VeriFi, the leaving
participant is granted the right to verify (RTV), that is, the participant
notifies the server before leaving, then actively verifies the unlearning
effect in the next few communication rounds. The unlearning is done at the
server side immediately after receiving the leaving notification, while the
verification is done locally by the leaving participant via two steps: marking
(injecting carefully-designed markers to fingerprint the leaver) and checking
(examining the change of the global model's performance on the markers). Based
on VeriFi, we conduct the first systematic and large-scale study for verifiable
federated unlearning, considering 7 unlearning methods and 5 verification
methods. In particular, we propose a more efficient and FL-friendly unlearning
method, and two more effective and robust non-invasive verification methods. We
extensively evaluate VeriFi on 7 datasets and 4 types of deep learning models.
Our analysis establishes important empirical understandings for more
trustworthy federated unlearning.
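The two-step verification described above (marking, then checking) can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation; all function names, marker designs, and the accuracy-drop threshold are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_markers(n=32, dim=16):
    """Marking: the leaver injects carefully-designed marker inputs with a
    fixed label, so the global model 'fingerprints' the leaver's data.
    (Here the markers are just random vectors; real markers are crafted.)"""
    x = rng.normal(size=(n, dim))
    y = np.zeros(n, dtype=int)  # all markers carry the leaver's chosen label
    return x, y

def marker_accuracy(model, x, y):
    """Checking: measure the global model's accuracy on the markers.
    `model` is any callable returning per-class scores of shape (n, classes)."""
    pred = model(x).argmax(axis=1)
    return float((pred == y).mean())

def verify_unlearning(model_before, model_after, x, y, drop_threshold=0.3):
    """If unlearning succeeded, the global model should perform noticeably
    worse on the leaver's markers after the leaver's data is removed."""
    acc_before = marker_accuracy(model_before, x, y)
    acc_after = marker_accuracy(model_after, x, y)
    return (acc_before - acc_after) >= drop_threshold
```

The leaver would run `verify_unlearning` locally over the next few communication rounds, comparing the global model received before and after its leaving notification.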
Related papers
- Unlearning Clients, Features and Samples in Vertical Federated Learning [1.6124402884077915]
Vertical Federated Learning (VFL) has received less attention from the research community.
In this paper, we explore unlearning in VFL from three perspectives: unlearning clients, unlearning features, and unlearning samples.
To unlearn clients and features, we introduce VFU-KD, which is based on knowledge distillation (KD); to unlearn samples, we introduce VFU-GA, which is based on gradient ascent.
arXiv Detail & Related papers (2025-01-23T14:10:02Z) - Vertical Federated Unlearning via Backdoor Certification [15.042986414487922]
VFL offers a novel paradigm in machine learning, enabling distinct entities to train models cooperatively while maintaining data privacy.
Recent privacy regulations emphasize an individual's right to be forgotten, which necessitates the ability for models to unlearn specific training data.
We introduce an innovative modification to traditional VFL by employing a mechanism that inverts the typical learning trajectory with the objective of extracting specific data contributions.
arXiv Detail & Related papers (2024-12-16T06:40:25Z) - SoK: Challenges and Opportunities in Federated Unlearning [32.0365189539138]
This SoK paper aims to take a deep look at the federated unlearning literature, with the goal of identifying research trends and challenges in this emerging field.
arXiv Detail & Related papers (2024-03-04T19:35:08Z) - Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z) - Exploring Federated Unlearning: Analysis, Comparison, and Insights [101.64910079905566]
Federated unlearning enables the selective removal of data from models trained in federated systems.
This paper surveys existing federated unlearning approaches, examining their algorithmic efficiency, impact on model accuracy, and effectiveness in preserving privacy.
We propose the OpenFederatedUnlearning framework, a unified benchmark for evaluating federated unlearning methods.
arXiv Detail & Related papers (2023-10-30T01:34:33Z) - Federated Unlearning via Active Forgetting [24.060724751342047]
We propose a novel federated unlearning framework based on incremental learning.
Our framework differs from existing federated unlearning methods that rely on approximate retraining or data influence estimation.
arXiv Detail & Related papers (2023-07-07T03:07:26Z) - When Do Curricula Work in Federated Learning? [56.88941905240137]
We find that curriculum learning largely alleviates non-IIDness.
The more disparate the data distributions across clients, the more they benefit from curriculum learning.
We propose a novel client selection technique that benefits from the real-world disparity in the clients.
arXiv Detail & Related papers (2022-12-24T11:02:35Z) - Federated Learning and Meta Learning: Approaches, Applications, and
Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z) - Constrained Differentially Private Federated Learning for Low-bandwidth
Devices [1.1470070927586016]
This paper presents a novel privacy-preserving federated learning scheme.
It provides theoretical privacy guarantees, as it is based on Differential Privacy.
It reduces the upstream and downstream bandwidth by up to 99.9% compared to standard federated learning.
arXiv Detail & Related papers (2021-02-27T22:25:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.