VeriFi: Towards Verifiable Federated Unlearning
- URL: http://arxiv.org/abs/2205.12709v1
- Date: Wed, 25 May 2022 12:20:02 GMT
- Title: VeriFi: Towards Verifiable Federated Unlearning
- Authors: Xiangshan Gao, Xingjun Ma, Jingyi Wang, Youcheng Sun, Bo Li, Shouling
Ji, Peng Cheng, Jiming Chen
- Abstract summary: Federated learning (FL) is a collaborative learning paradigm where participants jointly train a powerful model without sharing their private data.
A leaving participant has the right to request deletion of its private data from the global model.
We propose VeriFi, a unified framework integrating federated unlearning and verification.
- Score: 59.169431326438676
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a collaborative learning paradigm where
participants jointly train a powerful model without sharing their private data.
One desirable property for FL is the implementation of the right to be
forgotten (RTBF), i.e., a leaving participant has the right to request to
delete its private data from the global model. However, unlearning itself may
not be enough to implement RTBF unless the unlearning effect can be
independently verified, an important aspect that has been overlooked in the
current literature. In this paper, we put forward the concept of verifiable
federated unlearning and propose VeriFi, a unified framework integrating
federated unlearning and verification that allows systematic analysis of the
unlearning and quantification of its effect, with different combinations of
multiple unlearning and verification methods. In VeriFi, the leaving
participant is granted the right to verify (RTV), that is, the participant
notifies the server before leaving, then actively verifies the unlearning
effect in the next few communication rounds. The unlearning is done at the
server side immediately after receiving the leaving notification, while the
verification is done locally by the leaving participant via two steps: marking
(injecting carefully-designed markers to fingerprint the leaver) and checking
(examining the change of the global model's performance on the markers). Based
on VeriFi, we conduct the first systematic and large-scale study for verifiable
federated unlearning, considering 7 unlearning methods and 5 verification
methods. In particular, we propose a more efficient and FL-friendly unlearning
method, and two more effective and robust non-invasive verification methods. We
extensively evaluate VeriFi on 7 datasets and 4 types of deep learning models.
Our analysis establishes important empirical understandings for more
trustworthy federated unlearning.
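The two-step verification the abstract describes (marking, then checking) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's actual marker design: the function names, random marker selection, and the accuracy-drop threshold are all placeholders.

```python
import random

def mark(local_data, n_markers, seed=0):
    # Pick a subset of the leaver's samples to act as markers that
    # fingerprint its contribution. (VeriFi designs its markers
    # carefully; random sampling here is only a stand-in.)
    rng = random.Random(seed)
    return rng.sample(local_data, n_markers)

def check(acc_before, acc_after, min_drop=0.2):
    # Verification passes if the global model's accuracy on the markers
    # drops noticeably after unlearning, suggesting the leaver's
    # influence was removed. The threshold is an illustrative choice.
    return (acc_before - acc_after) >= min_drop
```

A leaver would call `mark()` before notifying the server, then over the next few communication rounds evaluate the global model on the markers and pass the measured accuracies to `check()`.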
Related papers
- SoK: Challenges and Opportunities in Federated Unlearning [32.0365189539138]
This SoK paper aims to take a deep look at the federated unlearning literature, with the goal of identifying research trends and challenges in this emerging field.
arXiv Detail & Related papers (2024-03-04T19:35:08Z)
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
- Federated Unlearning via Active Forgetting [24.060724751342047]
We propose a novel federated unlearning framework based on incremental learning.
Our framework differs from existing federated unlearning methods that rely on approximate retraining or data influence estimation.
arXiv Detail & Related papers (2023-07-07T03:07:26Z)
- When Do Curricula Work in Federated Learning? [56.88941905240137]
We find that curriculum learning largely alleviates non-IIDness.
The more disparate the data distributions across clients, the more they benefit from curriculum learning.
We propose a novel client selection technique that benefits from the real-world disparity in the clients.
arXiv Detail & Related papers (2022-12-24T11:02:35Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta)
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- Federated Unlearning: How to Efficiently Erase a Client in FL? [9.346673106489742]
We propose a method to erase a client by removing the influence of their entire local data from the trained global model.
Our unlearning method achieves comparable performance as the gold standard unlearning method of federated retraining from scratch.
Unlike prior works, our unlearning method requires neither global access to the training data nor storage of the parameter-update history by the server or any of the clients.
arXiv Detail & Related papers (2022-07-12T13:24:23Z)
- Towards Verifiable Federated Learning [15.758657927386263]
Federated learning (FL) is an emerging paradigm of collaborative machine learning that preserves user privacy while building powerful models.
Due to the nature of open participation by self-interested entities, FL needs to guard against potential misbehaviours by legitimate FL participants.
Verifiable federated learning has become an emerging research topic that has attracted significant interest from academia and industry alike.
arXiv Detail & Related papers (2022-02-15T09:52:25Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Constrained Differentially Private Federated Learning for Low-bandwidth Devices [1.1470070927586016]
This paper presents a novel privacy-preserving federated learning scheme.
It provides theoretical privacy guarantees, as it is based on Differential Privacy.
It reduces the upstream and downstream bandwidth by up to 99.9% compared to standard federated learning.
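The summary does not say how the bandwidth savings are obtained; one common way to achieve reductions of this magnitude is to sparsify model updates before transmission. The sketch below is a hedged illustration of top-k sparsification, not necessarily this paper's actual mechanism.

```python
def sparsify_topk(update, k):
    # Keep only the k largest-magnitude coordinates of a model update;
    # transmitting (index, value) pairs instead of the dense vector
    # shrinks the payload when k is much smaller than len(update).
    top = sorted(range(len(update)), key=lambda i: abs(update[i]),
                 reverse=True)[:k]
    return [(i, update[i]) for i in sorted(top)]
```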
arXiv Detail & Related papers (2021-02-27T22:25:06Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)