Blockchain-enabled Trustworthy Federated Unlearning
- URL: http://arxiv.org/abs/2401.15917v1
- Date: Mon, 29 Jan 2024 07:04:48 GMT
- Title: Blockchain-enabled Trustworthy Federated Unlearning
- Authors: Yijing Lin, Zhipeng Gao, Hongyang Du, Jinke Ren, Zhiqiang Xie, Dusit
Niyato
- Abstract summary: Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
- Score: 50.01101423318312
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated unlearning is a promising paradigm for protecting the data
ownership of distributed clients. It allows central servers to remove
historical data effects within the machine learning model as well as address
the "right to be forgotten" issue in federated learning. However, existing
works require central servers to retain the historical model parameters from
distributed clients, which allows the central server to utilize these
parameters for further training even after the clients exit the training
process. To address this issue, this paper proposes a new blockchain-enabled
trustworthy federated unlearning framework. We first design a proof of
federated unlearning protocol, which utilizes the Chameleon hash function to
verify data removal and eliminate the data contributions stored in other
clients' models. Then, an adaptive contribution-based retraining mechanism is
developed to reduce the computational overhead and significantly improve the
training efficiency. Extensive experiments demonstrate that the proposed
framework can achieve a better data removal effect than the state-of-the-art
frameworks, marking a significant stride towards trustworthy federated
unlearning.
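The abstract names the Chameleon hash as the verification primitive but gives no construction. As a rough illustration, the classic discrete-log Chameleon hash lets a trapdoor holder swap the hashed content without changing the digest, which is how an on-chain record can stay valid while the underlying model contribution is redacted. A toy sketch under that assumption (the parameters and function names are illustrative, not from the paper):

```python
# Toy discrete-log chameleon hash (illustrative parameters only; real
# deployments use cryptographically sized groups).
p, q, g = 467, 233, 4        # safe prime p = 2q + 1; g generates the order-q subgroup

def keygen(x):
    """Public key h = g^x mod p for trapdoor x in [1, q-1]."""
    return pow(g, x, p)

def chash(m, r, h):
    """Chameleon hash H(m, r) = g^m * h^r mod p."""
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

def collide(m, r, m_new, x):
    """With trapdoor x, find r_new such that H(m_new, r_new) == H(m, r):
    solve m + x*r = m_new + x*r_new (mod q)."""
    return (r + (m - m_new) * pow(x, -1, q)) % q

x = 57                                   # trapdoor held by the redaction authority
h = keygen(x)
digest = chash(m=101, r=88, h=h)         # commitment recorded on-chain
r_new = collide(101, 88, m_new=42, x=x)  # redact the contribution, keep the digest
assert chash(42, r_new, h) == digest     # chain remains valid after removal
```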
Related papers
- ConDa: Fast Federated Unlearning with Contribution Dampening [46.074452659791575]
ConDa is a framework that performs efficient unlearning by tracking the parameters that affect the global model for each client.
We perform experiments on multiple datasets and demonstrate that ConDa is effective at forgetting a client's data.
arXiv Detail & Related papers (2024-10-05T12:45:35Z)
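The summary describes tracking each client's influence on global parameters and dampening it at unlearning time. A minimal sketch of what such contribution dampening could look like, assuming per-parameter contribution statistics were logged during training (the threshold rule and names are assumptions, not ConDa's exact algorithm):

```python
import numpy as np

def forget_client(global_params, contributions, client_id, gamma=0.5, thresh=0.3):
    """Dampen global parameters dominated by the forgotten client's updates.

    contributions[c] holds per-parameter |update| mass accumulated for client c
    during training; this is a generic sketch, not ConDa's exact rule.
    """
    total = sum(contributions.values())                 # per-parameter totals
    share = contributions[client_id] / (total + 1e-12)  # forgotten client's share
    mask = share > thresh                               # parameters it dominates
    damped = global_params.copy()
    damped[mask] *= gamma                               # shrink those parameters
    return damped

# Usage with toy tensors:
params = np.random.randn(1000)
contribs = {c: np.abs(np.random.randn(1000)) for c in range(5)}
unlearned = forget_client(params, contribs, client_id=3)
```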
- Safely Learning with Private Data: A Federated Learning Framework for Large Language Model [3.1077263218029105]
Federated learning (FL) is an ideal solution for training models with distributed private data.
Traditional frameworks like FedAvg are unsuitable for large language models (LLMs).
We propose FL-GLM, which prevents data leakage caused by both server-side and peer-client attacks.
arXiv Detail & Related papers (2024-06-21T06:43:15Z)
- Update Selective Parameters: Federated Machine Unlearning Based on Model Explanation [46.86767774669831]
We propose a more effective and efficient federated unlearning scheme based on the concept of model explanation.
We select the most influential channels within an already-trained model for the data that need to be unlearned.
arXiv Detail & Related papers (2024-06-18T11:43:20Z)
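The scheme selects the channels most influential for the data to be unlearned and updates only those. A hedged sketch using a Taylor-style saliency score as a stand-in explanation method (the paper's actual attribution method may differ):

```python
import numpy as np

def influential_channels(acts, grads, k):
    """Rank channels by a Taylor-style saliency |activation * gradient|,
    averaged over the forget-set samples (generic sketch, not the paper's
    exact explanation method).

    acts, grads: arrays of shape (num_samples, num_channels).
    """
    saliency = np.abs(acts * grads).mean(axis=0)
    return np.argsort(saliency)[-k:]          # indices of the top-k channels

# Only these channels would be reset and fine-tuned on the retained data,
# instead of retraining the full model:
acts = np.random.randn(256, 64)               # forget-set activations (toy)
grads = np.random.randn(256, 64)              # matching gradients (toy)
to_unlearn = influential_channels(acts, grads, k=8)
```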
- Scalable Federated Unlearning via Isolated and Coded Sharding [76.12847512410767]
Federated unlearning has emerged as a promising paradigm to erase the client-level data effect.
This paper proposes a scalable federated unlearning framework based on isolated sharding and coded computing.
arXiv Detail & Related papers (2024-01-29T08:41:45Z)
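Isolated sharding confines each client's influence to one sub-model, so unlearning retrains a single shard rather than the whole federation; the coded-computing layer the paper adds for efficiency is omitted here. A minimal sketch (train_shard is a hypothetical callback):

```python
import random

def assign_shards(clients, num_shards, seed=0):
    """Partition clients into isolated shards; each shard trains its own
    sub-model, so forgetting a client retrains one shard, not everything."""
    rng = random.Random(seed)
    shards = {s: [] for s in range(num_shards)}
    for c in clients:
        shards[rng.randrange(num_shards)].append(c)
    return shards

def unlearn(client, shards, train_shard):
    """Drop the client and retrain only the shard that contained it."""
    for sid, members in shards.items():
        if client in members:
            members.remove(client)
            return train_shard(sid, members)   # retraining cost ~ 1/num_shards

shards = assign_shards(clients=list(range(10)), num_shards=4)
retrained = unlearn(client=7, shards=shards,
                    train_shard=lambda sid, members: f"model_{sid}@{len(members)}")
```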
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
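Since clients collaborate by sharing representations rather than weights, the training signal is a contrastive objective over embeddings. A minimal InfoNCE-style sketch of such a loss (a generic formulation, not necessarily the paper's exact objective):

```python
import numpy as np

def contrastive_distillation_loss(z_local, z_peer, tau=0.1):
    """InfoNCE-style loss on shared representations: each local embedding is
    pulled toward the peer embedding of the same sample and pushed away from
    the rest of the batch.

    z_local, z_peer: (batch, dim) L2-normalized embeddings.
    """
    logits = z_local @ z_peer.T / tau                  # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                # matching pairs on diagonal

z_a = np.random.randn(32, 128); z_a /= np.linalg.norm(z_a, axis=1, keepdims=True)
z_b = np.random.randn(32, 128); z_b /= np.linalg.norm(z_b, axis=1, keepdims=True)
loss = contrastive_distillation_loss(z_a, z_b)
```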
- Federated Unlearning: How to Efficiently Erase a Client in FL? [9.346673106489742]
We propose a method to erase a client by removing the influence of their entire local data from the trained global model.
Our unlearning method achieves comparable performance as the gold standard unlearning method of federated retraining from scratch.
Unlike prior works, our unlearning method neither requires global access to the data used for training nor the history of the parameter updates to be stored by the server or any of the clients.
arXiv Detail & Related papers (2022-07-12T13:24:23Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
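The blurb does not spell out FedReg's regularizer; a common way to alleviate forgetting in local training is to penalize drift away from the global model. A proximal-style sketch under that assumption (FedReg's actual mechanism differs in detail):

```python
import numpy as np

def local_step(w, w_global, grad_fn, lr=0.01, mu=0.1):
    """One regularized local update: the proximal term pulls the local model
    back toward the global model, limiting how much global knowledge is
    forgotten during local training (generic sketch, not FedReg's exact loss)."""
    g = grad_fn(w) + mu * (w - w_global)      # task gradient + anti-forgetting pull
    return w - lr * g

# Toy quadratic task: gradient of 0.5 * ||w - target||^2
target = np.array([1.0, -2.0])
w_global = np.zeros(2)
w = w_global.copy()
for _ in range(100):
    w = local_step(w, w_global, grad_fn=lambda w: w - target)
```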
- Aggregation Service for Federated Learning: An Efficient, Secure, and More Resilient Realization [22.61730495802799]
We present a system design which offers efficient protection of individual model updates throughout the learning procedure.
Our system achieves accuracy comparable to the baseline, with practical performance.
arXiv Detail & Related papers (2022-02-04T05:03:46Z)
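Protecting individual updates throughout aggregation is typically achieved with pairwise additive masking, where masks cancel in the server-side sum. A bare-bones sketch of that idea (no dropout recovery or key agreement, which a practical system must handle):

```python
import numpy as np

def pair_seed(a, b):
    """Deterministic shared seed for a client pair (in practice derived from
    key agreement, not a formula)."""
    lo, hi = min(a, b), max(a, b)
    return 10_000 * lo + hi

def masked_update(update, client_id, peers, dim):
    """Add pairwise masks that cancel across clients, so the server learns
    only the sum of updates, never an individual one."""
    masked = update.copy()
    for peer in peers:
        rng = np.random.default_rng(pair_seed(client_id, peer))
        mask = rng.standard_normal(dim)
        masked += mask if client_id < peer else -mask   # opposite signs cancel
    return masked

updates = {c: np.random.randn(4) for c in range(3)}
masked = [masked_update(u, c, [p for p in updates if p != c], 4)
          for c, u in updates.items()]
assert np.allclose(sum(masked), sum(updates.values()))  # masks cancel in the sum
```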
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
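RoFL's robustness comes from constraining what clients can submit even under secure aggregation, enforcing norm bounds on updates via zero-knowledge proofs over commitments. The sketch below shows only the plaintext constraint, not the cryptographic machinery:

```python
import numpy as np

def accept_update(update, l2_bound, linf_bound):
    """The norm constraint enforced per client update; in RoFL the check is
    done via zero-knowledge range proofs over committed updates, so the
    server never sees the plaintext."""
    return (np.linalg.norm(update) <= l2_bound and
            np.max(np.abs(update)) <= linf_bound)

updates = [np.random.randn(10) * s for s in (0.1, 0.1, 50.0)]  # last is out of bounds
accepted = [u for u in updates if accept_update(u, l2_bound=5.0, linf_bound=1.0)]
```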
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
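The round structure described above (broadcast, block-generation competition, aggregation from the winning block) can be sketched as follows; the stake-weighted lottery stands in for the actual block-generation competition, and scalar weights stand in for models:

```python
import random

def blade_fl_round(local_weights, stakes):
    """One round as described in the abstract: every client broadcasts its
    locally trained model, clients compete to generate the block (modeled as
    a toy lottery here), and all clients aggregate the models recorded in the
    winning block before the next local round."""
    pool = dict(local_weights)                          # broadcast phase
    producer = random.choices(list(stakes),             # block-generation contest
                              weights=list(stakes.values()))[0]
    block = {"producer": producer, "models": pool}      # block holds received models
    w_next = sum(block["models"].values()) / len(block["models"])  # aggregation
    return w_next, block

w_next, block = blade_fl_round({0: 1.0, 1: 1.4, 2: 0.6}, stakes={0: 1, 1: 2, 2: 1})
```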
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information shown and is not responsible for any consequences of its use.