Decentralized Federated Unlearning on Blockchain
- URL: http://arxiv.org/abs/2402.16294v1
- Date: Mon, 26 Feb 2024 04:31:53 GMT
- Title: Decentralized Federated Unlearning on Blockchain
- Authors: Xiao Liu, Mingyuan Li, Xu Wang, Guangsheng Yu, Wei Ni, Lixiang Li,
Haipeng Peng, Renping Liu
- Abstract summary: Blockchained Federated Learning (FL) has been gaining traction for ensuring the integrity and traceability of FL processes.
We propose BlockFUL, a generic framework that redesigns the blockchain structure using Chameleon Hash (CH) technology.
We conduct a comprehensive study of two typical unlearning methods, gradient ascent and re-training, demonstrating the efficient unlearning workflow.
- Score: 27.614497435862766
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Blockchained Federated Learning (FL) has been gaining traction for ensuring
the integrity and traceability of FL processes. Blockchained FL involves
participants training models locally with their data and subsequently
publishing the models on the blockchain, forming a Directed Acyclic Graph
(DAG)-like inheritance structure that represents the model relationship.
However, this particular DAG-based structure presents challenges in updating
models with sensitive data, due to the complexity and overhead involved. To
address this, we propose Blockchained Federated Unlearning (BlockFUL), a
generic framework that redesigns the blockchain structure using Chameleon Hash
(CH) technology to mitigate the complexity of model updating, thereby reducing
the computational and consensus costs of unlearning tasks. Furthermore, BlockFUL
supports various federated unlearning methods, ensuring the integrity and
traceability of model updates, whether conducted in parallel or serial. We
conduct a comprehensive study of two typical unlearning methods, gradient
ascent and re-training, demonstrating the efficient unlearning workflow in
these two categories with minimal CH and block update operations. Additionally,
we compare the computation and communication costs of these methods.
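The gradient-ascent unlearning method studied in the paper can be sketched in a few lines: to remove a dataset's influence, the model ascends (rather than descends) the loss gradient computed on the data to be forgotten. The minimal sketch below uses a logistic-regression model for illustration; the model form, learning rate, and step count are assumptions, not the paper's exact setup.

```python
import numpy as np

def gradient_ascent_unlearn(weights, forget_data, forget_labels, lr=0.01, steps=10):
    """Approximately remove the influence of `forget_data` from a linear
    (logistic-regression) model by ascending the loss gradient on that
    data -- the reverse of an ordinary training step."""
    w = weights.copy()
    for _ in range(steps):
        logits = forget_data @ w
        probs = 1.0 / (1.0 + np.exp(-logits))
        # Cross-entropy gradient on the data to be forgotten.
        grad = forget_data.T @ (probs - forget_labels) / len(forget_labels)
        w += lr * grad  # ascend instead of descend
    return w
```

After these steps the loss on the forgotten data rises, degrading the model's fit to it; in BlockFUL the resulting updated models would then be published back to the chain with only the affected blocks rewritten via Chameleon Hash.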
Related papers
- Enhancing Trust and Privacy in Distributed Networks: A Comprehensive Survey on Blockchain-based Federated Learning [51.13534069758711]
Decentralized approaches like blockchain offer a compelling solution by implementing a consensus mechanism among multiple entities.
Federated Learning (FL) enables participants to collaboratively train models while safeguarding data privacy.
This paper investigates the synergy between blockchain's security features and FL's privacy-preserving model training capabilities.
arXiv Detail & Related papers (2024-03-28T07:08:26Z) - Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z) - BRFL: A Blockchain-based Byzantine-Robust Federated Learning Model [8.19957400564017]
Federated learning, which stores data in distributed nodes and shares only model parameters, has gained significant attention for addressing this concern.
A challenge arises in federated learning due to the Byzantine Attack Problem, where malicious local models can compromise the global model's performance during aggregation.
This article proposes a Byzantine-Robust Federated Learning (BRFL) model that combines federated learning with blockchain technology.
arXiv Detail & Related papers (2023-10-20T10:21:50Z) - A Blockchain-empowered Multi-Aggregator Federated Learning Architecture
in Edge Computing with Deep Reinforcement Learning Optimization [8.082460100928358]
Federated learning (FL) is emerging as a sought-after distributed machine learning architecture.
With advancements in network infrastructure, FL has been seamlessly integrated into edge computing.
While blockchain technology promises to bolster security, practical deployment on resource-constrained edge devices remains a challenge.
arXiv Detail & Related papers (2023-10-14T20:47:30Z) - The Implications of Decentralization in Blockchained Federated Learning: Evaluating the Impact of Model Staleness and Inconsistencies [2.6391879803618115]
We study the practical implications of outsourcing the orchestration of federated learning to a democratic setting such as in a blockchain.
Using simulation, we evaluate the blockchained FL operation by applying two different ML models on the well-known MNIST and CIFAR-10 datasets.
Our results show the high impact of model inconsistencies on the accuracy of the models (up to a 35% decrease in prediction accuracy).
arXiv Detail & Related papers (2023-10-11T13:18:23Z) - Scheduling and Aggregation Design for Asynchronous Federated Learning
over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting.
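The age-aware weighting idea can be sketched as follows: stale client updates are discounted before server-side averaging, so fresher updates dominate the aggregate. The exponential decay schedule and its rate are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def age_aware_aggregate(updates, ages, decay=0.5):
    """Weighted average of client updates where an update of age `a`
    (rounds of staleness) is discounted by decay**a, with the weights
    normalized to sum to one."""
    weights = np.array([decay ** a for a in ages], dtype=float)
    weights /= weights.sum()
    return sum(w * np.asarray(u, dtype=float) for w, u in zip(weights, updates))
```

For example, a fresh update (age 0) and a two-round-stale update (age 2) receive normalized weights 0.8 and 0.2 under decay=0.5, so the stale straggler still contributes but no longer dominates.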
arXiv Detail & Related papers (2022-12-14T17:33:01Z) - Blockchain-based Monitoring for Poison Attack Detection in Decentralized
Federated Learning [2.322461721824713]
Federated Learning (FL) is a machine learning technique that addresses the privacy challenges in terms of access rights of local datasets.
In decentralized FL, the chief is eliminated from the learning process as workers collaborate between each other to train the global model.
We propose a technique which consists in decoupling the monitoring phase from the detection phase in defenses against poisoning attacks.
arXiv Detail & Related papers (2022-09-30T19:07:29Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z) - Secure and Efficient Federated Learning Through Layering and Sharding
Blockchain [15.197940168865271]
This paper proposes ChainFL, a novel two-layer blockchain-driven Federated Learning system.
It splits the Internet network into multiple shards within the subchain layer, effectively reducing the scale of information exchange.
It also employs a Directed Acyclic Graph (DAG)-based mainchain as the mainchain layer, enabling parallel and asynchronous cross-shard validation.
arXiv Detail & Related papers (2021-04-27T12:19:07Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL):
Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL)
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
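The BLADE-FL round described above can be sketched as: every client broadcasts its local model, one client wins block generation and packs the received models into a block, and each client aggregates the block's models before its next local training. The sketch below assumes plain FedAvg-style averaging and abstracts away the block-generation competition.

```python
import numpy as np

def blade_fl_round(local_models):
    """One simplified BLADE-FL round: broadcast models are packed into a
    block by the winning client, and clients then average the block's
    models as the starting point of the next round of local training."""
    # Block generation: the winning client packs all broadcast models.
    block = {"models": [np.asarray(m, dtype=float) for m in local_models]}
    # Aggregation from the generated block (FedAvg-style mean).
    return sum(block["models"]) / len(block["models"])
```

A lazy client in this picture is one that broadcasts a stale or copied model instead of training; since every broadcast model enters the block, lazy contributions directly dilute the aggregate, which is why the paper studies their proportion against the learning parameters.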
arXiv Detail & Related papers (2021-01-18T07:19:08Z) - Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.