Decentralized Federated Unlearning on Blockchain
- URL: http://arxiv.org/abs/2402.16294v1
- Date: Mon, 26 Feb 2024 04:31:53 GMT
- Title: Decentralized Federated Unlearning on Blockchain
- Authors: Xiao Liu, Mingyuan Li, Xu Wang, Guangsheng Yu, Wei Ni, Lixiang Li,
Haipeng Peng, Renping Liu
- Abstract summary: Blockchained Federated Learning (FL) has been gaining traction for ensuring the integrity and traceability of FL processes.
We propose BlockFUL, a generic framework that redesigns the blockchain structure using Chameleon Hash (CH) technology.
We conduct a comprehensive study of two typical unlearning methods, gradient ascent and re-training, demonstrating the efficient unlearning workflow.
- Score: 27.614497435862766
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Blockchained Federated Learning (FL) has been gaining traction for ensuring
the integrity and traceability of FL processes. Blockchained FL involves
participants training models locally with their data and subsequently
publishing the models on the blockchain, forming a Directed Acyclic Graph
(DAG)-like inheritance structure that represents the model relationship.
However, this particular DAG-based structure presents challenges in updating
models with sensitive data, due to the complexity and overhead involved. To
address this, we propose Blockchained Federated Unlearning (BlockFUL), a
generic framework that redesigns the blockchain structure using Chameleon Hash
(CH) technology to mitigate the complexity of model updating, thereby reducing
the computational and consensus costs of unlearning tasks. Furthermore, BlockFUL
supports various federated unlearning methods, ensuring the integrity and
traceability of model updates, whether conducted in parallel or serial. We
conduct a comprehensive study of two typical unlearning methods, gradient
ascent and re-training, demonstrating the efficient unlearning workflow in
these two categories with minimal CH and block update operations. Additionally,
we compare the computation and communication costs of these methods.
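The redaction primitive behind BlockFUL can be illustrated with a textbook discrete-log chameleon hash in the style of Krawczyk–Rabin. This is a toy sketch with illustrative parameters, not the paper's construction: whoever holds the trapdoor can replace a block's content (e.g., a model reference that must be unlearned) while keeping the block hash, and thus the chain links, unchanged.

```python
# Toy chameleon hash (Krawczyk–Rabin style). Parameters are illustrative
# and far too small for real security.
p, q = 2039, 1019          # p = 2q + 1, both prime
g = 4                      # generator of the order-q subgroup of Z_p^*
x = 777                    # trapdoor (secret key), 1 <= x < q
y = pow(g, x, p)           # public key

def ch(m, r):
    """Chameleon hash: H(m, r) = g^m * y^r mod p."""
    return (pow(g, m % q, p) * pow(y, r % q, p)) % p

def collide(m, r, m_new):
    """With trapdoor x, find r_new so (m_new, r_new) hashes identically."""
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 123, 45             # original block content digest and randomness
m2 = 678                   # e.g., an updated model digest after unlearning
r2 = collide(m, r, m2)
assert ch(m, r) == ch(m2, r2)   # hash (and chain links) unchanged
```

Because `ch(m, r) = g^(m + x*r) mod p`, choosing `r2 = r + (m - m2) * x^{-1} mod q` keeps the exponent, and hence the hash, fixed; this is what lets an authorized party rewrite model entries without invalidating the DAG of blocks.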
Related papers
- Robust Asymmetric Heterogeneous Federated Learning with Corrupted Clients [60.22876915395139]
This paper studies a challenging robust federated learning task with model heterogeneous and data corrupted clients.
Data corruption is unavoidable due to factors such as random noise, compression artifacts, or environmental conditions in real-world deployment.
We propose a novel Robust Asymmetric Heterogeneous Federated Learning framework to address these issues.
arXiv Detail & Related papers (2025-03-12T09:52:04Z) - Blockchain-based Framework for Scalable and Incentivized Federated Learning [0.820828081284034]
Federated Learning (FL) enables collaborative model training without sharing raw data, preserving privacy while harnessing distributed datasets.
Traditional FL systems often rely on centralized aggregating mechanisms, introducing trust issues, single points of failure, and limited mechanisms for incentivizing meaningful client contributions.
This paper presents a blockchain-based FL framework that addresses these limitations by integrating smart contracts and a novel hybrid incentive mechanism.
arXiv Detail & Related papers (2025-02-20T00:38:35Z) - Digital Twin-Assisted Federated Learning with Blockchain in Multi-tier Computing Systems [67.14406100332671]
In Industry 4.0 systems, resource-constrained edge devices engage in frequent data interactions.
This paper proposes a digital twin (DT)-assisted federated learning (FL) scheme.
The efficacy of our proposed cooperative interference-based FL process has been verified through numerical analysis.
arXiv Detail & Related papers (2024-11-04T17:48:02Z) - CLIPErase: Efficient Unlearning of Visual-Textual Associations in CLIP [56.199779065855004]
We introduce CLIPErase, a novel approach that disentangles and selectively forgets both visual and textual associations.
Experiments on the CIFAR-100 and Flickr30K datasets demonstrate that CLIPErase effectively forgets designated associations in zero-shot tasks for multimodal samples.
arXiv Detail & Related papers (2024-10-30T17:51:31Z) - A Blockchain-empowered Multi-Aggregator Federated Learning Architecture
in Edge Computing with Deep Reinforcement Learning Optimization [8.082460100928358]
Federated learning (FL) is emerging as a sought-after distributed machine learning architecture.
With advancements in network infrastructure, FL has been seamlessly integrated into edge computing.
While blockchain technology promises to bolster security, practical deployment on resource-constrained edge devices remains a challenge.
arXiv Detail & Related papers (2023-10-14T20:47:30Z) - The Implications of Decentralization in Blockchained Federated Learning: Evaluating the Impact of Model Staleness and Inconsistencies [2.6391879803618115]
We study the practical implications of outsourcing the orchestration of federated learning to a democratic setting such as in a blockchain.
Using simulation, we evaluate the blockchained FL operation by applying two different ML models on the well-known MNIST and CIFAR-10 datasets.
Our results show the high impact of model inconsistencies on the accuracy of the models (up to a 35% decrease in prediction accuracy).
arXiv Detail & Related papers (2023-10-11T13:18:23Z) - Multi-dimensional Data Quick Query for Blockchain-based Federated Learning [6.499393722730449]
We propose a novel data structure to improve the query efficiency within each block named MerkleRB-Tree.
In detail, we leverage Minimal Bounding Rectangles (MBRs) and bloom filters for the query process of multi-dimensional continuous-valued attributes and discrete-valued attributes, respectively.
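The two per-attribute filters named above can be sketched as follows. This is a minimal stand-alone illustration, not the paper's MerkleRB-Tree; class and parameter names are assumptions:

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter for membership tests on discrete-valued attributes."""
    def __init__(self, m_bits=256, k=3):
        self.m, self.k, self.bits = m_bits, k, 0

    def _positions(self, item):
        # Derive k bit positions from k independent SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h, "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # No false negatives; false positives possible.
        return all(self.bits >> pos & 1 for pos in self._positions(item))

def mbr_contains(mbr, point):
    """Minimal Bounding Rectangle test for continuous-valued attributes."""
    lo, hi = mbr
    return all(l <= v <= h for v, l, h in zip(point, lo, hi))

bf = BloomFilter()
bf.add("client-42")
assert bf.might_contain("client-42")                          # no false negatives
assert mbr_contains(([0.0, 10.0], [1.0, 20.0]), [0.5, 15.0])  # point inside MBR
```

A query can prune a block whenever the Bloom filter rules out a discrete value or the point falls outside the MBR, which is what makes the per-block index cheap to check.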
arXiv Detail & Related papers (2023-09-27T01:35:11Z) - A Unified Framework for Alternating Offline Model Training and Policy
Learning [62.19209005400561]
In offline model-based reinforcement learning, we learn a dynamic model from historically collected data, and utilize the learned model and fixed datasets for policy learning.
We develop an iterative offline MBRL framework, where we maximize a lower bound of the true expected return.
With the proposed unified model-policy learning framework, we achieve competitive performance on a wide range of continuous-control offline reinforcement learning datasets.
arXiv Detail & Related papers (2022-10-12T04:58:51Z) - Secure and Efficient Federated Learning Through Layering and Sharding
Blockchain [15.197940168865271]
This paper proposes ChainFL, a novel two-layer blockchain-driven Federated Learning system.
It splits the Internet network into multiple shards within the subchain layer, effectively reducing the scale of information exchange.
It also employs a Directed Acyclic Graph (DAG)-based mainchain as the mainchain layer, enabling parallel and asynchronous cross-shard validation.
arXiv Detail & Related papers (2021-04-27T12:19:07Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL):
Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL)
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
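The round described above can be sketched as a simple simulation. This is a heavily simplified sketch under stated assumptions, not the BLADE-FL protocol: models are scalars, the mining competition is skipped (the block simply packages all broadcasts), and names are illustrative.

```python
def blade_fl_round(models, local_update, lazy_clients=frozenset()):
    """One illustrative BLADE-FL-style round (simplified).

    Each honest client trains locally and broadcasts its update; a lazy
    client skips training and re-broadcasts its stale model. Every client
    then aggregates the models packaged in the round's block before the
    next round of local training.
    """
    broadcasts = [m if i in lazy_clients else local_update(m)
                  for i, m in enumerate(models)]
    aggregate = sum(broadcasts) / len(broadcasts)   # FedAvg-style average
    return [aggregate] * len(models)                # all clients sync to the block

# Scalar "models" keep the arithmetic visible; client 3 is lazy and its
# stale value drags down the aggregate.
models = blade_fl_round([1.0, 2.0, 3.0, 4.0],
                        local_update=lambda w: w + 0.1,
                        lazy_clients={3})
```

Even this toy version shows the mechanism studied in the paper: each lazy client contributes no fresh gradient step, so as their proportion grows the aggregated model drifts away from what full participation would produce.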
arXiv Detail & Related papers (2021-01-18T07:19:08Z) - Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z) - Adaptive Aggregation Networks for Class-Incremental Learning [102.20140790771265]
Class-Incremental Learning (CIL) aims to learn a classification model with the number of classes increasing phase-by-phase.
An inherent problem in CIL is the stability-plasticity dilemma between the learning of old and new classes.
arXiv Detail & Related papers (2020-10-10T18:24:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.