PPBFL: A Privacy Protected Blockchain-based Federated Learning Model
- URL: http://arxiv.org/abs/2401.01204v2
- Date: Mon, 8 Jan 2024 15:38:22 GMT
- Title: PPBFL: A Privacy Protected Blockchain-based Federated Learning Model
- Authors: Yang Li, Chunhe Xia, Wanshuang Lin, Tianbo Wang
- Abstract summary: We propose a Privacy Protected Blockchain-based Federated Learning Model (PPBFL) to enhance the security of federated learning.
We introduce a Proof of Training Work (PoTW) consensus algorithm tailored for federated learning, aiming to incentivize training nodes.
We also propose a new mix transactions mechanism utilizing ring signature technology to better protect the identity privacy of local training clients.
- Score: 6.278098707317501
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid development of machine learning and a growing concern for data
privacy, federated learning has become a focal point of attention. However,
attacks on model parameters and a lack of incentive mechanisms hinder the
effectiveness of federated learning. Therefore, we propose A Privacy Protected
Blockchain-based Federated Learning Model (PPBFL) to enhance the security of
federated learning and encourage active participation of nodes in model
training. Blockchain technology ensures the integrity of model parameters
stored in the InterPlanetary File System (IPFS), providing protection against
tampering. Within the blockchain, we introduce a Proof of Training Work (PoTW)
consensus algorithm tailored for federated learning, aiming to incentivize
training nodes. This algorithm rewards nodes with greater computational power,
promoting increased participation and effort in the federated learning process.
A novel adaptive differential privacy algorithm is simultaneously applied to
local and global models. This safeguards the privacy of local data at training
clients, preventing malicious nodes from launching inference attacks.
Additionally, it enhances the security of the global model, preventing
potential security degradation resulting from the combination of numerous local
models. The possibility of security degradation is derived from the composition
theorem. By introducing reverse noise in the global model, a zero-bias estimate
of differential privacy noise between local and global models is achieved.
Furthermore, we propose a new mix transactions mechanism utilizing ring
signature technology to better protect the identity privacy of local training
clients. Security analysis and experimental results demonstrate that PPBFL,
compared to baseline methods, not only exhibits superior model performance but
also achieves higher security.
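To make the Proof of Training Work idea more concrete, the following is a minimal Python sketch, not the paper's protocol: in each round the node whose training work has been validated by its peers produces the block and earns the reward. The `validated_batches` metric and the IPFS content identifiers are hypothetical placeholders; PPBFL's actual validation and reward mechanism is more involved.

```python
from dataclasses import dataclass

@dataclass
class TrainingProof:
    node_id: str
    validated_batches: int   # training work verified by peer nodes (assumed metric)
    model_cid: str           # IPFS content identifier of the uploaded local model

def select_block_producer(proofs):
    """Pick the node with the most validated training work for this round."""
    return max(proofs, key=lambda p: p.validated_batches)

round_proofs = [
    TrainingProof("node-a", validated_batches=120, model_cid="Qm...a"),
    TrainingProof("node-b", validated_batches=340, model_cid="Qm...b"),
    TrainingProof("node-c", validated_batches=210, model_cid="Qm...c"),
]
winner = select_block_producer(round_proofs)
print(f"{winner.node_id} earns the block reward this round")
```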
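The adaptive differential privacy step can likewise be sketched, again only as an illustration rather than the authors' implementation: each client clips its model update and adds Gaussian noise before upload, and the aggregator combines the noisy updates with an extra zero-mean term standing in for PPBFL's reverse-noise correction on the global model. The noise scales (`clip_norm`, `sigma`, `sigma_global`) and the exact form of the correction are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_perturb(update, clip_norm=1.0, sigma=0.5):
    """Clip a local model update and add Gaussian noise (local DP step)."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)

def aggregate(noisy_updates, sigma_global=0.1):
    """Average noisy client updates and apply an extra zero-mean perturbation,
    a hypothetical stand-in for PPBFL's reverse-noise correction."""
    avg = np.mean(noisy_updates, axis=0)
    reverse_noise = rng.normal(0.0, sigma_global, size=avg.shape)
    return avg - reverse_noise  # zero-mean noise keeps the aggregate an unbiased estimate

# Toy usage: three clients perturb a 4-dimensional update before aggregation.
clients = [np.array([0.2, -0.1, 0.4, 0.0]) + 0.05 * rng.normal(size=4) for _ in range(3)]
global_update = aggregate([dp_perturb(u) for u in clients])
print(global_update)
```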
Related papers
- Digital Twin-Assisted Federated Learning with Blockchain in Multi-tier Computing Systems [67.14406100332671]
In Industry 4.0 systems, resource-constrained edge devices engage in frequent data interactions.
This paper proposes a digital twin (DT)-assisted federated learning (FL) scheme.
The efficacy of our proposed cooperative interference-based FL process has been verified through numerical analysis.
arXiv Detail & Related papers (2024-11-04T17:48:02Z) - Enhancing Trust and Privacy in Distributed Networks: A Comprehensive Survey on Blockchain-based Federated Learning [51.13534069758711]
Decentralized approaches like blockchain offer a compelling solution by implementing a consensus mechanism among multiple entities.
Federated Learning (FL) enables participants to collaboratively train models while safeguarding data privacy.
This paper investigates the synergy between blockchain's security features and FL's privacy-preserving model training capabilities.
arXiv Detail & Related papers (2024-03-28T07:08:26Z) - B^2SFL: A Bi-level Blockchained Architecture for Secure Federated
Learning-based Traffic Prediction [4.3030251749726345]
Federated Learning (FL) is a privacy-preserving machine learning technology.
Security and privacy guarantees could be compromised due to malicious participants and the centralized FL server.
This article proposed a bi-level blockchained architecture for secure federated learning-based traffic prediction.
arXiv Detail & Related papers (2023-10-23T08:06:05Z) - Binary Federated Learning with Client-Level Differential Privacy [7.854806519515342]
Federated learning (FL) is a privacy-preserving collaborative learning framework.
Existing FL systems typically adopt Federated Averaging (FedAvg) as the training algorithm.
We propose a communication-efficient FL training algorithm with differential privacy guarantee.
arXiv Detail & Related papers (2023-08-07T06:07:04Z) - Defending Against Poisoning Attacks in Federated Learning with
Blockchain [12.840821573271999]
We propose a secure and reliable federated learning system based on blockchain and distributed ledger technology.
Our system incorporates a peer-to-peer voting mechanism and a reward-and-slash mechanism, which are powered by on-chain smart contracts, to detect and deter malicious behaviors.
arXiv Detail & Related papers (2023-07-02T11:23:33Z) - FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated
Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method can achieve improvement with guaranteed robustness.
Our results against eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL):
Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with
Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL achieves good performance in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, it also gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior.
arXiv Detail & Related papers (2020-12-02T12:18:27Z) - Federated Learning in Adversarial Settings [0.8701566919381224]
Federated learning schemes provide different trade-offs between robustness, privacy, bandwidth efficiency, and model accuracy.
We show that this extension performs as efficiently as the non-private but robust scheme, even with stringent privacy requirements.
This suggests a possible fundamental trade-off between Differential Privacy and robustness.
arXiv Detail & Related papers (2020-10-15T14:57:02Z) - A Blockchain-based Decentralized Federated Learning Framework with
Committee Consensus [20.787163387487816]
In mobile computing scenarios, federated learning protects users from exposing their private data, while cooperatively training the global model for a variety of real-world applications.
The security of federated learning is increasingly being questioned, due to malicious clients or central servers constantly attacking the global model or user privacy data.
We propose a decentralized federated learning framework based on blockchain, i.e., a Blockchain-based Federated Learning framework with Committee consensus (BFLC).
arXiv Detail & Related papers (2020-04-02T02:04:16Z)