BlockFLow: An Accountable and Privacy-Preserving Solution for Federated
Learning
- URL: http://arxiv.org/abs/2007.03856v1
- Date: Wed, 8 Jul 2020 02:24:26 GMT
- Title: BlockFLow: An Accountable and Privacy-Preserving Solution for Federated
Learning
- Authors: Vaikkunth Mugunthan, Ravi Rahman and Lalana Kagal
- Abstract summary: BlockFLow is an accountable federated learning system that is fully decentralized and privacy-preserving.
Its primary goal is to reward agents proportional to the quality of their contribution while protecting the privacy of the underlying datasets and being resilient to malicious adversaries.
- Score: 2.0625936401496237
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning enables the development of a machine learning model among
collaborating agents without requiring them to share their underlying data.
However, malicious agents who train on random data, or worse, on datasets with
the result classes inverted, can weaken the combined model. BlockFLow is an
accountable federated learning system that is fully decentralized and
privacy-preserving. Its primary goal is to reward agents proportional to the
quality of their contribution while protecting the privacy of the underlying
datasets and being resilient to malicious adversaries. Specifically, BlockFLow
incorporates differential privacy, introduces a novel auditing mechanism for
model contribution, and uses Ethereum smart contracts to incentivize good
behavior. Unlike existing auditing and accountability methods for federated
learning systems, our system does not require a centralized test dataset,
sharing of datasets between the agents, or one or more trusted auditors; it is
fully decentralized and resilient up to a 50% collusion attack in a malicious
trust model. When run on the public Ethereum blockchain, BlockFLow uses the
results from the audit to reward parties with cryptocurrency based on the
quality of their contribution. We evaluated BlockFLow on two datasets that
offer classification tasks solvable via logistic regression models. Our results
show that the resultant auditing scores reflect the quality of the honest
agents' datasets. Moreover, the scores from dishonest agents are statistically
lower than those from the honest agents. These results, along with the
reasonable blockchain costs, demonstrate the effectiveness of BlockFLow as an
accountable federated learning system.
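The abstract describes the mechanism only at a high level. As a rough, hypothetical illustration of how such a decentralized audit and reward step could fit together, the sketch below trains per-agent logistic regression models with simple output perturbation standing in for differential privacy, has every agent evaluate every shared model on its own local evaluation split, takes the median of the peer-reported accuracies as each contributor's audit score, and splits a reward pool in proportion to those scores. The noise scale, the median-of-peer-scores rule, and the proportional payout are assumptions made for illustration, not the paper's exact protocol.
```python
# Illustrative sketch of a BlockFLow-style audit round (NOT the paper's implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
true_w = rng.normal(size=5)          # shared ground-truth direction for the toy data

def make_data(n=400, flip=False):
    """Generate a toy binary classification dataset; flip=True inverts the labels."""
    X = rng.normal(size=(n, 5))
    y = (X @ true_w + 0.3 * rng.normal(size=n) > 0).astype(int)
    return X, (1 - y) if flip else y

def train_private_model(X, y, noise_scale=0.05):
    """Train a local logistic regression model and perturb its weights with Laplace
    noise -- a simple output-perturbation stand-in for the differential-privacy step."""
    model = LogisticRegression(max_iter=1000).fit(X, y)
    model.coef_ = model.coef_ + rng.laplace(0.0, noise_scale, model.coef_.shape)
    model.intercept_ = model.intercept_ + rng.laplace(0.0, noise_scale, model.intercept_.shape)
    return model

def audit_scores(models, eval_sets):
    """Every agent scores every shared model on its own evaluation split; a
    contributor's final audit score is the median of the peer-reported accuracies."""
    reported = np.array([[m.score(Xe, ye) for (Xe, ye) in eval_sets] for m in models])
    return np.median(reported, axis=1)       # one collusion-robust score per contributor

def proportional_rewards(scores, pool=100.0):
    """Split a reward pool in proportion to the audit scores (paid out on-chain in BlockFLow)."""
    scores = np.clip(scores, 0.0, None)
    return pool * scores / scores.sum()

# Toy run: three honest agents and one dishonest agent who trains on inverted labels.
training_data = [make_data() for _ in range(3)] + [make_data(flip=True)]
eval_sets     = [make_data() for _ in range(3)] + [make_data(flip=True)]
models = [train_private_model(X, y) for X, y in training_data]

scores = audit_scores(models, eval_sets)
print("audit scores:", np.round(scores, 3))   # the dishonest agent scores markedly lower
print("rewards:     ", np.round(proportional_rewards(scores), 2))
```
Taking the median of peer-reported scores is one simple way to stay robust while fewer than half of the evaluators misreport, which matches the 50% collusion-resilience claim in spirit; BlockFLow's actual audit mechanism and Ethereum smart-contract logic are defined in the paper itself.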
Related papers
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
- Secure Decentralized Learning with Blockchain [13.795131629462798]
Federated Learning (FL) is a well-known paradigm of distributed machine learning on mobile and IoT devices.
To avoid the single-point-of-failure problem in FL, decentralized federated learning (DFL) has been proposed, using peer-to-peer communication for model aggregation.
arXiv Detail & Related papers (2023-10-10T23:45:17Z)
- FLEDGE: Ledger-based Federated Learning Resilient to Inference and Backdoor Attacks [8.866045560761528]
Federated learning (FL) is a distributed learning process that allows multiple parties (or clients) to collaboratively train a machine learning model without having them share their private data.
Recent research has demonstrated the effectiveness of inference and poisoning attacks on FL.
We present a ledger-based FL framework, FLEDGE, that holds parties accountable for their behavior and achieves reasonable efficiency in mitigating inference and poisoning attacks.
arXiv Detail & Related papers (2023-10-03T14:55:30Z)
- Defending Against Poisoning Attacks in Federated Learning with Blockchain [12.840821573271999]
We propose a secure and reliable federated learning system based on blockchain and distributed ledger technology.
Our system incorporates a peer-to-peer voting mechanism and a reward-and-slash mechanism, which are powered by on-chain smart contracts, to detect and deter malicious behaviors.
arXiv Detail & Related papers (2023-07-02T11:23:33Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmark and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- FLock: Defending Malicious Behaviors in Federated Learning with Blockchain [3.0111384920731545]
Federated learning (FL) is a promising way to allow multiple data owners (clients) to collaboratively train machine learning models.
We propose to use distributed ledger technology (DLT) to achieve FLock, a secure and reliable decentralized FL system built on blockchain.
arXiv Detail & Related papers (2022-11-05T06:14:44Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Decentralized Federated Learning Preserves Model and Data Privacy [77.454688257702]
We propose a fully decentralized approach that allows knowledge to be shared between trained models.
Student models are trained on the output of their teachers using synthetically generated input data.
The results show that an initially untrained student model, trained on the teacher's output, reaches F1-scores comparable to those of the teacher.
arXiv Detail & Related papers (2021-02-01T14:38:54Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely blockchain-assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework, BLADE-FL, by integrating blockchain into Federated Learning (FL).
BLADE-FL performs well in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, the integration gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior (a sketch of this behavior follows this entry).
arXiv Detail & Related papers (2020-12-02T12:18:27Z)
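The two BLADE-FL entries above describe rounds in which every client broadcasts its trained model and then aggregates the models recorded in the generated block, and a training-deficiency problem caused by lazy clients who plagiarize a peer's model and add artificial noise. The sketch below shows one such round with a single lazy client, under assumptions made purely for illustration: simple linear-regression clients trained by gradient steps, uniform averaging of the broadcast models, and no actual block generation or mining.
```python
# Illustrative sketch of one BLADE-FL-style round with a lazy client (assumptions only).
import numpy as np

rng = np.random.default_rng(1)
dim, n_clients = 5, 4
true_w = rng.normal(size=dim)            # shared ground truth for the toy regression task
global_w = np.zeros(dim)                 # model at the start of the round

def local_update(w, n=200, steps=20, lr=0.1):
    """Honest client: a few gradient steps of least-squares training on local data."""
    X = rng.normal(size=(n, dim))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    for _ in range(steps):
        w = w - lr * (2.0 / n) * X.T @ (X @ w - y)
    return w

def lazy_update(peer_model, noise=0.05):
    """Lazy client: plagiarize a peer's broadcast model and add noise to hide the copy."""
    return peer_model + rng.normal(0.0, noise, peer_model.shape)

# One round: clients 0-2 train honestly and broadcast; client 3 copies client 0's model.
broadcast = [local_update(global_w.copy()) for _ in range(n_clients - 1)]
broadcast.append(lazy_update(broadcast[0]))

# Each client would aggregate the models recorded in the winning block; here that is
# simply the average of all broadcast models (block generation itself is omitted).
global_w = np.mean(broadcast, axis=0)

print("honest model error    :", round(float(np.linalg.norm(broadcast[0] - true_w)), 3))
print("aggregated model error:", round(float(np.linalg.norm(global_w - true_w)), 3))
print("lazy vs. copied model :", round(float(np.linalg.norm(broadcast[3] - broadcast[0])), 3))
```
Because the lazy client's broadcast is only a noisy copy of a peer's model, it adds no new information to the aggregate, which is the training deficiency these BLADE-FL papers analyze.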
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.