Transparent Contribution Evaluation for Secure Federated Learning on
Blockchain
- URL: http://arxiv.org/abs/2101.10572v1
- Date: Tue, 26 Jan 2021 05:49:59 GMT
- Title: Transparent Contribution Evaluation for Secure Federated Learning on
Blockchain
- Authors: Shuaicheng Ma, Yang Cao, Li Xiong
- Abstract summary: We propose a blockchain-based federated learning framework and a protocol to transparently evaluate each participant's contribution.
Our framework protects all parties' privacy in the model building phase and transparently evaluates contributions based on the model updates.
- Score: 10.920274650337559
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning is a promising machine learning paradigm when multiple
parties collaborate to build a high-quality machine learning model.
Nonetheless, these parties are only willing to participate when given enough
incentives, such as a fair reward based on their contributions. Many studies
explored Shapley value based methods to evaluate each party's contribution to
the learned model. However, they commonly assume a trusted server to train the
model and evaluate the data owners' model contributions, which lacks
transparency and may hinder the success of federated learning in practice. In
this work, we propose a blockchain-based federated learning framework and a
protocol to transparently evaluate each participant's contribution. Our
framework protects all parties' privacy in the model building phase and
transparently evaluates contributions based on the model updates. The
experiment with the handwritten digits dataset demonstrates that the proposed
method can effectively evaluate the contributions.
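For context, the Shapley value referenced in the abstract assigns each party the weighted average of its marginal contribution across all coalitions of the other parties. Below is a minimal, illustrative sketch of exact Shapley computation for a small federation; the `utility` function (e.g. validation accuracy of a model aggregated from a subset of parties' updates) is an assumed caller-supplied hook, not the paper's implementation.

```python
from itertools import combinations
from math import factorial

def shapley_values(parties, utility):
    """Exact Shapley value of each party's contribution.

    parties: list of party identifiers.
    utility: assumed caller-supplied function mapping a frozenset of
             parties to a model-quality score, e.g. validation accuracy
             of the model aggregated from those parties' updates.
    """
    n = len(parties)
    values = {p: 0.0 for p in parties}
    for p in parties:
        others = [q for q in parties if q != p]
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Weight of a size-k coalition in the Shapley average.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                values[p] += weight * (utility(s | {p}) - utility(s))
    return values

# Toy usage: with an additive utility, each of the three parties
# receives exactly 1/3 of the total utility.
print(shapley_values(["A", "B", "C"], lambda s: len(s) / 3.0))
```

Exact computation enumerates all 2^(n-1) coalitions per party, which is why the works below explore approximations and pruning.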
Related papers
- DPVS-Shapley: Faster and Universal Contribution Evaluation Component in Federated Learning [1.740992908651449]
We introduce a component called Dynamic Pruning Validation Set Shapley (DPVS-Shapley).
This method accelerates the contribution assessment process by dynamically pruning the original dataset without compromising the evaluation's accuracy.
arXiv Detail & Related papers (2024-10-19T13:01:44Z)
- Redefining Contributions: Shapley-Driven Federated Learning [3.9539878659683363]
Federated learning (FL) has emerged as a pivotal approach in machine learning.
It is challenging to ensure global model convergence when participants do not contribute equally and/or honestly.
This paper proposes a novel contribution assessment method called ShapFed for fine-grained evaluation of participant contributions in FL.
arXiv Detail & Related papers (2024-06-01T22:40:31Z)
- Mitigating federated learning contribution allocation instability through randomized aggregation [1.827018440608344]
Federated learning (FL) is a novel collaborative machine learning framework designed to preserve privacy while enabling the creation of robust models.
This paper investigates the fair and accurate attribution of contributions from various participants to the creation of the joint global model.
We introduce FedRandom, which is designed to sample contributions in a more equitable and distributed manner.
arXiv Detail & Related papers (2024-05-13T13:55:34Z)
- Incentives in Private Collaborative Machine Learning [56.84263918489519]
Collaborative machine learning involves training models on data from multiple parties.
We introduce differential privacy (DP) as an incentive.
We empirically demonstrate the effectiveness and practicality of our approach on synthetic and real-world datasets.
arXiv Detail & Related papers (2024-04-02T06:28:22Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
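The FL aggregation step described in the entry above is commonly instantiated as FedAvg-style weighted averaging. A minimal sketch, assuming each client releases its parameters as a dict of NumPy arrays; the function name and data layout are illustrative assumptions, not any paper's API.

```python
import numpy as np

def fedavg(updates, weights):
    """Weighted average of locally trained client models.

    updates: list of dicts mapping parameter name -> np.ndarray,
             one dict per client.
    weights: per-client weights, e.g. local dataset sizes.
    """
    total = float(sum(weights))
    return {
        name: sum(w * u[name] for w, u in zip(weights, updates)) / total
        for name in updates[0]
    }

# Toy usage: two equally weighted clients, one parameter tensor each.
clients = [{"w": np.array([1.0, 0.0])}, {"w": np.array([3.0, 2.0])}]
print(fedavg(clients, weights=[1, 1]))  # {'w': array([2., 1.])}
```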
- Practical Vertical Federated Learning with Unsupervised Representation Learning [47.77625754666018]
Federated learning enables multiple parties to collaboratively train a machine learning model without sharing their raw data.
We propose a novel communication-efficient vertical federated learning algorithm named FedOnce, which requires only one-shot communication among parties.
Our privacy-preserving technique significantly outperforms the state-of-the-art approaches under the same privacy budget.
arXiv Detail & Related papers (2022-08-13T08:41:32Z)
- VeriFi: Towards Verifiable Federated Unlearning [59.169431326438676]
Federated learning (FL) is a collaborative learning paradigm where participants jointly train a powerful model without sharing their private data.
A leaving participant has the right to request that its private data be deleted from the global model.
We propose VeriFi, a unified framework integrating federated unlearning and verification.
arXiv Detail & Related papers (2022-05-25T12:20:02Z)
- Blockchain-based Trustworthy Federated Learning Architecture [16.062545221270337]
We present a blockchain-based trustworthy federated learning architecture.
We first design a smart contract-based data-model provenance registry to enable accountability.
We also propose a weighted fair data sampler algorithm to enhance fairness in training data.
arXiv Detail & Related papers (2021-08-16T06:13:58Z)
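The weighted fair data sampler mentioned in the entry above is not specified in this summary; the sketch below shows one generic way to realize such a sampler via inverse-frequency weighting of groups. All names are hypothetical and the weighting rule is an assumption, not the paper's algorithm.

```python
import random

def fair_weighted_sample(examples, group_of, k, seed=0):
    """Sample k examples, up-weighting under-represented groups.

    examples: list of training examples.
    group_of: function mapping an example to its group label.
    Each example is weighted by the inverse frequency of its group,
    so rare groups appear in the sample about as often as common ones.
    """
    counts = {}
    for ex in examples:
        g = group_of(ex)
        counts[g] = counts.get(g, 0) + 1
    weights = [1.0 / counts[group_of(ex)] for ex in examples]
    return random.Random(seed).choices(examples, weights=weights, k=k)

# Toy usage: group "b" has one example out of four, yet is drawn
# in roughly half of the sampled positions.
data = ["a1", "a2", "a3", "b1"]
print(fair_weighted_sample(data, group_of=lambda ex: ex[0], k=8))
```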
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
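Secure aggregation, as mentioned in the RoFL entry above, typically hides each client's update behind pairwise masks that cancel when the server sums all masked updates. A minimal additive-masking sketch over integer vectors; real protocols add key agreement, quantization, and dropout recovery, all omitted here.

```python
import random

PRIME = 2**31 - 1  # all arithmetic is modulo this prime

def mask_updates(updates, seed=0):
    """Hide individual updates behind pairwise masks that cancel in sum.

    updates: list of equal-length integer vectors, one per client.
    For each client pair (i, j) with i < j, a shared random mask is
    added to client i's vector and subtracted from client j's, so the
    server learns only the sum of all updates, not any single one.
    """
    rng = random.Random(seed)
    n, dim = len(updates), len(updates[0])
    masked = [[x % PRIME for x in u] for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            for d in range(dim):
                m = rng.randrange(PRIME)
                masked[i][d] = (masked[i][d] + m) % PRIME
                masked[j][d] = (masked[j][d] - m) % PRIME
    return masked

# Individual masked vectors look random, but their sum is exact.
updates = [[1, 2], [3, 4], [5, 6]]
print([sum(col) % PRIME for col in zip(*mask_updates(updates))])  # [9, 12]
```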
- 2CP: Decentralized Protocols to Transparently Evaluate Contributivity in Blockchain Federated Learning Environments [9.885896204530878]
We introduce 2CP, a framework comprising two novel protocols for Federated Learning.
The Crowdsource Protocol allows an actor to bring a model forward for training and to use their own data to evaluate the contributions made to it.
The Consortium Protocol gives trainers the same guarantee even when no party owns the initial model and no dataset is available.
arXiv Detail & Related papers (2020-11-15T12:59:56Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.