FedToken: Tokenized Incentives for Data Contribution in Federated
Learning
- URL: http://arxiv.org/abs/2209.09775v1
- Date: Tue, 20 Sep 2022 14:58:08 GMT
- Title: FedToken: Tokenized Incentives for Data Contribution in Federated
Learning
- Authors: Shashi Raj Pandey, Lam Duc Nguyen, and Petar Popovski
- Abstract summary: We propose a contribution-based tokenized incentive scheme, named FedToken, backed by blockchain technology.
We first approximate the contribution of local models during model aggregation, then strategically schedule clients to lower the number of communication rounds needed for convergence.
- Score: 33.93936816356012
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Incentives that compensate for the involved costs in the decentralized
training of a Federated Learning (FL) model act as a key stimulus for clients'
long-term participation. However, it is challenging to convince clients to
participate with high quality in FL due to the absence of: (i) full information
on the clients' data quality and properties; (ii) the value of the clients' data
contributions; and (iii) a trusted mechanism for monetary incentive offers.
This often leads to poor efficiency in training and communication. While
several works focus on strategic incentive designs and client selection to
overcome this problem, there is a major knowledge gap in terms of an overall
design tailored to the foreseen digital economy, including Web 3.0, while
simultaneously meeting the learning objectives. To address this gap, we propose
a contribution-based tokenized incentive scheme, namely \texttt{FedToken},
backed by blockchain technology that ensures fair allocation of tokens amongst
the clients that corresponds to the valuation of their data during model
training. Leveraging an engineered Shapley-based scheme, we first approximate
the contribution of local models during model aggregation, then strategically
schedule clients to lower the number of communication rounds needed for
convergence, and devise ways to allocate \emph{affordable} tokens under a
constrained monetary budget.
Extensive simulations demonstrate the efficacy of our proposed method.
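The Shapley-based contribution scheme described in the abstract can be illustrated with a short Monte Carlo sketch. Everything below (the function names, the permutation count, the toy utility) is an illustrative assumption, not the paper's actual implementation; the paper's engineered scheme approximates contributions during model aggregation rather than by re-evaluating coalitions from scratch.

```python
import random

def approx_shapley(client_updates, utility, num_perms=20, seed=0):
    """Monte Carlo approximation of each client's Shapley value.

    client_updates: dict mapping client id -> model update (any object
    the utility function can aggregate).
    utility: function taking a list of updates and returning a scalar
    score (e.g. validation accuracy of the aggregated model).
    """
    rng = random.Random(seed)
    clients = list(client_updates)
    phi = {c: 0.0 for c in clients}
    for _ in range(num_perms):
        perm = clients[:]
        rng.shuffle(perm)
        coalition = []
        prev = utility([])          # utility of the empty coalition
        for c in perm:
            coalition.append(client_updates[c])
            cur = utility(coalition)
            phi[c] += cur - prev    # marginal contribution of client c
            prev = cur
    return {c: v / num_perms for c, v in phi.items()}

# Hypothetical toy utility: reward coalitions whose averaged scalar
# "update" is close to 1.0 (stands in for validation accuracy).
def toy_utility(updates):
    if not updates:
        return 0.0
    avg = sum(updates) / len(updates)
    return 1.0 - abs(1.0 - avg)

scores = approx_shapley({"a": 1.0, "b": 1.0, "c": 0.0}, toy_utility)
```

By the telescoping of marginal contributions, the approximated values sum to the utility of the grand coalition, which is the property a token budget can then be split against.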
Related papers
- IMFL-AIGC: Incentive Mechanism Design for Federated Learning Empowered by Artificial Intelligence Generated Content [15.620004060097155]
Federated learning (FL) has emerged as a promising paradigm that enables clients to collaboratively train a shared global model without uploading their local data.
We propose a data quality-aware incentive mechanism to encourage clients' participation.
Our proposed mechanism achieves the highest training accuracy and reduces the server's cost by up to 53.34% on real-world datasets.
arXiv Detail & Related papers (2024-06-12T07:47:22Z) - Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z) - Personalized Federated Learning with Attention-based Client Selection [57.71009302168411]
We propose FedACS, a new PFL algorithm with an Attention-based Client Selection mechanism.
FedACS integrates an attention mechanism to enhance collaboration among clients with similar data distributions.
Experiments on CIFAR10 and FMNIST validate FedACS's superiority.
arXiv Detail & Related papers (2023-12-23T03:31:46Z) - Welfare and Fairness Dynamics in Federated Learning: A Client Selection
Perspective [1.749935196721634]
Federated learning (FL) is a privacy-preserving learning technique that enables distributed computing devices to train shared learning models.
The economic considerations of the clients, such as fairness and incentive, are yet to be fully explored.
We propose a novel incentive mechanism that involves a client selection process to remove low-quality clients and a money transfer process to ensure a fair reward distribution.
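The two-stage mechanism summarized above (drop low-quality clients, then distribute rewards fairly) might be sketched as follows. The threshold, the proportional payout rule, and all names here are hypothetical assumptions for illustration, not the paper's actual design.

```python
def select_and_reward(qualities, budget, threshold=0.5):
    """Toy two-stage incentive mechanism.

    qualities: dict mapping client id -> estimated contribution
    quality in [0, 1].
    budget: total monetary budget to split among selected clients.
    Stage 1 removes clients below `threshold`; stage 2 pays the rest
    proportionally to their quality, so the whole budget is spent.
    """
    selected = {c: q for c, q in qualities.items() if q >= threshold}
    total = sum(selected.values())
    if total == 0:
        return selected, {}
    rewards = {c: budget * q / total for c, q in selected.items()}
    return selected, rewards

selected, rewards = select_and_reward(
    {"a": 0.9, "b": 0.6, "c": 0.2}, budget=100.0
)
```

A proportional split is only one of many fair division rules; the paper studies the welfare and fairness dynamics such choices induce.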
arXiv Detail & Related papers (2023-02-17T16:31:19Z) - Federated Learning as a Network Effects Game [32.264180198812745]
Federated Learning (FL) aims to foster collaboration among a population of clients to improve the accuracy of machine learning without directly sharing local data.
In practice, clients may not benefit from joining in FL, especially in light of potential costs related to issues such as privacy and computation.
We are the first to model clients' behaviors in FL as a network effects game, where each client's benefit depends on other clients who also join the network.
arXiv Detail & Related papers (2023-02-16T19:10:12Z) - Knowledge-Aware Federated Active Learning with Non-IID Data [75.98707107158175]
We propose a federated active learning paradigm to efficiently learn a global model with limited annotation budget.
The main challenge faced by federated active learning is the mismatch between the active sampling goal of the global model on the server and that of the local clients.
We propose Knowledge-Aware Federated Active Learning (KAFAL), which consists of Knowledge-Specialized Active Sampling (KSAS) and Knowledge-Compensatory Federated Update (KCFU).
arXiv Detail & Related papers (2022-11-24T13:08:43Z) - Incentivizing Federated Learning [2.420324724613074]
This paper presents an incentive mechanism that encourages clients to contribute as much data as they can obtain.
Unlike previous incentive mechanisms, our approach does not monetize data.
We theoretically prove that clients will use as much data as they can possibly possess to participate in federated learning under certain conditions.
arXiv Detail & Related papers (2022-05-22T23:02:43Z) - Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distribution is non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z) - A Contract Theory based Incentive Mechanism for Federated Learning [52.24418084256517]
Federated learning (FL) serves as a data privacy-preserved machine learning paradigm, and realizes the collaborative model trained by distributed clients.
To accomplish an FL task, the task publisher needs to pay financial incentives to the FL server, and the FL server offloads the task to the contributing FL clients.
It is challenging to design proper incentives for the FL clients due to the fact that the task is privately trained by the clients.
arXiv Detail & Related papers (2021-08-12T07:30:42Z) - An Efficiency-boosting Client Selection Scheme for Federated Learning
with Fairness Guarantee [36.07970788489]
Federated Learning is a new paradigm to cope with the privacy issue by allowing clients to perform model training locally.
The client selection policy is critical to an FL process in terms of training efficiency, the final model's quality as well as fairness.
In this paper, we model fairness-guaranteed client selection as a Lyapunov optimization problem and propose a C2MAB-based method to estimate the model exchange time.
arXiv Detail & Related papers (2020-11-03T15:27:02Z) - LotteryFL: Personalized and Communication-Efficient Federated Learning
with Lottery Ticket Hypothesis on Non-IID Datasets [52.60094373289771]
Federated learning is a popular distributed machine learning paradigm with enhanced privacy.
We propose LotteryFL -- a personalized and communication-efficient federated learning framework.
We show that LotteryFL significantly outperforms existing solutions in terms of personalization and communication cost.
arXiv Detail & Related papers (2020-08-07T20:45:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.