Incentivizing Federated Learning
- URL: http://arxiv.org/abs/2205.10951v1
- Date: Sun, 22 May 2022 23:02:43 GMT
- Title: Incentivizing Federated Learning
- Authors: Shuyu Kong, You Li and Hai Zhou
- Abstract summary: This paper presents an incentive mechanism that encourages clients to contribute as much data as they can obtain.
Unlike previous incentive mechanisms, our approach does not monetize data.
We theoretically prove that, under certain conditions, clients will participate in federated learning with as much data as they can possibly obtain.
- Score: 2.420324724613074
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning is an emerging distributed collaborative learning paradigm
used by many applications nowadays. The effectiveness of federated learning
relies on clients' collective efforts and their willingness to contribute local
data. However, due to privacy concerns and the costs of data collection and
model training, clients may not always contribute all the data they possess,
which would negatively affect the performance of the global model. This paper
presents an incentive mechanism that encourages clients to contribute as much
data as they can obtain. Unlike previous incentive mechanisms, our approach
does not monetize data. Instead, we implicitly use model performance as a
reward, i.e., significant contributors are paid off with better models. We
theoretically prove that, under certain conditions, our incentive mechanism
leads clients to participate in federated learning with as much data as they
can possibly obtain.
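The abstract does not spell out the mechanism's internals, but one minimal reading, sketched below, is a server that returns better-aggregated models to larger contributors. All names here are illustrative, not the paper's API:

```python
import random

def contribution_weighted_models(client_updates, data_sizes, seed=0):
    """Toy sketch: each client gets a model averaged over a number of peer
    updates proportional to its own data share, so larger contributors
    receive models closer to the full global average -- i.e., better
    models serve as the reward instead of monetary payments."""
    rng = random.Random(seed)
    total = sum(data_sizes.values())
    updates = list(client_updates.values())
    personalized = {}
    for cid, n in data_sizes.items():
        k = max(1, round(n / total * len(updates)))  # contribution-scaled sample
        sampled = rng.sample(updates, k)
        personalized[cid] = [sum(w) / k for w in zip(*sampled)]
    return personalized

# Example: "a" contributed twice the data of "b" or "c", so its model
# averages over more peer updates.
models = contribution_weighted_models(
    {"a": [1.0, 2.0], "b": [3.0, 4.0], "c": [5.0, 6.0]},
    {"a": 4, "b": 2, "c": 2},
)
```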
Related papers
- ConDa: Fast Federated Unlearning with Contribution Dampening [46.074452659791575]
ConDa is a framework that performs efficient unlearning by tracking, for each client, the parameters that affect the global model.
We perform experiments on multiple datasets and demonstrate that ConDa is effective at forgetting a client's data.
arXiv Detail & Related papers (2024-10-05T12:45:35Z)
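A toy sketch of ConDa-style contribution dampening, under the assumption that the server has tracked per-client parameter deltas (all names illustrative):

```python
import numpy as np

def unlearn_by_dampening(global_params, client_deltas, forget_id, damp=0.1):
    """Dampen the coordinates of the global model that the forgotten
    client's tracked updates dominated, instead of retraining from scratch."""
    target = np.abs(client_deltas[forget_id])
    others = sum(np.abs(d) for cid, d in client_deltas.items() if cid != forget_id)
    dominated = target > others          # parameters mainly shaped by the client
    scrubbed = global_params.copy()
    scrubbed[dominated] *= damp          # dampen rather than zero them out
    return scrubbed
```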
- Efficient Core-selecting Incentive Mechanism for Data Sharing in Federated Learning [0.12289361708127873]
Federated learning is a distributed machine learning system that uses participants' data to train an improved global model.
Establishing an incentive mechanism that both encourages truthful data contribution and promotes stable cooperation has become an important issue.
We propose an efficient core-selecting mechanism based on sampling approximation that only aggregates models on sampled coalitions to approximate the exact result.
arXiv Detail & Related papers (2023-09-21T01:47:39Z)
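A hedged sketch of the sampling-approximation step: rather than evaluating all 2^n coalitions, sample random join orders and average marginal gains. Here `coalition_value` is a hypothetical stand-in for training and scoring a model on a coalition's pooled data:

```python
import random

def sampled_contributions(clients, coalition_value, rounds=50, seed=0):
    """Monte Carlo approximation of each participant's marginal contribution,
    avoiding the exponential coalition enumeration that exact core-selecting
    mechanisms require."""
    rng = random.Random(seed)
    contrib = {c: 0.0 for c in clients}
    for _ in range(rounds):
        order = clients[:]
        rng.shuffle(order)
        coalition, prev = [], coalition_value([])
        for c in order:
            coalition.append(c)
            value = coalition_value(coalition)
            contrib[c] += value - prev
            prev = value
    return {c: total / rounds for c, total in contrib.items()}

# any set function works as a stand-in, e.g. coalition_value = lambda s: len(s) ** 0.5
```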
- Knowledge-Aware Federated Active Learning with Non-IID Data [75.98707107158175]
We propose a federated active learning paradigm to efficiently learn a global model with a limited annotation budget.
The main challenge faced by federated active learning is the mismatch between the active sampling goal of the global model on the server and that of the local clients.
We propose Knowledge-Aware Federated Active Learning (KAFAL), which consists of Knowledge-Specialized Active Sampling (KSAS) and Knowledge-Compensatory Federated Update (KCFU).
arXiv Detail & Related papers (2022-11-24T13:08:43Z)
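KSAS is described only at a high level above; one plausible minimal sketch scores unlabeled samples by the disagreement between local and global predictions (the names and the choice of KL divergence are assumptions, not the paper's exact criterion):

```python
import numpy as np

def ksas_scores(local_probs, global_probs, eps=1e-12):
    """Rank unlabeled samples by KL(local || global): the annotation budget
    goes to points where the client's specialized local model and the
    global model disagree the most."""
    p = np.clip(local_probs, eps, 1.0)
    q = np.clip(global_probs, eps, 1.0)
    kl = (p * np.log(p / q)).sum(axis=1)   # per-sample divergence
    return np.argsort(-kl)                 # most informative samples first
```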
- FedToken: Tokenized Incentives for Data Contribution in Federated Learning [33.93936816356012]
We propose a contribution-based tokenized incentive scheme, namely FedToken, backed by blockchain technology.
We first approximate the contribution of local models during model aggregation, then strategically schedule clients to lower the communication rounds needed for convergence.
arXiv Detail & Related papers (2022-09-20T14:58:08Z)
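A rough sketch of contribution-based token allocation, assuming contribution is approximated by an update's alignment with the aggregate (the paper's actual estimator and its blockchain layer are not reproduced here):

```python
import numpy as np

def allocate_tokens(client_updates, token_pool=1000.0):
    """Approximate each contribution as the cosine similarity between a
    client's update and the aggregate update, then split a fixed token
    pool proportionally among clients."""
    aggregate = np.mean(list(client_updates.values()), axis=0)
    scores = {}
    for cid, update in client_updates.items():
        cos = np.dot(update, aggregate) / (
            np.linalg.norm(update) * np.linalg.norm(aggregate) + 1e-12)
        scores[cid] = max(float(cos), 0.0)   # no payout for opposing updates
    total = sum(scores.values()) or 1.0
    return {cid: token_pool * s / total for cid, s in scores.items()}
```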
- Federated Pruning: Improving Neural Network Efficiency with Federated Learning [24.36174705715827]
We propose Federated Pruning to train a reduced model under the federated setting.
We explore different pruning schemes and provide empirical evidence of the effectiveness of our methods.
arXiv Detail & Related papers (2022-09-14T00:48:37Z)
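The abstract leaves the pruning scheme open; a minimal magnitude-pruning sketch under the federated setting might look like this (all names illustrative):

```python
import numpy as np

def global_prune_mask(weights, sparsity=0.5):
    """Server-side magnitude pruning: keep the largest |weights| and
    broadcast the binary mask, so every client trains and communicates
    the same reduced model in later rounds."""
    magnitudes = np.abs(weights).ravel()
    k = int(sparsity * magnitudes.size)
    threshold = np.partition(magnitudes, k)[k]   # k-th smallest magnitude
    return np.abs(weights) >= threshold

# each round: client_weights *= mask before local training and upload
```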
- Mechanisms that Incentivize Data Sharing in Federated Learning [90.74337749137432]
We show how a naive scheme leads to catastrophic levels of free-riding, where the benefits of data sharing are completely eroded.
We then introduce mechanisms based on accuracy shaping to maximize the amount of data generated by each agent.
arXiv Detail & Related papers (2022-07-10T22:36:52Z)
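A simplified stand-in for accuracy shaping, assuming a saturating accuracy curve `acc` (a hypothetical input, not the paper's formulation): the model an agent receives improves with its own contribution, removing the free-rider's payoff:

```python
def shaped_accuracy(own_data, peer_data, acc, slope=2.0):
    """Interpolate between training-alone accuracy and full-collaboration
    accuracy at a rate tied to the agent's data share, so contributing
    nothing (own_data -> 0) yields almost no benefit from the federation."""
    alone = acc(own_data)
    together = acc(own_data + peer_data)
    share = own_data / (own_data + peer_data) if (own_data + peer_data) else 0.0
    return alone + (together - alone) * min(1.0, slope * share)

# acc could be any saturating learning curve, e.g. lambda n: 1 - 1 / (1 + n / 100)
```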
- FedKD: Communication Efficient Federated Learning via Knowledge Distillation [56.886414139084216]
Federated learning is widely used to learn intelligent models from decentralized data.
In federated learning, clients need to communicate their local model updates in each iteration of model learning.
We propose a communication-efficient federated learning method based on knowledge distillation.
arXiv Detail & Related papers (2021-08-30T15:39:54Z)
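A minimal sketch of the distillation loss such a method rests on (the paper's exact formulation may differ): only the small student model needs to be communicated, which is the source of the bandwidth savings.

```python
import numpy as np

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft cross-entropy between temperature-softened teacher and student
    predictions; the larger local teacher transfers its knowledge to the
    small, communicated student."""
    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)     # stabilize the exponentials
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)
    teacher = softmax(teacher_logits / temperature)
    student = softmax(student_logits / temperature)
    ce = -(teacher * np.log(student + 1e-12)).sum(axis=1)
    return float(ce.mean()) * temperature ** 2   # standard T^2 scaling
```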
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
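Zero-shot augmentation can take many forms; one loose sketch, assuming shared feature statistics and a probabilistic model `predict` (both assumptions, not Fed-ZDAC/Fed-ZDAS internals), generates confident pseudo-labeled samples to pad under-represented classes:

```python
import numpy as np

def zero_shot_augment(predict, feat_mean, feat_std, n=64, threshold=0.9, seed=0):
    """Synthesize inputs from shared feature statistics (no raw data leaves
    any client), pseudo-label them with the current model, and keep only
    the confidently labeled samples."""
    rng = np.random.default_rng(seed)
    synthetic = rng.normal(feat_mean, feat_std, size=(n, feat_mean.size))
    probs = predict(synthetic)                  # (n, num_classes) probabilities
    confident = probs.max(axis=1) > threshold
    return synthetic[confident], probs.argmax(axis=1)[confident]
```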
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over the model parameters, and propose an effective and efficient estimator for this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
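For concreteness, a direct leave-one-out version of the kind of quantity being estimated (the paper's contribution is an efficient estimator, not this brute-force loop; `aggregate` and `eval_loss` are hypothetical stand-ins):

```python
def client_influence(updates, aggregate, eval_loss):
    """A client's influence is how much the aggregated model's evaluation
    loss changes when that client's update is excluded from aggregation."""
    baseline = eval_loss(aggregate(list(updates.values())))
    return {
        cid: eval_loss(aggregate([u for c, u in updates.items() if c != cid]))
             - baseline
        for cid in updates
    }
```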
- Trading Data For Learning: Incentive Mechanism For On-Device Federated Learning [25.368195601622688]
Federated Learning rests on the notion of training a global model in a distributed fashion across various devices.
Under this setting, users' devices perform computations on their own data and then share the results with the cloud server to update the global model.
The users suffer from privacy leakage of their local data during the federated model training process.
We propose an effective incentive mechanism, which selects users that are most likely to provide reliable data and compensates for their costs of privacy leakage.
arXiv Detail & Related papers (2020-09-11T18:37:58Z)
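A toy sketch of the select-and-compensate idea, assuming users declare their privacy costs and the server holds a reliability score per user (both hypothetical inputs, not the paper's exact mechanism):

```python
def select_and_pay(privacy_costs, reliability, budget):
    """Greedily pick users with the best reliability-per-cost ratio until
    the budget is exhausted, compensating each selected user for the
    privacy cost they declared."""
    ranked = sorted(privacy_costs, key=lambda u: reliability[u] / privacy_costs[u],
                    reverse=True)
    selected, payments, spent = [], {}, 0.0
    for user in ranked:
        if spent + privacy_costs[user] <= budget:
            selected.append(user)
            payments[user] = privacy_costs[user]   # cover declared privacy cost
            spent += privacy_costs[user]
    return selected, payments
```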
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.