Trading Data For Learning: Incentive Mechanism For On-Device Federated
Learning
- URL: http://arxiv.org/abs/2009.05604v1
- Date: Fri, 11 Sep 2020 18:37:58 GMT
- Title: Trading Data For Learning: Incentive Mechanism For On-Device Federated
Learning
- Authors: Rui Hu, Yanmin Gong
- Abstract summary: Federated Learning rests on the notion of training a global model in a distributed manner across various devices.
Under this setting, users' devices perform computations on their own data and then share the results with the cloud server to update the global model.
Users suffer privacy leakage of their local data during the federated model training process.
We propose an effective incentive mechanism, which selects the users most likely to provide reliable data and compensates them for the cost of their privacy leakage.
- Score: 25.368195601622688
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning rests on the notion of training a global model
in a distributed manner across various devices. Under this setting, users'
devices perform computations on their own data and then share the results with
the cloud server to update the global model. A fundamental issue in such
systems is how to effectively incentivize user participation. Users suffer
privacy leakage of their local data during the federated model training
process, and without well-designed incentives, self-interested users will be
unwilling to participate in federated learning tasks and contribute their
private data. To bridge this gap, in this paper we adopt game theory to design
an effective incentive mechanism, which selects the users most likely to
provide reliable data and compensates them for the cost of their privacy
leakage. We formulate the problem as a two-stage Stackelberg game and solve for
the game's equilibrium. The effectiveness of the proposed mechanism is
demonstrated by extensive simulations.
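The abstract names the formulation (a two-stage Stackelberg game whose equilibrium is solved) but does not include the utility functions. The following is a minimal sketch, assuming a proportional reward-sharing rule and linear per-user privacy costs, a standard model in this literature; `follower_equilibrium`, `leader_best_reward`, the server's log utility, and all numbers are illustrative assumptions, not the authors' exact design.

```python
# Sketch of a two-stage Stackelberg game for FL incentives, assuming a
# proportional reward-sharing rule and linear privacy costs. Utilities,
# helper names, and numbers are illustrative assumptions only.

import math

def follower_equilibrium(R, costs):
    """Stage 2: each user i picks effort x_i to maximize R*x_i/sum(x) - c_i*x_i.

    Under this proportional-sharing model the followers' subgame has a
    closed-form Nash equilibrium over the participating set S:
        X = R*(n-1)/C,  x_i = X * (1 - (n-1)*c_i/C),
    with n = |S| and C = sum of costs in S.
    """
    order = sorted(range(len(costs)), key=lambda i: costs[i])
    S, C = [], 0.0
    for i in order:
        # user i participates only if its best response stays positive
        if len(S) * costs[i] < C + costs[i]:
            S.append(i)
            C += costs[i]
    x = [0.0] * len(costs)
    n = len(S)
    if n >= 2:
        X = R * (n - 1) / C
        for i in S:
            x[i] = X * (1 - costs[i] * (n - 1) / C)
    return x

def leader_best_reward(costs, lam=10.0):
    """Stage 1: server picks total reward R maximizing lam*log(1+X) - R."""
    best_u, best_R = float("-inf"), 0.0
    for k in range(1, 500):  # simple line search over R
        R = 0.1 * k
        X = sum(follower_equilibrium(R, costs))
        u = lam * math.log(1.0 + X) - R
        if u > best_u:
            best_u, best_R = u, R
    return best_R

costs = [0.5, 0.8, 1.0, 1.6]  # hypothetical per-user privacy costs
R = leader_best_reward(costs)
print("leader reward R* =", round(R, 2))
print("follower efforts =", [round(v, 3) for v in follower_equilibrium(R, costs)])
```

Note that the highest-cost user drops out at equilibrium here: the closed form only admits users whose cost is low enough relative to the group, which mirrors the abstract's idea of selecting the users most likely to contribute reliably.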
Related papers
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
- Efficient Core-selecting Incentive Mechanism for Data Sharing in Federated Learning [0.12289361708127873]
Federated learning is a distributed machine learning system that uses participants' data to train an improved global model.
Establishing an incentive mechanism that both encourages truthful data contribution and promotes stable cooperation is an important open issue.
We propose an efficient core-selecting mechanism based on sampling approximation that aggregates models only on sampled coalitions to approximate the exact result (a minimal sketch of coalition sampling appears after this list).
arXiv Detail & Related papers (2023-09-21T01:47:39Z)
- Evaluating and Incentivizing Diverse Data Contributions in Collaborative Learning [89.21177894013225]
For a federated learning model to perform well, it is crucial to have a diverse and representative dataset.
We show that the statistical criterion used to quantify the diversity of the data, as well as the choice of federated learning algorithm, has a significant effect on the resulting equilibrium.
We leverage this to design simple optimal federated learning mechanisms that encourage data collectors to contribute data representative of the global population.
arXiv Detail & Related papers (2023-06-08T23:38:25Z)
- Mechanisms that Incentivize Data Sharing in Federated Learning [90.74337749137432]
We show how a naive scheme leads to catastrophic levels of free-riding, where the benefits of data sharing are completely eroded.
We then introduce accuracy-shaping-based mechanisms to maximize the amount of data generated by each agent.
arXiv Detail & Related papers (2022-07-10T22:36:52Z)
- Applied Federated Learning: Architectural Design for Robust and Efficient Learning in Privacy Aware Settings [0.8454446648908585]
The classical machine learning paradigm requires the aggregation of user data in a central location.
Centralization of data poses risks, including a heightened risk of internal and external security incidents.
Federated learning with differential privacy is designed to avoid the server-side centralization pitfall.
arXiv Detail & Related papers (2022-06-02T00:30:04Z)
- Incentivizing Federated Learning [2.420324724613074]
This paper presents an incentive mechanism that encourages clients to contribute as much data as they can obtain.
Unlike previous incentive mechanisms, our approach does not monetize data.
We theoretically prove that, under certain conditions, clients will contribute all the data they possess when participating in federated learning.
arXiv Detail & Related papers (2022-05-22T23:02:43Z)
- Comparative assessment of federated and centralized machine learning [0.0]
Federated Learning (FL) is a privacy-preserving machine learning scheme, where training happens with data federated across devices.
In this paper, we discuss the various factors that affect federated learning training, arising from the non-IID distributed nature of the data.
We show that federated learning does have a cost advantage when the models to be trained are not overly large.
arXiv Detail & Related papers (2022-02-03T11:20:47Z)
- An Incentive Mechanism for Federated Learning in Wireless Cellular network: An Auction Approach [75.08185720590748]
Federated Learning (FL) is a distributed learning framework that addresses the challenges of learning from distributed data in machine learning.
In this paper, we consider an FL system that involves one base station (BS) and multiple mobile users.
We formulate the incentive mechanism between the BS and the mobile users as an auction game in which the BS is the auctioneer and the mobile users are the sellers (a hedged reverse-auction sketch appears after this list).
arXiv Detail & Related papers (2020-09-22T01:50:39Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- Incentives for Federated Learning: a Hypothesis Elicitation Approach [10.452709936265274]
Federated learning provides a promising paradigm for collecting machine learning models from distributed data sources.
The success of a credible federated learning system rests on the assumption that decentralized, self-interested users will be willing to participate.
This paper introduces solutions to incentivize truthful reporting of a local, user-side machine learning model.
arXiv Detail & Related papers (2020-07-21T04:55:31Z)
- Leveraging Semi-Supervised Learning for Fairness using Neural Networks [49.604038072384995]
There has been a growing concern about the fairness of decision-making systems based on machine learning.
In this paper, we propose a semi-supervised algorithm using neural networks benefiting from unlabeled data.
The proposed model, called SSFair, exploits the information in the unlabeled data to mitigate the bias in the training data.
arXiv Detail & Related papers (2019-12-31T09:11:26Z)
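The core-selecting entry above describes aggregating models only on sampled coalitions to sidestep the exponential cost of evaluating every coalition exactly. Below is a minimal sketch, assuming a least-core linear program over sampled coalitions; the valuation function, the LP objective, and the `sampled_least_core` helper are illustrative assumptions, not the paper's exact mechanism.

```python
# Sketch: approximate a (least-)core payoff vector by sampling coalitions,
# instead of evaluating the valuation v(S) for all 2^n subsets of clients.

import random
import numpy as np
from scipy.optimize import linprog

def sampled_least_core(n, value, n_samples=200, seed=0):
    """value(S) -> float scores any frozenset of client indices; sampling
    avoids calling it on every one of the 2^n coalitions."""
    rng = random.Random(seed)
    grand = frozenset(range(n))
    v_grand = value(grand)
    # Sample proper, non-empty coalitions (the set dedups repeats).
    coalitions = set()
    for _ in range(n_samples):
        S = frozenset(i for i in range(n) if rng.random() < 0.5)
        if 0 < len(S) < n:
            coalitions.add(S)
    # Variables: payments p_1..p_n and slack eps. Minimize eps subject to
    #   sum_{i in S} p_i >= v(S) - eps  for every sampled S,
    #   sum_i p_i = v(N),  p_i >= 0.
    c = np.zeros(n + 1)
    c[-1] = 1.0
    A_ub, b_ub = [], []
    for S in coalitions:
        row = np.zeros(n + 1)
        for i in S:
            row[i] = -1.0
        row[-1] = -1.0
        A_ub.append(row)
        b_ub.append(-value(S))
    A_eq = [np.concatenate([np.ones(n), [0.0]])]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[v_grand],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[:n], res.x[-1]

# Toy valuation: diminishing returns in the amount of (synthetic) data.
data = [30, 10, 25, 5]
value = lambda S: float(np.log1p(sum(data[i] for i in S)))
payments, eps = sampled_least_core(len(data), value)
print("payments:", np.round(payments, 3), "core violation eps:", round(eps, 4))
```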
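The auction entry above frames the base station as an auctioneer buying model updates from mobile users who act as sellers. As a hedged sketch, here is a simple budget-constrained reverse auction with greedy winner selection and a next-bid payment rule; `reverse_auction`, the payment rule, and the budget are assumptions for illustration, since the summary does not give the paper's exact auction design.

```python
# Sketch of a reverse auction: users bid their claimed participation cost,
# the base station buys from the cheapest sellers within a budget. Paying
# each winner the next-cheapest bid approximates a critical-value rule;
# an exactly truthful budget-feasible auction needs a more careful design.

def reverse_auction(bids, budget):
    """bids: {user: claimed cost}; returns (winners, payments)."""
    order = sorted(bids, key=bids.get)  # cheapest sellers first
    winners, payments, spent = [], {}, 0.0
    for k, u in enumerate(order):
        price = bids[order[k + 1]] if k + 1 < len(order) else bids[u]
        if spent + price > budget:
            break
        winners.append(u)
        payments[u] = price
        spent += price
    return winners, payments

bids = {"u1": 3.0, "u2": 1.0, "u3": 2.0, "u4": 6.0}
print(reverse_auction(bids, budget=7.0))  # u2 and u3 win, paid 2.0 and 3.0
```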
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.