Incentive Mechanisms for Federated Learning: From Economic and Game Theoretic Perspective
- URL: http://arxiv.org/abs/2111.11850v1
- Date: Sat, 20 Nov 2021 07:22:14 GMT
- Title: Incentive Mechanisms for Federated Learning: From Economic and Game Theoretic Perspective
- Authors: Xuezhen Tu, Kun Zhu, Nguyen Cong Luong, Dusit Niyato, Yang Zhang, and
Juan Li
- Abstract summary: Federated learning (FL) has shown great potential for training large-scale machine learning (ML) models without exposing the owners' raw data.
In FL, the data owners can train ML models based on their local data and only send the model updates rather than raw data to the model owner for aggregation.
To improve learning performance in terms of model accuracy and training completion time, it is essential to recruit sufficient participants.
- Score: 42.50367925564069
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has become popular and has shown great
potential for training large-scale machine learning (ML) models without
exposing the owners' raw data. In FL, the data owners train ML models on their
local data and send only the model updates, rather than the raw data, to the
model owner for aggregation. To improve learning performance in terms of model
accuracy and training completion time, it is essential to recruit sufficient
participants. Meanwhile, the data owners are rational and may be unwilling to
participate in the collaborative learning process because of the resource
consumption it entails. To address this issue, various works have recently been
proposed to motivate the data owners to contribute their resources. In this
paper, we provide a comprehensive review of the economic and game theoretic
approaches proposed in the literature to design schemes that stimulate data
owners to participate in the FL training process. In particular, we first
present the fundamentals and background of FL and the economic theories
commonly used in incentive mechanism design. Then, we review applications of
game theory and economic approaches to the design of incentive mechanisms for
FL. Finally, we highlight some open issues and future research directions
concerning incentive mechanism design for FL.
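The local-train-then-aggregate workflow described in the abstract can be sketched with a minimal FedAvg-style example. Everything below is an illustrative assumption, not taken from the paper: the linear model, the synthetic client datasets, and the hyperparameters are placeholders chosen only to show the data-size-weighted aggregation step.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One data owner's local training: a few gradient steps of linear regression."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_averaging(global_w, clients):
    """Model owner aggregates the clients' updates, weighted by local data size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

# Synthetic, heterogeneous local datasets generated from one ground-truth model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80, 30):  # unequal local dataset sizes
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(20):  # communication rounds; only model updates cross the network
    w = federated_averaging(w, clients)
print(w)  # approaches true_w without any raw data leaving the clients
```

Note that only the updated weight vectors reach the aggregator; the raw `(X, y)` pairs stay with their owners, which is the privacy property the abstract refers to.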
Related papers
- Federated Unlearning: A Survey on Methods, Design Guidelines, and Evaluation Metrics [2.7456900944642686]
Federated unlearning (FU) algorithms efficiently remove clients' contributions without full model retraining.
This article provides background concepts, empirical evidence, and practical guidelines to design and implement efficient FU schemes.
arXiv Detail & Related papers (2024-01-10T13:26:19Z)
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a non-negligible challenge.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
- Towards Interpretable Federated Learning [19.764172768506132]
Federated learning (FL) enables multiple data owners to build machine learning models collaboratively without exposing their private local data.
It is important to balance the needs for performance, privacy preservation, and interpretability, especially in mission-critical applications such as finance and healthcare.
We conduct a comprehensive analysis of representative interpretable FL (IFL) approaches, the commonly adopted performance evaluation metrics, and promising directions towards building versatile IFL techniques.
arXiv Detail & Related papers (2023-02-27T02:06:18Z)
- Vertical Federated Learning: A Structured Literature Review [0.0]
Federated learning (FL) has emerged as a promising distributed learning paradigm with the added advantage of data privacy.
In this paper, we present a structured literature review discussing the state-of-the-art approaches in vertical FL (VFL).
arXiv Detail & Related papers (2022-12-01T16:16:41Z)
- A Contract Theory based Incentive Mechanism for Federated Learning [52.24418084256517]
Federated learning (FL) is a privacy-preserving machine learning paradigm in which a model is trained collaboratively by distributed clients.
To accomplish an FL task, the task publisher needs to pay financial incentives to the FL server, and the FL server offloads the task to the contributing FL clients.
It is challenging to design proper incentives for the FL clients because the clients train the task privately.
arXiv Detail & Related papers (2021-08-12T07:30:42Z)
- An Incentive Mechanism for Federated Learning in Wireless Cellular Network: An Auction Approach [75.08185720590748]
Federated Learning (FL) is a distributed learning framework that addresses the problem of learning from distributed data in machine learning.
In this paper, we consider an FL system that involves one base station (BS) and multiple mobile users.
We formulate the incentive mechanism between the BS and mobile users as an auction game where the BS is an auctioneer and the mobile users are the sellers.
arXiv Detail & Related papers (2020-09-22T01:50:39Z)
- A Principled Approach to Data Valuation for Federated Learning [73.19984041333599]
Federated learning (FL) is a popular technique to train machine learning (ML) models on decentralized data sources.
The Shapley value (SV) defines a unique payoff scheme that satisfies many desiderata for a data value notion.
This paper proposes a variant of the SV amenable to FL, which we call the federated Shapley value.
arXiv Detail & Related papers (2020-09-14T04:37:54Z)
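The federated Shapley value above builds on the classical Shapley value, which pays each data owner its average marginal contribution over all possible coalitions of the other owners. The following is a minimal sketch of that classical computation, not the paper's federated variant; the coalition-accuracy table is a made-up illustration, and this exact enumeration scales exponentially with the number of clients.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, utility):
    """Exact Shapley values: weighted average of each player's marginal
    contribution over all coalitions of the remaining players."""
    n = len(players)
    values = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for coalition in combinations(others, k):
                # Weight |S|! (n - |S| - 1)! / n! for a coalition S of size k.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                marginal = utility(set(coalition) | {p}) - utility(set(coalition))
                values[p] += weight * marginal
    return values

# Hypothetical utility: model accuracy reached by each coalition of data owners.
accuracy = {frozenset(): 0.0, frozenset('A'): 0.6, frozenset('B'): 0.5,
            frozenset('C'): 0.4, frozenset('AB'): 0.8, frozenset('AC'): 0.7,
            frozenset('BC'): 0.65, frozenset('ABC'): 0.9}
u = lambda s: accuracy[frozenset(s)]

vals = shapley_values(['A', 'B', 'C'], u)
print(vals)  # payoffs sum to u({A, B, C}) = 0.9 (the efficiency property)
```

The efficiency property in the closing comment is what makes the Shapley value attractive as a payoff scheme: the full coalition's utility is distributed exactly, with larger shares going to owners whose data contributes more at the margin.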
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.