An Incentive Mechanism for Federated Learning in Wireless Cellular
network: An Auction Approach
- URL: http://arxiv.org/abs/2009.10269v1
- Date: Tue, 22 Sep 2020 01:50:39 GMT
- Title: An Incentive Mechanism for Federated Learning in Wireless Cellular
network: An Auction Approach
- Authors: Tra Huong Thi Le, Nguyen H. Tran, Yan Kyaw Tun, Minh N. H. Nguyen,
Shashi Raj Pandey, Zhu Han, and Choong Seon Hong
- Abstract summary: Federated Learning (FL) is a distributed learning framework that can handle distributed data in machine learning.
In this paper, we consider a FL system that involves one base station (BS) and multiple mobile users.
We formulate the incentive mechanism between the BS and mobile users as an auction game where the BS is an auctioneer and the mobile users are the sellers.
- Score: 75.08185720590748
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a distributed learning framework that can
handle distributed data in machine learning while still guaranteeing high
learning performance. However, it is impractical to expect that all users will
sacrifice their resources to join the FL algorithm. This motivates us to study
incentive mechanism design for FL. In this paper, we consider an FL system that
involves one base station (BS) and multiple mobile users. The mobile users use
their own data to train the local machine learning model, and then send the
trained models to the BS, which generates the initial model, collects local
models and constructs the global model. Then, we formulate the incentive
mechanism between the BS and mobile users as an auction game where the BS is an
auctioneer and the mobile users are the sellers. In the proposed game, each
mobile user submits its bid according to the minimal energy cost it
experiences in participating in FL. To decide winners in the
auction and maximize social welfare, we propose the primal-dual greedy auction
mechanism. The proposed mechanism can guarantee three economic properties,
namely, truthfulness, individual rationality and efficiency. Finally, numerical
results demonstrate the effectiveness of our proposed mechanism.
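The abstract does not spell out the primal-dual mechanism itself, but the greedy winner-selection idea can be sketched as a reverse auction: the BS ranks bidders by value per claimed cost, selects winners under a resource constraint, and pays each winner a critical price, which is the standard route to truthfulness in greedy auctions. The names (`value_to_bs`, `claimed_cost`, `budget`) and the budget constraint are illustrative assumptions, not the paper's exact mechanism.

```python
# A minimal sketch of a greedy reverse auction, assuming the BS buys
# training effort from mobile users under a budget. Each user bids its
# (claimed) energy cost for participating in FL.

def greedy_auction(bids, budget):
    """bids: list of (user_id, value_to_bs, claimed_cost) tuples."""
    # Rank bidders by value-per-cost ratio, best first.
    ranked = sorted(bids, key=lambda b: b[1] / b[2], reverse=True)
    winners, spent = [], 0.0
    for i, (uid, value, cost) in enumerate(ranked):
        if spent + cost > budget:
            break
        # Critical payment: the highest cost this user could have bid
        # and still won, bounded by the next bidder's ratio. Paying the
        # critical price (not the bid) is what makes greedy auctions
        # truthful.
        if i + 1 < len(ranked):
            nxt_value, nxt_cost = ranked[i + 1][1], ranked[i + 1][2]
            pay = min(budget - spent, value * nxt_cost / nxt_value)
        else:
            pay = budget - spent
        winners.append((uid, pay))
        spent += cost
    return winners
```

Since each winner's critical payment is at least its claimed cost, the sketch also satisfies individual rationality, mirroring the properties claimed in the abstract.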
Related papers
- GAI-Enabled Explainable Personalized Federated Semi-Supervised Learning [29.931169585178818]
Federated learning (FL) is a commonly used distributed algorithm for mobile users (MUs) training artificial intelligence (AI) models.
We propose an explainable personalized FL framework, called XPFL. Particularly, in local training, we utilize a generative AI (GAI) model to learn from large unlabeled data.
In global aggregation, we obtain the new local model by fusing the local and global FL models in specific proportions.
Finally, simulation results validate the effectiveness of the proposed XPFL framework.
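The fusion step described above can be sketched as a per-parameter convex combination of the local and global models. The mixing ratio `alpha` is an assumption for illustration; the summary does not state the actual proportions XPFL uses.

```python
# Hedged sketch of proportional model fusion, assuming models are
# dicts mapping parameter names to values.

def fuse_models(local_weights, global_weights, alpha=0.5):
    """Blend per parameter: alpha * local + (1 - alpha) * global."""
    return {
        name: alpha * local_weights[name] + (1 - alpha) * global_weights[name]
        for name in local_weights
    }
```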
arXiv Detail & Related papers (2024-10-11T08:58:05Z) - Incentive Mechanism Design for Unbiased Federated Learning with
Randomized Client Participation [31.2017942327673]
This paper proposes a game theoretic incentive mechanism for federated learning (FL) with randomized client participation.
We show that our mechanism achieves higher model performance for the server as well as higher profits for the clients.
arXiv Detail & Related papers (2023-04-17T04:05:57Z) - Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL remains unexplored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z) - A Survey on Participant Selection for Federated Learning in Mobile
Networks [47.88372677863646]
Federated Learning (FL) is an efficient distributed machine learning paradigm that employs private datasets in a privacy-preserving manner.
Due to limited communication bandwidth and unstable availability of such devices in a mobile network, only a fraction of end devices can be selected in each round.
arXiv Detail & Related papers (2022-07-08T04:22:48Z) - FLAME: Federated Learning Across Multi-device Environments [9.810211000961647]
Federated Learning (FL) enables distributed training of machine learning models while keeping personal data on user devices private.
We propose FLAME, a user-centered FL training approach to counter statistical and system heterogeneity in multi-device environments.
Our experiment results show that FLAME outperforms various baselines, achieving a 4.8-33.8% higher F-1 score, 1.02-2.86x greater energy efficiency, and up to a 2.02x speedup in convergence.
arXiv Detail & Related papers (2022-02-17T22:23:56Z) - Incentive Mechanisms for Federated Learning: From Economic and Game
Theoretic Perspective [42.50367925564069]
Federated learning (FL) has shown great potential in training large-scale machine learning (ML) models without exposing the owners' raw data.
In FL, the data owners can train ML models based on their local data and only send the model updates rather than raw data to the model owner for aggregation.
To improve learning performance in terms of model accuracy and training completion time, it is essential to recruit sufficient participants.
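The update-only exchange described above is commonly realized with FedAvg-style aggregation: the model owner averages client updates weighted by local data size. The names below are illustrative; the survey does not fix an API.

```python
# Sketch of sample-weighted aggregation of client model updates,
# assuming each update is a dict of parameter values.

def aggregate(updates):
    """updates: list of (num_samples, {param: value}) per client."""
    total = sum(n for n, _ in updates)
    params = updates[0][1].keys()
    return {
        p: sum(n * u[p] for n, u in updates) / total
        for p in params
    }
```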
arXiv Detail & Related papers (2021-11-20T07:22:14Z) - Mobility-Aware Cluster Federated Learning in Hierarchical Wireless
Networks [81.83990083088345]
We develop a theoretical model to characterize the hierarchical federated learning (HFL) algorithm in wireless networks.
Our analysis proves that the learning performance of HFL deteriorates drastically with highly-mobile users.
To circumvent these issues, we propose a mobility-aware cluster federated learning (MACFL) algorithm.
arXiv Detail & Related papers (2021-08-20T10:46:58Z) - A Contract Theory based Incentive Mechanism for Federated Learning [52.24418084256517]
Federated learning (FL) serves as a data privacy-preserved machine learning paradigm, and realizes the collaborative model trained by distributed clients.
To accomplish an FL task, the task publisher needs to pay financial incentives to the FL server, and the FL server offloads the task to the contributing FL clients.
It is challenging to design proper incentives for the FL clients due to the fact that the task is privately trained by the clients.
arXiv Detail & Related papers (2021-08-12T07:30:42Z) - Federated Robustness Propagation: Sharing Adversarial Robustness in
Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending it to FL users imposes significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
arXiv Detail & Related papers (2021-06-18T15:52:33Z) - Trading Data For Learning: Incentive Mechanism For On-Device Federated
Learning [25.368195601622688]
Federated Learning rests on the notion of training a global model in a distributed manner across various devices.
Under this setting, users' devices perform computations on their own data and then share the results with the cloud server to update the global model.
The users suffer from privacy leakage of their local data during the federated model training process.
We propose an effective incentive mechanism, which selects users that are most likely to provide reliable data and compensates for their costs of privacy leakage.
arXiv Detail & Related papers (2020-09-11T18:37:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.